An Introduction to
Optimization

The primary goal of this text is a practical one: by equipping students with enough knowledge
and creating an independent research platform, the author strives to prepare students for
professional careers. Providing students with a marketable skill set requires topics from
many areas of optimization, and the text develops such a skill set for mathematics majors
as well as for students of engineering, computer science, economics, statistics, and business.
Optimization reaches into many different fields.
This text provides a balance where one is needed. Mathematical optimization books are
often too heavy on theory without enough applications; texts aimed at business students
are often strong on applications but weak on mathematics. This book attempts to
overcome that imbalance for all students taking such a course.
The book contains many practical applications but also explains the mathematics behind
the techniques, including stating definitions and proving theorems. Optimization tech-
niques are at the heart of the first spam filters, are used in self-driving cars, play a great
role in machine learning, and can be used in such places as determining a batting order in a
Major League Baseball game. Additionally, optimization has seemingly limitless other ap-
plications in business and industry. In short, knowledge of this subject offers an individual
both a very marketable skill set for a wealth of jobs as well as useful tools for research in
many academic disciplines.
Many of the problems rely on using a computer. Microsoft’s Excel is most often used, as
it is common in business, but Python and other languages are considered. This flexibility
permits experienced mathematics and engineering students to use MATLAB or Mathematica,
and computer science students to write their own programs in Java or Python.
Jeffrey Paul Wheeler earned his PhD in Combinatorial Number Theory from the University
of Memphis by extending what had been a conjecture of Erdős on the integers to finite groups.
He has published, given talks at numerous schools, and twice been a guest of Trinity College at
the University of Cambridge. He has taught mathematics at Miami University (Ohio), the Uni-
versity of Tennessee-Knoxville, the University of Memphis, Rhodes College, the University of
Pittsburgh, Carnegie Mellon University, and Duquesne University. He has received numerous
teaching awards and is currently in the Department of Mathematics at the University of Pitts-
burgh. He also occasionally teaches for Pitt’s Computer Science Department and the College
of Business Administration. Dr. Wheeler’s Optimization course was one of the original thirty
to participate in the Mathematical Association of America’s NSF-funded PIC Math program.
Textbooks in Mathematics
Series editors:
Al Boggess, Kenneth H. Rosen

An Introduction to Analysis, Third Edition


James R. Kirkwood

Multiplicative Differential Calculus


Svetlin Georgiev, Khaled Zennir

Applied Differential Equations


The Primary Course
Vladimir A. Dobrushkin

Introduction to Computational Mathematics: An Outline


William C. Bauldry

Mathematical Modeling the Life Sciences


Numerical Recipes in Python and MATLAB™
N. G. Cogan

Classical Analysis
An Approach through Problems
Hongwei Chen

Classical Vector Algebra


Vladimir Lepetic

Introduction to Number Theory


Mark Hunacek

Probability and Statistics for Engineering and the Sciences with Modeling using R
William P. Fox and Rodney X. Sturdivant

Computational Optimization: Success in Practice


Vladislav Bukshtynov

Computational Linear Algebra: with Applications and MATLAB® Computations


Robert E. White

Linear Algebra With Machine Learning and Data


Crista Arangala

Discrete Mathematics with Coding


Hugo D. Junghenn

Applied Mathematics for Scientists and Engineers


Youssef N. Raffoul

Graphs and Digraphs, 7th ed


Gary Chartrand, Heather Jordon, Vincent Vatter and Ping Zhang

An Introduction to Optimization: With Applications in Machine Learning and Data Analytics


Jeffrey Paul Wheeler

https://www.routledge.com/Textbooks-in-Mathematics/book-series/CANDHTEXBOOMTH
An Introduction to
Optimization
With Applications in Machine
Learning and Data Analytics

Jeffrey Paul Wheeler


MATLAB is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks
does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of
MATLAB software or related products does not constitute endorsement or sponsorship by The
MathWorks of a particular pedagogical approach or particular use of the MATLAB software.
First edition published 2024
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742

and by CRC Press


4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

CRC Press is an imprint of Taylor & Francis Group, LLC

© 2024 Taylor & Francis Group, LLC

Reasonable efforts have been made to publish reliable data and information, but the author and pub-
lisher cannot assume responsibility for the validity of all materials or the consequences of their use.
The authors and publishers have attempted to trace the copyright holders of all material reproduced
in this publication and apologize to copyright holders if permission to publish in this form has not
been obtained. If any copyright material has not been acknowledged please write and let us know so
we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information stor-
age or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com
or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. For works that are not available on CCC please contact [email protected]

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are
used only for identification and explanation without intent to infringe.

ISBN: 978-0-367-42550-0 (hbk)


ISBN: 978-1-032-61590-5 (pbk)
ISBN: 978-0-367-42551-7 (ebk)

DOI: 10.1201/9780367425517

Typeset in CMR10 font


by KnowledgeWorks Global Ltd.

Publisher’s note: This book has been prepared from camera-ready copy provided by the authors.
Contents

Acknowledgments xiii

List of Figures xv

List of Tables xix

List of Algorithms xxi

List of Notation xxiii

I Preliminary Matters 1
1 Preamble 3
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 About This Book . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Presentation . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Contents . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 One-Semester Course Material . . . . . . . . . . . . . . . . . 6
1.5 Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . 6

2 The Language of Optimization 9


2.1 Basic Terms Defined . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 When a Max or Min Is Not in the Set . . . . . . . . . . . . . 10
2.3 Solving an Optimization Problem . . . . . . . . . . . . . . . 11
2.4 Algorithms and Heuristics . . . . . . . . . . . . . . . . . . . 12
2.5 Runtime of an Algorithm or a Heuristic . . . . . . . . . . . . 14
2.6 For Further Study . . . . . . . . . . . . . . . . . . . . . . . . 14
2.7 Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

3 Computational Complexity 17
3.1 Arithmetic Complexity . . . . . . . . . . . . . . . . . . . . . 17
3.2 Asymptotic Notation . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Intractability . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.4 Complexity Classes . . . . . . . . . . . . . . . . . . . . . . . 23
3.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 23


3.4.2 Time, Space, and Big O Notation . . . . . . . . . . . . 25


3.4.3 The Complexity Class P . . . . . . . . . . . . . . . . . 25
3.4.4 The Complexity Class NP . . . . . . . . . . . . . . . . 26
3.4.5 Utility of Complexity Classes . . . . . . . . . . . . . . 27
3.5 For Further Study . . . . . . . . . . . . . . . . . . . . . . . . 27
3.6 Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

4 Algebra Review 29
4.1 Systems of Linear Inequalities in Two Variables – Geometric
Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.1.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2 Solving Systems of Linear Equations Using Linear Algebra . 33
4.2.1 Gauss-Jordan Elimination . . . . . . . . . . . . . . . . 33
4.2.2 Gaussian Elimination Compared with Gauss-Jordan
Elimination . . . . . . . . . . . . . . . . . . . . . . . . 35
4.3 Linear Algebra Basics . . . . . . . . . . . . . . . . . . . . . . 36
4.3.1 Matrices and Their Multiplication . . . . . . . . . . . 36
4.3.2 Identity Matrices, Inverses, and Determinants of
Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.3.3 Solving Systems of Linear Equations via Cramer’s Rule 44
4.3.4 Vector and Matrix Norms . . . . . . . . . . . . . . . . 45
4.3.5 Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . 52
4.4 Matrix Properties Important to Optimization . . . . . . . . . 57
4.4.1 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . 57
4.4.2 Unimodular Matrices . . . . . . . . . . . . . . . . . . . 58
4.5 Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

5 Matrix Factorization 66
5.1 LU Factorization . . . . . . . . . . . . . . . . . . . . . . . . 66
5.2 Cholesky Decomposition . . . . . . . . . . . . . . . . . . . . 69
5.3 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.4 Orthonormal Matrices . . . . . . . . . . . . . . . . . . . . . . 74
5.5 The Gram-Schmidt Process . . . . . . . . . . . . . . . . . . . 78
5.6 QR Factorization . . . . . . . . . . . . . . . . . . . . . . . . 80
5.7 Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.8 For Further Study . . . . . . . . . . . . . . . . . . . . . . . . 83
5.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

II Linear Programming 85
6 Linear Programming 87
6.1 A Geometric Approach to Linear Programming in Two
Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.1.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.1.2 Summary . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.1.3 Keywords . . . . . . . . . . . . . . . . . . . . . . . . . 92

6.2 The Simplex Method: Max LP Problems with Constraints of


the Form ≤ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 93
6.2.2 Slack Variables . . . . . . . . . . . . . . . . . . . . . . 93
6.2.3 The Method . . . . . . . . . . . . . . . . . . . . . . . 95
6.2.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.2.5 Keywords . . . . . . . . . . . . . . . . . . . . . . . . . 100
6.3 The Dual: Minimization with Problem Constraints of the
Form ≥ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.3.1 How It Works . . . . . . . . . . . . . . . . . . . . . . . 101
6.3.2 Why It Works . . . . . . . . . . . . . . . . . . . . . . 102
6.3.3 Keywords . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.4 The Big M Method: Max/Min LP Problems with Varying
Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.4.1 Maximization Problems with the Big M Method . . . 103
6.4.2 Minimization Problems with the Big M Method . . . 106
6.5 Degeneracy and Cycling in the Simplex Method . . . . . . . 108
6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

7 Sensitivity Analysis 112


7.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.2 An Excel Example . . . . . . . . . . . . . . . . . . . . . . . . 112
7.2.1 Solver’s Answer Report . . . . . . . . . . . . . . . . . 114
7.2.2 Solver’s Sensitivity Report . . . . . . . . . . . . . . . . 115
7.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

8 Integer Linear Programming 121


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
8.2 Dakin’s Branch and Bound . . . . . . . . . . . . . . . . . . . 122
8.3 Gomory Cut-Planes . . . . . . . . . . . . . . . . . . . . . . . 128
8.4 For Further Study . . . . . . . . . . . . . . . . . . . . . . . . 132
8.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

III Nonlinear (Geometric) Programming 135


9 Calculus Review 137
9.1 Derivatives and Continuity . . . . . . . . . . . . . . . . . . . 137
9.2 Taylor Series for Functions of a Single Variable . . . . . . . . 141
9.3 Newton’s Method . . . . . . . . . . . . . . . . . . . . . . . . 144
9.4 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
9.5 Partial Derivatives . . . . . . . . . . . . . . . . . . . . . . . . 146
9.6 The Taylor Series of a Function of Two Variables . . . . . . 148
9.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

10 A Calculus Approach to Nonlinear Programming 153


10.1 Using Derivatives to Find Extrema of Functions of a Single
Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
10.2 Calculus and Extrema of Multivariable Functions . . . . . . 156
10.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

11 Constrained Nonlinear Programming: Lagrange Multipliers


and the KKT Conditions 162
11.1 Lagrange Multipliers . . . . . . . . . . . . . . . . . . . . . . 162
11.2 The KKT Conditions . . . . . . . . . . . . . . . . . . . . . . 164
11.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

12 Optimization Involving Quadratic Forms 168


12.1 Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . 168
12.2 Definite and Semidefinite Matrices and Optimization . . . . 169
12.3 The Role of Eigenvalues in Optimization . . . . . . . . . . . 170
12.4 Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
12.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173

13 Iterative Methods 175


13.1 Newton’s Method for Optimization . . . . . . . . . . . . . . 175
13.1.1 Single-Variable Newton’s Method for Optimization . . 175
13.1.2 Multivariable Newton’s Method for Optimization . . . 176
13.2 Steepest Descent (or Gradient Descent) . . . . . . . . . . . . 181
13.2.1 Generalized Reduced Gradient . . . . . . . . . . . . . 184
13.3 Additional Geometric Programming Techniques . . . . . . . 185
13.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

14 Derivative-Free Methods 187


14.1 The Arithmetic Mean-Geometric Mean Inequality (AGM) . . 187
14.2 Weight Finding Algorithm for the AGM . . . . . . . . . . . . 192
14.3 The AGM, Newton’s Method, and Reduced Gradient
Compared . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
14.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

15 Search Algorithms 197


15.1 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . 197
15.2 Ant Foraging Optimization . . . . . . . . . . . . . . . . . . . 204

IV Convexity and the Fundamental Theorem of


Linear Programming 207
16 Important Sets for Optimization 209
16.1 Special Linear Combinations . . . . . . . . . . . . . . . . . . 209
16.2 Special Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
16.3 Special Properties of Sets . . . . . . . . . . . . . . . . . . . . 214

16.4 Special Objects . . . . . . . . . . . . . . . . . . . . . . . . . 218


16.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219

17 The Fundamental Theorem of Linear Programming 221


17.1 The Finite Basis Theorem . . . . . . . . . . . . . . . . . . . 221
17.2 The Fundamental Theorem of Linear Programming . . . . . 223
17.3 For Further Study . . . . . . . . . . . . . . . . . . . . . . . . 224
17.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

18 Convex Functions 226


18.1 Convex Functions of a Single Variable . . . . . . . . . . . . . 226
18.2 Concave Functions . . . . . . . . . . . . . . . . . . . . . . . . 233
18.3 Graphs of Convex and Concave Functions . . . . . . . . . . . 234
18.4 Multivariable Convex Functions . . . . . . . . . . . . . . . . 237
18.5 Mathematical Results from Convexity . . . . . . . . . . . . . 244
18.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248

19 Convex Optimization 251


19.1 Convex Optimization and Applications . . . . . . . . . . . . 251
19.2 Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
19.3 Subgradient Descent . . . . . . . . . . . . . . . . . . . . . . . 258
19.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262

V Combinatorial Optimization 265


20 An Introduction to Combinatorics 267
20.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
20.2 The Basic Tools of Counting . . . . . . . . . . . . . . . . . . 268
20.2.1 When We Add, Subtract, Multiply, or Divide . . . . . 268
20.2.2 Permutations and Combinations . . . . . . . . . . . . 270
20.3 The Binomial Theorem and Binomial Coefficients . . . . . . 273
20.3.1 Pascal’s Triangle . . . . . . . . . . . . . . . . . . . . . 273
20.3.2 Binomial Coefficients . . . . . . . . . . . . . . . . . . . 274
20.3.3 The Binomial Theorem . . . . . . . . . . . . . . . . . 274
20.3.4 Another Counting Argument . . . . . . . . . . . . . . 277
20.3.5 The Multinomial Theorem . . . . . . . . . . . . . . . . 277
20.4 Counting When Objects Are Indistinguishable . . . . . . . . 278
20.4.1 Permutations with Indistinguishable Objects . . . . . 278
20.4.2 Summary of Basic Counting Techniques . . . . . . . . 280
20.5 The Pigeonhole Principle . . . . . . . . . . . . . . . . . . . . 281
20.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283

21 An Introduction to Graph Theory 285


21.1 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 285
21.2 Special Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . 289
21.2.1 Empty Graphs and the Trivial Graph . . . . . . . . . 289
21.2.2 Walks, Trails, Paths, and Cycles . . . . . . . . . . . . 289
21.2.3 Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
21.2.4 Complete and Bipartite Graphs . . . . . . . . . . . . . 291
21.3 Vertex and Edge Cuts . . . . . . . . . . . . . . . . . . . . . . 293
21.3.1 Graph Connectivity . . . . . . . . . . . . . . . . . . . 294
21.3.2 Notation for Removing a Vertex or Edge from a Graph 294
21.3.3 Cut Vertices and Vertex Cuts . . . . . . . . . . . . . . 295
21.3.4 Edge Cuts and Bridges . . . . . . . . . . . . . . . . . 298
21.4 Some Useful and Interesting Results . . . . . . . . . . . . . . 299
21.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301

22 Network Flows 303


22.1 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 303
22.2 Maximum Flows and Cuts . . . . . . . . . . . . . . . . . . . 305
22.3 The Dinitz-Edmonds-Karp-Ford-Fulkerson Algorithm . . . . 309
22.4 Max Flow as a Linear Programming Problem . . . . . . . . . 318
22.5 Application to a Major League Baseball Pennant Race . . . 320
22.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323

23 Minimum-Weight Spanning Trees and Shortest Paths 325


23.1 Weighted Graphs and Spanning Trees . . . . . . . . . . . . . 325
23.2 Minimum-Weight Spanning Trees . . . . . . . . . . . . . . . 326
23.2.1 Kruskal’s Algorithm . . . . . . . . . . . . . . . . . . . 328
23.2.2 Prim’s Method . . . . . . . . . . . . . . . . . . . . . . 331
23.2.3 Kruskal’s and Prim’s Compared . . . . . . . . . . . . 333
23.3 Shortest Paths . . . . . . . . . . . . . . . . . . . . . . . . . . 333
23.3.1 Dijkstra’s Algorithm . . . . . . . . . . . . . . . . . . . 334
23.3.2 A Linear Programming Approach to Shortest Paths . 337
23.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340

24 Network Modeling and the Transshipment Problem 342


24.1 Introduction of the Problem . . . . . . . . . . . . . . . . . . 342
24.2 The Guarantee of Integer Solutions in Network Flow Problems 347
24.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350

25 The Traveling Salesperson Problem 351


25.1 History of the Traveling Salesperson Problem . . . . . . . . . 351
25.2 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
25.3 Heuristic Solutions . . . . . . . . . . . . . . . . . . . . . . . . 353
25.3.1 Nearest Neighbor . . . . . . . . . . . . . . . . . . . . . 354
25.3.2 Insertion Algorithms . . . . . . . . . . . . . . . . . . . 359
25.3.3 The Geometric Heuristic . . . . . . . . . . . . . . . . . 369

25.4 For Further Study . . . . . . . . . . . . . . . . . . . . . . . . 370


25.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371

VI Optimization for Data Analytics and Machine


Learning 373
26 Probability 375
26.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
26.2 Set Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
26.2.1 The Vocabulary of Sets and Sample Spaces . . . . . . 376
26.2.2 The Algebra of Sets . . . . . . . . . . . . . . . . . . . 377
26.3 Foundations of Probability . . . . . . . . . . . . . . . . . . . 379
26.3.1 Borel Sets . . . . . . . . . . . . . . . . . . . . . . . . . 379
26.3.2 The Axioms and Basic Properties of Probability . . . 379
26.4 Conditional Probability . . . . . . . . . . . . . . . . . . . . . 382
26.4.1 Naive Probability . . . . . . . . . . . . . . . . . . . . . 382
26.4.2 Conditional Probability . . . . . . . . . . . . . . . . . 383
26.4.3 Bayes’ Theorem . . . . . . . . . . . . . . . . . . . . . 384
26.4.4 Independence . . . . . . . . . . . . . . . . . . . . . . . 388
26.5 Random Variables and Distributions . . . . . . . . . . . . . . 389
26.5.1 Random Variables . . . . . . . . . . . . . . . . . . . . 389
26.5.2 Probability Mass and Probability Density Functions . 390
26.5.3 Some Discrete Random Variable Probability
Distributions . . . . . . . . . . . . . . . . . . . . . . . 394
26.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397

27 Regression Analysis via Least Squares 398


27.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
27.2 Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
27.3 Linear Least Squares . . . . . . . . . . . . . . . . . . . . . . 399
27.3.1 Pseudo-Inverse . . . . . . . . . . . . . . . . . . . . . . 401
27.3.2 Brief Discussion of Probabilistic Interpretation . . . . 402
27.4 Regularized Linear Least Squares . . . . . . . . . . . . . . . 403

28 Forecasting 404
28.1 Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
28.1.1 Exponential Smoothing . . . . . . . . . . . . . . . . . 405
28.1.2 Trends . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
28.1.3 Seasonality . . . . . . . . . . . . . . . . . . . . . . . . 406
28.2 Stationary Data and Differencing . . . . . . . . . . . . . . . 407
28.2.1 Autocorrelation . . . . . . . . . . . . . . . . . . . . . . 408
28.3 ARIMA Models . . . . . . . . . . . . . . . . . . . . . . . . . 409
28.3.1 Autoregressive Models . . . . . . . . . . . . . . . . . . 410
28.3.2 Moving Average Models . . . . . . . . . . . . . . . . . 411
28.3.3 ARIMA Model Structure . . . . . . . . . . . . . . . . 411
28.4 Partial Autocorrelation . . . . . . . . . . . . . . . . . . . . . 412

28.4.1 Goodness-of-Fit Metrics . . . . . . . . . . . . . . . . . 413


28.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413

29 Introduction to Machine Learning 415


29.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
29.2 Nearest Neighbors . . . . . . . . . . . . . . . . . . . . . . . . 415
29.3 Support Vector Machines . . . . . . . . . . . . . . . . . . . . 416
29.4 Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . 417
29.4.1 Artificial Neural Networks . . . . . . . . . . . . . . . . 417
29.4.2 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 418

A Techniques of Proof 420


A.1 Introduction to Propositional Logic . . . . . . . . . . . . . . 420
A.2 Direct Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
A.3 Indirect Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . 424
A.3.1 Proving the Contrapositive . . . . . . . . . . . . . . . 424
A.3.2 Proof by Contradiction . . . . . . . . . . . . . . . . . 425
A.4 The Principle of Mathematical Induction . . . . . . . . . . . 426
A.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429

B Useful Tools from Analysis and Topology 430


B.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
B.2 Useful Theorems . . . . . . . . . . . . . . . . . . . . . . . . . 432

Bibliography 433

Index 439
Acknowledgments

To love gone sour, suspicion, and bad debt.


– The Clarks

Also, to my immediate family for the support as I hid in the basement work-
ing on this book, and to my extended family – especially my parents – for
their hard work, sacrifice, and support through all the years in giving me a
great education and continually encouraging me along the way. You all have
optimized the quality of my life.

List of Figures

2.1 The graph of f(x) = 1/x, where x > 0. . . . . . . . . . . . . 10

3.1 The growth of functions. . . . . . . . . . . . . . . . . . . . . 22

4.1 Example 4.1.1 – bounded solution set. . . . . . . . . . . . . 31


4.2 Example 4.1.2 – unbounded solution set. . . . . . . . . . . . 31
4.3 Example 4.1.3 – bounded solution set. . . . . . . . . . . . . 32
4.4 Example 4.1.4 – empty solution set. . . . . . . . . . . . . . . 32

6.1 Feasible region for Lincoln Outdoors. . . . . . . . . . . . . . 89


6.2 Graphs of the objective function for Lincoln Outdoors. . . . 90
6.3 The multiple integer solutions for Lincoln Outdoors. . . . . 92

7.1 The Lincoln Outdoors problem in Excel. . . . . . . . . . . . 113


7.2 An Excel solution for Lincoln Outdoors. . . . . . . . . . . . 113
7.3 Options in Excel’s solution for Lincoln Outdoors. . . . . . . 114
7.4 Report options displayed in Excel’s solution for Lincoln
Outdoors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.5 The answer report for Lincoln Outdoors. . . . . . . . . . . . 115
7.6 The sensitivity report for Lincoln Outdoors. . . . . . . . . . 116
7.7 Solution changes for different A in the objective function P =
Ax1 + 90x2 with Lincoln Outdoors. . . . . . . . . . . . . . . 117
7.8 The limits report for Lincoln Outdoors. . . . . . . . . . . . . 118

8.1 Feasible region for Anna’s tables. . . . . . . . . . . . . . . . 125


8.2 Feasible region for Anna’s tables after Branch 1 cut. . . . . . 126
8.3 Feasible region for Anna’s tables after Branch 4 cut. . . . . . 126
8.4 Feasible region for Anna’s tables after Branch 5 cut. . . . . . 127

9.1 The discontinuous function in Example 9.1.3. . . . . . . . . 138


9.2 A case illustrating Rolle’s theorem. . . . . . . . . . . . . . . 139
9.3 Illustrating the mean value theorem. . . . . . . . . . . . . . 140
9.4 f(x) = x^2 + 3 and its tangent line at x = 1. . . . . . . . . . 142
9.5 The vector ⟨3, 2⟩ drawn in standard position. . . . . . . . . . 145

10.1 f(x) = (1/4)x^4 − (1/2)x^2 + 1 in Example 10.1.6. . . . . . . . 155


10.2 Level curves z = 1, 2, 3, 4 on the surface of
f(x, y) = √(x^2 + y^2). . . . . . . . . . . . . . . . . . . . . . . 156
10.3 ∇f(1, 1) pointing in the direction of the greatest increase out
of (1, 1, 1/2) in Example 10.2.3. Here the y axis runs left-right
and the x axis is coming out of the page. . . . . . . . . . . . 157
10.4 A saddle point from Example 10.2.8. . . . . . . . . . . . . . 160

11.1 Minimizing the radius in Example 11.2.2. . . . . . . . . . . . 166

13.1 f(x) = (1/7)x^7 − 3x^5 + 22x^3 − 80x in Example 13.1.2. . . . 177


13.2 f(x, y) = x^2 + y^2 − 2x − 4y + 4 in Example 13.1.4. . . . . . 178
13.3 f(x, y) = e^(−(x^2 + y^2)) in Example 13.1.6. . . . . . . . . . . 179

15.1 Maximum fitness over time by crossover and mutation


strategy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
15.2 Maximum fitness over time by selection strategy. . . . . . . 201

16.1 The affine hull of S in Example 16.2.2. . . . . . . . . . . . . 211


16.2 The convex hull of S in Example 16.2.2. . . . . . . . . . . . 212
16.3 Relationship among affine, conical, and convex hulls and sets. 215

17.1 The region F in Example 17.1.4. . . . . . . . . . . . . . . . . 222

18.1 f(x) = x^2 with line segment from x = −1 to x = 2 in Example
18.1.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
18.2 f(x) = x^2 with tangent line 4x − 4 at the point (2, 4). . . . . 229
18.3 f(x) = −x^2 with line segment from x = −1 to x = 2 in
Example 18.2.2. . . . . . . . . . . . . . . . . . . . . . . . . . 234
18.4 The epigraph of f(x) = x^4 − 3x^3 − 3x^2 + 7x + 6. . . . . . . 235
18.5 The epigraph of f(x) = sin x. . . . . . . . . . . . . . . . . . 235
18.6 The hypograph of f(x) = −x^2. . . . . . . . . . . . . . . . . . 236
18.7 f(x) = x_1^2 + x_2^2. . . . . . . . . . . . . . . . . . . . . . . 237
18.8 f(x) = x_1^2 + x_2^2 and tangent hyperplane at (2, 1). . . . . 240
18.9 Strictly convex h(x_1, x_2) = 2x_1^2 + x_2^2 − ln(x_1 x_2) from Example
18.4.11. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244

21.1 A cycle and Hamiltonian cycle. . . . . . . . . . . . . . . . . 290

22.1 A network with a capacity function. . . . . . . . . . . . . . . 304


22.2 A network with edge capacities and flows. . . . . . . . . . . 305
22.3 The cut U = {s, a} in the network N . . . . . . . . . . . . . . 306
22.4 A u − v semipath P from a network. . . . . . . . . . . . . . 310
22.5 An augmenting semipath sadft. . . . . . . . . . . . . . . . . 311
22.6 After augmenting the flow on semipath sadft. . . . . . . . . 311
22.7 Second run of DEKaFF results through augmenting the flow
on semipath sadgt. . . . . . . . . . . . . . . . . . . . . . . . 317

22.8 Design of the network for the baseball elimination problem


(all edges are forward edges). . . . . . . . . . . . . . . . . . . 322
22.9 Determining if the Mets are eliminated in the 1992 NL East
Pennant race. . . . . . . . . . . . . . . . . . . . . . . . . . . 323

23.1 Finding a shortest path in D. . . . . . . . . . . . . . . . . . 334


23.2 Flow constraints (f ) for a shortest path in D. . . . . . . . . 338
23.3 The Excel setup for an LP solution for finding the distance in
Example 23.3.4. . . . . . . . . . . . . . . . . . . . . . . . . . 338
23.4 Using Solver for an LP solution for finding the distance in
Example 23.3.4. . . . . . . . . . . . . . . . . . . . . . . . . . 339
23.5 Excel’s solution for an LP modeling of distance in Example
23.3.4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
23.6 Weighted graph G for Exercises 23.2, 23.3, 23.4, 23.5. . . . . 340
23.7 Weighted graph H for Exercises 23.6, 23.7, 23.8, 23.9. . . . . 341

24.1 The Excel setup for Jamie’s SunLov’d Organic Oranges. . . 345
24.2 Excel’s SUMPRODUCT in Jamie’s SunLov’d Organic
Oranges. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
24.3 Constraints in Jamie’s SunLov’d Organic Oranges. . . . . . 346
24.4 Optimal distribution for Jamie’s SunLov’d Organic Oranges. 347

25.1 Hamilton’s Icosian game. . . . . . . . . . . . . . . . . . . . . 352


25.2 Sample start. . . . . . . . . . . . . . . . . . . . . . . . . . . 352
25.3 The platonic solid: Dodecahedron. . . . . . . . . . . . . . . . 352
25.4 A weighted K5 . . . . . . . . . . . . . . . . . . . . . . . . . . 357
25.5 Choices leaving A. . . . . . . . . . . . . . . . . . . . . . . . . 357
25.6 First edge selection. . . . . . . . . . . . . . . . . . . . . . . . 357
25.7 Choices leaving D. . . . . . . . . . . . . . . . . . . . . . . . 358
25.8 Second edge selection. . . . . . . . . . . . . . . . . . . . . . . 358
25.9 Reaching vertex E. . . . . . . . . . . . . . . . . . . . . . . . 358
25.10 Resulting tour. . . . . . . . . . . . . . . . . . . . . . . . . . . 358
25.11 Leaving the start E. . . . . . . . . . . . . . . . . . . . . . . 359
25.12 2nd step out of E. . . . . . . . . . . . . . . . . . . . . . . . . 359
25.13 Nearest neighbor algorithm’s tour for Example 25.3.2, but
starting out of E. . . . . . . . . . . . . . . . . . . . . . . . . 359
25.14 The weighted future world graph with start at R in Example
25.3.3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
25.15 Future world closest insertion iteration 2 in Example 25.3.3. 361
25.16 Future world closest insertion iteration 3 in Example 25.3.3. 362
25.17 Future world closest insertion iteration 4 in Example 25.3.3. 363
25.18 Future world closest insertion final tour in Example 25.3.3. . 364
25.19 Weighted future world graph with start at R and 1st insertion
in Example 25.3.4. . . . . . . . . . . . . . . . . . . . . . . . 365
25.20 Future world cheapest insertion iteration 2 in Example 25.3.4. 366

25.21 Future world cheapest insertion iteration 3 in Example 25.3.4. 368


25.22 Future world cheapest insertion iteration 4 in Example 25.3.4. 368
25.23 Future world cheapest insertion final tour in Example 25.3.4. 369
25.24 Weighted octahedral graph in Exercise 25.7. . . . . . . . . . 372

B.1 Point A inside a region R, C outside R, and B on the boundary


of R. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
List of Tables

6.1 Manufacturing Data for Lincoln Outdoors in Example 6.1.1 88


6.2 P (x1 , x2 ) Evaluated at Corner Points . . . . . . . . . . . . . 91
6.3 Summary of Applying the Simplex Method to LP Problems 108

7.1 Manufacturing Data for Lincoln Outdoors in Example 6.1.1 113

8.1 LP Branches of Dakin’s Method for Example 8.2.2, ACHF


Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

9.1 Iterative Values of Newton’s Method for x^4 − x^2 − 2 with x_0 = 2


(Truncated) . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

13.1 Obtaining Critical Numbers via Newton’s Method in Example


13.1.2 (Truncated) . . . . . . . . . . . . . . . . . . . . . . . . 176
13.2 Obtaining Critical Numbers via Newton’s Method in Example
13.1.6 (Truncated) . . . . . . . . . . . . . . . . . . . . . . . . 180
13.3 Obtaining Critical Numbers via Steepest Descent in Example
13.2.2 (Truncated) . . . . . . . . . . . . . . . . . . . . . . . . 184

15.1 Initial Population to Begin Evolutionary Programming . . . 198


15.2 Crossover and Mutation in the Process of Evolutionary
Programming . . . . . . . . . . . . . . . . . . . . . . . . . . 198
15.3 Population with Rank and Selection Probability in
Evolutionary Programming . . . . . . . . . . . . . . . . . . . 199
15.4 Population After One Round of Selection in Evolutionary
Programming . . . . . . . . . . . . . . . . . . . . . . . . . . 199
15.5 Population of Polynomials . . . . . . . . . . . . . . . . . . . 203
15.6 Sample of Evolved Polynomials . . . . . . . . . . . . . . . . 204

20.1 Summary of Permutations and Combinations . . . . . . . . 280

22.1 Possible Edge Cuts in the Network N from Figure 22.2 . . . 308
22.2 Using DEKaFF to Find a First Augmenting Semipath in N
in Figure 22.2 . . . . . . . . . . . . . . . . . . . . . . . . . . 312
22.3 Vertex Labels from the First Run through Algorithm 22.3.1 315
22.4 Original Flow, Capacity and Augmentations on Edges in N
from the First Run Through Algorithm 22.3.1 . . . . . . . . 315


22.5 Using DEKaFF to Find a Second Augmenting Semipath in N


in Figure 22.6 . . . . . . . . . . . . . . . . . . . . . . . . . . 316
22.6 Vertex Labels from the Second Run Through
Algorithm 22.3.1 . . . . . . . . . . . . . . . . . . . . . . . . . 316
22.7 Starting Flow, Capacity and Augmentations on Edges in N
from the Second Run Through Algorithm 22.3.1 . . . . . . . 317
22.8 Third Search for an Augmenting Semipath in N in
Figure 22.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318

23.1 Edges and Their Weights from Example 23.1.2 . . . . . . . . 326


23.2 Distances (in Miles) Between Sites at Delos Resort in Example
23.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327

25.1 Number of Distinct Tours in a Complete Graph with n


Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
25.2 Distances (in Miles) Between Themed Parks in Future World
in Example 25.3.1 . . . . . . . . . . . . . . . . . . . . . . . . 355

A.1 Truth Table for Negation of a Proposition . . . . . . . . . . 421


A.2 Truth Table for Conjunction Operator on Propositions . . . 421
A.3 Truth Table for the Disjunction Operator on Propositions . 421
A.4 Truth Table for Conditional Statements . . . . . . . . . . . . 422
A.5 Proof That a Conditional Statement Is Logically Equivalent
to Its Contrapositive . . . . . . . . . . . . . . . . . . . . . . 423
A.6 Truth Table for the Biconditional . . . . . . . . . . . . . . . 423
List of Algorithms

7.2.1 Finding Additional Non-Degenerate LP Solutions Using


Solver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
8.2.1 Dakin’s Branch and Bound Algorithm for ILP. . . . . . . . 122
8.3.1 Gomory Cuts for ILP. . . . . . . . . . . . . . . . . . . . . . 131
13.2.1 Gradient Descent for Nonlinear Programming. . . . . . . . 181
19.3.1 Subgradient Descent for (19.13) . . . . . . . . . . . . . . . . 259
19.3.2 Subgradient Descent for Lasso. . . . . . . . . . . . . . . . . 261
22.3.1 The Dinitz-Edmonds-Karp-Ford-Fulkerson (DEKaFF)
Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
23.2.1 Kruskal’s Algorithm (1956) for Finding a Minimum Weight
Spanning Tree. . . . . . . . . . . . . . . . . . . . . . . . . . 328
23.2.2 Prim’s Method (1957). . . . . . . . . . . . . . . . . . . . . . 331
23.3.1 Dijkstra’s Algorithm to Obtain Shortest Paths. . . . . . . . 334
25.3.1 Nearest Neighbor Heuristic for Finding a (Local) Minimum
Weight Tour. . . . . . . . . . . . . . . . . . . . . . . . . . . 355
25.3.2 Closest Insertion Heuristic for Finding a (Local) Minimum
Weight Tour. . . . . . . . . . . . . . . . . . . . . . . . . . . 360
25.3.3 Cheapest Insertion Heuristic for Finding a (Local) Minimum
Weight Tour. . . . . . . . . . . . . . . . . . . . . . . . . . . 364

List of Notation

inf(S) infimum of S (greatest lower


bound) . . . . . . . . . . . . . 11
sup(S) supremum of S (least upper
bound) . . . . . . . . . . . . . 11
O(g) g is an asymptotic upper bound
(‘Big O notation’ ) . . . . . . 19
AT transpose of a matrix A . . . 37
Rm×n the collection of matrices with
entries from R with m rows
and n columns . . . . . . . . 37
||x|| magnitude of vector x . . . . 38
Aij ij-th minor matrix of A . . . . 41
A′ij ij-th cofactor of matrix A . . 42
det(A) determinant of matrix A . . . 42
Ai matrix formed by replacing ith
column of A with b in Ax = b 44
||v||1 ℓ1 vector norm . . . . . . . . 46
||v||2 ℓ2 or Euclidean vector norm . 46
||v||∞ ℓ∞ vector norm . . . . . . . . 46
||A||1 ℓ1 matrix norm . . . . . . . . 47
||A||F Frobenius matrix norm . . . . 47
||A||∞ ℓ∞ matrix norm . . . . . . . 47
κ(A) condition number of a matrix
A. . . . . . . . . . . . . . . . 50
Hn nth Hilbert matrix . . . . . . 51
span(v1 , v2 , . . . vn ) the set of all linear combina-
tions of {v1 , v2 , . . . vn } . . . 53
dim(V ) dimension of a vector space V 53
col(A) column space of a matrix A . 54
row(A) row space of a matrix A . . . 54
null(A) null space of a matrix A . . . 54
rank(A) rank of matrix A = dim(row(A))
= dim(col(A)) . . . . . . . . 55
nullity(A) the dimension of the null space
of A . . . . . . . . . . . . . . 55


GLn (Z) general linear group of matri-


ces . . . . . . . . . . . . . . . 59
Tn (x) nth order Taylor polynomial 141
Rn (x) remainder of Tn (x) . . . . . 142
fx = fx (x, y) = ∂f (x, y)/∂x = (∂/∂x)f (x, y) partial derivative of f(x,y) with
respect to x . . . . . . . . . 146
fy = fy (x, y) = ∂f (x, y)/∂y = (∂/∂y)f (x, y) partial derivative of f(x,y) with
respect to y . . . . . . . . . 146
∇f (x, y) gradient of f(x,y) . . . . . . . 147
Hf Hessian of f (x1 , x2 , . . . , xn ) . 147
L(S) linear hull (or linear span) of a
set S . . . . . . . . . . . . . . 210
aff (S) affine hull of a set S . . . . . 210
coni(S) conical hull of a set S . . . . 210
conv(S) convex hull of a set S . . . . 210
H(c, b) hyperplane . . . . . . . . . . 218
A+B sumset . . . . . . . . . . . . . 221
epi(f ) epigraph of f (x); i.e. all points
above the graph of f (x) . . . 235
hypo(f ) hypograph of f (x); i.e. all
points below the graph of f (x) 236
||x||p p norm . . . . . . . . . . . . 246
R++ the set of positive real numbers
(0, ∞) . . . . . . . . . . . . . 248
R+ the set of nonnegative real
numbers [0, ∞) . . . . . . . . 248
n! factorial function n! = n(n −
1)(n − 2) · · · 3 · 2 · 1 with 0! := 1 268
P (n, r) the number of r-permutations
from a set of n objects without
repetition . . . . . . . . . . . 270
C(n, r) the number of r-combinations
from a set of n objects without
repetition . . . . . . . . . . . 271
(n choose r) binomial coefficient = C(n,r) 274
(n choose n1, n2, . . . , nm) multinomial coefficient . . . . 277
⌈r⌉ ceiling of r; the smallest inte-
ger k ≥ r . . . . . . . . . . . 282
deg− (u) in-degree of a vertex u . . . . 287
deg+ (u) out-degree of a vertex u . . . 287
Pn path of order n . . . . . . . . 289
Cn cycle of order n . . . . . . . . 289
Kn complete graph of order n . . 291
Ks,t complete bipartite graph with
partite sets of order s and t . 292

κ(G) (vertex) connectivity of a graph


G. . . . . . . . . . . . . . . . 296
λ(G) edge connectivity of a graph G 298
⌊r⌋ floor of r; the greatest integer
k≤r. . . . . . . . . . . . . . 299
f + (u) total flow out of vertex u . . 304
f − (u) total flow into vertex u . . . . 304
val(f ) value of the flow f in a network 305
[U, Ū ] cut in a network . . . . . . . 305
cap(K) capacity of a cut K in a net-
work . . . . . . . . . . . . . . 306
f + (U ) total flow out of the vertices in
the set U . . . . . . . . . . . 306
f − (U ) total flow into the vertices in
the set U . . . . . . . . . . . 306
d(u, v) distance between vertices u
and v in a graph G . . . . . . 333
s∈S s is an element of S . . . . . . 376
∅ empty or null set . . . . . . . 377
A∪B union of sets A and B . . . . 377
A∩B intersection of sets A and B . 377
S c = S̄ complement of set S . . . . . 378
¬p negation of the proposition p 421
p∧q conjunction of the propositions
p and q . . . . . . . . . . . . 421
p∨q disjunction of the propositions
p and q . . . . . . . . . . . . 421
p→q the conditional statement p
implies q . . . . . . . . . . . . 422
p ⇐⇒ q biconditional (p → q) ∧ (q →
p) . . . . . . . . . . . . . . . 423
||x − y|| Euclidean distance . . . . . . 430
Nϵ (x) ϵ-neighborhood . . . . . . . . 430
int(R) interior of R . . . . . . . . . . 430
ext(R) exterior of R . . . . . . . . . 430
Part I

Preliminary Matters
1 Preamble

1.1 Introduction
As a subject, optimization can be admired because it reaches into many dif-
ferent fields. It borrows tools from statistics, computer science, calculus (anal-
ysis), numerical analysis, graph theory, and combinatorics as well as other
areas of mathematics. It has applications in economics, computer science, and
numerous other disciplines, as well as being incredibly useful outside academia
(this is an understatement!). Its techniques were at the heart of the first spam
filters, are used in self-driving cars, play a great role in machine learning and
can be used in such places as determining a batting order in a Major League
Baseball game. Additionally, it has seemingly limitless other applications in
business and industry. In short, knowledge of this subject offers an individual
both a very marketable skill set for a wealth of jobs as well as useful tools for
research in many academic disciplines.

1.2 Software
Even though many of the problems in this text rely on using a computer, I
have stayed away from emphasizing any one particular software package. Microsoft’s
Excel is most often used, as it is common in business, but Python and other
languages are considered. There are two reasons for this:
1. Software changes.
2. In a typical Introduction to Optimization class in the Mathematics Depart-
ment at the University of Pittsburgh, I get students from Mathematics,
Engineering, Economics, Computer Science, Statistics, and Business, and
these students come to the class with different computer experience. I do
assign problems that involve using a computer, but I never require a
particular software package, and this has been very rewarding, especially
during student presentations: we all get to experience the
Mathematics and Engineering students using MATLAB or Mathematica,
the Economics and Business majors using Excel, and the Computer Sci-
ence students writing their own programs in Java or Python. In short, we
all see different ways to do similar problems and are each richer for seeing
the different processes.
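
To give a concrete sense of such an exercise, here is a minimal sketch (not
from the text) in Python using SciPy’s linprog routine; the small linear
program it solves is invented purely for illustration, and linear programming
itself is developed in Chapter 6. Any of the other tools mentioned above
could be used in its place.

# A hypothetical exercise: maximize P = 3*x1 + 2*x2
# subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, and x1, x2 >= 0.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective coefficients to maximize P.
c = [-3, -2]
A_ub = [[1, 1], [1, 3]]  # coefficient rows of the <= constraints
b_ub = [4, 6]            # right-hand sides of the <= constraints

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal point:", result.x)     # expected: [4. 0.]
print("maximum value:", -result.fun)  # expected: 12.0

An Excel user would enter the same objective and constraints into Solver;
the tool is interchangeable, while the underlying model is not.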

1.3 About This Book


1.3.1 Presentation
This book is a sampling of optimization techniques and only touches on the
surface of each. Most of the chapters in this book could be developed into
their own texts, but rather than be an expert exposition on a single topic,
my colleagues and I have sought to create a buffet from which students and
faculty may sample what they wish. Additionally, we have attempted to offer
a healthy combination of applications of techniques as well as some of the
underlying mathematics of the techniques. This latter goal may have caused
us to lose some readers: “Oh, no, this is going to be dry, boring, and soulless. I
just want to know how to solve problems”. If this is you, please be patient with
us and join us on our journey. We believe that even the smallest understanding
of the theory enables one to be a better problem solver and, more importantly,
provides one with the tools to effectively diagnose what is not working and
which direction to head when the technique fails to work in a particular setting.
Others will see this book as a poseur, pretending to be a proper mathematics
text. There is some truth to this assessment, and to those individuals I
recommend any one of the many excellent mathematical texts referenced in
this work. To the many that
may regard the majority of the presentations in this work as not proper or
formal enough, I quote a New Yorker article on the superb mathematician
Alexander Grothendieck:
“Grothendieck argued that mathematicians hide all of the discovery pro-
cess, and make it appear smooth and deductive. He said that, because of this,
the creative side of math is totally misunderstood.” [28]

1.3.2 Contents
The structure of the text is such that most chapters and even some sections
can be read independently. A refresher, for example, on Taylor’s Theorem for
multivariable functions or a crash course on matrix factorization can be easily
done by finding the appropriate section in the text.
This text grew out of notes for the class Mathematical Models for Consultants,
which I taught three times for the Tepper School of Business at Carnegie
Mellon University; Applied Optimization and Simulation, which I have taught
for the Katz College of Business Administration at the University of
Pittsburgh; and
my Introduction to Optimization class offered regularly in the Department
of Mathematics at the University of Pittsburgh. I have taught undergradu-
ate and graduate versions of the class more than a dozen times so far for
our department and have even had colleagues attend the course. In a typical
semester of my math class, I try to cover Linear Programming, Integer Linear
Programming, multiple Geometric Programming techniques, the Fundamental
Theorem of Linear Programming, transshipment problems, minimum-weight
spanning trees, shortest paths, and the Traveling Salesperson Problem as well
as some other topics based upon student interest. My goal for the math class
at Pitt is to provide a skill set so that our majors can get a job after gradu-
ating, but the course – as previously stated – has ended up attracting many
students with diverse backgrounds including Engineering, Computer Science,
Statistics, Economics, and Business. This mix of student interests explains the
structure of the text: a smorgasbord of topics gently introduced with deeper
matters addressed as needed. Students have appreciated sampling different
dishes from the buffet, and the ones that wanted to dig further were asked to
do a little research on their own and give a short presentation. Some extra
credit was used as the carrot to motivate them, but the true reward was much
greater. All of us in the audience were treated to learning something new,
but the student presenter’s true reward was the realization that they had the
ability to learn on their own (and to show an employer that they can give a
great presentation). The educator in me has found this to be the most rewarding
of all the classes I teach; it brings the same satisfaction as watching my
child happily ride a bike for the first time.
As such, I would change nothing about the structure of this text. It prob-
ably has too much mathematics for some and not enough mathematics for
others, but that is exactly where I want the text to be. I have taught college
mathematics for over 30 years and taught a wide range of courses at differ-
ent levels. I receive many thank you cards, but I get the most from students
that took this course; usually because of the job they have secured because
of what they learned in this class. I have also had many students continue to
graduate school in a wide range of areas, but none more than in some version
of Optimization. Numerous students have obtained interesting jobs as well,
including working on self-driving cars, doing analysis for a professional sports
team, and being a contributing member of the discussion on how to distribute
the first COVID vaccine during the pandemic. In short, this approach to the
material has worked very well, and given the subject’s utility, it is the right
time for an undergraduate-level survey text on the material. I hope you enjoy
the journey through the material as much as I and my students have.

1.4 One-Semester Course Material


Much of this book is not covered in a typical one-semester course. In the
one-semester course that I teach, I cover

• Linear Programming (Chapter 6)


• Integer Linear Programming (Chapter 8)
• Geometric Programming, specifically Chapters 11, 14, and 13
• affine, conical, and convex sets as well as the Fundamental Theorem of
Linear Programming (Chapters 16 and 17)
• an introduction to Graph Theory (the first two sections of Chapter 21)
• minimum weight spanning trees, shortest paths, networks, and transship-
ment problems, as well as the Traveling Salesperson Problem (Chapters
23, 24, and 25)
and we will mention
• complexity (Chapter 3) and
• sensitivity analysis (Chapter 7)
and spend some time in these chapters if we need to. We also explore other
topics if time permits.
I never cover the Algebra, Matrix Factorization, Calculus, or Combina-
torics chapters, but often many of my students need a refresher of these top-
ics or a short introduction. Note also that matrix factorization and
matrix norms other than the Euclidean norm are not used later in the text,
though they are important matters to consider if one wants to discuss using a
computer to solve important problems. Sometimes these topics come up
in class discussions, and many students need to know this material for the
analytical work they do after taking the class. In a very real sense this
book is less a textbook and more a handbook of optimization techniques;
the mathematics, computer science, and statistics behind them; and essential
background material. As my class draws not just math majors but students
from across campus, having these necessary review materials at hand has
been key to many of my students’ success.

1.5 Acknowledgments
This book would not have been possible without significant contributions from
talented professionals with whom I have had the privilege to work in some ca-
pacity. I am quite pleased to share that most on this list are former optimiza-
tion students, and all of the contributors have written from their professional
expertise. Arranged by chapter, contributors of all or most of the material in
particular chapters to the text are:
• Complexity Classes – Graham Zug (Software Engineer, Longpath Tech-
nologies)
• Genetic Algorithms – Andy Walsh, Ph.D. (Chief Science Officer, Health
Monitoring)
• Convex Optimization – Jourdain Lamperski, Ph.D. (Pitt Department of
Industrial Engineering)
• Traveling Salesperson Problem – Corinne Brucato Bauman (Assistant Pro-
fessor, Allegheny Campus, Community College of Allegheny County)
• Probability – Joseph Datz (former analyst for the Pittsburgh Pirates; In-
stitute Grant, Research, and Assessment Coordinator, Marywood Univer-
sity) and Joseph “Nico” Gabriel (Research Analyst 2, Skaggs School of
Pharmacy and Pharmaceutical Sciences at the University of California, San
Diego).
• Regression Analysis via Least Squares – John McKay (Research Scientist
at Amazon Transport Science) with contributions from Suren Jayasuria
(Assistant Professor Arts, Media and Engineering Departments, Arizona
State University)
• Forecasting – Joseph “Nico” Gabriel (Research Analyst 2, Skaggs School
of Pharmacy and Pharmaceutical Sciences at the University of California,
San Diego).
• Intro to Machine Learning – Suren Jayasuria, Ph.D. (Assistant Professor
Arts, Media and Engineering Departments, Arizona State University) with
contributions from John McKay (Research Scientist at Amazon Transport
Science).
Memphis student and former AMS Senior Editor Avery Carr shared his
expert LATEX editing skills and saved me from agonizing over details I would
have to work to understand. Avery also served as a guest editor for the won-
derful 100th Anniversary ΠΜΕ Problems Edition. Pitt student Evan
Hyzer also edited and solved many LATEX mysteries for me. Special apprecia-
tion is to be extended to Mohd Shabir Sharif for his extensive edits increasing
the quality of the text.
Distinguished University Professor Tom Hales of the Department of Math-
ematics at the University of Pittsburgh was kind enough to read an early
version of the text and offer incredibly helpful feedback.
Additionally, Joseph Datz contributed a large number of wonderful exer-
cises throughout the various topics in the text. Former student Graham Zug
contributed homework problems and also kindly proofread. The contributors
are all former students except Andy Walsh and John McKay (who was a
student at Pitt, but never took any of my classes).
My wife and children are to be thanked for tending to home matters and
allowing me to disappear to my basement lair undisturbed for hours on end,
many days and nights in a row.
Those deserving my greatest appreciation, though, are the many students
that endured the evolution of the notes that eventually became this book.
They also had to withstand all the classes where I often poorly pieced together
seemingly unrelated material. I have been blessed with terrific students who
have been open to discussion and exploration as well as being eager to learn.
They have molded this material, contributed to its content, and made the
journey to this point most enjoyable. Thank you all.

– Jeffrey Paul Wheeler, University of Pittsburgh


2
The Language of Optimization

2.1 Basic Terms Defined


It is most likely the case that anyone reading a book such as this is famil-
iar with basic definitions of what we are studying, yet it is worthwhile in a
mathematical context to offer formal definitions.
Definition 2.1.1 (Maximizer, Maximum). Let f : D → R where the domain
D of f is some subset of the real numbers.1 A point x∗ in D is said to be a
• global (or absolute) maximizer of f (x) over D if f (x∗ ) ≥ f (x) for all
x ∈ D;
• strict global (or strict absolute) maximizer of f (x) over D if f (x∗ ) > f (x)
for all x ∈ D with x ̸= x∗ ;
• local (or relative) maximizer of f (x) if there exists some positive number
ϵ such that f (x∗ ) ≥ f (x) for all x, where x∗ − ϵ < x < x∗ + ϵ (i.e. in some
neighborhood of x∗ );
• strict local (or strict relative) maximizer of f (x) if there exists some pos-
itive number ϵ such that f (x∗ ) > f (x) for all x, where x∗ − ϵ < x < x∗ + ϵ
with x ≠ x∗ .
The f (x∗ ) in the above is, respectively, the global (or absolute) maximum,
strict global (or absolute) maximum, local (or relative) maximum, or strict
local (or relative) maximum of f (x) over D.
It is important to understand the difference between a maximizer and a
maximum.
Highlight 2.1.2. The maximum f (x∗ ) is the optimal value of f which, if
the optimal value exists, is unique. The maximizer x∗ is the location of the
optimal value, which is not necessarily unique.
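To make the distinction concrete, here is a short Python sketch (our illustration;
the function f and the search grid are arbitrary choices, not anything prescribed
by the text) that reports both quantities for f (x) = −(x − 2)² + 5 on D = [0, 4]:

import numpy as np

# f attains its strict global maximum 5 at the unique maximizer x* = 2.
f = lambda x: -(x - 2)**2 + 5

xs = np.linspace(0, 4, 4001)                # a fine grid over the domain D = [0, 4]
values = f(xs)

print("maximum   =", values.max())          # the optimal value (unique if it exists)
print("maximizer =", xs[values.argmax()])   # where it occurs (need not be unique)

A function such as sin x would report the same maximum, 1, at many different
maximizers, which is exactly the distinction drawn above.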
A slight change in detail will lead to another important concept:
Definition 2.1.3 (Minimizer, Minimum). Let f : D → R where D ⊆ R. A
point x∗ in D is said to be a
1 It should be noted that we have no need to restrict ourselves to the reals and could
offer the definition in a more abstract field.

FIGURE 2.1
The graph of f (x) = 1/x, where x > 0.

• global (or absolute) minimizer of f (x) over D if f (x∗ ) ≤ f (x) for all
x ∈ D;
• strict global (or absolute) minimizer of f (x) over D if f (x∗ ) < f (x) for
all x ∈ D with x ̸= x∗ ;
• local (or relative) minimizer of f (x) if there exists some positive number
ϵ such that f (x∗ ) ≤ f (x) for all x, where x∗ − ϵ < x < x∗ + ϵ;
• strict local (or relative) minimizer of f (x) if there exists some positive
number ϵ such that f (x∗ ) < f (x) for all x, where x∗ − ϵ < x < x∗ + ϵ with
x≠ x∗ .
The f (x∗ ) in the above is, respectively, the global (or absolute) minimum,
strict global minimum, local (or relative) minimum, or strict local (or rela-
tive) minimum of f (x) over D.
Note that the plural form of maximum is maxima and that the plural form
of minimum is minima. Together the local and global maxima and minima of
a function f (x) are referred to as extreme values or extrema of f (x). A single
maximum or minimum of f (x) is called an extreme value or an extremum of
the function.
We also note that the stated definitions of maximum and minimum are for
functions of a single variable, but the definitions2 are the same for a function
of n variables except that D ⊆ Rⁿ and x∗ = ⟨x∗₁, . . . , x∗ₙ⟩ would replace x∗ .

2.2 When a Max or Min Is Not in the Set


Consider the function f (x) = 1/x where x > 0. Certainly f (x) is never 0 nor
is it ever negative (the graph of f (x) is in Figure 2.1); thus for all x > 0,
f (x) > m where m is any nonpositive real number. This leads to the following
collection of definitions:
2 There is some concern with how the ϵ interplays with the x₁, . . . , xₙ, but the overall
idea is the same.
Definition 2.2.1 (Upper and Lower Bounds). Let f be a function mapping
from a set D onto a set R where R is a subset of the real numbers. If there
exists a real number M such that f (x) ≤ M for all x in D, then f is said to
be bounded from above. Likewise, if there exists a real number m such that
f (x) ≥ m for all x in D, then f is said to be bounded from below. M is called
an upper bound of f whereas m is called a lower bound of f .
Example 2.2.2. The function f (x) = 1/x is bounded below by m = 0 as well
as by m = −1. The function is unbounded from above.
Note that if a function is both bounded above and bounded below, then
the function is said to be a bounded function; that is
Definition 2.2.3 (Bounded Function). If there exists a constant M such
that |f (x)| ≤ M for all x in the domain of f , then f is said to be a bounded
function. If no such M exists, then f is said to be unbounded.
Example 2.2.4. Since | sin x| ≤ 1 for any real number x, sin x is a bounded
function.
Let us now reconsider f in Example 2.2.2 and observe that f : D → R
where D = (0, ∞) = R (both the domain and the codomain are the positive
reals). As previously observed, f (x) > 0 for all x in D, but
0 is not in R. As we can find x that get us as arbitrarily close to 0 as we like,
f does not have a minimum, but 0 still plays a special role.
Definition 2.2.5 (Infimum, Supremum). Let S be a nonempty subset of R.
Then b is said to be the infimum of S if b ≤ s for all s ∈ S and b ≥ m for any
lower bound m of S. The infimum of a set is the greatest lower bound of the
set and is denoted b = inf(S). Likewise, a is said to be the supremum of S if
a ≥ s for all s ∈ S and a ≤ M for any upper bound M of S. The supremum
of a set is the least upper bound of the set and is denoted a = sup(S).
It will come as no surprise that the infimum is also often called the greatest
lower bound (glb), and the supremum is referred to as the least upper bound
(lub). When an infimum or supremum exists, it is unique. As well, the plural
of supremum is suprema and infimum has infima as its plural.
Example 2.2.6. For f in Example 2.2.2, min f does not exist, but for the
codomain R+ (the set of positive real numbers), inf R = 0.
Thus we see that the infimum (when it exists) can play a role similar to
a minimum when a minimum does not exist. An analogous statement can be
said of maxima and suprema.
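A quick numerical illustration (again ours, not the text’s) uses f (x) = 1/x from
Example 2.2.2: sampling f on ever larger pieces of its domain, the smallest value
found creeps toward the infimum 0 but never reaches it, reflecting that f has no
minimum.

f = lambda x: 1.0 / x

# Sample f at x = 1, 2, ..., b for ever larger b; the smallest value
# found is 1/b, which approaches the infimum 0 yet never equals it.
for b in (10, 1_000, 1_000_000):
    smallest = min(f(x) for x in range(1, b + 1))
    print(f"smallest sampled value on [1, {b}]: {smallest:.1e}")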

2.3 Solving an Optimization Problem


By “solving” algebraic equations like
2x² + 2x + 5 = 9   (2.1)

we mean “finding the particular values of x that satisfy equation 2.1” (they
are −2 and 1). In another circumstance, we may be interested in what a lower
bound of the polynomial 2x² + 2x + 5 is (this solution is 9/2 or anything
smaller).
But when solving an optimization problem, we always mean a little more
than just some numeric value. For example, consider the classic algebra prob-
lem of a farmer having 1000 feet of fence and wanting to know what is the
biggest area he can enclose with a rectangular pen for his livestock if he builds
the pen adjacent to his barn (big enough that he only needs fence on three
sides). If we label the side parallel to the barn y and the other two sides x,
then the mathematical model of our problem is

Maximize A = A(x, y) = xy (2.2)


Subject to y + 2x = 1000 (2.3)
with x, y ≥ 0. (2.4)
As our goal is to maximize the area, the function representing it, A(x, y),
is called the objective function. As well, the amount of available fence puts a
restriction on the problem, so the corresponding equation y + 2x = 1000 is
called a constraint. As well, we naturally have the nonnegativity constraints
that x, y ≥ 0.
Using the constraint to eliminate a variable, the problem simplifies to
Maximize A(x) = 1000x − 2x².   (2.5)
The maximum of this function is 125,000 square feet, and though our
farmer will appreciate this knowledge, it is certain he also would like to know
what dimensions he needs to make the pen in order to achieve having this
maximum area. Thus, by a solution to an optimization question, we do not
just mean the optimal value of the objective function but also the values
of the variables that give the extreme value. Thus we report the answer as
max A = 125,000 square feet, which occurs when x = 250 feet and y = 500
feet. We summarize the point of this example in the following Highlight:
Highlight 2.3.1. A solution to an optimization problem is
1. the optimal value of the objective function together with
2. all possible feasible values of the decision variable(s) that yield the optimal
objective value.
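Readers who like to check such answers by machine can do so in a few lines.
Here is one possible sketch using SciPy (a tool choice of ours; the text does not
mandate any particular package):

from scipy.optimize import minimize_scalar

area = lambda x: 1000 * x - 2 * x**2        # A(x) after eliminating y

# SciPy minimizes, so we minimize -A(x) to maximize A(x) over 0 <= x <= 500.
res = minimize_scalar(lambda x: -area(x), bounds=(0, 500), method="bounded")

x = res.x
y = 1000 - 2 * x                            # recover y from the constraint
print(f"x = {x:.0f} ft, y = {y:.0f} ft, max A = {area(x):,.0f} sq ft")

Note that the last line reports both parts of the solution: the optimal value and
the location at which it occurs.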

2.4 Algorithms and Heuristics


By an algorithm we mean a finite procedure applied to an input with well-
defined steps that are repeated to obtain a desired outcome. For example,
consider washing your hair. First you wet your hair, then you apply the sham-
poo and lather, and lastly you rinse. This process may be repeated as many
times as you wish to obtain the desired level of cleanliness (read your shampoo
bottle; it may have an algorithm written on it). In some sense, an algorithm
is a recipe that is repeated.
You may have noticed that we have not offered a formal definition of an
algorithm. We are going to avoid unnecessary formality and potential disputes
and not offer one all the while noting (modifying Justice Potter Stewart’s
words in Jacobellis v. Ohio): “I may not know what the definition of
an algorithm is, but I know one when I see it” (Justice Stewart was not
addressing algorithms; decency forbids me addressing the matter of that case).
It is worthwhile to note that the authoritative text on algorithms – Introduction to Algorithms
[11] by Thomas H. Cormen, Charles E. Leiserson, Ronald Rivest, and Clifford
Stein – as well does not define the term algorithm anywhere in its 1312 pages.
Algorithms have been around for a long time. Perhaps in grade school you
learned the Sieve of Eratosthenes (circa 3rd century BCE) to find primes.
Given a finite list of integers, one circles 2 and crosses out all other multiples
of 2. We then proceed to the next available integer, 3, keep it, and cross out
all other multiples of 3. We repeat until every integer in our list is circled or
crossed out, and what remains are the primes that were in our list of numbers.
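As a concrete rendering of that description, here is a brief Python sketch of the
sieve (the limit of 30 is an arbitrary choice of ours):

def sieve_of_eratosthenes(limit):
    # Return the primes up to limit by crossing out multiples.
    keep = [True] * (limit + 1)
    keep[0] = keep[1] = False               # 0 and 1 are not prime
    for p in range(2, int(limit**0.5) + 1):
        if keep[p]:                         # p was never crossed out: it is prime
            for m in range(p * p, limit + 1, p):
                keep[m] = False             # cross out the other multiples of p
    return [n for n in range(2, limit + 1) if keep[n]]

print(sieve_of_eratosthenes(30))            # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]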
Algorithms will play a major role in the techniques we study in iterative
methods and combinatorial optimization.
The word algorithm has a fascinating origin. It comes from the La-
tinized (“Algorithmi”) version of the Persian name Muḥammad ibn Mūsā
al-Khwārizmī whose early 9th century CE book Al-kitāb al-mukhtaṣar fī ḥisāb
al-ğabr wa’l-muqābala (“The Compendious Book on Calculation by Comple-
tion and Balancing”) is the first known systematic treatment of algebra as
an independent subject. Unlike other early works presenting specific problems
and their solution, Al-Khwārizmī’s work presents general solution techniques
for first- and second-order equations, including completing the square. Al-
Khwārizmī can be regarded as the father of algebra, and it is from his text
we get the term “algebra” (interestingly, it is also from his name the Spanish
and Portuguese get their words for “digit”; see [64]).
Algorithms may produce a globally optimal solution, as we will see in
the Simplex Method to solve Linear Programming problems, as well as in
Kruskal’s Algorithm or Prim’s Method to find minimum weight spanning trees
in a graph. On the other hand, an algorithm may not give a solution but under
the right conditions give a good approximation as in Newton’s Method.
A heuristic is a slightly different monster. A dictionary from before the
days of everyday people being familiar with computers would report that
“heuristic” is an adjective meaning “enabling a person to discover or learn
something for themselves” [15] or “by trial and error” [16]. These days, the
word is also regarded as a noun and is most likely shortened from “a heuristic
method”. When using it as a noun, we mean by heuristic a technique that
is employed when no method of obtaining a solution (either global or local)
is known or a known technique takes too long. It is, in a very true sense, an
“educated guess”. Consider the Traveling Salesperson Problem (TSP) which
is introduced in Chapter 25. A salesperson needs to visit a collection of cities
and would like to know how to plan her route to minimize distance driven.
Unfortunately, there does not yet exist a deterministic-polynomial time al-
gorithm to solve this3 , nor is it known that it is impossible for one to exist
(P = NP anyone?), so she can instead do what seems like a good idea: drive
to the nearest city and when done with her business there, drive to the nearest
city not yet visited, etc. (this is the Nearest Neighbor Heuristic 4 that we will
see later).
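The few lines of Python below sketch the idea (the city coordinates are made up
and, being a heuristic, the tour returned is only an educated guess, not a
guaranteed optimum):

import math

def nearest_neighbor_tour(cities, start=0):
    # Greedy tour: repeatedly drive to the closest city not yet visited.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        nearest = min(unvisited, key=lambda j: math.dist(here, cities[j]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]   # made-up coordinates
print(nearest_neighbor_tour(cities))                # [0, 1, 2, 4, 3]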

2.5 Runtime of an Algorithm or a Heuristic


A very important matter we will need to consider as we solve our problems is
how long it will take a computer to do the work for us. This can be measured
in different ways, either time elapsed or the number of operations a computer
must perform. Seldom do we use time as the standard in this regard as pro-
cessor speeds vary and get faster. The standard is to count operations the
computer must do, but even this is not precise as sometimes we may count
only arithmetic operations performed, but other times we also include calls
to memory, etc. This apparent discrepancy is not a large concern as our goal
when determining the runtime or computational complexity, or simply com-
plexity, of an algorithm, heuristic, or computer program is to approximate
the amount of effort a computer must put forth. The purpose of these calcu-
lations is to compare the runtime efficiency of a given program, algorithm, or
heuristic to other known techniques.
Complexity is worthy of its own chapter and is addressed in Chapter 3.

2.6 For Further Study


Parts of these texts have excellent presentations on what we have considered
and can be used to deepen one’s understanding of the material presented in
this chapter:
3 A brute force algorithm that tests all the paths will give the correct answer and
terminate in a finite amount of time. Unfortunately, there are (n − 1)!/2 possible tours
(routes) on n cities, so with 10 cities there are 181,440 possible tours and 20 cities have
60,822,550,204,416,000 possible tours. Hence, though a brute force approach works, we may
not live long enough to see the end of the algorithm.
4 We are referencing specifically the algorithm for “solving” the TSP and not the
unrelated algorithm in Machine Learning.


• Introduction to Algorithms, 3rd edition, Thomas H. Cormen, Charles
E. Leiserson, Ronald L. Rivest, and Clifford Stein; MIT Press (2009).
(This is the go-to text for many experts when it comes to the study of
algorithms.)
• Graphs, Algorithms, and Optimization, 2nd edition, William Kocay, Don-
ald L. Kreher, CRC Press (2017)
• Mathematical Programming An Introduction to Optimization, Melvyn
Jeter, CRC Press (1986)
• The Mathematics of Nonlinear Programming, A.L. Peressini, F.E. Sullivan,
J.J. Uhl Jr., Springer (1991)
This list is not, of course, an exhaustive list of excellent sources for a
general overview of optimization.

2.7 Keywords
(strict) global or absolute maximizer/minimizer, (strict) local or relative max-
imizer/minimizer, maximum, minimum, infimum, supremum, solution to an
optimization problem, algorithm, heuristic, runtime, (computational) com-
plexity.

2.8 Exercises
Exercise 2.1. State the maximum, minimum, infimum, and supremum (if
they exist) of each of the following sets:
i) A = {8, 6, 7, 5, 3, 0, 9},
ii) B = [a, b), where a, b ∈ R,
iii) C = the range of f (x) = 1/(1 − x), where x ̸= 1,
iv) D = the range of g(x) = 1/(1 − x)², where x ≠ 1,
v) E = {1 + (−1)ⁿ/n}, where n is a positive integer,
vi) F = the set of prime numbers.
Exercise 2.2. Let f : Rⁿ → R and x∗ = ⟨x∗₁, . . . , x∗ₙ⟩ ∈ Rⁿ. Show f (x∗ ) is a
maximum of f if and only if −f (x∗ ) is a minimum of −f .
Exercise 2.3. Suppose s1 and s2 are suprema of some set S ⊂ R. Prove
s1 = s2 , thus establishing that the supremum of a set is unique (obviously, a
very similar proof shows that, if it exists, the infimum of a set is also unique).
Exercise 2.4. Show if S ⊂ R is a nonempty, closed, and bounded set, then
sup(S) and inf(S) both belong to S.

Exercise 2.5. Let f : (0, ∞) → R by x ↦ ln x (i.e., f (x) = ln x). Prove
that f is a monotonically increasing continuous bijection. [Note: this exercise
assumes some familiarity with topics in Calculus/Elementary Real Analysis.]
Exercise 2.6. Let f be a positive-valued function; that is, its image is a subset
of (0, ∞) with D(f ) ⊆ R. Prove that f (x∗ ) = max_{x∈D(f )} {f (x)} if and only if
ln f (x∗ ) = max_{x∈D(f )} {ln f (x)} (i.e. the max of f and ln f occur at the same
locations). You may use Exercise 2.5.
3
Computational Complexity

3.1 Arithmetic Complexity


Many optimization techniques rely on algorithms – especially using a
computer – and, as such, it is natural to consider how long an iterative pro-
cess may take. As processors’ speeds vary and technology improves, time is
not a good candidate for a metric on “how long”. Instead, one approach is to
count the number of operations necessary to complete the algorithm. This is
illustrated in the following example:
Consider using Gauss-Jordan elimination to solve a system of n equations
in n variables as in the following 3 × 3 case with variables x₁, x₂, and x₃ (for
a refresher on this technique, see Section 4.2).

   
[ −2   3   5 |   7 ]   2R1+R2→R2   [ −2   3   5 |  7 ]
[  4  −3  −8 | −14 ]  ──────────→  [  0   3   2 |  0 ]   (3.1)
[  6   0  −7 | −15 ]   3R1+R3→R3   [  0   9   8 |  6 ]

                       −3R2+R3→R3  [ −2   3   5 |  7 ]
                      ──────────→  [  0   3   2 |  0 ]   (3.2)
                                   [  0   0   2 |  6 ]

We may continue row operations to get the matrix in reduced row echelon
form, but this is computationally expensive1 , so we instead back substitute:

2x₃ = 6, so x₃ = 3;   (3.3)
3x₂ + 2(3) = 0, thus x₂ = −2; and   (3.4)
−2x₁ + 3(−2) + 5(3) = 7, hence x₁ = 1.   (3.5)
The process of reducing the matrix but stopping short of reaching reduced row
echelon form and using back substitution is usually referred to as Gaussian
elimination.
1 A good lesson to carry with us as we explore the topics in this text is that it is not
always best for a computer to do a problem the same way you and I would solve it on paper.
This matter is briefly discussed at the beginning of Section 5.
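For the curious, here is a compact Python sketch of the procedure just described
(elimination followed by back substitution), run on the system above. It is a bare
illustration of ours: no pivoting is done, so a zero on the diagonal would break it,
and production code would at least add partial pivoting.

import numpy as np

def gaussian_elimination(A, b):
    # Solve Ax = b: forward elimination, then back substitution (no pivoting).
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):                      # eliminate entries below row k
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # back substitute, bottom row up
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[-2, 3, 5], [4, -3, -8], [6, 0, -7]])
b = np.array([7, -14, -15])
print(gaussian_elimination(A, b))               # [ 1. -2.  3.]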

Counting operations at each step of the Gaussian elimination in our
example on 3 variables we have:
step   multiplications   additions   process
3.1    2(3 + 1)          2(3 + 1)    elimination
3.2    1(2 + 1)          1(2 + 1)    elimination
3.3    1                 0           back substitution
3.4    2                 1           back substitution
3.5    3                 2           back substitution
If we consider a system with n variables, then the total number of opera-
tions in Gaussian elimination, G(n), is
G(n) := # operations
      = # elim mult + # elim add + # back sub mult + # back sub add   (3.6)
      = Σ_{i=1}^{n} (i − 1)(i + 1) + Σ_{i=1}^{n} (i − 1)(i + 1) + Σ_{i=1}^{n} i + Σ_{i=1}^{n} (i − 1)   (3.7)
      = 2 Σ_{i=1}^{n} (i² − 1) + 2 Σ_{i=1}^{n} i − Σ_{i=1}^{n} 1   (3.8)
      = 2 · [n(n + 1)(2n + 1)/6] − 2n + n(n + 1)/2 − n   (3.9)
      = [(4n³ + 6n² + 2n) − 12n + (3n² + 3n) − 6n] / 6   (3.10)
      = (4n³ + 9n² − 13n)/6, or roughly 2n³/3.   (3.11)
For growing values of n we have
n        # operations          (2/3)n³                    % error
1        0                     0.666666                   33.3333
2        7                     5.333333                   23.8095
3        25                    18.000000                  28.0000
4        58                    42.666666                  26.4367
5        110                   83.333333                  24.2424
10       795                   666.666666                 16.1425
20       5890                  5333.333333                9.4510
30       19285                 18000.000000               6.6632
40       44980                 42666.666666               5.1430
50       86975                 83333.333333               4.1870
100      681450                666666.666666              2.1693
500      83707250              83333333.333333            0.4466
10³      668164500             666666666.666666           0.2241
10⁴      666816645000          666666666666.666666        0.0224
10⁵      666681666450000       666666666666666.666666     0.0022
10⁶      666668166664500000    666666666666666666.6666    0.0002
The polynomial in 3.11 gives the number of arithmetic operations required
to solve a system of n equations in n unknowns. As we see in the table, as
n grows large, (4n³ + 9n² − 13n)/6 is approximated nicely by 2n³/3; thus we
say that the arithmetic complexity of Gaussian elimination is of the order 2n³/3.
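The table above is easy to regenerate; the following throwaway Python sketch
(ours) prints the exact count from 3.11 beside the 2n³/3 approximation:

def G(n):
    # Exact arithmetic operation count from equation 3.11.
    return (4 * n**3 + 9 * n**2 - 13 * n) / 6

for n in (2, 5, 10, 100, 10**3, 10**6):
    approx = 2 * n**3 / 3
    error = 100 * (G(n) - approx) / G(n)
    print(f"n = {n:>9,}: G(n) = {G(n):,.0f}, 2n^3/3 = {approx:,.2f}, % error = {error:.4f}")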
It is important to note that there is much more a computer is doing than
just arithmetic operations when it does a calculation. One very important
process we have ignored in our example is calls to memory and these can
be very expensive. Including all that is involved in memory makes a runtime
assessment much more difficult and often this component is ignored. The
purpose of these calculations is not to get a precise measurement of how long
it will take a computer to complete an algorithm, but rather to get a rough
idea of all that is involved so that we can compare the algorithm to other
techniques that accomplish the same task and therefore have some means to
compare which is more efficient. A thorough treatment of all this can be found
in the excellent textbook [11].

3.2 Asymptotic Notation


As we saw in the example in the previous section, 2n³/3 is a very good approxi-
mation for (4n³ + 9n² − 13n)/6 as n grows large. This agrees with our intuition that as
n gets big, the only term that really matters in the polynomial is the leading
term. This idea is encapsulated in asymptotic notation (think of “asymptotic”
as a synonym for “long-run behavior”) and, in particular for our purposes, big
O notation.
Definition 3.2.1 (Big O Notation). Let g be a real-valued function (though
this definition also holds for complex-valued functions). Then

O(g) := {f | there exist positive constants C, N such that 0 ≤ |f (x)| ≤ C|g(x)|
for all x ≥ N }.

Thus O(g) is a family of functions F for which a constant times |g(x)| is
eventually an upper bound for all f ∈ F . More formally, f being O(g) means
that, as long as g(x) ≠ 0 and the limit exists, lim_{x→∞} |f (x)/g(x)| = C or 0 (if
g is too big), where C is some positive constant.
Example 3.2.2. Show that (4n³ + 9n² − 13n)/6 is O(2n³/3) where n ∈ N.

Solution. For n ≥ 1 (thus N = 1),

(4n³ + 9n² − 13n)/6 ≤ 4n³/6 + 9n²/6 + 13n/6   by the Triangle Inequality (Theorem B.2.3)   (3.12)
                    ≤ 4n³/6 + 9n³/6 + 13n³/6   since n ≥ 1   (3.13)
                    = 26n³/6   (3.14)
                    = (13/2) · (2n³/3),   (3.15)

establishing (4n³ + 9n² − 13n)/6 is O(2n³/3) where C = 13/2. Notice that

lim_{n→∞} [(4n³ + 9n² − 13n)/6] / (2n³/3) = 1.   (3.16)
lim / =1 (3.16)
n→∞ 6 3

Regarding our work in Example 3.2.2 one usually would not include the
3 2
−n
constant 2/3 but rather report the answer as 4n +9n 6 is O(n3 ). This is,
of course, because the constant does not matter in big O notation. Some
Numerical Analysis texts use this problem as an example, though, and include
the constant 2/3 when reporting the approximate number of 2 . We have kept
the constant to be consistent with those texts and though technically correct,
including the constant in the O(·) can viewed as bad form.
It is important to realize that Big O notation gives an asymptotic (long
run) upper bound on a function. It should also be noted that when using big
O notation, O(g(x)) is a set and, as such, one should write “h(x) ∈ O(g(x))”.
Note, though, that it is standard practice to abuse the notation and state
“h(x) = O(g(x))” or “h(x) is O(g(x))”.
One further observation before some examples. We have shown that the
number of arithmetic operations in performing Gaussian elimination to solve
3 2
−n
a linear system in n variables is G(n) = 4n +9n6 and that this function is
2 2
O( 3 n ). Furthering our work in Example 3.2.2 by picking up in 3.15 we have

7 2n3 7
= n3 (3.17)
2 3 3
< n4 for n ≥ 3. (3.18)

2 The reason for this is that using Cholesky Decomposition (Chapter 5) to solve a system
of linear equations is O(n³/3); i.e. twice as fast as Gaussian elimination.
Thus, not only is G(n) = O((2/3)n³), but also G(n) = O(n⁴). In fact,

Observation 3.2.3. Let x be a positive real number and suppose f (x) is
O(xᵏ). Then for any l > k, f (x) is O(xˡ).
We now consider a few more important examples before moving on.
Example 3.2.4. Let n ∈ Z⁺. Then

1 + 2 + 3 + · · · + n ≤ n + n + n + · · · + n = n²   (n terms in each sum)   (3.19)

and taking C = N = 1 we see that the sum of the first n positive integers is
O(n²).
Example 3.2.5. Let n ∈ Z⁺. Then

n! := n(n − 1)(n − 2) · · · 3 · 2 · 1 ≤ n · n · n · · · n = nⁿ   (n factors)   (3.20)

and taking C = N = 1 we see that n! is O(nⁿ).


Example 3.2.6. Show that for n ∈ N, f (n) = nᵏ⁺¹ is not O(nᵏ) for any
nonnegative integer k.
Solution. Let us assume for contradiction that the statement is true, namely
there exist positive constants C and N such that nᵏ⁺¹ ≤ Cnᵏ for all n ≥ N .
Thus for all n ≥ N , n ≤ C, which is absurd, since n is a natural number and
therefore unbounded. Thus nᵏ⁺¹ cannot be O(nᵏ). ■
The growth of basic “orders” of functions is shown in Figure 3.1. Note that
though n! is defined for nonnegative integers, Exercises 9.1 and 9.2 show how
to extend the factorial function to the real numbers.
Before concluding this section, we mention that other asymptotic notation
exists, for example: o(g(x)), ω(g(x)), Ω(g(x)), Θ(g(x)), etc., but we do not
consider them here3 .

3.3 Intractability
Some problems we will encounter have solutions that can be reached in
theory but take too much time in practice; such problems are said to be intractable. Con-
versely, any problem that can be solved in practice is said to be tractable; that
is, “easily worked” or “easily handled or controlled” [16]. The bounds between
3 The interested reader is encouraged to read the appropriate sections in [11] or [48].
FIGURE 3.1
The growth of functions. (Curves shown, from slowest to fastest growing:
c = O(1), log(x), x, x log(x), x², 2ˣ, and x! = O(xˣ).)
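A plot along the lines of Figure 3.1 can be reproduced in a few lines of matplotlib
(a sketch of ours; the axis ranges are arbitrary, and the gamma function is used
to draw x! between the integers):

import numpy as np
import matplotlib.pyplot as plt
from scipy.special import gamma

x = np.linspace(1.0, 4.5, 200)
curves = {
    "c = O(1)": np.ones_like(x),
    "log(x)": np.log(x),
    "x": x,
    "x log(x)": x * np.log(x),
    "x^2": x**2,
    "2^x": 2**x,
    "x! = O(x^x)": gamma(x + 1),   # gamma(x + 1) extends x! to real x
}
for label, y in curves.items():
    plt.plot(x, y, label=label)
plt.ylim(0, 80)                    # match the figure's vertical range
plt.legend()
plt.show()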

these two are not clearly defined and depend on the situation. Though the dis-
cipline lacks a precise definition of both tractable and intractable, their usage
is standard and necessary for many situations encountered in Optimization.
For an example, let us revisit using Gauss-Jordan elimination as was con-
sidered in Section 3.1. Oak Ridge National Laboratory unveiled in 2018 its
supercomputer Summit capable of 122.3 petaflops (122.3 × 10¹⁵) calculations
per second. Without worrying about the details, let us assume a good PC can
do 100,000,000,000 = 10¹¹ calculations per second (this is a little generous).
By our work in Section 3.1, to perform Gaussian elimination on a matrix with
10⁶ rows (i.e. a system of equations with 10⁶ variables) it would take Summit

(2/3)(10⁶)³ / (122.3 × 10¹⁵) ≈ 5.45 seconds.   (3.21)

On a good PC this would take

[(2/3)(10⁶)³ / 10¹¹] / 86400 seconds per day ≈ 77 days.   (3.22)

Note that these calculations are not exact as we have not considered calls to
memory, etc., but they do illustrate the point.
Spending 77 days to solve a problem is a nuisance, but not an insur-
mountable situation. Depending on the practical setting, one would have to decide if
this amount of time makes the problem intractable or not. But Gauss-Jordan
elimination is a polynomial time algorithm, so to better illustrate this point
let us now assume we have a program that runs in exponential time; say one
that has as its runtime 2ⁿ. For n = 100 (considerably less than 1,000,000) this
program would run on Summit for

[2¹⁰⁰ / (122.3 × 10¹⁵)] / 31536000 seconds per year ≈ 3.3 × 10⁵ years!   (3.23)

We will not even consider how long this would take on a good PC. If we
consider a system with n = 10³ variables, the runtime on Summit becomes
2.7 × 10²⁷⁶ years which is 2 × 10²⁶⁶ times the age of the universe.
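These back-of-the-envelope figures are quick to verify; the small Python sketch
below (ours, using the machine speeds assumed above) makes the
polynomial-versus-exponential contrast stark:

SUMMIT_OPS_PER_SEC = 122.3e15
SECONDS_PER_YEAR = 31_536_000

poly = (2 / 3) * (10**6) ** 3      # Gaussian elimination with n = 10^6
expo = 2**100                      # an exponential-time program with n = 100

print(f"polynomial:  {poly / SUMMIT_OPS_PER_SEC:.2f} seconds on Summit")
print(f"exponential: {expo / SUMMIT_OPS_PER_SEC / SECONDS_PER_YEAR:,.0f} years on Summit")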
We will close this section by noting that most computer security depends
on intractability. The most used public key encryption scheme is known as
RSA encryption. This encryption scheme uses a 400-digit number that is
known to be the product of two primes, and it works well since currently
known factoring algorithms for a number this large are intractable.4 The intractability
of this problem will most likely change with quantum computing.

3.4 Complexity Classes


3.4.1 Introduction
It would be helpful to have a metric by which to classify the computational
difficulty of a problem. One such tool is the complexity class. A complexity
class is a set of problems that can be solved using some model of computation
(often this model of computation is given a limited amount of space, time, or
some other resource to work out the problem). A model of computation is any
method of computing an output given an input.
One of the most useful models of computation is the Turing machine.
For the sake of our discussion we can consider a “Turing machine” as any
algorithm that follows the following method:

0. Initialize an infinite string of “blank symbols”. Each symbol can be referred
to by its position, starting at position 0. This string of symbols is called
the tape of the Turing machine.
1. Write a given finite string of symbols, s, to the tape starting at position
0. Each symbol in s must be taken from a given finite set of symbols,
A, known as the alphabet of the Turing machine. A contains the blank
symbol. We often restrict s so that it cannot contain the blank symbol.
2. Choose an arbitrary non-negative integer k and read the kth symbol of the
tape. This step is referred to as “moving the head of the Turing Machine
to position k”.
4 The nature of primes is also involved here. Although they are the building blocks of all
integers, we know little about them, especially how many there are in (large) intervals and
where they reside.
Random documents with unrelated
content Scribd suggests to you:
8.—Beside No. 7, but more in the center of the back of the head.
Whenever this area is properly developed, it shows that the
possessor would make an admirable husband or wife. He or she
would be devoted, loyal and attentive.
If the area is over-developed, the possessor has a jealous
disposition; if under-developed, he or she is fickle and apt to flirt
with others.
9.—Beside No. 8, in the center of the back of the head, low down.
Should this area be well developed, it shows that the possessor has
a proper love and regard for children and that he thinks no person
has experienced the fullest joys of life who has not become a parent.
If this area is over-developed, the possessor thinks so much of
children that he spoils them; if it is under-developed, he is of the
type that "cannot stand them at any price."
HOW ASTROLOGY DECIDES YOUR
DESTINY
Astrology is one of the oldest sciences in the world. It is said to have
originated with the Egyptians, almost at the very beginning of time.
Indeed, it is almost impossible to trace a period when this science
was not practiced.
There is nothing new under the sun, and its close followers will
scarcely allow any errors in its deductions. They go so far as to
declare it to be an exact science, a term which means that
everything can be reasoned out and proved; nothing is left to
guesswork.
Such sciences are Mathematics, Algebra, and Geometry. We need
not believe that Astrology is all this, but certainly some very startling
and accurate predictions have been made by astrologers.
However, as in all other methods of fortunetelling attempted by us
mortals, it is far from infallible. So long as we do not take it to be
exact and sure, we shall get plenty of amusement and interest from
its study, with the exciting feeling all the time at the back of our
minds that "it might come true."
Here is a list giving you the names and meanings given to planets by
astrologers.

Name. Approximate meaning given by Astrologers.

Mars. Strength.
Venus. Beauty.
Mercury. Capacity for adapting oneself.
Uranus. Improvement.
Sun. Life.
Jupiter. Freedom and growth.
Saturn. Diminished—shrinking—lack of growth.
Neptune. Able to receive—receptive.
Earth. Physical—not spiritual.
The Moon. Feeling.

The main idea at the back of astrology is that the planets (or starry
bodies which revolve round the sun) each have a strong and varying
influence upon the minds of human beings.
THE ZODIAC.—Of course when the planets revolve round the sun
they travel through a course or path. The Zodiac is the name given
by astronomers to the boundary which encloses this course or path
in the sky.
The signs of the Zodiac are the spaces into which the Zodiac is
divided.
Here are the signs of the Zodiac arranged in order to show which
signs are opposite to each other.

Aries. facing Libra.


Taurus. Scorpio.
Gemini. Sagittarius.
Cancer. Capricorn.
Leo. Aquarius.
Virgo. Pisces.

Now each sign has a planet which is said to rule it; this is called the
ruling planet. It is from the nature of this planet that the probable
character and fate of the individual are told. It is not necessary to
know the whys and wherefores of this, if you have not studied
astronomy it will only serve to muddle you, and if, on the other
hand, you do understand astronomy you will not need any
explanation. We will just say what does happen, and that will tell
you all you need in these first steps.
Well, we all know that the earth revolves upon its axis once in every
24 hours. Now, according to astronomers, this causes one of the
Zodiac signs to appear in the eastern sky, where it remains for two
hours. We have said that each sign has a planet ruling it, so the sign
that appears on the sky at the time of birth decides what planet that
person is born under or is influenced by.
Let us suppose for a moment that you were born when the sign
Libra was rising, as the saying is. The planet which rules Libra is
Venus, so the person born at that time would be a Venus type, i.e., a
person having the influence of Venus upon him.
In addition to the main ruling planet, astrologers will tell you that
there are other "neighboring" planets—we will call them neighboring
because it is a simple term—which also have their effect upon us.
Astrologers call this one planet being "in aspect" with another. For
instance, you might have the planet Mars in aspect with (or
influenced by) the planet Saturn; you would then be dealing with a
very strong character.
The qualities of Mars which give the fighter and the pushing type, or
in excess the bully, will be well steadied by the qualities of Saturn,
which by themselves give coldness and, in excess, lack of feeling.
The two together result in a character remarkable for its steadiness
combined with its never-wearying energy and good balance.
So you see, we seldom find pure types (i.e., qualities of Mars, or
other planets by themselves), and it is very fortunate that this is so;
we should get a very one-sided world if we did.
Now we come to that part of Astrology which really interests most
people; here will be shown the birth-dates for each month in the
year and the probable characters of persons born at that special
time. You may ask why the characters are given and why not the
fate or future of the person concerned. The reason is this: you can
be pretty sure that what you read of an individual's character will
give you a sound idea of what in all probability his future will be.
After all, the carving out of our lives is in our own hands. We are the
masters of our fate, or as the song has it, "Captain of our Soul."
However, if we believe astrologers, there is a way to tell the times of
our lives when matters should go smoothly or the reverse. The most
favorable times for speculating with money, starting in business, in
fact, the most and least favorable periods of our lives can, according
to astrology, be worked out by what is known as the Horoscope.
Now this Horoscope is in reality a chart of your life. The rocky waters
are shown, and the barrier reefs which each of us must avoid
through our life, so you will see a use in the study of astrology. It
would seem to be Nature's warning to us all of the necessity for
effort, effort and again effort.
Here are the birth dates and characteristics of persons born between
the dates mentioned. Since astrology is not infallible, do not take all
these characteristics too seriously.
You will notice that each date is taken from about the 20th of one
month to the 20th of the next month.

WHEN WERE YOU BORN?


Dec. 22nd to Jan. 20th.
People born during this period have considerable mental ability and a
keen business instinct. They are fond of the imaginative arts. They
are proud; they like their own way and they see that they get it.
Generally speaking, they are better fitted to lead than to follow
others.
However, they do not take kindly to changes of any kind, and are
annoyed by newfangled ideas. They do not want the advice of other
people and often resent it. They do not strike out in new directions
and they avoid taking risks. They lack "push."
To these people, we say:
Don't wait for opportunities—make them.
Don't let your pride persuade you to keep on the wrong road rather
than turn back.
Don't be afraid of admitting and correcting a mistake.
Don't run away from trouble; meet it with a bold front.

Jan. 21st to Feb. 19th.


People born during this period have a strong sense of duty. They
have a kindly disposition and are inclined to be affectionate. They
refuse to think ill of anyone until the bad qualities are proved. Being
straightforward themselves, they imagine everyone else is the same
and, on this account, they are likely to suffer some bitter
experiences.
However, they lack a proper regard for their own welfare. They are a
little too confiding and they are not adaptable. Once they make up
their minds on a matter, it is almost impossible to persuade them to
change it.
To these people we say:
Don't brood over troubles. Face the facts, fight them out, and then,
forget all about them.
Don't be guided by impulses.
Don't neglect the financial side of things, if you want to succeed.

Feb. 20th to March 20th.


People born during this period are just in their dealings, and would
not injure another willingly. Their code of honor is a strict one. They
are industrious and persistent. They endeavor to perform their share
in making the world a better and a happier place.
However, they are too cautious and do not take sufficient risks to
make life a complete success. Too often, they ask themselves
whether they should go ahead with a project and, while they are
hesitating, the opportune moment flies away.
To these people, we say:
Don't listen to the voice of despair.
Don't be downhearted, if you don't see, at first, the way to do a
thing.
Don't think in small things. Think large.

March 21st to April 19th.


People born during this period are thoughtful. They are artistic, are
fond of the fine arts, and like all that is beautiful. They are self-willed
and rebel when others try to drive them. They do not take much
notice of convention, and the way of the world means nothing to
them.
However, they are apt to shrink from disagreeable work, and
everything sordid disgusts them. They are too sensitive and take
offense too readily.
To these people, we say:
Don't set yourself against the world: you will lose if you do.
Don't tire of your task before it is done.
Don't be too thin-skinned.
Don't forget that it takes all sorts of people to make up the world.

April 20th to May 20th.


People born during this period possess a warm and generous heart.
They are good workers and display a genuine interest in everything
they undertake. They possess the kind of mind that seems to act
instinctively and which does not depend so much on real reason.
They are lavish in gifts and kindness.
However, they are liable to rush to extremes, and they lack balance.
Consequently, they are easily misled.
To these people, we say:
Don't get excited unnecessarily.
Don't be too easily persuaded.
Don't allow your emotions to master you.

May 21st to June 21st.


People born during this period are ambitious and they aspire to very
high things. They are sensitive and sympathetic. They have lively
imaginations and they are given to building castles in the air. They
are naturally eloquent and are never at a loss for something to say.
However, they are rarely content with things as they find them.
Consequently, they grumble a great deal. They do not weigh up the
"pros and cons" before deciding on a matter; and they jump to
conclusions.
To these people, we say:
Don't be discouraged too quickly.
Dream if you like, but don't neglect to translate your dreams into
realities.
Don't be too enthusiastic.
Don't forget that work rather than plans win a home.

June 22nd to July 22nd.


People born during this period are highly generous and they make
sacrifices in order to help others. They do nothing in a half-hearted
way, whether it is work or play. They are persevering and the home
is put before anything else.
However, they dislike changes which mean an alteration in domestic
life and they are a trifle old-fashioned in some of their beliefs. A little
flattery or persuasion is apt to lead them astray, and their better
judgment is rapidly overborne by a strong personality.
To these people, we say:
Don't dash headlong into anything.
Don't be irritable under contradiction.
Don't let your emotions run away with you.
Don't spoil your chances for a little show of love.

July 23rd to August 21st.


People born during this period easily adapt themselves to
circumstances, and they are considered "jolly good company." They
have "push" and enterprise in a marked degree. They are
affectionate, generous and highly capable.
However, they lack a certain amount of self-control and they are not
always dependable. They frequently forget promises, and they are
often late in keeping appointments. In money affairs, they are likely
to overlook their obligations.
To these people, we say:
Don't let your emotions sweep you off your feet.
Don't become downcast too easily.
Don't be obstinate.
Don't make up your mind in a hurry.

August 22nd to Sept. 22nd


People born during this month are well equipped for the battle of
life, and they have several qualities which should bring them
success. They are not easily flurried, and they know how to stand
firm in an emergency. They are quick in perceiving the correct thing
to do, no matter what it is. They are capable, dependable and
thorough.
However, they are prone to be too independent, and they are apt to
disregard good advice, preferring their own judgment. They are not
quick in making friends because they are too wrapped up in
themselves.
To these people, we say:
Don't take a plunge before reckoning up everything first.
Don't forget that there are two sides to every question. There is
yours and the other man's.
Don't fall into the habit of doing tomorrow what should be done
today.

Sept. 23rd to Oct. 23rd.


People born during this month are far-seeing and have excellent
judgment. They have a passion for "finding out" things, and they
want to know about everything that happens. Consequently, they
are intelligent. They make delightful companions.
However, they are bad losers, and they often let themselves get out
of hand. This seriously hurts their vanity, as they are exceedingly
desirous of creating a good impression.
To these people, we say:
Don't speak until you have thought twice.
Don't be obstinate. Admit you are wrong when you know you are.
Don't abuse your opponent.

Oct. 24th to Nov. 22nd.


People born during this month possess great ambition, and are
persevering. They are full of energy and passionate spirit. One rebuff
does not stop them; they return to the fray again and again, until
they have conquered. They are precise in their actions, neat,
methodical and tidy.
However, they are domineering, and endeavor to impose their will on
others. They lack discrimination and, once they conceive a hatred,
there is nothing which can dispel it.
To these people, we say:
Don't domineer.
Don't do things when you feel resentful.
Don't forget that prim and proper things sometimes defeat their own
ends.

Nov. 23rd to Dec. 21st.


People born during this month are, usually, virile and full of go and
enterprise. They have more will power than the average and know
how to surmount obstacles. Nothing comes amiss to them, and they
are self-reliant.
However, they are inclined to quarrel with those who offer advice.
They carry independence too far, and they often speak without
realizing the significance of their words. They seldom confide in
others.
To these people, we say:
Don't act or speak and then think. Think first.
Don't be obstinate and think you are being determined.
Don't be headstrong and disregard advice that is disinterested.
Don't be carried away by fickle fancies.
YOUR CHILD'S OCCUPATION
DECIDED BY THE STARS
It is a well-known fact that every human being is considerably
influenced, as far as character and capabilities are concerned, by the
time of the year in which he or she was born. That being so, it
follows that the occupation best suited to any particular individual is,
in a measure, related to his or her birth-date.
Parents who are anxious to do the best for their children should take
note of these conditions; they may be helpful in keeping round pegs
out of square holes. Below, we offer suggestions which have proved
of use in thousands of cases, where doubt had previously existed.
The information may be used in this way: Suppose a child is about
to leave school and is ready to make his or her entry into the world
of work. In a number of cases, the child has a very definite idea of
what he or she wants to do. If the work is reasonably suited to the
child's temperament, station in life, and so on, it is much the best
plan to allow him or her to follow the particular bent. It is just as
well to note whether the chosen occupation fits in with the work
which we list below for his or her individual birth-date. If it
approximates to some occupation which we mention, well and good.
Let the child go ahead, there is every chance of success. But, if it is
quite alien to anything which is given in the list, caution is needed.
We do not say that the child's ambition should be checked and that
he or she should be put to a job of our selection, but we do say that
caution ought to be exercised. We are perfectly ready to admit that
the stars and the birth-date are not the only factors which count.
Environment, upbringing, the father's occupation, and other things
must influence the child. All these influences should be weighed and
carefully considered.
But where astrology and the stars can give most help is in the case
of a boy or girl who has no formulated idea as to what he or she
wants to become. Thousands of children reach the school-leaving
age without showing the slightest inkling for any particular job. To
the parents of such children, we say, consult the lists set out below,
seeing that they are based on astrological teachings. Go over the
selected occupations carefully, discuss them with the child, explain
what they offer in terms of money, work, hours, etc., and watch the
effect they have on the child. In this way, it will soon be possible to
gain an idea as to what occupation should be eventually decided on.
Here are the occupations suitable for each person:
CAPRICORN BORN (Dec. 22nd to Jan. 20th).—Since people born in
this period have considerable mental ability, it follows that they do
well in most of the professions, since they can pass the necessary
examinations and become well qualified. Thus, they ought to do
satisfactorily in medicine, the law, dentistry, the scholastic profession
and similar occupations. The fact that they do not care to take risks
unfits them for many business openings, but where aspirations are
not high, they do well as clerks and in filling posts which consist of
routine work. Girls, especially, should seek work which is connected
with the imaginative arts.
AQUARIAN BORN (Jan. 21st to Feb. 19th).—Boys display a good deal
of interest in occupations which require the use of their hands. This
makes them capable in many engineering posts, in wireless, in
cabinet-making and similar jobs. They are not good at creating or
inventing in connection with these industries, however. There is the
roving disposition implanted in these boys and many of them think
that the pilot's job on an air liner could not be equalled.
Girls are, also, interested in working with their hands: thus they are
fitted for dressmaking, the millinery trade, for dealing with arts and
crafts supplies, etc. A certain number are eminently suited to
secretarial work.
PISCEAN BORN (Feb. 20th to March 20th).—Children born in this
period have a love for the sea and, therefore, the boys find
congenial work as ship's mates, stewards, marine engineers, etc.,
while girls are suitable for stewardesses and other jobs filled by
women on ocean-going vessels.
In addition boys and girls are both fitted to all kinds of work in
shops, chain stores, etc., but they are not at their best when
managing their own businesses. They require authority behind them.
A few Pisceans have artistic ability which should lead them to do
splendidly as authors, painters, musicians, etc.
ARIES BORN (March 21st to April 19th).—The Aries child is often a
problem, for certain of them have a rooted objection to anything in
the nature of routine work. They chafe at going and coming at the
same hour each day, and of doing the same work year after year. It
is not that they are lazy, but that their nature refuses to be driven by
set rules. With such children, it is wisest to interest them in
whatever they fancy, until the time comes when they launch out on
some brilliant scheme of their own. Aries men are the ones that fill
unusual, out-of-the-way posts.
Where this rooted objection does not exist, the children are good in
almost any position which permits of movement, as travellers, for
instance.
TAURIAN BORN (April 20th to May 20th).—As a rule, children who
are Taurians are very successful. They do not mind hard work and
they have a "flair" for doing the right thing, without knowing why.
They have a head for figures and money, and thus do well in banks
and stockbroker's offices. They take kindly to long training, which
enables them to succeed in law and medicine.
Both boys and girls are good with their hands. This makes them
successful in a large number of occupations, as widely diverse as
engineering and tailoring, or hairdressing and piano playing.
GEMINI BORN (May 21st to June 21st).—Gemini children show a
good deal of ambition, and their chief fault is that they object to
beginning at the bottom of the ladder. Perhaps this is useful, in a
way, as it goads them on to climbing upwards. They have a good
deal of vision. Thus they make excellent newspaper men and
women. They do well in new trades, notably in radio and the motor
world. Also, they ought to make a success in certain branches of
aviation. Their eloquence fits them admirably for travellers, and they
would make their mark in any business which, eventually, gave them
work of an imaginative nature. In a general way, they find interest in
theatrical work, in literary activities and in architecture. All Gemini
people have a streak in their natures which causes them to seek
unnecessary changes.
CANCER BORN (June 22nd to July 22nd).—Children born during this
period are usually "workers." They will plod, they do not mind long
hours, and they will set themselves to difficult jobs, if told to get on
with them. As a rule, they should be set to something which enables
them to work "on their own." They much prefer this to being a small
peg in a large machine. They are suited to small businesses and
agencies. A mail-order business might fit in with their requirements.
Girls would do well as private teachers, running small schools of
their own. They are, also, suited to the drapery trade.
LEO BORN (July 23rd to August 21st).—Those who are born during
this period succeed best in what might be called "clean" occupations.
The boys do not want to put on overalls and become grimy, and the
girls prefer work that enables them to be always neat and tidy. Both
of them show aptitude in marketing such things as jewelry, drugs,
books and clothes, but they do not want to be concerned with
making them. They are not so much interested in vending the
necessaries of life as the luxuries. Thus, motor cars, victrolas,
cameras, sports requisites, etc., attract them.
They are not much suited to clerical work, but a good number find
an outlet for their ambitions in the theatrical and literary world, while
a few make good dentists, radiologists and medical practitioners.
VIRGO BORN (Aug. 22nd to Sept. 22nd).—These children are
capable, but their great failing is that, once they find a fairly suitable
post, they will not look for anything better. They prefer to hold on to
a moderate certainty than to risk a little for a great success.
Consequently, Virgo-born are found living on salaries just sufficient
to keep them from want.
They are eminently suited to clerical work of the higher types, such
as in banks, insurance companies, stockbrokers' offices, etc. They
make good company secretaries, excellent journalists, fairly good
actors and actresses, and the girls do well as teachers.
LIBRA BORN (Sept. 23rd to October 23rd).—Children of this period
do not mind hard work, but they hate monotony, especially if it is at
all sordid. They have good judgment, a quality which fits them for
such diverse occupations as medicine and the drama, the law and
dressmaking. No special trades or professions can be singled out for
them; but, as long as they are set to work in a direction which
provides them with an outlet for a nicely balanced judgment and a
capacity for what might be termed the detective instinct, they should
succeed admirably.
SCORPIO BORN (Oct. 24th to Nov. 22nd).—There is an abundance of
ambition in these children, and they seek position rather than
money. Thus, the boys do well in the Navy and the Army, and, in a
less degree, in the Air Force. The Church holds out good openings
for many of them, and the Mercantile Marine interest not a few.
Medicine attracts both boys and girls, and so does the stage.
Anything to do with chemicals seems to influence many of the boys.
Scorpio-born children are often heard to say that they want to make
a name for themselves.
SAGITTARIAN BORN (Nov. 23rd to Dec. 21st).—Children of this
period are fond of animals; thus they are suited to become
veterinary surgeons, horse-dealers, farmers and even jockeys. One
section of them, having excessive will power and plenty of self-
reliance, makes a type of individual who seeks publicity in the
political world. All are capable in business, especially in the executive
branches. Not a few men become company promoters, chairmen
and directors. The girls make excellent teachers and welfare
workers.
WHAT ARE YOUR HOBBIES?
According to your Zodiac sign you have a disposition for certain
hobbies. You may not necessarily have these hobbies, but your
inclinations lie towards them.
CAPRICORN BORN.—Gardening. Nature Study. Rambles in the
countryside. Making things of almost any kind. Chemistry. Physics.
AQUARIAN BORN.—Aviation, ranging from actual flying to making
aeroplane models. Gliding. Constructing all kinds of articles. Painting
pictures. Drawing. Needlework.
PISCES BORN.—Traveling, especially by sea. Photography.
Constructing and using wireless apparatus. Making electrical
apparatus. Theater-going and amateur theatricals. Arts and crafts
(girls).
ARIES BORN.—Traveling, touring. Anything connected with motor
cars. Sight-seeing. Making things. Reading. Arts and crafts (girls).
TAURUS BORN.—Constructive hobbies, from wireless to the building
of houses. Walking. Golf. Swimming. Collecting antiques.
GEMINI BORN.—Likely to be interested in inventions. Good at
solving puzzles. Football. Tennis. Nature rambling. Girls have a bent
for household duties, such as cooking, needlework, etc.
CANCER BORN.—Interested in the wonders of the world. Anxious to
see things and people. Music. Reading. Collecting antiques. Almost
any outdoor game. Girls are fond of needlework of the finer kinds.
LEO BORN.—Hobbies allied to the daily work. Intellectual reading,
especially anything bearing on historical matters. Going about. Golf.
Swimming. Making things of an artistic nature.
VIRGO BORN.—Indoor games. Making and repairing household
articles. Good at manual activities, from playing the piano to
constructing toys. Prefers to be amused indoors than out in the
open.
LIBRA BORN.—Doing things to keep the home ship-shape. Football.
Cricket. Photography. Reading. Wireless. Needlework and knitting
(girls).
SCORPIO BORN.—Scientific recreations of all kinds. Keeping pets.
Nature rambling. Girls take a keen interest in household duties. Card
playing. Seeing people. Dabbling in mysterious matters, such as
thought-reading, table-rapping, seances, etc.
SAGITTARIAN BORN.—Hobbies of an intellectual character. Walking.
Outdoor sports. Boxing. Nature study. Keeping pets. Reading.
WHAT IS YOUR LUCKY NUMBER?
Once more from the rising sun of the East further marvelous
theories have reached us through the paths of the ages. To many of
our prosaic Western minds, maybe not unnaturally, these ideas will
at first sight appear almost ridiculous. However, do not condemn
numerical mysteries unheard, for no Manual of Fortunetelling would
be complete should it not include a talk on this most arresting
subject.
Students of numbers, like astrologers and students of palmistry,
declare that there is no such thing as luck or chance in the world.
They also state that we are strongly but not inevitably influenced by
certain powerful laws of Nature.
Number science is certainly unknown to the great majority of us, but
there are some superstitions which are based on evil numbers; these
superstitions we treat with great respect. Very few of us really care
to sit down thirteen at table, while I have known a man go sad and
smokeless rather than be the third to light his cigarette off one
match!
Fortunetelling by numbers is allied to astrology very closely indeed.
Let us now take each day of the week individually and see what
information we can get from it. You will find that very useful as a
check upon your other forms of fortunetelling.
ON WHAT DAY WERE YOU BORN?


If, as I suggested, we take the days of the week, we shall find that
they in turn are influenced by the order in which they are found, or
by the number which is theirs. For instance, Sunday, being the first
day, is influenced by No. 1, and Friday, being the sixth day, takes No.
6 as its ruling number.
According to the ancients each number has its corresponding planet;
here is a little table showing the planet representing and ruling over
each number.

No. 0. Represented by Space.
No. 1. Represented by The Sun.
No. 2. Represented by The Moon.
No. 3. Represented by Mars.
No. 4. Represented by Mercury.
No. 5. Represented by Jupiter.
No. 6. Represented by Venus.
No. 7. Represented by Saturn.
No. 8. Represented by Uranus.
No. 9. Represented by Neptune.

Taking each day of the week in order, we find the following characteristics.

TABLE OF DAYS IN WEEK


No. 1 (Sunday).—You will see by your table that this day takes the
Sun for its ruler—Sun-day. It is a fortunate day; persons born on a
Sunday have a brave and honest influence on them. They will be
optimistic, but not foolishly so, while at the same time they have
great pride in the reputation of themselves and their families. If they
have any fault it is, maybe, that this pride is felt a little too strongly;
they may be inclined to take themselves rather too seriously.
However, I repeat, this is an excellent day.
No. 2 (Monday).—This day is the Moon-day. The lesson for Monday
men to learn is steadiness. They are too easily influenced and are
blown hither and thither upon life's winds. They adapt themselves
well to change of place, circumstances, scene, and frequently follow
the sea. They have plenty of imagination in their natures, and should
cultivate common sense.
No. 3 (Tuesday).—The day of Mars (French—Mardi). Frequently the
engineers of the world. An ambitious go-ahead day is Tuesday.
These Tuesday folk are the explorers, the men who emigrate, and
the earnest patriots of life. Soldiers and workers at the furnace,
among others, are found among those born on Tuesday. Their
womenfolk are inclined to be rather shrewish and domineering. They
are not naturally good managers, and should cultivate this quality
because they are always rare workers.
No. 4 (Wednesday).—The table tells us that these are the
Mercurians. The men are quick at calculating figures, and always
capable and thoughtful workers. Mercury, as its name implies, gives
quickness, with business trading capacity. The women appear not to
be so favorably influenced; they must guard against grumbling and
gossip; then they may do well enough.
No. 5 (Thursday).—Under the planet of Jupiter, these Thursday
people have many good qualities. They are liberal and good natured,
but have one vice—the outcome of their virtue. They are inclined to
be too liberal with themselves, which is extravagance. Given an idea
they can turn it to good account, but do not, as a rule, originate
ideas. Statesmen are here found; let these Jupiterians beware of a
love of display and what is commonly known as "side." Then they are
very excellent people indeed.
No. 6 (Friday).—Look at the table—see Venus is the planet of Friday.
This accounts for many things. Here we see the typical Venus type.
Gay, light-hearted, with no thought of the morrow, they flit happily
through life like a gilded butterfly upon the wing. If they lack taste
they over-dress. Their good qualities are their charming
personalities, pleasing manners, and a quick command of music and
art. They should beware of being only butterflies, and should
cultivate strength of character. They should also obtain by hook or
by crook a liking for hard work; it will stand them in good stead.
No. 7 (Saturday).—Saturday, as its name tells us, has sad Saturn for
its planet. Here we have the exact opposite to the persons
mentioned who were born on a Friday. Saturday people miss half the
joy of living by their cold and calculating natures. Careful with
money and patient in their work, they must beware of being miserly,
and should certainly cultivate their missing sense of humor. The
good qualities in these people are their sincerely earnest outlook and
their capacity for an almost endless grind of hard work. Their
womenfolk frequently make old maids and should practice sweet
temper and a kindly feeling towards the rest of the household.

YOUR OWN NUMBER


But there is much more in the science of numbers than that which
can be gleaned from the days of the week. There is your own
personal number, the number which influences you and your actions
more than any other. If you know your number, think how you can
use it for good and avoid numbers less favorable to you! The finding of your number is
a simple matter when you have mastered the elements of
numerology, which is the science of numbers.
Let us explain how your own number is found. First, write down your
birth-date, the day of the month, the month itself and the year.
Thus, three items are required. Take first the day of the month. If it
consists of one figure, leave it. If it consists of two, add them
together, and, if the answer comes to two figures, add them
together. All this may appear a little involved, but it is not, as one or
two examples will show.
Suppose you were born on the 9th of the month, then 9 is the
number you want.
But, suppose it was the 16th, then six and one make seven.
Therefore 7 is the required number.
Again, if you were born on the 29th, then nine and two make
eleven, but as eleven consists of two figures, you must add them
together, and they make 2.
So much for the day of the month, now for the month itself. January
stands for one, February for 2, and so on, to December for 12. The
numbers of the months from January to September can stand as
they are, but October, November and December, being 10, 11 and
12, must be added up, as already described. Thus October is one,
November is two and December three.
Thirdly, the number of the year must be considered. Say you were
born in 1910. These figures add up to eleven, and eleven, being
double figures, adds up to 2. Therefore 1910 is equivalent to 2.
Work out your figures here.
You have now obtained three separate figures, add them together
and if they come to a one-figure number, that is the number which
you require. On the other hand, if it is a double-figured amount, add
the two figures as before, until you arrive at a single-figured
amount. Then that is the number you require.
So as to make the whole thing perfectly clear, we will take a
complete example and work it out, exactly as you must work out
your own birth-date.

Example.—12th September, 1913.


12 = 1 + 2 = 3
September is the 9th month = 9
1913 = 1 + 9 + 1 + 3 = 14 = 1 + 4 = 5
3 + 9 + 5 = 17 = 1 + 7 = 8
Therefore, the personal number of anyone born on 12th September,
1913, is 8. Eight should guide and influence all his or her actions.
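For any reader who cares to let a machine do the reckoning, here is a minimal Python sketch of the procedure just described. The names digit_root and personal_number are merely our own labels for illustration, not part of any established system:

    def digit_root(n):
        # Add the digits of n together, repeating until one figure remains.
        while n > 9:
            n = sum(int(d) for d in str(n))
        return n

    def personal_number(day, month, year):
        # Reduce the day, the month and the year separately,
        # then reduce the sum of the three results.
        return digit_root(digit_root(day) + digit_root(month) + digit_root(year))

    # The worked example above: 12th September, 1913.
    print(personal_number(12, 9, 1913))  # prints 8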
We are not going to pretend that benefits will accrue on every
occasion that the personal number is observed, but we are going to
say that we have noted some marvelous pieces of good fortune
when it has.
When you have found your personal number, there are several ways
in which you can use it. Suppose your number is the one just found,
eight; then you can conclude that the eighth day of any month will
be a propitious one for you. But that is not the only one. The 17th is
equally good, because one plus seven gives eight. Moreover, the
26th is in a similar position. Two and six make eight.
Yet another way to use your personal number arises when you want
to know whether some important step should be taken on a definite
day. What is the particular day? Add up its numerological values,
exactly as you did with your birthday, and if it resolves itself into the
same number as your personal number, you may go ahead with
cheerfulness. Put forth your best effort, and, on the day, you will
have ample chances of success.
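Assuming the digit_root and personal_number helpers sketched a little earlier, the same arithmetic picks out the propitious days of a month and tests any proposed date; again, an illustration only:

    # Days of any month that reduce to a personal number of 8:
    print([d for d in range(1, 32) if digit_root(d) == 8])  # prints [8, 17, 26]

    # Should an important step be taken on 8th September, 1935,
    # by a person whose number is 8?
    print(personal_number(8, 9, 1935) == 8)  # prints True, since 8 + 9 + 9 = 26 -> 8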

THE NUMBER OF YOUR NAME


Numerology permits of still another step. Take your own name and
see what number it is equal to. You will be able to do this in the
following way: A stands for one, B for two, C for three, and so on.
When you reach I, which is 9, commence again and give J the value
of one, then continue. To make all this clear, we will set out the
values of the complete alphabet:
1 = A J S
2 = B K T
3 = C L U
4 = D M V
5 = E N W
6 = F O X
7 = G P Y
8 = H Q Z
9 = I R —
Thus, suppose your name is Joan Shirley; the letters resolve
themselves into the following numbers:—
J O A N S H I R L E Y
1 + 6 + 1 + 5 + 1 + 8 + 9 + 9 + 3 + 5 + 7 = 55
55 = 5 + 5 = 10 = 1 + 0 = 1
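The letter table lends itself to the same treatment. The sketch below is only one way of doing it: it assumes plain unaccented letters, ignores spaces, and reuses the digit_root helper from the earlier sketch:

    def name_number(name):
        # A=1 ... I=9, J=1 ... R=9, S=1 ... Z=8, exactly as in the table above.
        total = sum((ord(c) - ord('A')) % 9 + 1
                    for c in name.upper() if c.isalpha())
        return digit_root(total)

    print(name_number("Joan Shirley"))  # 55 -> 10 -> 1, so prints 1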
From all that we have said, it will be clear that the birthdate may be
used for finding the personal number, or the letters of the name may
be used. On rare occasions, the two ways will provide the same
number. When this is the case, great faith should be placed in that
number. But, when the two ways give different numbers, what?
Does one disprove the other? No. You simply have two numbers
favorable to you. The birthdate number is the more definite and
reliable because your very existence is based on it.
A word at the end. Married ladies must use their maiden name for
finding the name number.

DO YOU KNOW THAT


Odd Numbers have always been credited with mystic powers capable
of influencing the destinies of people; and a curious survival of the
idea is to be found in the fact that countrywomen, without knowing
why, put an odd number of eggs under their hens in the belief that
otherwise no chickens will be hatched?
In addition, we have noticed that books of sweepstake tickets
generally have the odd-numbered tickets withdrawn from them
before the even-numbered ones.
Number Three.—This number comes in for a considerable share of
popularity, even from mythological times, when there were the three
fates and the three graces. Shakespeare introduced three witches in
"Macbeth." In nursery rhymes, we have the three blind mice. In
public-house signs, we frequently come across the numeral "three,"
and, of course, pawnbrokers have three brass balls.
Number Seven.—Seven is deemed extremely lucky, it being the
perfect or mystic number which runs through the entire scheme of the
Universe in matters physical and spiritual. Man's life is popularly
divided into seven ages: the product of seven and nine—sixty-three
—was regarded as the grand climacteric, and that age was
considered a most important stage of life.
The seventh son of a seventh son, according to Highland belief,
possesses the gift of second sight, and the power of healing the sick.
Many people believe that a cycle of seven years of misfortune is
likely to be succeeded by another of prosperity.
Number Nine is credited with mystic properties, good and bad. A
piece of wool with nine knots tied in it is a well-known charm for a
sprained ankle. The cat-o'-nine-tails is a form of punishment not to be
taken lightly.
Number Thirteen.—Of this number, everybody can supply instances
when it has brought bad luck. But it may be cheering to mention
that, in certain parts of the world, thirteen is regarded in quite a
favorable light. Whether it is good or bad is a matter for each
individual to decide.