
Springer Optimization and Its Applications 178

Alexander J. Zaslavski

Optimization
on Solution Sets
of Common Fixed
Point Problems
Springer Optimization and Its Applications

Volume 178

Series Editors
Panos M. Pardalos, University of Florida
My T. Thai, University of Florida

Honorary Editor
Ding-Zhu Du, University of Texas at Dallas

Advisory Editors
Roman V. Belavkin, Middlesex University
John R. Birge, University of Chicago
Sergiy Butenko, Texas A&M University
Vipin Kumar, University of Minnesota
Anna Nagurney, University of Massachusetts Amherst
Jun Pei, Hefei University of Technology
Oleg Prokopyev, University of Pittsburgh
Steffen Rebennack, Karlsruhe Institute of Technology
Mauricio Resende, Amazon
Tamás Terlaky, Lehigh University
Van Vu, Yale University
Michael N. Vrahatis, University of Patras
Guoliang Xue, Arizona State University
Yinyu Ye, Stanford University
Aims and Scope
Optimization has continued to expand in all directions at an astonishing rate. New
algorithmic and theoretical techniques are continually being developed, and the
diffusion into other disciplines is proceeding at a rapid pace, with a spotlight on
machine learning, artificial intelligence, and quantum computing. Our knowledge of
all aspects of the field has grown even more profound. At the same time, one of the
most striking trends in optimization is the constantly increasing emphasis on the
interdisciplinary nature of the field. Optimization has been a basic tool in areas
including, but not limited to, applied mathematics, engineering, medicine,
economics, computer science, operations research, and other sciences.

The series Springer Optimization and Its Applications (SOIA) aims to


publish state-of-the-art expository works (monographs, contributed volumes,
textbooks, handbooks) that focus on theory, methods, and applications of
optimization. Topics covered include, but are not limited to, nonlinear optimization,
combinatorial optimization, continuous optimization, stochastic optimization,
Bayesian optimization, optimal control, discrete optimization, multi-objective
optimization, and more. New to the series portfolio are works at the
intersection of optimization and machine learning, artificial intelligence, and
quantum computing.

Volumes from this series are indexed by Web of Science, zbMATH, Mathematical
Reviews, and SCOPUS.

More information about this series at http://www.springer.com/series/7393


Alexander J. Zaslavski

Optimization on Solution
Sets of Common Fixed Point
Problems
Alexander J. Zaslavski
Department of Mathematics
Technion – Israel Institute of Technology
Haifa, Haifa, Israel

ISSN 1931-6828 ISSN 1931-6836 (electronic)


Springer Optimization and Its Applications
ISBN 978-3-030-78848-3 ISBN 978-3-030-78849-0 (eBook)
https://doi.org/10.1007/978-3-030-78849-0

Mathematics Subject Classification: 49M37, 65K05, 90C25, 90C30

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland
AG 2021
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

In this book, we study the subgradient projection algorithm and its modifications
for the minimization of convex functions on solution sets of common fixed point
problems and on solution sets of convex feasibility problems, in the presence
of computational errors. The problem studied in the literature is usually described
by an objective function and a set of feasible points. For this algorithm, each
iteration consists of two steps. The first step is a calculation of a subgradient of
the objective function, while in the second one, we calculate a projection on the
feasible set. In each of these two steps there is a computational error, and in
general these two computational errors are different. In our recent research presented in
[93, 95, 96] we show that the algorithm generates a good approximate solution if
all the computational errors are bounded from above by a small positive constant.
Moreover, if we know the computational errors for the two steps of our algorithm,
we can determine which approximate solution can be obtained and how many iterates
one needs for this. It should be mentioned that in [93, 95] analogous results were
obtained for many other important algorithms in optimization and in game
theory.
In our study in [93, 95] we considered optimization problems defined on a set of
feasible points, given explicitly as the fixed point set of an operator. We used
the fact that a projection onto the set of feasible points can be calculated
with small computational errors. Of course, this is possible only when the feasibility
set is simple, such as a simplex or a half-space. In practice, the situation is more
complicated. In real-world applications, the feasibility set is an intersection of a finite
family of simple closed convex sets. Calculating the projection onto their intersection
is impossible; instead, one works with projections onto the
simple sets which determine the feasibility set as their intersection, considering the
products of these projections (the iterative algorithm), their convex combinations
(the Cimmino algorithm), and a more recent and advanced dynamic string-averaging
algorithm, first introduced by Y. Censor, T. Elfving, and G. T. Herman
in [23] for solving a convex feasibility problem in which a given collection of sets
is divided into blocks and the algorithm operates in such a manner that all the
blocks are processed in parallel. In our book [94] we studied approximate solutions
of common fixed point problems for a finite family of operators and approximate
solutions of convex feasibility problems, taking into account computational errors.
The goal was to find a point which is close enough to each element of a given finite
family of sets. Optimization problems were not considered in [94]. In the present
book, we deal with a problem that is much more difficult and complicated than
the problems studied in [93–95]: to find a point which is close enough to each
element of a given finite family of sets and such that the value of a given objective
function at this point is close to the infimum of this function on the feasibility set.
In this book our goal is to find approximate minimizers of convex functions
on solution sets of common fixed point problems and on solution sets of convex
feasibility problems, in the presence of computational errors. We show that our
algorithms generate a good approximate solution if all the computational errors are
bounded from above by a small positive constant. If we know the computational errors
for our algorithm, we can determine which approximate solution can be obtained and
how many iterates one needs for this.
Analysis of the behavior of an optimization algorithm is based on the choice of
an appropriate estimation which holds for each of its iterations. First, this estimation
holds for an initial iteration. It is shown that if the estimation is true for a current
iteration t, then it is also true for the next iteration t + 1. Thus, we conclude that
the estimation is true for all iterations of the algorithm. Using this estimation, it is
shown that after a certain number of iterations, we obtain an approximate solution
of our problems. In [93, 95], an estimation was used which allows us, after a certain
number of iterations, to obtain a point where the value of the given objective function is
close to the infimum of this function on the feasibility set. We need not worry about
whether this point is close to the feasibility set, because this is guaranteed by the algorithm.
In [94], where the feasibility set is an intersection of a finite family of sets, we use
another estimation. This estimation allows us, after a certain number of iterations,
to obtain a point which is close to every element of the family of sets. Here, we
have to find another estimation. Using this new estimation, it is shown that after a
certain number of iterations we obtain a point which is close to every element of the
family of sets whose intersection is our feasibility set, and where the value of the given
objective function is close to the infimum of this function on the feasibility set.
It should be mentioned that the subgradient projection algorithm is used for many
problems arising in real-world applications. The results of our book allow us to
use this algorithm for problems with complicated sets of feasible points arising
in engineering and, in particular, in computed tomography and radiation therapy
planning.
The book contains 10 chapters. Chapter 1 is an introduction. In Chapter 2, we
consider the minimization of a convex function on a common fixed point set of a finite
family of quasi-nonexpansive mappings in a Hilbert space. We use the Cimmino
subgradient algorithm, the iterative subgradient algorithm, and the dynamic string-
averaging subgradient algorithm. In Chapter 3, we consider the minimization of a
convex function on an intersection of two sets in a Hilbert space. One of them
is a common fixed point set of a finite family of quasi-nonexpansive mappings,
while the second one is a common zero point set of a finite family of maximal
monotone operators. We use the Cimmino proximal point subgradient algorithm, the
iterative proximal point subgradient algorithm, and the dynamic string-averaging
proximal point subgradient algorithm and show that each of them generates a
good approximate solution. In Chapters 4–6, we study the minimization of a convex
function on a solution set of a convex feasibility problem in a general Hilbert space.
The solution set is an intersection of a finite family of closed convex sets, each of
which is the collection of points where the value of the corresponding convex
constraint function does not exceed zero. In Chapter 4, we study the Cimmino
subgradient projection algorithm, in Chapter 5 we analyze the iterative subgradient
projection algorithm, while in Chapter 6 the dynamic string-averaging subgradient
projection algorithm is discussed. In Chapters 7 and 8, we study minimization
problems with smooth objective functions using a fixed point gradient projection
algorithm and a Cimmino gradient projection algorithm, respectively. In Chapter 9,
we study the convergence of the projected subgradient method for a class of
constrained optimization problems in a Hilbert space. For this class of problems,
an objective function is assumed to be convex, but a set of admissible points is
not necessarily convex. Our goal is to obtain an ε-approximate solution in the
presence of computational errors, where ε is a given positive number. An extension
of the projected subgradient method for zero-sum games with two players in the
presence of computational errors is given in Chapter 10.
All the results presented in the book are new. The author believes that this
book will be useful for researchers interested in optimization theory and its
applications.

Rishon LeZion, Israel Alexander J. Zaslavski


February 25, 2021
Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Subgradient Projection Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Fixed Point Subgradient Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Proximal Point Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4 Cimmino Subgradient Projection Algorithm . . . . . . . . . . . . . . . . . . . . . . . 17
1.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2 Fixed Point Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.1 Common Fixed Point Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2 The Cimmino Subgradient Algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.3 Two Auxiliary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.4 The First Result for the Cimmino Subgradient Algorithm . . . . . . . . . 37
2.5 The Second Result for the Cimmino Subgradient Algorithm . . . . . . 44
2.6 The Iterative Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.7 Auxiliary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.8 Convergence Results for the Iterative Subgradient Algorithm . . . . . 58
2.9 Dynamic String-Averaging Subgradient Algorithm . . . . . . . . . . . . . . . . 76
2.10 Auxiliary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.11 The First Theorem for the Dynamic String-Averaging
Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.12 The Second Theorem for the Dynamic String-Averaging
Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3 Proximal Point Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.1 Preliminaries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.2 Auxiliary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.3 The First Result for the Cimmino Proximal Point
Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3.4 The Second Result for the Cimmino Proximal Point
Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
3.5 The Iterative Proximal Point Subgradient Algorithm. . . . . . . . . . . . . . . 126


3.6 The First Theorem for the Iterative Proximal Point


Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.7 The Second Theorem for the Iterative Proximal Point
Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.8 Dynamic String-Averaging Proximal Point Subgradient
Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
3.9 Auxiliary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
3.10 The First Theorem for the DSA Proximal Point
Subgradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
3.11 The Second Theorem for the DSA Subgradient Algorithm . . . . . . . . 164
4 Cimmino Subgradient Projection Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
4.1 Preliminaries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
4.2 Cimmino Subgradient Projection Algorithm . . . . . . . . . . . . . . . . . . . . . . . 177
4.3 Two Convergence Results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
4.4 The Third and Fourth Convergence Results. . . . . . . . . . . . . . . . . . . . . . . . . 202
5 Iterative Subgradient Projection Algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
5.1 Preliminaries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
5.2 Auxiliary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.3 The Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6 Dynamic String-Averaging Subgradient Projection Algorithm . . . . . . . 243
6.1 Preliminaries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
6.2 The Basic Auxiliary Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
6.3 The Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
7 Fixed Point Gradient Projection Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
7.1 Preliminaries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
7.2 The Basic Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
7.3 An Optimization Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
8 Cimmino Gradient Projection Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
8.1 Preliminaries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
8.2 Cimmino Type Gradient Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
8.3 The Basic Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
8.4 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
9 A Class of Nonsmooth Convex Optimization Problems . . . . . . . . . . . . . . . . 311
9.1 Preliminaries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
9.2 Approximate Solutions of Problems on Bounded Sets . . . . . . . . . . . . . 314
9.3 Approximate Solutions of Problems on Unbounded Sets . . . . . . . . . . 317
9.4 Convergence to the Set of Minimizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
9.5 Auxiliary Results on the Convergence of Infinite Products . . . . . . . . 323
9.6 Auxiliary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
9.7 Proof of Theorem 9.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
9.8 Proof of Theorem 9.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
9.9 Proof of Theorem 9.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
9.10 Proof of Theorem 9.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
9.11 Proof of Theorem 9.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
9.12 Proof of Theorem 9.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
9.13 Proof of Theorem 9.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
9.14 Proof of Theorem 9.8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
9.15 Proof of Theorem 9.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
10 Zero-Sum Games with Two Players . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
10.1 Preliminaries and an Auxiliary Result. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
10.2 Zero-Sum Games on Bounded Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
10.3 The First Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
10.4 The Second Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
Chapter 1
Introduction

In this book we study optimization on solution sets of common fixed point problems.
Our goal is to obtain a good approximate solution of the problem in the presence
of computational errors. We show that an algorithm generates a good approximate
solution if the sequence of computational errors is bounded from above by a small
constant. Moreover, if we know the computational errors for our algorithm, we can
determine which approximate solution can be obtained and how many iterates one needs
for this. In this chapter we discuss the algorithms which are studied in the book.

1.1 Subgradient Projection Method

In this book we use the following notation. For every z ∈ R¹, denote by ⌊z⌋ the
largest integer which does not exceed z:

⌊z⌋ = max{i ∈ R¹ : i is an integer and i ≤ z}.

For every nonempty set D, every function f : D → R¹, and every nonempty set
C ⊂ D we set

inf(f, C) = inf{f(x) : x ∈ C}

and

argmin(f, C) = argmin{f(x) : x ∈ C} = {x ∈ C : f(x) = inf(f, C)}.

Let X be a Hilbert space equipped with an inner product denoted by ⟨·, ·⟩ which
induces a complete norm ‖ · ‖. For each x ∈ X and each r > 0 set

BX(x, r) = {y ∈ X : ‖x − y‖ ≤ r}

and set

B(x, r) = BX (x, r)

if the space X is understood.


For each x ∈ X and each nonempty set E ⊂ X set

d(x, E) = inf{‖x − y‖ : y ∈ E}.

For each nonempty open convex set U ⊂ X and each convex function f : U → R¹,
for every x ∈ U set

∂f(x) = {l ∈ X : f(y) − f(x) ≥ ⟨l, y − x⟩ for all y ∈ U},

which is called the subdifferential of the function f at the point x [61, 62, 75].
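As a standard one-dimensional example (not taken from the book), for X = R¹ and f(x) = |x| the definition gives:

```latex
\partial f(x) =
\begin{cases}
\{-1\} & \text{if } x < 0,\\
[-1,\,1] & \text{if } x = 0,\\
\{+1\} & \text{if } x > 0.
\end{cases}
```

In particular, ∂f(x) is a singleton exactly where f is differentiable.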
Denote by Card(A) the cardinality of a set A. We suppose that the sum over an
empty set is zero.
In this book we study the subgradient algorithm and its modifications for the
minimization of convex functions in the presence of computational errors. It
should be mentioned that the subgradient projection algorithm is one of the most
important tools in optimization theory [1, 14, 15, 22, 31, 37, 46, 48, 50, 52–
54, 65, 72, 73, 86, 90, 93, 97], nonlinear analysis [11, 16, 17, 40, 51, 68, 78, 83, 91, 92]
and their applications. The problem studied in the literature is usually described by
an objective function and a set of feasible points. For this algorithm each iteration
consists of two steps. The first step is a calculation of a subgradient of the objective
function, while in the second one we calculate a projection on the feasible set.
In each of these two steps there is a computational error, and in general these two
computational errors are different. In our recent research [93, 95, 96] we show
that the algorithm generates a good approximate solution if all the computational
errors are bounded from above by a small positive constant. Moreover, if we know the
computational errors for the two steps of our algorithm, we can determine which
approximate solution can be obtained and how many iterates one needs for this.
It should be mentioned that in [93, 95] analogous results were obtained for many
other important algorithms in optimization and in game theory.
We use the subgradient projection algorithm for constrained minimization
problems in Hilbert spaces equipped with an inner product ⟨·, ·⟩ which
induces a complete norm ‖ · ‖. It should be mentioned that optimization problems
in infinite-dimensional Banach and Hilbert spaces are studied in [2, 3, 7, 24, 33, 64],
while the subgradient projection algorithm is analyzed in [3, 12, 36, 44, 47, 57, 67,
74, 79, 81, 82].
Let C be a nonempty closed convex subset of X, let U be an open convex subset of
X such that C ⊂ U, and let f : U → R¹ be a convex function.
Suppose that there exist L > 0, M0 > 0 such that

C ⊂ BX(0, M0),

|f(x) − f(y)| ≤ L‖x − y‖ for all x, y ∈ U.

It is not difficult to see that for each x ∈ U,

∅ ≠ ∂f(x) ⊂ BX(0, L).

For every nonempty closed convex set D ⊂ X and every x ∈ X there is a unique
point PD(x) ∈ D satisfying

‖x − PD(x)‖ = inf{‖x − y‖ : y ∈ D}.
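For simple sets the projection PD is available in closed form. A minimal sketch (our own illustration; the specific sets and numbers are not from the book) for a ball and a half-space:

```python
import numpy as np

def proj_ball(x, c, r):
    """PD for the ball D = B(c, r): scale x - c back to radius r if needed."""
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + d * (r / n)

def proj_halfspace(x, a, b):
    """PD for the half-space D = {y : <a, y> <= b}, with a != 0."""
    s = float(a @ x) - b
    return x if s <= 0 else x - (s / float(a @ a)) * a

x = np.array([3.0, 4.0])
print(proj_ball(x, np.zeros(2), 1.0))                # [0.6 0.8]
print(proj_halfspace(x, np.array([1.0, 0.0]), 2.0))  # [2. 4.]
```

Projections onto boxes and simplexes admit similar closed forms; intersections of such sets generally do not, which is the difficulty addressed in the later chapters.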

We consider the minimization problem

f (z) → min, z ∈ C.

Suppose that {ak}∞k=0 ⊂ (0, ∞). Let us describe our algorithm.

Subgradient Projection Algorithm
Initialization: select an arbitrary x0 ∈ U.
Iterative step: given a current iteration vector xt ∈ U calculate

ξt ∈ ∂f(xt)

and the next iteration vector xt+1 = PC(xt − at ξt).
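As a toy run of this exact iteration (our example, not the book's): take X = R³, C the closed unit ball, and f(x) = max{x1, x2, x3}, whose infimum on C is −1/√3 ≈ −0.577:

```python
import numpy as np

def proj_C(y, r=1.0):
    """PC: projection onto the closed ball B(0, r)."""
    n = np.linalg.norm(y)
    return y if n <= r else y * (r / n)

def f(x):
    return float(np.max(x))            # convex, Lipschitz with L = 1

def subgrad(x):
    # a subgradient of max{x1,...,xn}: indicator of a maximizing coordinate
    g = np.zeros_like(x)
    g[int(np.argmax(x))] = 1.0
    return g

x = np.array([0.8, -0.3, 0.5])         # x0 in U
best = f(x)
for t in range(100):
    a_t = 1.0 / (t + 1)                # step sizes a_t > 0
    x = proj_C(x - a_t * subgrad(x))   # x_{t+1} = PC(x_t - a_t * xi_t)
    best = min(best, f(x))
print(best)                            # close to inf(f, C) = -1/sqrt(3)
```

The best value along the iterates approaches the infimum, while every iterate stays feasible because of the projection step.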


In [93] we study this algorithm in the presence of computational errors.
Namely, in [93] we suppose that δ ∈ (0, 1] is a computational error produced by
our computer system, and study the following algorithm.
Subgradient Projection Algorithm with Computational Errors
Initialization: select an arbitrary x0 ∈ U.
Iterative step: given a current iteration vector xt ∈ U calculate

ξt ∈ ∂f(xt) + BX(0, δ)

and the next iteration vector xt+1 ∈ U such that

‖xt+1 − PC(xt − at ξt)‖ ≤ δ.

In Chapter 2 of [95] we consider a more complicated, but more realistic, version
of this algorithm. Clearly, for this algorithm each iteration consists of two steps. The
first step is a calculation of a subgradient of the objective function f, while in the
second one we calculate a projection on the set C. In each of these two steps there
is a computational error produced by our computer system. In general, these two
computational errors are different. This fact is taken into account in the following
projection algorithm studied in Chapter 2 of [95].
Suppose that {ak }∞k=0 ⊂ (0, ∞) and δf , δC ∈ (0, 1].
Initialization: select an arbitrary x0 ∈ U .
Iterative step: given a current iteration vector xt ∈ U calculate

ξt ∈ ∂f (xt ) + BX (0, δf )

and the next iteration vector xt+1 ∈ U such that

||x_{t+1} − P_C(x_t − a_t ξ_t)|| ≤ δ_C.

Note that in practice for some problems the set C is simple but the function f is
complicated. In this case δC is essentially smaller than δf . On the other hand, there
are cases when f is simple but the set C is complicated and therefore δf is much
smaller than δC .
In Chapter 2 of [95] we proved the following result (see Theorem 2.4).
Theorem 1.1 Let δ_f, δ_C ∈ (0, 1], {a_k}_{k=0}^{∞} ⊂ (0, ∞) and let

x∗ ∈ C

satisfy

f (x∗ ) ≤ f (x) for all x ∈ C.

Assume that {x_t}_{t=0}^{∞} ⊂ U, {ξ_t}_{t=0}^{∞} ⊂ X,

||x_0|| ≤ M_0 + 1

and that for each integer t ≥ 0,

ξt ∈ ∂f (xt ) + BX (0, δf )

and

||x_{t+1} − P_C(x_t − a_t ξ_t)|| ≤ δ_C.

Then for each natural number T,

Σ_{t=0}^{T} a_t (f(x_t) − f(x_*))

≤ 2^{−1} ||x_* − x_0||^2 + δ_C(T + 1)(4M_0 + 1)

+ δ_f(2M_0 + 1) Σ_{t=0}^{T} a_t + 2^{−1}(L + 1)^2 Σ_{t=0}^{T} a_t^2.

Moreover, for each natural number T,

f((Σ_{t=0}^{T} a_t)^{−1} Σ_{t=0}^{T} a_t x_t) − f(x_*), min{f(x_t) : t = 0, . . . , T} − f(x_*)

≤ 2^{−1}(Σ_{t=0}^{T} a_t)^{−1} ||x_* − x_0||^2 + (Σ_{t=0}^{T} a_t)^{−1} δ_C(T + 1)(4M_0 + 1)

+ δ_f(2M_0 + 1) + 2^{−1}(Σ_{t=0}^{T} a_t)^{−1}(L + 1)^2 Σ_{t=0}^{T} a_t^2.

Theorem 1.1 is a generalization of Theorem 2.4 of [93], which was proved in the
case when δ_f = δ_C.
We are interested in an optimal choice of a_t, t = 0, 1, . . . . Let T be a natural
number and let A_T = Σ_{t=0}^{T} a_t be given. It is shown in [95] that the best choice is
a_t = (T + 1)^{−1} A_T, t = 0, . . . , T.
Let T be a natural number and at = a > 0, t = 0, . . . , T . It is shown in [95] that
the best choice of a is

a = (2δC (4M0 + 1))1/2 (L + 1)−1 .

Now we can think about the best choice of T. It is not difficult to see that it should
be of the same order as δ_C^{−1}.
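The optimal constant step above minimizes the function a ↦ a^{−1} δ_C(4M_0 + 1) + 2^{−1} a (L + 1)^2, and a quick grid search confirms the closed form; the numerical values of δ_C, M_0 and L below are arbitrary illustrations.

```python
import math

delta_C, M0, L = 1e-4, 1.0, 2.0   # arbitrary illustration values

def bound(a):
    """Step-size-dependent part of the error bound for constant steps a_t = a."""
    return delta_C * (4 * M0 + 1) / a + 0.5 * (L + 1) ** 2 * a

a_star = math.sqrt(2 * delta_C * (4 * M0 + 1)) / (L + 1)       # closed-form minimizer
a_grid = min((k * 1e-6 for k in range(1, 200000)), key=bound)  # brute-force check
# bound(a_star) <= bound(a) for every grid point a, and a_grid is close to a_star
```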
In [96] we generalize the results obtained in [95] for the subgradient projection
algorithm to the case when, instead of the projection operator onto C, a quasi-
nonexpansive retraction onto C is used.

1.2 Fixed Point Subgradient Algorithms

In the previous section we studied the subgradient projection algorithm for the
minimization of a convex function f on a closed convex set C in a Hilbert space. In
our analysis we used the fact that we can calculate the projection operator P_C with
small computational errors. Of course, this is possible only when the set C is simple,
like a simplex or a half-space. In practice the situation is more complicated. In real
world applications the set C is an intersection of a finite family of simple closed
convex sets C_i, i = 1, . . . , m. To calculate the mapping P_C is impossible, and
instead of it one has to work with the projections P_{C_i}, i = 1, . . . , m onto the simple sets
C_1, . . . , C_m, considering the products Π_{i=1}^{m} P_{C_i} (the iterative algorithm), convex
combinations of P_{C_i}, i = 1, . . . , m (the Cimmino algorithm), and a more recent
and advanced dynamic string-averaging algorithm. The dynamic string-averaging
methods were first introduced by Y. Censor, T. Elfving, and G. T. Herman
in [23] for solving a convex feasibility problem, when a given collection of sets
is divided into blocks and the algorithms operate in such a manner that all the
blocks are processed in parallel. Iterative methods for solving common fixed point
problems are a special case of dynamic string-averaging methods with only one block.
Iterative methods and dynamic string-averaging methods are important tools for
solving convex feasibility problems and common fixed point problems in a Hilbert
space [5, 6, 8, 16, 17, 19–21, 25, 27, 28, 30, 45, 66, 76, 77, 84, 85, 94].
In Chapter 2 of the book we consider a minimization of a convex function on
a common fixed point set of a finite family of quasi-nonexpansive mappings in a
Hilbert space. Our goal is to obtain a good approximate solution of the problem in
the presence of computational errors. We use the Cimmino subgradient algorithm,
the iterative subgradient algorithm and the dynamic string-averaging subgradient
algorithm and show that each of them generates a good approximate solution, if
the sequence of computational errors is bounded from above by a small constant.
Moreover, if we know the computational errors of our algorithm, we find out what kind of
approximate solution can be obtained and how many iterates one needs for this.
Let (X, ·, · ) be a Hilbert space with an inner product ·, · which induces a
complete norm · .
Suppose that m is a natural number, c̄ ∈ (0, 1], P_i : X → X, i = 1, . . . , m, and that for
every integer i ∈ {1, . . . , m},

Fix(P_i) := {z ∈ X : P_i(z) = z} ≠ ∅

and that the inequality

||z − x||^2 ≥ ||z − P_i(x)||^2 + c̄ ||x − P_i(x)||^2

holds for every integer i ∈ {1, . . . , m}, every point x ∈ X and every point
z ∈ Fix(P_i). Set

F = ∩_{i=1}^{m} Fix(P_i).

For every positive number ε and every integer i ∈ {1, . . . , m} set

F_ε(P_i) = {x ∈ X : ||x − P_i(x)|| ≤ ε},

F̃_ε(P_i) = F_ε(P_i) + B(0, ε),

F_ε = ∩_{i=1}^{m} F_ε(P_i),

F̃_ε = ∩_{i=1}^{m} F̃_ε(P_i)

and

F̂_ε = F_ε + B(0, ε).

A point belonging to the set F is a solution of our common fixed point problem,
while a point which belongs to the set F̃_ε is its ε-approximate solution.
Let M_* > 0 satisfy

F ∩ B(0, M_*) ≠ ∅

and let f : X → R^1 be a convex continuous function. In Chapter 2 we consider the
minimization problem

f (x) → min, x ∈ F.

Assume that

inf(f, F ) = inf(f, F ∩ B(0, M∗ )).

Fix α > 0. Let us describe our first algorithm.


Cimmino Subgradient Algorithm
Initialization: select an arbitrary x0 ∈ X.
Iterative step: given a current iteration vector xk ∈ X calculate

lk ∈ ∂f (xk ),

pick w_{k+1} = (w_{k+1}(1), . . . , w_{k+1}(m)) ∈ R^m such that

w_{k+1}(i) ≥ 0, i = 1, . . . , m,

Σ_{i=1}^{m} w_{k+1}(i) = 1

and define the next iteration vector

x_{k+1} = Σ_{i=1}^{m} w_{k+1}(i) P_i(x_k − α l_k).
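A finite-dimensional sketch of the Cimmino subgradient iteration, with two halfspace projections playing the role of the quasi-nonexpansive mappings P_i and equal weights w_{k+1}(i) = 1/2; the objective, the sets, and the step size are illustrative assumptions.

```python
import numpy as np

def halfspace_map(a, b):
    """Projection onto {x : <a, x> <= b}; projections onto closed convex sets
    satisfy the quasi-nonexpansiveness inequality with c-bar = 1."""
    def P(x):
        v = a @ x - b
        return x if v <= 0 else x - v * a / (a @ a)
    return P

# Minimize f(x) = x_1 + x_2 over F = {x : x_1 >= 0} ∩ {x : x_2 >= 0}.
P1 = halfspace_map(np.array([-1.0, 0.0]), 0.0)   # enforces x_1 >= 0
P2 = halfspace_map(np.array([0.0, -1.0]), 0.0)   # enforces x_2 >= 0
subgrad = lambda x: np.array([1.0, 1.0])         # l_k in the subdifferential of f

x, alpha = np.array([3.0, -2.0]), 0.01
for k in range(1000):
    y = x - alpha * subgrad(x)
    x = 0.5 * P1(y) + 0.5 * P2(y)                # weights w_{k+1}(i) = 1/2
# with a constant step, x settles within O(alpha) of the solution (0, 0)
```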

In Chapter 2 this algorithm is studied in the presence of computational errors
and two convergence results are obtained. Fix

Δ ∈ (0, m^{−1}].

We suppose that δf ∈ (0, 1] is a computational error produced by our computer


system, when we calculate a subgradient of the objective function f while δp ∈
[0, 1] is a computational error produced by our computer system, when we calculate
the operators Pi , i = 1, . . . , m. Let α > 0 be a step size.
Cimmino Subgradient Algorithm with Computational Errors
Initialization: select an arbitrary x0 ∈ X.
Iterative step: given a current iteration vector xk ∈ X calculate

ξk ∈ ∂f (xk ) + B(0, δf ),

pick w_{k+1} = (w_{k+1}(1), . . . , w_{k+1}(m)) ∈ R^m such that

w_{k+1}(i) ≥ Δ, i = 1, . . . , m,

Σ_{i=1}^{m} w_{k+1}(i) = 1,

calculate

y_{k,i} ∈ B(P_i(x_k − α ξ_k), δ_p), i = 1, . . . , m

and the next iteration vector x_{k+1} ∈ X such that

||x_{k+1} − Σ_{i=1}^{m} w_{k+1}(i) y_{k,i}|| ≤ δ_p.

In this algorithm, as well as for the other algorithms considered in the book, we assume
that the step size does not depend on the iterative step number k. The same
analysis can be done when the step sizes depend on k. On the other hand, as was
shown in [93, 95], in the case of computational errors the best choice is step sizes
which do not depend on the iterative step numbers.
In the following result, obtained in Chapter 2 (Theorem 2.9), we assume that the
objective function f satisfies a coercivity growth condition.
Theorem 1.2 Let the function f be Lipschitz on bounded subsets of X,

lim_{||x||→∞} f(x) = ∞,

M ≥ 2M∗ + 8, L0 ≥ 1,

M1 > sup{|f (u)| : u ∈ B(0, M∗ + 4)} + 4,



f(u) > M_1 + 4 for all u ∈ X \ B(0, 2^{−1}M),

|f(z_1) − f(z_2)| ≤ L_0 ||z_1 − z_2|| for all z_1, z_2 ∈ B(0, 3M + 4),

δf , δp ∈ [0, 1], α > 0 satisfy

α ≤ L_0^{−2}, α ≥ δ_f(6M + L_0 + 2), α ≥ 2δ_p(6M + 2), (1.1)

T be a natural number and let

γ_T = max{α(L_0 + 1), (Δc̄)^{−1/2}(4M^2 T^{−1} + α(L_0 + 1)(12M + 4))^{1/2} + δ_p}.

Assume that {x_t}_{t=0}^{T} ⊂ X, {ξ_t}_{t=0}^{T−1} ⊂ X,

(wt (1), . . . , wt (m)) ∈ R m , t = 1, . . . , T ,


Σ_{i=1}^{m} w_t(i) = 1, t = 1, . . . , T,

wt (i) ≥ Δ, i = 1, . . . , m, t = 1, . . . , T ,

x0 ∈ B(0, M)

and that for all integers t ∈ {0, . . . , T − 1},

B(ξt , δf ) ∩ ∂f (xt ) = ∅,

yt,i ∈ B(Pi (xt − αξt ), δp ), i = 1, . . . , m,


||x_{t+1} − Σ_{i=1}^{m} w_{t+1}(i) y_{t,i}|| ≤ δ_p.

Then

||x_t|| ≤ 2M + M_*, t = 0, . . . , T

and


min{max{Δc̄ Σ_{i=1}^{m} ||x_t − αξ_t − y_{t,i}||^2 − α(L_0 + 1)(12M + 4),

2α(f(x_t) − inf(f, F)) − 4δ_p(6M + 3) − α^2 L_0^2 − 2αδ_f(6M + L_0 + 1)} :

t = 0, . . . , T − 1} ≤ 4M^2 T^{−1}.

Moreover, if t ∈ {0, . . . , T − 1} and


max{Δc̄ Σ_{i=1}^{m} ||x_t − αξ_t − y_{t,i}||^2 − α(L_0 + 1)(12M + 4),

2α(f(x_t) − inf(f, F)) − 4δ_p(6M + 3) − α^2 L_0^2 − 2αδ_f(6M + L_0 + 1)} ≤ 4M^2 T^{−1}, (1.2)

then

f(x_t) ≤ inf(f, F) + 2M^2(Tα)^{−1} + 2α^{−1}δ_p(6M + 3) + 2^{−1}αL_0^2 + δ_f(6M + L_0 + 3) (1.3)

and

x_t ∈ F̂_{γ_T}.

In Chapter 2 we also obtain an extension of this result (Theorem 2.10) when,
instead of assuming that f satisfies the growth condition, we suppose that there
exists r_0 ∈ (0, 1] such that the set F_{r_0} is bounded.
In Theorem 1.2 the computational errors δ_f, δ_p are fixed. Assume that they are
positive. Let us choose α and T. First, we choose α in order to minimize the right-hand
side of (1.3). Since T can be arbitrarily large, we need to minimize the function

2α^{−1} δ_p(6M + 3) + 2^{−1} α L_0^2, α > 0.

Its minimizer is

α = 2 L_0^{−1} (δ_p(6M + 3))^{1/2}.

Since α satisfies (1.1) we obtain the following restrictions on δf , δp :

δ_f ≤ 2 L_0^{−1} (δ_p(6M + 3))^{1/2} (6M + L_0 + 2)^{−1},

δ_p ≤ 4^{−1} L_0^{−2} (6M + 3)^{−1}.

In this case

γ_T = max{2 L_0^{−1} (δ_p(6M + 3))^{1/2} (L_0 + 1),

(Δc̄)^{−1/2}(4M^2 T^{−1} + 2 L_0^{−1} (δ_p(6M + 3))^{1/2} (L_0 + 1)(12M + 4))^{1/2} + δ_p}.

We choose T of the same order as δ_p^{−1}, for example, T = δ_p^{−1}. In this case,
in view of Theorem 1.2, there exists t ∈ {0, . . . , T − 1} such that

f(x_t) ≤ inf(f, F) + c_1 δ_p^{1/2} + δ_f(6M + L_0 + 3)

and

x_t ∈ F̂_{c_2 δ_p^{1/4}},

where c_1, c_2 are positive constants which depend on M, L_0, Δ, c̄.


Let us explain how we can obtain t satisfying (1.2). Set


E = {t ∈ {0, . . . , T − 1} : Δc̄ Σ_{i=1}^{m} ||x_t − αξ_t − y_{t,i}||^2

≤ α(L_0 + 1)(12M + 4) + 4M^2 T^{−1}}

and find t_* ∈ E such that f(x_{t_*}) ≤ f(x_t) for all t ∈ E. This t_* satisfies (1.2).
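This selection rule is straightforward to implement. In the sketch below `objective`, `residual`, and `threshold` stand for the stored values f(x_t), the quantities Δc̄ Σ_{i=1}^{m} ||x_t − αξ_t − y_{t,i}||^2, and the right-hand side of the inequality defining E; the numbers in the illustration are made up.

```python
def select_iterate(objective, residual, threshold):
    """Return t* minimizing f(x_t) over E = {t : residual[t] <= threshold}.

    objective[t] stands for f(x_t); residual[t] for the weighted sum of
    squared displacements; threshold for the bound defining E.
    """
    E = [t for t in range(len(residual)) if residual[t] <= threshold]
    return min(E, key=lambda t: objective[t])

# Tiny illustration with made-up numbers: t = 0 is excluded (residual too
# large); among t in {1, 2} the objective is smallest at t = 1.
t_star = select_iterate(objective=[5.0, 1.0, 2.0],
                        residual=[0.9, 0.5, 0.1],
                        threshold=0.6)
# t_star == 1
```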
In Chapter 2 we also establish analogs of Theorem 1.2 for the iterative subgradi-
ent algorithm and the dynamic string-averaging subgradient algorithm.

1.3 Proximal Point Subgradient Algorithm

In Chapter 3 we consider the minimization of a convex function on an intersection of
two sets in a Hilbert space. One of them is a common fixed point set of a finite family
of quasi-nonexpansive mappings, while the second one is a common zero point
set of a finite family of maximal monotone operators. Our goal is to obtain a good
approximate solution of the problem in the presence of computational errors. We
use the Cimmino proximal point subgradient algorithm, the iterative proximal point
subgradient algorithm and the dynamic string-averaging proximal point subgradient
algorithm and show that each of them generates a good approximate solution, if
the sequence of computational errors is bounded from above by a small constant.
Moreover, if we know the computational errors of our algorithm, we find out what kind of
approximate solution can be obtained and how many iterates one needs for this.
Let (X, ·, · ) be a Hilbert space with an inner product ·, · which induces a
complete norm · .
A multifunction T : X → 2^X is called a monotone operator if and only if

⟨z − z′, w − w′⟩ ≥ 0 for all z, z′, w, w′ ∈ X

such that w ∈ T(z) and w′ ∈ T(z′).

It is called maximal monotone if, in addition, the graph

{(z, w) ∈ X × X : w ∈ T (z)}

is not properly contained in the graph of any other monotone operator T′ : X → 2^X.


A fundamental problem consists in determining an element z such that 0 ∈ T (z). For
example, if T is the subdifferential ∂f of a lower semicontinuous convex function
f : X → (−∞, ∞], which is not identically infinity, then T is maximal monotone
(see [60, 63]), and the relation 0 ∈ T (z) means that z is a minimizer of f .
Let T : X → 2X be a maximal monotone operator. The proximal point algorithm
generates, for any given sequence of positive real numbers and any starting point in
the space, a sequence of points and the goal is to show the convergence of this
sequence. Note that in a general infinite-dimensional Hilbert space this convergence
is usually weak. The proximal algorithm for solving the inclusion 0 ∈ T (z) is based
on the fact established by Minty [59], who showed that, for each z ∈ X and each
c > 0, there is a unique u ∈ X such that

z ∈ (I + cT )(u),

where I : X → X is the identity operator (I x = x for all x ∈ X).


The operator

P_{c,T} := (I + cT)^{−1}

is therefore single-valued from all of X onto X (where c is any positive number). It
is also nonexpansive:

||P_{c,T}(z) − P_{c,T}(z′)|| ≤ ||z − z′|| for all z, z′ ∈ X

and

P_{c,T}(z) = z if and only if 0 ∈ T(z).

Following the terminology of Moreau [63] Pc,T is called the proximal mapping
associated with cT .
The proximal point algorithm generates, for any given sequence {c_k}_{k=0}^{∞} of
positive real numbers and any starting point z_0 ∈ X, a sequence {z_k}_{k=0}^{∞} ⊂ X,
where

z_{k+1} := P_{c_k,T}(z_k), k = 0, 1, . . . .
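For example, for T = ∂f with f(z) = 2^{−1} z^2 on X = R^1 the resolvent has the closed form P_{c,T}(z) = (I + cT)^{−1}(z) = z/(1 + c), and the iteration can be run directly; the choice of f, z_0 and c_k below is an illustrative assumption.

```python
def proximal_point(z0, cs):
    """z_{k+1} = P_{c_k,T}(z_k) for T = df, f(z) = z**2 / 2, where the
    resolvent is (I + cT)^(-1)(z) = z / (1 + c)."""
    z = z0
    for c in cs:
        z = z / (1.0 + c)   # the unique u with z in (I + cT)(u), i.e. z = u + c*u
    return z

z = proximal_point(10.0, [1.0] * 30)
# z converges to 0, the unique zero of T (the minimizer of f)
```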

It is not difficult to see that the set

graph(T) := {(x, w) ∈ X × X : w ∈ T(x)}

is closed in the norm topology of X × X.


Set

F (T ) = {z ∈ X : 0 ∈ T (z)}.

The proximal point method is an important tool in solving optimization problems
[32, 34, 35, 38, 43, 55, 70, 87, 88]. It is also used for solving variational inequalities
with monotone operators [9, 18, 41, 42, 71, 80, 89, 94], which is an important topic
of nonlinear analysis and optimization [4, 10, 13]. Usually, algorithms considered
in the literature generate sequences which converge weakly to an element of F(T).
Let L1 be a finite set of maximal monotone operators T : X → 2X and L2 be a
finite set of mappings T : X → X. We suppose that the set L1 ∪ L2 is nonempty.
(Note that one of the sets L1 or L2 may be empty.)
Let c̄ ∈ (0, 1] and let c̄ = 1, if L2 = ∅.
We suppose that

F(T) = {z ∈ X : 0 ∈ T(z)} ≠ ∅ for any T ∈ L1

and that for every mapping T ∈ L2,

Fix(T) := {z ∈ X : T(z) = z} ≠ ∅,

||z − x||^2 ≥ ||z − T(x)||^2 + c̄ ||x − T(x)||^2

for all x ∈ X and all z ∈ Fix(T).

Let M_* > 0,

F := (∩_{T∈L1} F(T)) ∩ (∩_{Q∈L2} Fix(Q)) ≠ ∅

and

F ∩ B(0, M_*) ≠ ∅.

Let ε > 0. For every monotone operator T ∈ L1 define

F_ε(T) = {x ∈ X : T(x) ∩ B(0, ε) ≠ ∅}

and for every mapping T ∈ L2 set

Fix_ε(T) = {x ∈ X : ||T(x) − x|| ≤ ε}.

Define

F_ε = (∩_{T∈L1} F_ε(T)) ∩ (∩_{Q∈L2} Fix_ε(Q)),

F̃_ε = (∩_{T∈L1} {x ∈ X : d(x, F_ε(T)) ≤ ε}) ∩ (∩_{Q∈L2} {x ∈ X : d(x, Fix_ε(Q)) ≤ ε}).

Let f : X → R^1 be a convex continuous function. We consider the minimization
problem

f(x) → min, x ∈ F.

Assume that

inf(f, F ) = inf(f, F ∩ B(0, M∗ )).

Let λ̄ > 0 and let λ̄ = ∞ and λ̄−1 = 0, if L1 = ∅.


Recall that the sum over an empty set is zero.
Fix α > 0.
Let us describe our algorithm.
Cimmino Proximal Point Subgradient Algorithm
Initialization: select an arbitrary x0 ∈ X.
Iterative step: given a current iteration vector x_k ∈ X pick c(T) ≥ λ̄, T ∈ L1,
and w : L1 ∪ L2 → (0, ∞) such that

Σ{w(S) : S ∈ L1 ∪ L2} = 1,

calculate

l_k ∈ ∂f(x_k)

and define the next iteration vector

x_{k+1} = Σ_{S∈L2} w(S) S(x_k − αl_k) + Σ_{S∈L1} w(S) P_{c(S),S}(x_k − αl_k).

In Chapter 3 this algorithm is studied in the presence of computational errors.
Fix

Δ ∈ (0, Card(L1 ∪ L2)^{−1}].

We suppose that δ_f ∈ (0, 1] is a computational error produced by our computer
system when we calculate a subgradient of the objective function f, while δ_p ∈
[0, 1] is a computational error produced by our computer system when we calculate
the operators P_{c,S}, S ∈ L1, c ≥ λ̄, and the mappings S ∈ L2. Let α > 0 be a step size.
Cimmino Proximal Point Subgradient Algorithm with Computational Errors

Initialization: select an arbitrary x_0 ∈ X.
Iterative step: given a current iteration vector x_k ∈ X pick c(T) ≥ λ̄, T ∈ L1,
and w : L1 ∪ L2 → [Δ, ∞) such that

Σ{w(S) : S ∈ L1 ∪ L2} = 1,

calculate

ξ_k ∈ ∂f(x_k) + B(0, δ_f)

and

y_{k,S} ∈ B(S(x_k − αξ_k), δ_p), S ∈ L2,

y_{k,S} ∈ B(P_{c(S),S}(x_k − αξ_k), δ_p), S ∈ L1

and calculate the next iteration vector x_{k+1} ∈ X satisfying

||x_{k+1} − Σ_{S∈L1∪L2} w(S) y_{k,S}|| ≤ δ_p.

The following result is established in Chapter 3 (Theorem 3.6).


Theorem 1.3 Let the function f be Lipschitz on bounded subsets of X,

lim_{||x||→∞} f(x) = ∞,

M ≥ 2M∗ + 6, L0 ≥ 1,

M1 > sup{|f (u)| : u ∈ B(0, M∗ + 4)} + 4,

f(u) > M_1 + 4 for all u ∈ X \ B(0, 2^{−1}M),

|f(z_1) − f(z_2)| ≤ L_0 ||z_1 − z_2|| for all z_1, z_2 ∈ B(0, 3M + 4),

δf , δp ∈ [0, 1], α > 0 satisfy

α ≤ min{L_0^{−2}, (L_0 + 1)^{−1}}, α ≥ 2δ_p(6M + 3),

δ_f ≤ (6M + L_0 + 1)^{−1},

T be a natural number and let

γ_T = (4M^2 T^{−1} + α(L_0 + 1)(12M + 1) + δ_p(12M + 13)(Δc̄)^{−1})^{1/2} + δ_p.



Assume that for all t = 1, . . . , T ,

w_t : L1 ∪ L2 → [Δ, ∞),

Σ{w_t(S) : S ∈ L1 ∪ L2} = 1,

c_t(T) ≥ λ̄, T ∈ L1,

{x_t}_{t=0}^{T} ⊂ X, {ξ_t}_{t=0}^{T−1} ⊂ X,

x0 ∈ B(0, M)

and that for all integers t ∈ {0, . . . , T − 1},

B(ξt , δf ) ∩ ∂f (xt ) = ∅,

yt,S ∈ B(S(xt − αξt ), δp ), S ∈ L2 ,

y_{t,S} ∈ B(P_{c_t(S),S}(x_t − αξ_t), δ_p), S ∈ L1,



||x_{t+1} − Σ_{S∈L1∪L2} w_{t+1}(S) y_{t,S}|| ≤ δ_p.

Then

min{max{Δc̄ Σ_{S∈L1∪L2} ||x_t − αξ_t − y_{t,S}||^2 − α(L_0 + 1)(12M + 1) − δ_p(12M + 13),

2α(f(x_t) − inf(f, F)) − δ_p(6M + 3) − 2^{−1}α^2 L_0^2 − αδ_f(6M + L_0 + 1)} :

t = 0, . . . , T − 1} ≤ 4M^2 T^{−1}.

Moreover, if t ∈ {0, . . . , T − 1} and



max{Δc̄ Σ_{S∈L1∪L2} ||x_t − αξ_t − y_{t,S}||^2 − α(L_0 + 1)(12M + 1) − δ_p(12M + 13),

2α(f(x_t) − inf(f, F)) − δ_p(6M + 3) − 2^{−1}α^2 L_0^2 − αδ_f(6M + L_0 + 1)} ≤ 4M^2 T^{−1},

then

f(x_t) ≤ inf(f, F) + 2M^2(Tα)^{−1} + α^{−1}δ_p(3M + 2) + 4^{−1}αL_0^2 + δ_f(6M + L_0 + 1)

and

x_t ∈ F̃_{max{α(L_0+1)+γ_T, λ̄^{−1}γ_T}}.

In Chapter 3 we also obtain an extension of this result (Theorem 3.7) when,
instead of assuming that f satisfies the growth condition, we suppose that there
exists r_0 ∈ (0, 1] such that the set F̃_{r_0} is bounded.
As in the case of Theorem 1.2 we choose α, T and an approximate solution of our
problem after T iterations. In Chapter 3 we also establish analogs of Theorem 1.3 for
the iterative proximal point subgradient algorithm and the dynamic string-averaging
proximal point subgradient algorithm.

1.4 Cimmino Subgradient Projection Algorithm

In Chapter 4 we consider the minimization of a convex function on the solution set
of a convex feasibility problem in a general Hilbert space using the Cimmino
subgradient projection algorithm. Our goal is to obtain a good approximate solution
of the problem in the presence of computational errors. We show that the algorithm
generates a good approximate solution, if the sequence of computational errors is
bounded from above by a small constant. Moreover, if we know the computational
errors of our algorithm, we find out what kind of approximate solution can be obtained
and how many iterates one needs for this.
Let (X, ·, · ) be a Hilbert space with an inner product ·, · which induces a
complete norm · .
We recall the following useful facts on convex functions.
Let f : X → R^1 be a continuous convex function such that

{x ∈ X : f(x) ≤ 0} ≠ ∅.

Let y_0 ∈ X. For every l ∈ ∂f(y_0) it is easy to see that

{x ∈ X : f(x) ≤ 0} ⊂ {x ∈ X : f(y_0) + ⟨l, x − y_0⟩ ≤ 0}.

It is well-known that the following lemma holds (see Lemma 11.1 of [94]).

Lemma 1.4 Let y_0 ∈ X, f(y_0) > 0, l ∈ ∂f(y_0) and let

D = {x ∈ X : f(y_0) + ⟨l, x − y_0⟩ ≤ 0}.

Then l ≠ 0 and

P_D(y_0) = y_0 − f(y_0) ||l||^{−2} l.
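Lemma 1.4 can be checked numerically: D is a halfspace, and the formula places the projected point exactly on its boundary hyperplane when f(y_0) > 0. The affine function f and the point y_0 below are arbitrary illustrations.

```python
import numpy as np

def halfspace_point(y0, f_y0, l):
    """P_D(y0) = y0 - f(y0) * ||l||**(-2) * l for the halfspace D of Lemma 1.4."""
    return y0 - f_y0 * l / (l @ l)

# Illustration: f(x) = <l, x> - 1 with l = (3, 4), so the subgradient at y0 is l.
l = np.array([3.0, 4.0])
y0 = np.array([1.0, 1.0])            # f(y0) = 3 + 4 - 1 = 6 > 0
p = halfspace_point(y0, 6.0, l)
residual = 6.0 + l @ (p - y0)        # f(y0) + <l, p - y0>; zero on the boundary of D
# residual == 0 and ||y0 - p|| = f(y0) / ||l|| = 6 / 5
```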

Let us now describe the convex feasibility problem and the Cimmino subgradient
projection algorithm which is studied in Chapter 4.
Let m be a natural number and fi : X → R 1 , i = 1, . . . , m be convex continuous
functions.
For every integer i = 1, . . . , m put

C_i = {x ∈ X : f_i(x) ≤ 0},

C = ∩_{i=1}^{m} C_i = ∩_{i=1}^{m} {x ∈ X : f_i(x) ≤ 0}.

We suppose that

C ≠ ∅.

A point x ∈ C is called a solution of our feasibility problem. For a given positive
number ε, a point x ∈ X is called an ε-approximate solution of the feasibility
problem if

f_i(x) ≤ ε for all i = 1, . . . , m.

Let M_* > 0 and

C ∩ B(0, M_*) ≠ ∅.

Let f : X → R^1 be a continuous function. We consider the minimization
problem

f(x) → min, x ∈ C.

Assume that

inf(f, C) = inf(f, C ∩ B(0, M∗ )).

Fix

Δ̄ ∈ (0, m^{−1}].

Let us describe our algorithm.


Cimmino Subgradient Projection Algorithm Fix α > 0.
Initialization: select an arbitrary x0 ∈ X.
Iterative step: given a current iteration vector xk ∈ X calculate

lk ∈ ∂f (xk ),

pick wk+1 = (wk+1 (1), . . . , wk+1 (m)) ∈ R m such that

wk+1 (i) ≥ Δ̄, i = 1, . . . , m,


Σ_{i=1}^{m} w_{k+1}(i) = 1,

for each i ∈ {1, . . . , m},

if f_i(x_k − αl_k) ≤ 0, then x_{k,i} = x_k − αl_k, l_{k,i} = 0,

and if f_i(x_k − αl_k) > 0, then

l_{k,i} ∈ ∂f_i(x_k − αl_k),

x_{k,i} = x_k − αl_k − f_i(x_k − αl_k) ||l_{k,i}||^{−2} l_{k,i}

and define the next iteration vector

x_{k+1} = Σ_{i=1}^{m} w_{k+1}(i) x_{k,i}.
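A small sketch of this scheme in R^2, using affine constraint functions f_i so that the halfspace projections of Lemma 1.4 are exact; the smooth objective, the constraints, the weights and the step size are all illustrative assumptions.

```python
import numpy as np

def cimmino_step(x, grad_f, fis, grads, w, alpha):
    """One error-free Cimmino subgradient projection step."""
    y = x - alpha * grad_f(x)                 # subgradient step on the objective
    pts = []
    for fi, gi in zip(fis, grads):
        v = fi(y)
        if v <= 0:
            pts.append(y)                     # y already satisfies constraint i
        else:
            g = gi(y)
            pts.append(y - v * g / (g @ g))   # halfspace projection of Lemma 1.4
    return sum(wi * p for wi, p in zip(w, pts))

# Minimize f(x) = ||x - (3, 0)||^2 / 2 subject to f_1(x) = x_1 - 1 <= 0
# and f_2(x) = -x_2 <= 0; the solution is (1, 0).
grad_f = lambda x: x - np.array([3.0, 0.0])
fis = [lambda x: x[0] - 1.0, lambda x: -x[1]]
grads = [lambda x: np.array([1.0, 0.0]), lambda x: np.array([0.0, -1.0])]

x = np.array([0.0, -1.0])
for k in range(2000):
    x = cimmino_step(x, grad_f, fis, grads, w=(0.5, 0.5), alpha=0.01)
# x lands within O(alpha) of the solution (1, 0)
```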

In Chapter 4 this algorithm is studied in the presence of computational errors.


Cimmino Subgradient Projection Algorithm with Computational Errors
We suppose that δf ∈ (0, 1] is a computational error produced by our computer
system, when we calculate a subgradient of the objective function f , δC ∈ [0, 1]
is a computational error produced by our computer system, when we calculate
subgradients of the constraint functions fi , i = 1, . . . , m and δ̄C is a computational
error produced by our computer system, when we calculate auxiliary projection
operators. Let α > 0 be a step size and Δ ∈ (0, 1].
Initialization: select an arbitrary x0 ∈ X.
Iterative step: given a current iteration vector xt ∈ X calculate

lt ∈ ∂f (xt ) + B(0, δf ),

pick wt+1 = (wt+1 (1), . . . , wt+1 (m)) ∈ R m such that


w_{t+1}(i) ≥ Δ̄, i = 1, . . . , m,

Σ_{i=1}^{m} w_{t+1}(i) = 1,

for each i ∈ {1, . . . , m},

if f_i(x_t − αl_t) ≤ Δ, then y_{t+1,i} = x_t − αl_t, l_{t,i} = 0,

if f_i(x_t − αl_t) > Δ, then we calculate

l_{t,i} ∈ ∂f_i(x_t − αl_t) + B(0, δ_C)

(this implies that l_{t,i} ≠ 0),

y_{t+1,i} ∈ B(x_t − αl_t − f_i(x_t − αl_t) ||l_{t,i}||^{−2} l_{t,i}, δ̄_C)

and the next iteration vector

x_{t+1} ∈ B(Σ_{i=1}^{m} w_{t+1}(i) y_{t+1,i}, δ̄_C).

Let Δ ∈ (0, 1], δ_f, δ_C, δ̄_C ∈ [0, 1], α ∈ (0, 1], M̃ ≥ M_*, M_0 ≥ max{1, M̃},
M_1 > 2, L_0 ≥ 1,

f_i(B(0, 3M̃ + 4)) ⊂ [−M_0, M_0], i = 1, . . . , m,

|f_i(u) − f_i(v)| ≤ (M_1 − 2) ||u − v||

for all u, v ∈ B(0, 3M̃ + 2) and all i = 1, . . . , m,

|f(u) − f(v)| ≤ L_0 ||u − v|| for all u, v ∈ B(0, 3M̃ + 4),

α ≤ 2^{−1}(L_0 + 1)^{−1}(6M̃ + 5)^{−1}, δ_C ≤ 32^{−1} Δ^2 (6M̃ + 5)^{−2} (M_0 + 5)^{−1},

α ≤ min{8^{−1}(L_0 + 1)^{−1} Δ M_1^{−1}, L_0^{−2}},

δ̄_C ≤ 8^{−1} Δ M_1^{−1}, δ_C ≤ 2^{−7} Δ^3 M_1^{−1} (6M̃ + 1)^{−2},

α ≤ 96^{−1}(L_0 + 1)^{−1}(6M̃ + 5)^{−1} Δ̄ Δ^2 M_1^{−2},

δ̄_C < 32^{−1} Δ̄ Δ^2 M_1^{−2} (6M̃ + 5)^{−1},

δ_f(6M̃ + L_0 + 1) ≤ 1, δ_C < 2^{−9} Δ^4 Δ̄ M_1^{−2} (6M̃ + 5)^{−3}.

In Chapter 4 we obtain the following result (Theorem 4.6).


Theorem 1.5 Let M∗,0 > 0,

|f (u)| ≤ M∗,0 for all u ∈ B(0, M∗ ),

M̃ ≥ 2M∗ + 2,

α ≥ max{64 δ_C (3M̃ + 1) Δ^{−2} (6M̃ + 1)^2, δ̄_C(4M̃ + 5)},

f(u) > M_{*,0} + 8 for all u ∈ X \ B(0, 2^{−1}M̃),

T be a natural number satisfying

T ≥ 128 M̃^2 Δ̄^{−1} Δ^{−2} M_1^2,

{x_t}_{t=0}^{T} ⊂ X, {l_t}_{t=0}^{T−1} ⊂ X, l_{t,i} ∈ X, t = 0, . . . , T − 1, i = 1, . . . , m,

||x_0|| ≤ M̃,

wt = (wt (1), . . . , wt (m)) ∈ R m , t = 1, . . . , T ,

wt (i) ≥ Δ̄, i = 1, . . . , m, t = 1, . . . , T ,


Σ_{i=1}^{m} w_t(i) = 1, t = 1, . . . , T

and yt,i ∈ X, t = 1, . . . , T , i = 1, . . . , m.
Assume that for all integers t ∈ {0, . . . , T − 1} and all integers i ∈ {1, . . . , m},

B(lt , δf ) ∩ ∂f (xt ) = ∅,

if fi (xt − αlt ) ≤ Δ, then yt+1,i = xt − αlt , lt,i = 0,

if fi (xt − αlt ) > Δ, then

B(lt,i , δC ) ∩ ∂fi (xt − αlt ) = ∅

(this implies that l_{t,i} ≠ 0),


y_{t+1,i} ∈ B(x_t − αl_t − f_i(x_t − αl_t) ||l_{t,i}||^{−2} l_{t,i}, δ̄_C)

and that


||x_{t+1} − Σ_{i=1}^{m} w_{t+1}(i) y_{t+1,i}|| ≤ δ̄_C.

Then

||x_t|| ≤ 3M̃, t = 0, . . . , T,

min{max{2α(f(x_t) − inf(f, C)) − 2α^2 L_0^2 − 2αδ_f(6M̃ + L_0 + 1)

− 64(6M̃ + 2)δ_C Δ^{−2}(6M̃ + 1)^2 − δ̄_C(4M̃ + 5),

Δ̄ Σ_{i=1}^{m} ||x_t − y_{t+1,i}||^2 − 3α(L_0 + 1)(6M̃ + 5)

− δ̄_C(6M̃ + 5) − 16δ_C Δ^{−2}(6M̃ + 5)^3} : t = 0, . . . , T − 1} ≤ 4M̃^2 T^{−1}.

Moreover, if t ∈ {0, . . . , T − 1} and

max{2α(f(x_t) − inf(f, C)) − 2α^2 L_0^2 − 2αδ_f(6M̃ + L_0 + 1)

− 64(6M̃ + 2)δ_C Δ^{−2}(6M̃ + 1)^2 − δ̄_C(4M̃ + 5),

Δ̄ Σ_{i=1}^{m} ||x_t − y_{t+1,i}||^2 − 3α(L_0 + 1)(6M̃ + 5)

− δ̄_C(6M̃ + 5) − 16δ_C Δ^{−2}(6M̃ + 5)^3} ≤ 4M̃^2 T^{−1},

then

f(x_t) ≤ inf(f, C) + 2M̃^2(Tα)^{−1} + αL_0^2 + δ_f(6M̃ + L_0 + 1)

+ 32(6M̃ + 2)δ_C Δ^{−2}(6M̃ + 1)^2 + α^{−1}δ̄_C(4M̃ + 5),

f_i(x_t) ≤ Δ + M_2 α(L_0 + 1), i = 1, . . . , m.

In Chapter 4 we also obtain an extension of this result (Theorem 4.7) when,
instead of assuming that f satisfies the growth condition, we suppose that there
exists r_0 ∈ (0, 1] such that the set

{x ∈ X : f_i(x) ≤ r_0, i = 1, . . . , m}

is bounded. Chapter 4 also contains Theorems 4.9 and 4.10 which are extensions
of Theorems 4.6 and 4.7 obtained for a modification of the Cimmino subgradient
projection algorithm.
As in the case of Theorem 1.2 we choose α, T and an approximate solution of
our problem after T iterations.
In Chapters 5 and 6 we continue to study the optimization problem considered
in Chapter 4. In Chapter 5 we analyze the iterative subgradient projection algorithm
while in Chapter 6 the dynamic string-averaging subgradient projection algorithm
is used. The analogs of the theorem above are established for these two algorithms.
In Chapters 7 and 8 we study minimization problems with smooth objective
functions using a fixed point gradient projection algorithm and a Cimmino gradient
projection algorithm respectively.

1.5 Examples

In this section we consider several examples arising in real-world applications.
They belong to the class of problems considered in the book and all our results can
be applied to them.
Example 1.6 In [22] a problem of computerized tomography image
reconstruction was studied, posed as a constrained minimization problem aiming at finding a
constraint-compatible solution that has a reduced value of the total variation of the
reconstructed image.
The fully-discretized model in the series expansion approach to the image
reconstruction problem of x-ray computerized tomography (CT) is formulated in
the following manner. A Cartesian grid of square picture elements, called pixels, is
introduced into the region of interest so that it covers the whole picture that has to
be reconstructed. The pixels are numbered in some agreed manner, say from 1 (top
left corner pixel) to J (bottom right corner pixel). The x-ray attenuation function is
assumed to take a constant value xj throughout the j th pixel, for j = 1, 2, ..., J .
Sources and detectors are assumed to be points and the rays between them are
assumed to be lines. Further, assume that the length of intersection of the ith ray
with the j th pixel, denoted by aji , for i = 1, 2, ..., I , j = 1, 2, ..., J , represents the
weight of the contribution of the j th pixel to the total attenuation along the ith ray.
The physical measurement of the total attenuation along the ith ray, denoted by
bi , represents the line integral of the unknown attenuation function along the path
of the ray. Therefore, in this fully-discretized model, the line integral turns out to be
a finite sum and the model is described by a system of linear equations

Σ_{j=1}^{J} x_j a_{ji} = b_i, i = 1, . . . , I.

In matrix notation we rewrite this system of linear equations as

Ax = b,

where b ∈ R^I is the measurement vector, x ∈ R^J is the image vector, and the I × J
matrix A = (a_{ji}) is the projection matrix.
In [22] the image reconstruction problem is represented by the optimization
problem

minimize {f (x) : Ax = b and 0 ≤ x ≤ 1},

where the function f (x) is the total variation (TV) of the image vector x.
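For a one-dimensional image the total variation objective is simply the sum of absolute differences of neighboring pixel values; the tiny sketch below computes it for a made-up 4-pixel image.

```python
def total_variation_1d(x):
    """Discrete total variation of a 1-D image vector: sum of |x_{j+1} - x_j|."""
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

tv = total_variation_1d([0.0, 1.0, 1.0, 0.0])
# tv == |1-0| + |1-1| + |0-1| == 2
```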
Example 1.7 In [39] the CT reconstruction problem is formulated as a constrained
optimization problem of the following kind:

Find x* = argmin f(x) subject to ||y − Ax||_2 ≤ ε,

where the positive constant ε, the vector y and the matrix A are given and ||·||_2 is
the Euclidean norm. As was pointed out in [39], there are many possible choices
for the regularizing convex function f. A popular option is the total variation.
Example 1.8 In [26] string-averaging algorithmic structures are used for handling a
family of operators in situations where the algorithm needs to employ the operators
in a specific order. String-averaging allows one to use strings of indices taken from the
index set of all operators, to apply the operators along these strings, and to combine
their end-points in some agreed manner to yield the next iterate of the algorithm.
One considers a Hilbert space X with the inner product ⟨·,·⟩ which induces a
complete norm ||·|| and a finite family of mappings T_i : X → X, i = 1, . . . , m. For
a given u ∈ X one studies the problem

||u − x|| → min, x ∈ D,

where D is the set of common fixed points of the mappings T_i, i = 1, . . . , m.


Example 1.9 In [29] the development of radiation therapy treatment planning
is considered from a mathematical point of view as the following optimization
problem.
Let f (d, x) be a given objective convex function and let cm (d, x) be given convex
constraint functions, for m = 1, . . . , M. Let aij be given for j = 1, . . . , J and
i = 1, . . . , I , and let lm and um be lower and upper bounds for the constraints cm ,
for m = 1, . . . , M, respectively. The problem is to find a radiation intensity vector
x ∗ ∈ R I and a corresponding dose vector d ∗ ∈ R J that solve the problem:
f(d, x) → min,

such that

⟨a^j, x⟩ = d_j, j = 1, . . . , J,

l_m ≤ c_m(d, x) ≤ u_m, m = 1, . . . , M,

x_i ≥ 0, i = 1, . . . , I.

In practice, the objective function f and the constraints are typically chosen to
be convex so that the subgradient projection algorithm is applicable. Traditionally,
a widely used objective function is the 2-norm of the difference of the dose d and a
desired dose b.
Example 1.10 In [49] total variation (TV) minimization for semi-supervised
learning from partially-labeled network-structured data is analyzed. The approach
exploits an intrinsic duality between TV minimization and network flow problems.
Consider a dataset of N data points that can be represented as supported at the
nodes of a simple undirected weighted graph G = (V , E, W ), where V are nodes,
E are edges and W are edge weights. It is assumed that labels xi are known at only a
few nodes i ∈ V of a (small) training set M ⊂ V . The goal is to learn the unknown
labels xi for all data points i ∈ V \ M outside the training set. This learning problem
is formulated as the optimization problem

Σ_{(i,j)∈E} W_{i,j} |x̃_j − x̃_i| → min

subject to x̃ ∈ R^N, x̃_i = x_i, i ∈ M.

Example 1.11 The following problem of adaptive filtering and equalization is
considered in [58]:

⟨w, f⟩ → max

subject to |⟨w, g^{(i)}⟩| ≤ 1, i = 1, . . . , m.

Here f is a direction associated with the desired signal, while the g^{(i)} are directions
associated with interference or noise signals.
Example 1.12 This is an example of a resource allocation or resource sharing
problem considered in [58], where the resource to be allocated is the bandwidth over
each of a set of links. Consider a network with m edges or links, labeled 1, . . . , m,
and n flows, labeled 1, . . . , n. Each flow has an associated non-negative flow rate
f_j; each edge or link has an associated positive capacity c_i. Each flow passes over
a fixed set of links (its route); the total traffic t_i on link i is the sum of the flow rates
over all flows that pass through link i. The flow routes are described by a routing
matrix A ∈ {0, 1}^{m×n} defined as A_{ij} = 1 if flow j passes through link i and A_{ij} = 0
otherwise. Thus, the vector of link traffic, t ∈ R^m, is given by t = Af. The link
capacity constraints can be expressed as Af ≤ c. With a given flow vector f, we
associate a total utility

U (f ) = U1 (f1 ) + · · · + Un (fn ),

where Ui is the utility for flow i, which we assume is concave and nondecreasing.
We will choose flow rates that maximize total utility, in other words, that are
solutions of the problem

U (f ) → max

subject to Af ≤ c, f ≥ 0.

This is the network utility maximization problem.
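The quantities in this example are easy to compute directly. The sketch below uses a hypothetical two-link, three-flow network and logarithmic utilities Uj (fj ) = log(1 + fj ), a common concrete concave nondecreasing choice; all data are illustrative assumptions, not from the text:

```python
import math

# Sketch (toy data): link traffic t = A f and total utility U(f) for the
# network utility maximization problem.

A = [[1, 1, 0],       # link 1 carries flows 1 and 2
     [0, 1, 1]]       # link 2 carries flows 2 and 3
c = [2.0, 2.0]        # link capacities
f = [1.0, 0.5, 1.0]   # candidate flow rates

t = [sum(A[i][j] * f[j] for j in range(3)) for i in range(2)]   # t = A f
feasible = all(t[i] <= c[i] for i in range(2)) and all(fj >= 0 for fj in f)
U = sum(math.log(1 + fj) for fj in f)   # assumed utilities U_j(f) = log(1 + f)
print(t, feasible, U)
```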


Example 1.13 Many important engineering problems can be represented in the
form of the following quadratically constrained quadratic program [56]:

⟨x, P0 x⟩ + ⟨q0 , x⟩ → min

subject to ⟨x, Pi x⟩ + ⟨qi , x⟩ + ri ≤ 0, i = 1, . . . , m,

where x, qi ∈ R^k , ri ∈ R^1 and Pi is a symmetric positive semi-definite matrix of
size k × k.
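A minimal sketch of evaluating such a program on a hypothetical two-dimensional instance (the matrices and vectors below are illustrative, not from [56]):

```python
# Sketch (toy instance): evaluating the QCQP objective <x, P0 x> + <q0, x>
# and one constraint <x, P1 x> + <q1, x> + r1 <= 0.

def quad(P, q, r, x):
    """Value of the quadratic form <x, P x> + <q, x> + r."""
    k = len(x)
    return (sum(x[i] * P[i][j] * x[j] for i in range(k) for j in range(k))
            + sum(q[i] * x[i] for i in range(k)) + r)

P0 = [[2.0, 0.0], [0.0, 2.0]]   # positive definite, hence PSD
q0 = [0.0, 0.0]
P1 = [[1.0, 0.0], [0.0, 1.0]]
q1 = [0.0, 0.0]
r1 = -1.0                        # constraint reads ||x||^2 - 1 <= 0

x = [0.5, 0.5]
print(quad(P0, q0, 0.0, x), quad(P1, q1, r1, x) <= 0)
```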
Chapter 2
Fixed Point Subgradient Algorithm

In this chapter we consider the minimization of a convex function on a common
fixed point set of a finite family of quasi-nonexpansive mappings in a Hilbert space.
Our goal is to obtain a good approximate solution of the problem in the presence
of computational errors. We use the Cimmino subgradient algorithm, the iterative
subgradient algorithm and the dynamic string-averaging subgradient algorithm and
show that each of them generates a good approximate solution, if the sequence of
computational errors is bounded from above by a small constant. Moreover, if we
know the computational errors of our algorithm, we determine what approximate
solution can be obtained and how many iterates one needs for this.

2.1 Common Fixed Point Problems

Let (X, ⟨·, ·⟩) be a Hilbert space with an inner product ⟨·, ·⟩ which induces a
complete norm ‖·‖.
For each x ∈ X and each nonempty set E ⊂ X put

d(x, E) = inf{‖x − y‖ : y ∈ E}.

For every point x ∈ X and every number r > 0 set

B(x, r) = {y ∈ X : ‖x − y‖ ≤ r}.

Suppose that m is a natural number, c̄ ∈ (0, 1], Pi : X → X, i = 1, . . . , m,
for every integer i ∈ {1, . . . , m},

Fix(Pi ) := {z ∈ X : Pi (z) = z} ≠ ∅

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 27


A. J. Zaslavski, Optimization on Solution Sets of Common Fixed Point Problems,
Springer Optimization and Its Applications 178,
https://doi.org/10.1007/978-3-030-78849-0_2

and that the inequality

‖z − x‖² ≥ ‖z − Pi (x)‖² + c̄ ‖x − Pi (x)‖²     (2.1)

holds for every integer i ∈ {1, . . . , m}, every point x ∈ X and every point
z ∈ Fix(Pi ). Set

F = ∩_{i=1}^m Fix(Pi ).     (2.2)

For every positive number ε and every integer i ∈ {1, . . . , m} set

F_ε (Pi ) = {x ∈ X : ‖x − Pi (x)‖ ≤ ε},     (2.3)

F̃_ε (Pi ) = F_ε (Pi ) + B(0, ε),     (2.4)

F_ε = ∩_{i=1}^m F_ε (Pi ),     (2.5)

F̃_ε = ∩_{i=1}^m F̃_ε (Pi )     (2.6)

and

F̂_ε = F + B(0, ε).     (2.7)

A point belonging to the set F is a solution of our common fixed point problem
while a point which belongs to the set F̃_ε is its ε-approximate solution.
Let M∗ > 0 satisfy

F ∩ B(0, M∗ ) ≠ ∅.     (2.8)

Proposition 2.1 Let ε > 0, i ∈ {1, . . . , m} and let

‖Pi (x) − Pi (y)‖ ≤ ‖x − y‖ for all x, y ∈ X.     (2.9)

Then F̃_ε (Pi ) ⊂ F_{3ε} (Pi ).


Proof Let x ∈ F̃_ε (Pi ). By (2.4), there exists

y ∈ F_ε (Pi )

such that ‖x − y‖ ≤ ε. In view of (2.3) and (2.9),

‖y − Pi (y)‖ ≤ ε,

‖Pi (x) − Pi (y)‖ ≤ ε,

‖x − Pi (x)‖ ≤ ‖x − y‖ + ‖y − Pi (y)‖ + ‖Pi (y) − Pi (x)‖ ≤ 3ε

and x ∈ F_{3ε} (Pi ). Proposition 2.1 is proved.


Corollary 2.2 Assume that ε > 0 and that for all i ∈ {1, . . . , m},

‖Pi (x) − Pi (y)‖ ≤ ‖x − y‖ for all x, y ∈ X.

Then F̃_ε ⊂ F_{3ε}.


Proposition 2.3 Let ε > 0, i ∈ {1, . . . , m} and let

Fix(Pi ) = Pi (X).     (2.10)

Then

F̃_ε (Pi ) ⊂ Fix(Pi ) + B(0, 2ε).

Proof Let x ∈ F̃_ε (Pi ). By (2.3) and (2.4), there exists y ∈ X such that

‖x − y‖ ≤ ε, ‖y − Pi (y)‖ ≤ ε.

In view of the relations above and (2.10),

‖x − Pi (y)‖ ≤ 2ε,

x ∈ Fix(Pi ) + B(0, 2ε).

Proposition 2.3 is proved.


Corollary 2.4 Assume that ε > 0 and that (2.10) holds for all i ∈ {1, . . . , m}. Then

F̃_ε ⊂ ∩_{i=1}^m (Fix(Pi ) + B(0, 2ε)).

Example 2.5 ([8, 93]) Let D be a nonempty closed convex subset of X. Then for
each x ∈ X there is a unique point PD (x) ∈ D satisfying

‖x − PD (x)‖ = inf{‖x − y‖ : y ∈ D}.

Moreover,

‖PD (x) − PD (y)‖ ≤ ‖x − y‖ for all x, y ∈ X

and for each x ∈ X and each z ∈ D,

⟨z − PD (x), x − PD (x)⟩ ≤ 0,

‖z − PD (x)‖² + ‖x − PD (x)‖² ≤ ‖z − x‖².
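These properties of the metric projection can be checked numerically. The sketch below takes D to be the closed unit ball of R^2 (a hypothetical choice) and verifies the variational characterization ⟨z − PD (x), x − PD (x)⟩ ≤ 0 for a point z ∈ D:

```python
import math

# Sketch: the metric projection onto the closed unit ball D = B(0, 1) in R^2,
# checking <z - P_D(x), x - P_D(x)> <= 0 for z in D.

def project_ball(x, r=1.0):
    """Metric projection of x onto the closed ball of radius r centered at 0."""
    n = math.hypot(x[0], x[1])
    if n <= r:
        return list(x)
    return [r * x[0] / n, r * x[1] / n]

x = [3.0, 4.0]          # outside the ball (norm 5)
p = project_ball(x)     # the nearest point of D to x
z = [0.1, -0.2]         # an arbitrary point of D
inner = (z[0] - p[0]) * (x[0] - p[0]) + (z[1] - p[1]) * (x[1] - p[1])
print(p, inner <= 0)
```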

Example 2.6 Denote by I the identity self-mapping of X: I (x) = x, x ∈ X. A
mapping T : X → X is called firmly nonexpansive [8] if for all x, y ∈ X,

‖T (x) − T (y)‖² + ‖(I − T )(x) − (I − T )(y)‖² ≤ ‖x − y‖².

It is easy to see that if a mapping T : X → X is firmly nonexpansive and z ∈ X
satisfies z = T (z), then for all y ∈ X,

‖z − T (y)‖² + ‖y − T (y)‖² ≤ ‖z − y‖².
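For instance, the metric projection of R^1 onto the interval [0, 1] is firmly nonexpansive. The sketch below checks the defining inequality on a few sample pairs (a numerical illustration, not a proof):

```python
# Sketch: in X = R^1 the projection T(x) = min(max(x, 0), 1) onto [0, 1] is
# firmly nonexpansive.  We verify the defining inequality
# (T(x) - T(y))^2 + ((x - T(x)) - (y - T(y)))^2 <= (x - y)^2 on sample pairs.

def T(x):
    return min(max(x, 0.0), 1.0)

def firmly_nonexpansive_gap(x, y):
    """Right-hand side minus left-hand side; nonnegative iff the inequality holds."""
    lhs = (T(x) - T(y)) ** 2 + ((x - T(x)) - (y - T(y))) ** 2
    return (x - y) ** 2 - lhs

samples = [(-2.0, 0.5), (0.3, 0.7), (1.5, -0.4), (2.0, 3.0)]
print(all(firmly_nonexpansive_gap(x, y) >= -1e-12 for x, y in samples))
```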

2.2 The Cimmino Subgradient Algorithm

Let f : X → R^1 be a convex continuous function. We consider the minimization
problem

f (x) → min, x ∈ F.

Assume that

inf(f, F ) = inf(f, F ∩ B(0, M∗ )).     (2.11)

Fix α > 0. Let us describe our algorithm.

Cimmino Subgradient Algorithm
Initialization: select an arbitrary x0 ∈ X.
Iterative step: given a current iteration vector xk ∈ X calculate

lk ∈ ∂f (xk ),

pick wk+1 = (wk+1 (1), . . . , wk+1 (m)) ∈ R^m such that

wk+1 (i) ≥ 0, i = 1, . . . , m,

Σ_{i=1}^m wk+1 (i) = 1

and define the next iteration vector

xk+1 = Σ_{i=1}^m wk+1 (i)Pi (xk − αlk ).

In this chapter this algorithm is studied in the presence of computational
errors and two convergence results are obtained. Fix

Δ ∈ (0, m⁻¹).     (2.12)

We suppose that δf ∈ (0, 1] is a computational error produced by our computer
system when we calculate a subgradient of the objective function f, while δp ∈
[0, 1] is a computational error produced by our computer system when we calculate
the operators Pi , i = 1, . . . , m. Let α > 0 be a step size.

Cimmino Subgradient Algorithm with Computational Errors
Initialization: select an arbitrary x0 ∈ X.
Iterative step: given a current iteration vector xk ∈ X calculate

ξk ∈ ∂f (xk ) + B(0, δf ),

pick wk+1 = (wk+1 (1), . . . , wk+1 (m)) ∈ R^m such that

wk+1 (i) ≥ Δ, i = 1, . . . , m,

Σ_{i=1}^m wk+1 (i) = 1,

calculate

yk,i ∈ B(Pi (xk − αξk ), δp ), i = 1, . . . , m

and define the next iteration vector xk+1 ∈ X such that

‖xk+1 − Σ_{i=1}^m wk+1 (i)yk,i ‖ ≤ δp .

In this algorithm, as well as in the other algorithms considered in the book, we
assume that the step size does not depend on the number k of the iterative step. The
same analysis can be done when the step sizes depend on k. On the other hand, as
was shown in [93, 95], in the case of computational errors the best choice is a step
size which does not depend on the iterative step number.
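The effect of bounded errors can be illustrated on a hypothetical toy instance (minimizing f(x) = x₁² + x₂² over the intersection of two half-spaces in R^2). Below, deterministic perturbations of size δf = δp = 10⁻³ are injected into the subgradient and into the projections, and the iterates still reach a small neighborhood of the solution, illustrating that small bounded errors only slightly degrade the approximate solution:

```python
# Sketch (toy instance): the Cimmino subgradient iteration with bounded
# computational errors delta_f (subgradient) and delta_p (projections).

def grad_f(x):
    return [2 * x[0], 2 * x[1]]

def P1(x):                      # projection onto {x1 >= 1}
    return [max(x[0], 1.0), x[1]]

def P2(x):                      # projection onto {x2 >= 1}
    return [x[0], max(x[1], 1.0)]

delta_f = delta_p = 1e-3
alpha = 0.01
x = [3.0, 3.0]
for k in range(1000):
    s = delta_f if k % 2 == 0 else -delta_f       # deterministic bounded error
    g = [grad_f(x)[0] + s, grad_f(x)[1]]          # xi_k in ∂f(x_k) + B(0, delta_f)
    y = [x[0] - alpha * g[0], x[1] - alpha * g[1]]
    p1 = [P1(y)[0] + delta_p, P1(y)[1]]           # y_{k,i} in B(P_i(...), delta_p)
    p2 = [P2(y)[0], P2(y)[1] + delta_p]
    x = [(p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2]

print(x)   # still a small neighborhood of the solution (1, 1)
```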

2.3 Two Auxiliary Results

Our study of the algorithm is based on the following auxiliary results.


Lemma 2.7 Let F0 ⊂ X be nonempty, M0 > 0, L0 ≥ 1,

|f (z1 ) − f (z2 )| ≤ L0 ‖z1 − z2 ‖ for all z1 , z2 ∈ B(0, M0 + 4),     (2.13)

a mapping Q : X → X satisfy

Q(z) = z, z ∈ F0 ,     (2.14)

‖Q(u) − z‖ ≤ ‖u − z‖ for all u ∈ X and all z ∈ F0     (2.15)

and let δ1 , δ2 ∈ [0, 1], α ∈ (0, 1]. Assume that

z ∈ F0 ∩ B(0, M0 ),     (2.16)

x ∈ B(0, M0 ),     (2.17)

ξ ∈ ∂f (x) + B(0, δ1 )     (2.18)

and that

u ∈ X     (2.19)

satisfies

‖u − Q(x − αξ )‖ ≤ δ2 .     (2.20)

Then

α(f (x) − f (z)) ≤ 2⁻¹ ‖x − z‖² − 2⁻¹ ‖u − z‖² + δ2 (2M0 + 2 + αL0 )
+ 2⁻¹ α² L0² + αδ1 (2M0 + L0 + 1).

Proof By (2.18), there exists

l ∈ ∂f (x)     (2.21)

such that

‖l − ξ ‖ ≤ δ1 .     (2.22)

In view of (2.13) and (2.17),

∂f (x) ⊂ B(0, L0 ).     (2.23)

In view of (2.21),

f (z) − f (x) ≥ ⟨l, z − x⟩.     (2.24)

It follows from (2.22) and (2.23) that

‖x − αξ − z‖² = ‖x − αl + (αl − αξ ) − z‖²
= ‖x − αl − z‖² + α² ‖l − ξ ‖² + 2α ‖l − ξ ‖ ‖x − αl − z‖
≤ ‖x − αl − z‖² + α² δ1² + 2αδ1 (2M0 + αL0 ).     (2.25)

By (2.23) and (2.24),

‖x − αl − z‖² = ‖x − z‖² − 2α⟨l, x − z⟩ + α² ‖l‖²
≤ ‖x − z‖² + 2α(f (z) − f (x)) + α² L0².     (2.26)

In view of (2.25) and (2.26),

‖x − αξ − z‖² ≤ ‖x − z‖² + 2α(f (z) − f (x)) + α² L0²
+ α² δ1² + 2αδ1 (2M0 + L0 ).     (2.27)

It follows from (2.15)–(2.17), (2.20), (2.22), (2.23) and (2.27) that

‖u − z‖² = ‖u − Q(x − αξ ) + Q(x − αξ ) − z‖²
≤ ‖u − Q(x − αξ )‖² + 2 ‖u − Q(x − αξ )‖ ‖Q(x − αξ ) − z‖ + ‖Q(x − αξ ) − z‖²
≤ δ2² + 2δ2 ‖x − αξ − z‖ + ‖x − αξ − z‖²
≤ δ2² + 2δ2 (2M0 + α(L0 + 1))
+ ‖x − z‖² + 2α(f (z) − f (x)) + α² L0²
+ α² δ1² + 2αδ1 (2M0 + L0 ).

This relation implies that

2α(f (x) − f (z)) ≤ ‖x − z‖² − ‖u − z‖² + δ2² + 2δ2 (2M0 + α(L0 + 1))
+ α² L0² + α² δ1² + 2αδ1 (2M0 + L0 )
≤ ‖x − z‖² − ‖u − z‖² + 2δ2 (2M0 + 2 + αL0 )
+ α² L0² + 2αδ1 (2M0 + L0 + 1).

Lemma 2.7 is proved.
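The conclusion of Lemma 2.7 can be sanity-checked numerically. The sketch below uses a hypothetical one-dimensional instance with f(x) = x², F0 = [1, ∞) and Q the metric projection onto F0, which fixes F0 and is nonexpansive, so (2.14) and (2.15) hold; all remaining data are illustrative choices satisfying (2.16)–(2.20):

```python
# Numerical sanity check of the estimate of Lemma 2.7 in X = R^1
# (hypothetical instance; not a proof).

f = lambda x: x * x
Q = lambda x: max(x, 1.0)       # metric projection onto F0 = [1, infinity)

M0, L0 = 2.0, 12.0              # |f'| <= 2(M0 + 4) = 12 on B(0, M0 + 4)
alpha, d1, d2 = 0.005, 0.1, 0.1
z, x = 1.0, 2.0                 # z in F0 ∩ B(0, M0), x in B(0, M0)
xi = 2 * x + 0.05               # xi in ∂f(x) + B(0, d1)
u = Q(x - alpha * xi) + 0.05    # |u - Q(x - alpha xi)| <= d2

lhs = alpha * (f(x) - f(z))
rhs = (0.5 * (x - z) ** 2 - 0.5 * (u - z) ** 2
       + d2 * (2 * M0 + 2 + alpha * L0)
       + 0.5 * alpha ** 2 * L0 ** 2
       + alpha * d1 * (2 * M0 + L0 + 1))
print(lhs <= rhs)
```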


Lemma 2.8 Let M0 ≥ M∗ , δ1 , δ2 ∈ [0, 1],

w(i) ≥ Δ, i = 1, . . . , m,     (2.28)

Σ_{i=1}^m w(i) = 1,     (2.29)

z ∈ F ∩ B(0, M0 ),     (2.30)

x ∈ B(0, M0 ),     (2.31)

x0 ∈ B(x, δ1 ),     (2.32)

for all i = 1, . . . , m, yi ∈ X satisfy

‖yi − Pi (x0 )‖ ≤ δ2     (2.33)

and let

y ∈ B(Σ_{i=1}^m w(i)yi , δ2 ).     (2.34)

Then

‖z − x‖² − ‖z − y‖² ≥ Δc̄ Σ_{i=1}^m ‖x0 − yi ‖²
− δ1 (4M0 + 1) − 2δ2 (4M0 + 4) − Δc̄mδ2 ((4M0 + 2)c̄^{−1/2} + 1).

Proof In view of (2.1) and (2.30), for i = 1, . . . , m,

‖z − Pi (x0 )‖² + c̄ ‖x0 − Pi (x0 )‖² ≤ ‖z − x0 ‖²,     (2.35)

‖z − Pi (x0 )‖ ≤ ‖z − x0 ‖.     (2.36)

Since the function u → ‖u − z‖², u ∈ X is convex it follows from (2.28) and (2.29)
that

‖z − Σ_{i=1}^m w(i)Pi (x0 )‖² ≤ Σ_{i=1}^m w(i) ‖z − Pi (x0 )‖².     (2.37)

By (2.28), (2.29), (2.35) and (2.37),

‖z − x0 ‖² − ‖z − Σ_{i=1}^m w(i)Pi (x0 )‖²
≥ ‖z − x0 ‖² − Σ_{i=1}^m w(i) ‖z − Pi (x0 )‖²
≥ Σ_{i=1}^m w(i)(‖z − x0 ‖² − ‖z − Pi (x0 )‖²)
≥ Δ Σ_{i=1}^m (‖z − x0 ‖² − ‖z − Pi (x0 )‖²)
≥ Δc̄ Σ_{i=1}^m ‖x0 − Pi (x0 )‖².     (2.38)

In view of (2.30)–(2.32),

‖z − x‖ ≤ 2M0 , ‖z − x0 ‖ ≤ 2M0 + 1.     (2.39)

By (2.32) and (2.39),

| ‖z − x‖² − ‖z − x0 ‖² |
≤ | ‖z − x‖ − ‖z − x0 ‖ |(‖z − x‖ + ‖z − x0 ‖) ≤ δ1 (4M0 + 1).     (2.40)

It follows from (2.28), (2.29), (2.36) and (2.39) that

‖z − Σ_{i=1}^m w(i)Pi (x0 )‖ ≤ Σ_{i=1}^m w(i) ‖z − Pi (x0 )‖ ≤ ‖z − x0 ‖ ≤ 2M0 + 1.     (2.41)

In view of (2.33) and (2.34),

| ‖z − y‖ − ‖z − Σ_{i=1}^m w(i)Pi (x0 )‖ | ≤ ‖y − Σ_{i=1}^m w(i)Pi (x0 )‖ ≤ 2δ2 .     (2.42)

Equations (2.41) and (2.42) imply that

| ‖z − y‖² − ‖z − Σ_{i=1}^m w(i)Pi (x0 )‖² |
≤ 2δ2 (‖z − y‖ + ‖z − Σ_{i=1}^m w(i)Pi (x0 )‖) ≤ 2δ2 (4M0 + 4).     (2.43)

By (2.38), (2.40) and (2.43),

‖z − x‖² − ‖z − y‖²
≥ ‖z − x0 ‖² − δ1 (4M0 + 1)
− ‖z − Σ_{i=1}^m w(i)Pi (x0 )‖² − 2δ2 (4M0 + 4)
≥ Δc̄ Σ_{i=1}^m ‖x0 − Pi (x0 )‖²
− δ1 (4M0 + 1) − 2δ2 (4M0 + 4).     (2.44)

It follows from (2.33), (2.35) and (2.39) that for i = 1, . . . , m,

| ‖x0 − yi ‖² − ‖x0 − Pi (x0 )‖² |
≤ ‖yi − Pi (x0 )‖(2 ‖x0 − Pi (x0 )‖ + ‖yi − Pi (x0 )‖)
≤ δ2 (2 ‖x0 − Pi (x0 )‖ + δ2 )
≤ δ2 (2 ‖z − x0 ‖ c̄^{−1/2} + 1) ≤ δ2 ((4M0 + 2)c̄^{−1/2} + 1).     (2.45)

By (2.44) and (2.45),

‖z − x‖² − ‖z − y‖²
≥ Δc̄ Σ_{i=1}^m ‖x0 − yi ‖² − Δc̄δ2 ((4M0 + 2)c̄^{−1/2} + 1)m
− δ1 (4M0 + 1) − 2δ2 (4M0 + 4).

Lemma 2.8 is proved.
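Similarly, the estimate of Lemma 2.8 can be sanity-checked on a hypothetical one-dimensional instance with m = 1 and P1 the metric projection onto [1, ∞), which satisfies (2.1) with c̄ = 1 (see Example 2.5); all data below are illustrative choices satisfying (2.28)–(2.34), assuming M∗ ≤ 2:

```python
# Numerical sanity check of the estimate of Lemma 2.8 in X = R^1
# (hypothetical instance; not a proof).

P1 = lambda x: max(x, 1.0)   # projection onto [1, infinity); F = Fix(P1)

m, Delta, cbar = 1, 0.5, 1.0
M0 = 2.0                     # assumed M0 >= M_* for this toy instance
d1, d2 = 0.1, 0.1
w = [1.0]                    # w(1) >= Delta and sums to 1
z, x = 1.0, 2.0              # z in F ∩ B(0, M0), x in B(0, M0)
x0 = 2.05                    # x0 in B(x, d1)
y1 = 2.1                     # |y1 - P1(x0)| <= d2
y = 2.15                     # y in B(w(1) * y1, d2)

lhs = (z - x) ** 2 - (z - y) ** 2
rhs = (Delta * cbar * (x0 - y1) ** 2
       - d1 * (4 * M0 + 1) - 2 * d2 * (4 * M0 + 4)
       - Delta * cbar * m * d2 * ((4 * M0 + 2) * cbar ** -0.5 + 1))
print(lhs >= rhs)
```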



2.4 The First Result for the Cimmino Subgradient Algorithm

In the following result we assume that the objective function f satisfies a coercivity
growth condition.
Theorem 2.9 Let the function f be Lipschitz on bounded subsets of X,

lim_{‖x‖→∞} f (x) = ∞,

M ≥ 2M∗ + 8, L0 ≥ 1,

M1 > sup{|f (u)| : u ∈ B(0, M∗ + 4)} + 4,     (2.46)

f (u) > M1 + 4 for all u ∈ X \ B(0, 2⁻¹ M),     (2.47)

|f (z1 ) − f (z2 )| ≤ L0 ‖z1 − z2 ‖ for all z1 , z2 ∈ B(0, 3M + 4),     (2.48)

δf , δp ∈ [0, 1], α > 0 satisfy

α ≤ L0⁻², α ≥ δf (6M + L0 + 2), α ≥ 2δp (6M + 2),     (2.49)

T be a natural number and let

γT = max{α(L0 + 1), (Δc̄)^{−1/2} (4M² T⁻¹ + α(L0 + 1)(12M + 4))^{1/2} + δp }.     (2.50)
Assume that {xt }_{t=0}^T ⊂ X, {ξt }_{t=0}^{T−1} ⊂ X,

(wt (1), . . . , wt (m)) ∈ R^m , t = 1, . . . , T ,

Σ_{i=1}^m wt (i) = 1, t = 1, . . . , T ,     (2.51)

wt (i) ≥ Δ, i = 1, . . . , m, t = 1, . . . , T ,     (2.52)

x0 ∈ B(0, M)     (2.53)

and that for all integers t ∈ {0, . . . , T − 1},

B(ξt , δf ) ∩ ∂f (xt ) ≠ ∅,     (2.54)

yt,i ∈ B(Pi (xt − αξt ), δp ), i = 1, . . . , m,     (2.55)

‖xt+1 − Σ_{i=1}^m wt+1 (i)yt,i ‖ ≤ δp .     (2.56)

Then

‖xt ‖ ≤ 2M + M∗ , t = 0, . . . , T

and

min{max{Δc̄ Σ_{i=1}^m ‖xt − αξt − yt,i ‖² − α(L0 + 1)(12M + 4),
2α(f (xt ) − inf(f, F )) − 4δp (6M + 3) − α² L0² − 2αδf (6M + L0 + 1)} :
t = 0, . . . , T − 1} ≤ 4M² T⁻¹.

Moreover, if t ∈ {0, . . . , T − 1} and

max{Δc̄ Σ_{i=1}^m ‖xt − αξt − yt,i ‖² − α(L0 + 1)(12M + 4),
2α(f (xt ) − inf(f, F )) − 4δp (6M + 3)
− α² L0² − 2αδf (6M + L0 + 1)} ≤ 4M² T⁻¹,

then

f (xt ) ≤ inf(f, F ) + 2M² (T α)⁻¹
+ 2α⁻¹ δp (6M + 3) + 2⁻¹ αL0² + δf (6M + L0 + 3)

and

xt ∈ F̂_{γT}.

Proof In view of (2.8), there exists

z ∈ B(0, M∗ ) ∩ F.     (2.57)

By (2.53) and (2.57),

‖z − x0 ‖ ≤ 2M.     (2.58)

We show that for all t = 0, . . . , T ,

‖z − xt ‖ ≤ 2M.     (2.59)

In view of (2.58), (2.59) is true for t = 0.
Assume that there exists an integer k ∈ {0, . . . , T } such that

‖z − xk ‖ > 2M.     (2.60)

By (2.58) and (2.60), k > 0. We may assume without loss of generality that (2.59)
holds for all integers t = 0, . . . , k − 1. In particular,

‖z − xk−1 ‖ ≤ 2M.     (2.61)

By (2.1), (2.2), (2.51), (2.52), (2.54)–(2.56) and (2.61), we apply Lemma 2.7 with

δ1 = δf , δ2 = 2δp , F0 = F, M0 = 3M,

Q = Σ_{i=1}^m wk (i)Pi , x = xk−1 , ξ = ξk−1 , u = xk

and obtain that

α(f (xk−1 ) − f (z))
≤ 2⁻¹ ‖xk−1 − z‖² − 2⁻¹ ‖xk − z‖²
+ 2δp (6M + 2) + 2⁻¹ α² L0² + αδf (6M + L0 + 1).     (2.62)

There are two cases:

‖z − xk ‖ ≤ ‖z − xk−1 ‖;     (2.63)

‖z − xk ‖ > ‖z − xk−1 ‖.     (2.64)

Assume that (2.63) holds. Then in view of (2.61),

‖xk − z‖ ≤ 2M.

Assume that (2.64) is true. By (2.62) and (2.64),

α(f (xk−1 ) − f (z))
≤ 2δp (6M + 2) + 2⁻¹ α² L0² + αδf (6M + L0 + 1).     (2.65)

medium which Plutarch represents is a fair specimen of the best
rural society of the Empire in his time, there must have been a ready
receptivity for the highest style of ethical teaching,—a genial soil for
the germination of a truly evangelic righteousness of moral
conception, maxim, and principle.
Probably no book except the Bible has had more readers than
Plutarch’s Lives. These biographies have been translated into every
language of the civilized world; they have been among the earliest
and most fascinating books for children and youth of many
successive generations; and down to the present time, when fiction
seems to have almost superseded history and biography, and to
have destroyed the once universal appetency for them among young
people, they have exercised to a marvellous degree a shaping power
over character. They are, indeed, underrated by the exact historian,
because modern research has discovered here and there some
mistake in the details of events. But such mistakes were in that age
inevitable. Historical criticism was then an unknown science.
Documents and traditions covering the same ground were deemed
of equal value when they were in harmony, and when they differed
an author followed the one which best suited his taste, or his
purpose for the time being. Thus Cicero, in one case, in the same
treatise gives three different versions of the same story. Thus, too,
there were several stories afloat about the fate of Regulus; but
Roman writers took that which Niebuhr thinks farthest from the
truth, yet which threw the greatest odium on the hated name of
Carthage. Now I have no doubt that, whenever there were two or
more versions of the same act or event, Plutarch chose that which
would best point his moral. But it is only in few and unimportant
particulars that he has been proved to be inaccurate.
It has been also objected to Plutarch, that he attaches less
importance to the achievements of his heroes in war and in civic life,
than to traits and anecdotes illustrative of their characters. This
seems to me a feature which adds not only to the charm of these
Lives, but even more to their historical value. The events of history
are at once the outcome and the procreant cradle of character, and
we know nothing of any period or portion of history except as we
know the men who made it and the men whom it made. Biography
is the soul; history the body, which it tenants and animates, and
which, when not thus tenanted, is a heap of very dry bones. The
most thorough knowledge of the topography of Julius Caesar’s
battles in Gaul, the minutest description of the campaign that
terminated in Pharsalia, the official journal of the Senate during his
dictatorship, would tell us very little about him and his time. But a
vivid sketch of his character, with well-chosen characteristic
anecdotes, would give us a very distinct and realizing conception of
the antecedent condition of things that made a life like his possible,
and of his actual influence for good and for evil on his country and
his age.
Nor is the value of such a biography affected in the least by any
doubts that we may entertain as to the authenticity of incidents,
trivial except as illustrative of character, which occupy a large space
in Plutarch’s Lives. Indeed, the least authentic may be of the
greatest historical value. An anecdote may be literally true, and yet
some peculiar combination of circumstances may have led him of
whom it is told to speak or act out of character. But a mythical
anecdote of a man, coming down from his own time and people,
must needs owe its origin and complexion to his known character.
It is perfectly easy to see throughout these biographies the
author’s didactic aim. If I may use sacred words, here by no means
misapplied, his prime object was “reproof, correction, and instruction
in righteousness.” He evidently felt and mourned the degeneracy of
his age, was profoundly aware of the worth of teaching by example,
and was solicitous to bring from the past such elements of ethical
wisdom as the records of illustrious men could be made to render
up. True to this purpose, he measures the moral character of such
transactions as he relates by the highest standard of right which he
knows, and there is not a person or deed that fails to bear the
stamp, clear-cut, yet seldom obtrusive, of his approval or censure.
The Lives, though the best known of Plutarch’s writings, are but a
small part of them, and hardly half of those still extant. His other
works are generally grouped under the title of “Moralia,”[xx:1] or
Morals, though among them there are many treatises that belong to
the department of history or biography, some to that of physics.
Most of these works are short; a few, of considerable length. Some
of them may have been lectures; some are letters of advice or of
consolation; some are in a narrative form; many are in the form of
dialogue, which, sanctioned by the prestige of Plato’s pre-eminence,
was very largely employed by philosophers of later times,
possessing, as it does, the great advantage of putting opposite and
diverse opinions in the mouths of interlocutors, and thus giving to
the treatise the vivacity and the dramatic interest of oral discussion.
Some of these dialogues have a symposium, or supper party, for
their scene, and introduce a numerous corps of speakers. In these
Plutarch himself commonly sustains a prominent part, and the
members of his family often have their share in the conversation, or
are the subjects of kindly mention. In several instances the occasion,
circumstances, and conversation are described so naturally as to
make it almost certain that the author simply wrote out from
memory what was actually said. At any rate, these festive dialogues
present very clearly his idea of what a symposium ought to be, and
in its entire freedom from excess and extravagance of any kind it
would bear the strictest ordeal with all modern moralists, the
extreme ascetics alone excepted.
Had not the Lives been written, I am inclined to believe that the
Moralia alone would have given Plutarch as high a place as he now
holds, not only in the esteem of scholars, but in the interest and
delight of all readers of good books; and I am sure that there is no
loving reader of the Lives who will not be thankful to have his
attention drawn to the Moralia. They exhibit throughout the same
moral traits which their author shows as a biographer. He treats,
indeed, incidentally, of some subjects which a purer ethical taste in
the public mind might have excluded. He recognizes the existence of
immoralities, which, not discreditable in the best society of
unevangelized Greece and Rome, have almost lost their place and
name in Christendom. Some of his dialogues have among the
interlocutors those with whom as good a man as he would in our
time associate only in the hope of converting them. But his own
opinion and feeling on all moral questions are uniformly and
explicitly in behalf of all that is pure, and true, and right, and
reverent.
Many of these Moralia are on what are commonly, yet wrongly,
called the minor morals, that is, on the evils that most of all infest
and destroy the happiness of families and the peace of society, and
on the opposite virtues,—on such subjects, for instance, as “Idle
Talking,” “Curiosity,” “Self-Praise,” and the like. Others are on such
grave topics as “The Benefits that a Man may derive from his
Enemies,” and “The Best Means of Self-Knowledge.” There is in all
these treatises a large amount of blended common sense and keen
ethical insight; and so little does human nature change with its
surroundings that the greater part of Plutarch’s cautions, counsels,
and precepts are as closely applicable to our own time as if they had
been written yesterday.
One of the most remarkable writings in this collection is Plutarch’s
letter to his wife on the death of a daughter two years old, during
his absence from home. It not only expresses sweetly and lovingly
the topics of consolation which would most readily occur to a
Christian father; it gives us also a charming picture of a household
united by ties of spiritual affinity, and living in a purer, higher
medium than that of affluence and luxury. A few sentences may
convey something of the tone and spirit of this epistle. “Since our
little daughter afforded us the sweetest and most charming
pleasure, so ought we to cherish her memory, which will conduce in
many ways, or rather many fold, more to our joy than our grief.”
“They who were present at the funeral report this with admiration,
that you neither put on mourning, nor disfigured yourself or any of
your maids, neither were there any costly preparations nor
magnificent pomp; but all things were managed with silence and
moderation, in the presence of our relatives alone.” “So long as she
is gone to a place where she feels no pain, why should we grieve for
her?” “This is the most troublesome thing in old age, that it makes
the soul weak in its remembrance of divine things, and too earnest
for things relating to the body.” “But that which is taken away in
youth, being more soft and tractable, soon returns to its native vigor
and beauty.” “It is good to pass the gates of death before too great
a love of bodily and earthly things be engendered in the soul.” “It is
an impious thing to lament for those whose souls pass immediately
into a better and more divine state.” “Wherefore let us comply with
custom in our outward and public behavior, and let our interior be
more unpolluted, pure and holy.”
Now, when I remember that in the pre-Christian Greek and
Roman world the strongest utterances about immortality had been
by Socrates, if Plato reported him aright, when he expressed strong
hope of life beyond death, yet warned his friends not to be too
confident about a matter so wrapped in uncertainty,—and by Cicero,
who, when his daughter died, confessed that his reasonings had left
no conviction in his own mind,—I cannot doubt that some Easter
morning rays had pierced the dense Boeotian atmosphere, and that
the risen Saviour had in that lovely Cheroneian household those
whom he designates as “other sheep, not of this fold.”
There is among the Moralia another letter of consolation, to
Apollonius on the death of his son, longer, more elaborate, and
evidently intended as a literary composition, to be preserved with
the author’s other works, which breathes the same spirit of
submission and trust.
Another of the Moralia, which has a special interest as regards the
author’s own family, is on the “Training of Children,”—a series of
counsels—including the careful heed of the parents to their own
moral condition and habits—to which the experience of these
intervening centuries has little to add, while it could find nothing to
take away.
In one sense, the miscellanies brought together under the name
of “Moralia” bear that title not inappropriately; for, as I have
intimated, Plutarch could not but be didactic in whatever he wrote,
and the ethical feeling, spirit, and purpose are perpetually, yet never
ostentatiously or inappropriately, coming to the surface on all kinds
of subjects. But there is a great deal in the collection not professedly
or directly ethical. There are many scraps of history and biography,
and a very large number and variety of characteristic anecdotes,
both of well-known personages, and of others who are made known
to us almost as vividly by a single trait, deed, or saying as if we had
their entire life-record. There is an invaluable series of
“Apophthegms”[xxv:1] of kings and great commanders,[xxv:2] and
another of “Laconic [or Spartan] Apophthegms,” which are much
more than their name implies, some of them being condensed
memoirs. There are, also, several papers that give us more definite
notions than can be found anywhere else of the science and natural
history of the author’s time. Withal, we have here so many
references to manners, customs, and habits, such pictures of home
with all that could give it the sweetness and grace that belong to it,
such views of society, both in city and in country, in ordinary
intercourse and on festive occasions, that one can learn more of life
in that age in the Roman Empire from these volumes than from any
other single author; and the writer of a book like Becker’s “Gallus”
might find here almost all the materials that he would need, except
for the delineation of the night-side of Roman extravagance,
gluttony, drunkenness, debauchery, and depravity, which came not
within Plutarch’s experience.
The most remarkable of all Plutarch’s writings, the most valuable
equally in a philosophical and an ethical point of view, and the most
redolent of what we almost involuntarily call Christian sentiment, is
that “On the Delay of the Divine Justice,” or, to give a more literal
translation of the original title, “Concerning those who are punished
slowly by the Divine [Being].”[xxvi:1] It treats of what from the
earliest time has been a mystery to serious minds, and has been
urged equally by malignant irreligion and by honest scepticism
against the supremacy of the Divine justice in the government of the
world; namely, the postponement of the penal consequences of
guilt, sometimes till there are no witnesses of the crime left to
behold the punishment, sometimes till the offender himself has lost
the thread between the evil that he did and its retribution,
sometimes till the sinner has gone to the grave in peace, and left
innocent posterity to suffer for his sins. Plutarch, with his
unquestioning faith in immortality, doubts not that guilt, unpunished
in this life, will be overtaken by just retribution in the life to come.
But, as he says, retribution, though it may be consummated only in
the future life, is not delayed till then. It seems late, because it lasts
long. The sentence is passed upon the guilt when it is committed;
and, however its visible execution may be postponed, the sinner is
from that moment a prisoner of the Divine justice, awaiting
execution. He may give splendid suppers, and live luxuriously; yet
still he is within prison walls from which there is no escape.
This is undoubtedly true, and yet there are many cases, and
those of the worst kind, in which it seems to be not true. A
moderately bad man, in most instances, feels profoundly the shame
and misery that he has brought upon himself. But a thoroughly
wicked man takes contentedly a position which we may fitly term
sub-human. If we suppose a man possessed of a magnificent house,
luxuriously and tastefully furnished, who yet chooses never to
ascend a stair, and lives in the basement shabbily and meanly, with
the coarsest appliances of physical comfort, we might take him as
the type of not a few bad men who seem entirely at their ease. They
live in the basement. They have thrown away the key to the upper
rooms. They have lost all appreciation of the higher, better modes of
human living, and they are contented and satisfied as a well-fed
beast is, in the absence of all spiritual cravings and ambitions. But
this life, poor and mean as it is at the best, becomes still more
narrow and sordid with the lapse of time. Many have looked with
envy on prosperous guilt early or midway in its career; none can
have witnessed its lengthened age without pity and loathing.
Especially is this the case with the several forms of sensual vice. As
age advances, the power of enjoyment wanes, while the morbid
craving grows, even under the consciousness of added misery with
its continued indulgence. The body becomes the soul’s dungeon, and
its walls thicken inward and close up the wonted entrances of
enjoyment. The senses, deadened on the side of pleasure, no longer
avenues of beauty or of harmony, seem to serve only as means of
prolonging a death in life, and as open inlets of discomfort and pain.
But the suspense of sentence has in not a few cases, according to
Plutarch, a directly merciful purpose. As the most fertile soil may
before tillage produce the rankest weeds, so in the soul most
capable of good there may be, prior to culture, a noisome crop of
evil, and yet God may spare the sinner for the good that is in him,
and for the signal service which, when reclaimed, he may render to
mankind. Then, too, by the delay of visible judgment God gives men
in his own example the lesson of long-suffering, and rebukes their
promptness in resentment and revenge. Still further, when penalty
appears to fall on the posterity or successors of the guilty, and a
race, a people, a city, or a family seems punished for the iniquity of
its progenitors, Plutarch brings out very fully and clearly the
absolutely essential and necessary solidarity of the family or the
community, which can hardly fail so to inherit of its ancestors in
disposition and character as to invite upon itself, to merit for itself, or
at best to need as preventive or cure, the penal consequences of
ancestral guilt.
This essay is all the more valuable because not written by a
Christian. It shows that the intense stress laid by Christian teaching
on a righteous retribution lasting on beyond the death-change is not
a mere dogma of the sacred records of our religion, but equally the
postulate of the unsophisticated reason and conscience of developed
humanity.

My translation is not literal, in the common meaning of that term. If it were so, it would be unintelligible; for Plutarch’s style lacks simplicity, and his sentences, though seldom obscure, are often involved and intricate, sometimes elliptical. I have, however, given a faithful transcript in English of what I understand Plutarch to have written, omitting no thought or shade of thought that I suppose to be his, and inserting none of my own.

I have used Wyttenbach’s edition of the Moralia, departing from his text in but a single instance, and that, one in which he pronounces the reading in the text impossible, and suggests a conjectural reading as necessary to the sense of the passage. I have also made constant reference to the late Professor Hackett’s edition of this treatise, which it is superfluous to commend where he was known; for not only was he confessedly among the foremost scholars of his time, but his exacting conscientiousness would not suffer him to put less than his best and most thorough work into whatever came from his hands.
FOOTNOTES:
[vii:1] A large part of this Introduction is reprinted, by
permission of the editors, from an article of mine on “Plutarch and
his Times,” in the Andover Review, November, 1884.
[vii:2] Πλούταρχος.
[xx:1] Τὰ ἠθικά.
[xxv:1] Ἀποφθέγματα.
[xxv:2] The genuineness of this series has been called in
question; but the internal evidence seems decisive in its favor. It
is, throughout, so entirely in Plutarch’s vein, that one is tempted
to ask, Who else could have written it?
[xxvi:1] Περὶ τῶν ὑπὸ τοῦ θείου βραδέως τιμωρουμένων.
PLUTARCH
ON THE DELAY OF THE DIVINE JUSTICE.

1. Epicurus,[1:1] having said such things, O Cinius,[1:2] before any one could reply, while we were at the farther end of the porch, went hastily away. But we, somewhat amazed at the man’s rudeness, stood still, looking at one another without speaking, and then turned and resumed our walk.
Then Patrocleas[2:1] commenced the conversation, saying,—What
then? Do you see fit to drop the discussion? or will you answer his
argument as if he were present, though he has taken himself away?
Timon[2:2] then said,—If he threw a javelin[2:3] at us as he went
away, it certainly would not be well for us to take no notice of the
weapon still sticking in our sides. Brasidas,[2:4] indeed, as we are
told, drew out the spear from his own body, and killed with it the
man who had hurled it at him. But it is no concern of ours to
retaliate on those who fling at us misplaced and false reasoning; it is
enough for them if we reject their arguments before they affect our
belief.
Then I said,—Which of the arguments that he urged moved you
the most? For the man, as if inspired both by wrath and by scorn,
brought together against the Divine Providence many things heaped
up in confusion, yet no well-ordered reasons, but such miscellaneous
cavils as could be gathered here and there.
2. Patrocleas then said,—The slowness and procrastination of the
Deity in the punishment of the wicked seem to me the most
mysterious of all things; and now, under these arguments, I find
myself a new and fresh adherent to the doctrine in behalf of which
they are urged. Indeed, I used a long time ago to be vexed by that
saying of Euripides:—
“He lingers; such the nature of the gods.”[3:1]

While in no respect, least of all toward wicked men, is it fitting that God should be dilatory; for they are in no wise dilatory or slow in ill-doing, but are hurried on to evil by their passions with the utmost impetuosity. Indeed, as Thucydides says,[3:2] punishment close at hand bars the way to those who most hope to gain by guilt.
Moreover, no debt overdue, equally with the delay of due
punishment, renders the person wronged utterly hopeless and
depressed, while it confirms the evil-doer in boldness and audacity.
On the other hand, punishments directly inflicted on those who are
bold in evil are at once preventive of future crimes, and a source of
great consolation to those who have suffered wrong. I am therefore
troubled by the saying of Bias,[4:1] which often recurs to me, when
he told a man of bad character that he had no fear that he would go
unpunished, but feared that he himself might not live to see him
punished. What good, indeed, did the punishment of
Aristocrates[4:2] do to the Messenians who were slain before it came
upon him? He betrayed them in the battle of Taphrus, yet, not being
found out for twenty years, he reigned over the Arcadians all that
time, till at length his treachery was discovered and met with its due
penalty; but the victims of his crime had ceased to be. Again, what
comfort did any of the Orchomenians[4:3] who lost children, friends,
and kindred by the treachery of Lyciscus derive from the disease that
many years afterward seized him and consumed his body, while he,
when he dipped and washed his feet in the river, always prayed, with
oaths and curses, that his limbs might rot if he had ever been guilty
of treason and injustice? Indeed, not even the children’s children of
those who were then murdered could have witnessed at Athens the
snatching of the contaminated bodies of the murderers from their
graves, and their transportation beyond the boundaries of the state.
[5:1] Hence, Euripides is absurd, when, to dissuade from crime, he
says:—

“No haste has Justice; dread not her approach;

