Mastering Python High Performance
Fernando Doglio
BIRMINGHAM - MUMBAI
Mastering Python High Performance
All rights reserved. No part of this book may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means, without the prior written
permission of the publisher, except in the case of brief quotations embedded in
critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy
of the information presented. However, the information contained in this book is
sold without warranty, either express or implied. Neither the author, nor Packt
Publishing, nor its dealers and distributors, will be held liable for any damages
caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the
companies and products mentioned in this book by the appropriate use of capitals.
However, Packt Publishing cannot guarantee the accuracy of this information.
ISBN 978-1-78398-930-0
www.packtpub.com
Credits

Reviewers
Erik Allik
Mike Driscoll
Enrique Escribano
Mosudi Isiaka

Commissioning Editor
Kunal Parikh

Acquisition Editors
Vivek Anantharaman
Richard Brookes-Bland

Content Development Editors
Akashdeep Kundu
Rashmi Suvarna

Technical Editor
Vijin Boricha

Copy Editors
Relin Hedly
Karuna Narayanan

Proofreader
Safis Editing

Indexer
Mariammal Chettiyar

Graphics
Sheetal Aute

Production Coordinator
Arvindkumar Gupta

Cover Work
Arvindkumar Gupta
About the Author
Fernando Doglio has been working as a web developer for the past 10 years.
During that time, he shifted his focus to the Web and grabbed the opportunity
to work with most of the leading technologies, such as PHP, Ruby on Rails,
MySQL, Python, Node.js, AngularJS, AJAX, REST APIs, and so on.
In his spare time, Fernando likes to tinker and learn new things. This is why his
GitHub account keeps getting new repos every month. He's also a big open source
supporter and tries to win the support of new people with the help of his website,
lookingforpullrequests.com.
I'd like to thank my lovely wife for putting up with me and the
long hours I spent writing this book; this book would not have
been possible without her continued support. I would also like to
thank my two sons. Without them, this book would've been finished
months earlier.
Finally, I'd like to thank the reviewers and editors. They helped me get
this book in shape and achieve the quality level that you deserve.
About the Reviewers
Erik Allik has worked primarily with Python, Scala, and JavaScript. He is currently
focusing on applying Haskell and other innovative functional programming
techniques in various industries and leveraging the power of a mathematical
approach and formalism in the wild.
Mike Driscoll has been programming in Python since 2006. He enjoys writing
about Python on his blog at http://www.blog.pythonlibrary.org/. Mike has
coauthored the Core Python refcard for DZone. He recently authored Python 101
and has been a technical reviewer for several books published by Packt Publishing.
Enrique Escribano is an expert in Java and Python and is proficient in C/C++.
Most of his projects involve working with cloud-based technologies, such as AWS,
GAE, Hadoop, and so on. He is also working on an open source research project
on security with software-defined networking (SDN) with Professor Dong Jin
at the IIT Security Lab.
You can find more information about Enrique on his personal website
at enriquescribano.com. You can also reach him on LinkedIn at
linkedin.com/in/enriqueescribano.
I would like to thank my parents, Lucio and Carmen, for all the
unconditional support they have provided me with over the years.
They allowed me to be as ambitious as I wanted. Without them,
I may never have gotten to where I am today.
Lastly, I would also like to thank Paula for always being my main
inspiration and motivation since the very first day. I am so fortunate
to have her in my life.
Mosudi Isiaka is a graduate in electrical and computer engineering from the
Federal University of Technology Minna, Niger State, Nigeria. He demonstrates
excellent skills in numerous aspects of information and communication technology.
He has extensive experience in implementing and managing local area networks,
from simple setups to mid-level complex scenarios of no fewer than one thousand
workstations (Microsoft Windows 7, Windows Vista, and Windows XP), along with
Microsoft Windows Server 2008 R2 Active Directory domain controllers deployed
in more than a single location. He has successfully set up data center infrastructure,
VPNs, WAN link optimization, firewalls and intrusion detection systems, a
web/e-mail hosting control panel, the OpenNMS network management
application, and so on.
Mosudi is able to use open source software and applications to achieve
enterprise-level network management solutions in scenarios that cover virtual
private networks (VPNs), IP PBX, cloud computing, clustering, virtualization,
routing, high availability, customized firewalls with advanced web filtering,
network load balancing, failover and link aggregation for multiple Internet access
solutions, traffic engineering, collaboration suites, network-attached storage (NAS),
Linux systems administration, and virtual networking and computing.
You can find more information about him at http://www.mioemi.com. You can also
reach him at http://ng.linkedin.com/pub/isiaka-mosudi/1b/7a2/936/.
Did you know that Packt offers eBook versions of every book published, with PDF
and ePub files available? You can upgrade to the eBook version at www.PacktPub.com
and as a print book customer, you are entitled to a discount on the eBook copy. Get in
touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign
up for a range of free newsletters and receive exclusive discounts and offers on Packt
books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital
book library. Here, you can search, access, and read Packt's entire library of books.
Why subscribe?
• Fully searchable across every book published by Packt
• Copy and paste, print, and bookmark content
• On demand and accessible via a web browser
Preface
The idea of this book came to me from the nice people at Packt Publishing.
They wanted someone who could delve into the intricacies of high performance
in Python and everything related to this subject, be it profiling, the available
tools (such as profilers and visual profilers), performance enhancement techniques,
or even alternatives to the standard Python implementation.
Having said that, I welcome you to Mastering Python High Performance. In this
book, we'll cover everything related to performance improvements. Knowledge
about the subject is not strictly required (although it won't hurt), but knowledge
of the Python programming language is required, especially in some of the
Python-specific chapters.
We'll start by going through the basics of what profiling is, how it fits into the
development cycle, and the benefits related to including this practice in it. Afterwards,
we'll move on to the core tools required to get the job done (profilers and visual
profilers). Then, we will take a look at a set of optimization techniques and finally
arrive at a fully practical chapter that will provide a real-life optimization example.
Chapter 2, The Profilers, tells you how to use the core tools that will be mentioned
throughout the book.
Chapter 3, Going Visual – GUIs to Help Understand Profiler Output, covers how to
use the pyprof2calltree and RunSnakeRun tools. It also helps the developer to
understand the output of cProfile with different visualization techniques.
Chapter 4, Optimize Everything, talks about the basic process of optimization and a set
of good/recommended practices that every Python developer should follow before
considering other options.
Chapter 6, Generic Optimization Options, describes and shows you how to install and
use Cython and PyPy in order to improve code performance.
Chapter 7, Lightning Fast Number Crunching with Numba, Parakeet, and pandas, talks
about tools that help optimize Python scripts that deal with numbers. These specific
tools (Numba, Parakeet, and pandas) help make number crunching faster.
Chapter 8, Putting It All into Practice, provides a practical example: we profile it,
find its bottlenecks, and remove them using the tools and techniques covered in
this book. To conclude, we'll compare the results of using each technique.
To follow the examples in this book, you will need the following software:
• Python 2.7
• Line profiler 1.0b2
• KCachegrind 0.7.4
• RunSnakeRun 2.0.4
• Numba 0.17
• The latest version of Parakeet
• pandas 0.15.2
The only essential requirement is to have some basic knowledge of the Python
programming language.
Conventions
In this book, you will find a number of text styles that distinguish between different
kinds of information. Here are some examples of these styles and an explanation of
their meaning.
Code words in text, database table names, folder names, filenames, file extensions,
pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "We
can print/gather the information we deem relevant inside the profiler function."
A block of code is set as follows:
sys.setprofile(profiler)
When we wish to draw your attention to a particular part of a code block, the
relevant lines or items are set in bold:
Traceback (most recent call last):
File "cprof-test1.py", line 7, in <module>
runRe() ...
File "/usr/lib/python2.7/cProfile.py", line 140, in runctx
exec cmd in globals, locals
File "<string>", line 1, in <module>
NameError: name 're' is not defined
New terms and important words are shown in bold. Words that you see on the
screen, for example, in menus or dialog boxes, appear in the text like this: "Again,
with the Callee Map selected for the first function call, we can see the entire map
of our script."
Reader feedback
Feedback from our readers is always welcome. Let us know what you think about
this book—what you liked or disliked. Reader feedback is important for us as it helps
us develop titles that you will really get the most out of.
If there is a topic that you have expertise in and you are interested in either writing
or contributing to a book, see our author guide at www.packtpub.com/authors.
Customer support
Now that you are the proud owner of a Packt book, we have a number of things to
help you to get the most from your purchase.
Errata
Although we have taken every care to ensure the accuracy of our content, mistakes
do happen. If you find a mistake in one of our books—maybe a mistake in the text or
the code—we would be grateful if you could report this to us. By doing so, you can
save other readers from frustration and help us improve subsequent versions of this
book. If you find any errata, please report them by visiting
http://www.packtpub.com/submit-errata, selecting your book, clicking on the
Errata Submission Form
link, and entering the details of your errata. Once your errata are verified, your
submission will be accepted and the errata will be uploaded to our website or added
to any list of existing errata under the Errata section of that title.
Piracy
Piracy of copyrighted material on the Internet is an ongoing problem across all
media. At Packt, we take the protection of our copyright and licenses very seriously.
If you come across any illegal copies of our works in any form on the Internet, please
provide us with the location address or website name immediately so that we can
pursue a remedy.
We appreciate your help in protecting our authors and our ability to bring you
valuable content.
Questions
If you have a problem with any aspect of this book, you can contact us at
[email protected], and we will do our best to address the problem.
Profiling 101
Just as any infant needs to learn how to crawl before running the 100-meter
hurdles in under 12 seconds, programmers need to understand the basics of
profiling before trying to master that art. So, before we start delving into the
mysteries of performance optimization and profiling of Python programs,
we need to have a clear understanding of the basics.

Once you know the basics, you'll be able to learn about the tools and techniques.
So, to start us off, this chapter will cover everything you need to know about
profiling but were too afraid to ask: what profiling is, the main profiling
methodologies (event-based and statistical), why profiling matters, and the
things we normally measure, such as execution time, memory consumption,
and running time complexity.
What is profiling?
A program that hasn't been optimized will normally spend most of its CPU cycles
in some particular subroutines. Profiling is the analysis of how the code behaves
in relation to the resources it's using. For instance, profiling will tell you how
much CPU time an instruction is using or how much memory the full program is
consuming. It is achieved by modifying either the source code of the program or the
binary executable form (when possible) to use a tool called a profiler.
Normally, developers profile their programs either when they need to optimize
their performance or when those programs are suffering from some kind of weird
bug, which can normally be associated with memory leaks. In such cases, profiling can
help them get an in-depth understanding of how their code is using the computer's
resources (that is, how many times a certain function is being called).
A developer can use this information, along with a working knowledge of the source
code, to find the program's bottlenecks and memory leaks. The developer can then
fix whatever is wrong with the code.
There are two main methodologies for profiling software: event-based profiling and
statistical profiling. Both have their pros and cons, which you should keep in mind
when deciding which one to use.
Event-based profiling
Not every programming language supports this type of profiling. Here are some
programming languages that support event-based profiling:
• Java: The JVMTI (JVM Tools Interface) provides hooks for profilers to trap
events such as calls, thread-related events, class loads, and so on
• .NET: Just like with Java, the runtime provides events
(http://en.wikibooks.org/wiki/Introduction_to_Software_Engineering/Testing/Profiling#Methods_of_data_gathering)
• Python: Using the sys.setprofile function, a developer can trap events such as
python_[call|return|exception] or c_[call|return|exception]
Here is a simple example of event-based profiling in Python:

import sys

def profiler(frame, event, arg):
    # print every event the interpreter reports
    print 'PROFILER: %r %r' % (event, arg)

sys.setprofile(profiler)

def fib(n):
    # naive recursive Fibonacci, used throughout this chapter
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_seq(n):
    seq = []
    if n > 0:
        seq.extend(fib_seq(n - 1))
    seq.append(fib(n))
    return seq

print fib_seq(2)
As you can see, the profiler function is called on every event. We can print/gather
the information we deem relevant inside it. The last line of the sample code shows
that the simple execution of fib_seq(2) generates a lot of output data. If we were
dealing with a real-world program, this output would be several orders of magnitude
bigger. This is why event-based profiling is normally the last option when it comes
to profiling. There are other alternatives out there (as we'll see) that generate much
less output, but, of course, have lower accuracy.
Statistical profiling
Statistical profilers work by sampling the program counter at regular intervals. This
in turn allows the developer to get an idea of how much time the target program is
spending on each function. Since it works by sampling the PC, the resulting numbers
will be a statistical approximation of reality instead of exact numbers. Still, it should
be enough to get a glimpse of what the profiled program is doing and where the
bottlenecks are.
Statistical profiling has the following advantages over event-based profiling:
• Less data to analyze: Since we're only sampling the program's execution
instead of saving every little piece of data, the amount of information to
analyze will be significantly smaller.
• Smaller profiling footprint: Due to the way the sampling is made (using
OS interrupts), the target program suffers a smaller hit on its performance.
Although the presence of the profiler is not 100 percent unnoticed, statistical
profiling does less damage than the event-based one.
"func1500","statistical_profiling.c",701,1.12%
"func1000","static_functions.c",385,0.61%
"func500","statistical_profiling.c",194,0.31%
Here is the output of profiling the same Fibonacci code from the preceding section
using a statistical profiler for Python called statprof:
% cumulative self
time seconds seconds name
100.00 0.01 0.01 B02088_01_03.py:11:fib
0.00 0.01 0.00 B02088_01_03.py:17:fib_seq
0.00 0.01 0.00 B02088_01_03.py:21:<module>
---
Sample count: 1
Total time: 0.010000 seconds
As you can see, there is quite a difference between the output of both profilers for the
same code.
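For reference, a table like the preceding one can be produced with just a few lines
of code. Here is a minimal sketch, assuming the fib_seq function from the earlier
listing and the third-party statprof package (installable with pip install statprof):

import statprof

statprof.start()
try:
    fib_seq(30)  # the workload we want to sample
finally:
    statprof.stop()

statprof.display()  # prints the sampled statistics table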
Profiling is not something everyone is used to doing, especially with non-critical
software (as opposed to, say, pacemaker firmware or other execution-critical
systems). Profiling takes time and is normally useful only after we've detected that
something is wrong with our program. However, it can still be performed before
that even happens, to catch possible unseen bugs, which would, in turn, help chip
away at the time spent debugging the application at a later stage.
However, what we sometimes fail to realize is that the higher level our languages
become (we've gone from assembler to JavaScript in just a few decades), the less
we think about CPU cycles, memory allocation, CPU registers, and so on. New
generations of programmers learn their craft using higher-level languages because
they're easier to understand and provide more power out of the box. However,
those languages also abstract away the hardware and our interaction with it.
As this tendency keeps growing, the chances that new developers will even consider
profiling their software as another step in its development grow slimmer by the second.
As we know, profiling measures the resources our program uses, and, as I've stated
earlier, computing resources keep getting cheaper and cheaper. So the cost of getting
our software out, and of making it available to a large number of users, keeps getting
cheaper too.
These days, it is increasingly easy to create and publish an application that will be
reached by thousands of people. If they like it and spread the word through social
media, that number can blow up exponentially. Once that happens, something that is
very common is that the software will crash, or it'll become impossibly slow and the
users will just go away.
A possible explanation for the preceding scenario is, of course, a badly thought-out
and non-scalable architecture. After all, a single server with a limited amount of RAM
and processing power will only get you so far before it becomes your bottleneck. However,
another possible explanation, one that proves to be true many times, is that we failed
to stress test our application. We didn't think about resource consumption; we just
made sure our tests passed, and we were happy with that. In other words, we failed
to go that extra mile, and as a result, our project crashed and burned.
Profiling can help avoid that crash and burn outcome, since it provides a fairly
accurate view of what our program is doing, no matter the load. So, if we profile it
with a very light load, and the result is that we're spending 80 percent of our time
doing some kind of I/O operation, it might raise a flag for us. Even if, during our
test, the application performed correctly, it might not do so under heavy stress.
Think of a memory leak-type scenario. In those cases, small tests might not generate
a big enough problem for us to detect it. However, a production deployment under
heavy stress will. Profiling can provide enough evidence for us to detect this problem
before it even turns into one.
Execution time
The most basic of the numbers we can gather when profiling is the execution time.
The execution time of the entire process or just of a particular portion of the code
will shed some light on its own. If you have experience in the area your program is
running (that is, you're a web developer and you're working on a web framework),
you probably already know what it means for your system to take too much time. For
instance, a simple web server might take up to 100 milliseconds when querying the
database, rendering the response, and sending it back to the client. However, if the
same piece of code starts to slow down and now it takes 60 seconds to do the same
task, then you should start thinking about profiling. You also have to consider that
numbers here are relative. Let's assume another process: a MapReduce job that is
meant to process 2 TB of information stored on a set of text files takes 20 minutes. In
this case, you might not consider it as a slow process, even when it takes considerably
more time than the slow web server mentioned earlier.
To get this type of information, you don't really need a lot of profiling experience or
even complex tools to get the numbers. Just add the required lines into your code
and run the program.
For instance, the following code will calculate and time the Fibonacci sequence
for the number 30:
import datetime

tstart = None
tend = None

def start_time():
    global tstart
    tstart = datetime.datetime.now()

def get_delta():
    global tstart
    tend = datetime.datetime.now()
    return tend - tstart

def fib(n):
    # naive recursive Fibonacci
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_seq(n):
    seq = []
    if n > 0:
        seq.extend(fib_seq(n - 1))
    seq.append(fib(n))
    return seq

start_time()
print "About to calculate the fibonacci sequence for the number 30"
delta1 = get_delta()

start_time()
seq = fib_seq(30)
delta2 = get_delta()

# the final timing block and report match the output shown below
start_time()
for n in seq:
    print n
delta3 = get_delta()

print "====== Profiling results ======="
print "Time required to print a simple message: %s" % delta1
print "Time required to calculate fibonacci: %s" % delta2
print "Time required to iterate and print the numbers: %s" % delta3
6765
10946
17711
28657
46368
75025
121393
196418
317811
514229
832040
====== Profiling results =======
Time required to print a simple message: 0:00:00.000030
Time required to calculate fibonacci: 0:00:00.642092
Time required to iterate and print the numbers: 0:00:00.000102
Based on the last three lines, we see the obvious results: the most expensive part of
the code is the actual calculation of the Fibonacci sequence.
Generally speaking, the most common sources of performance bottlenecks are
as follows:
• Heavy I/O operations, such as reading and parsing big files, executing
long-running database queries, calling external services (such as HTTP
requests), and so on
• Unexpected memory leaks that start building up until there is no memory
left for the rest of the program to execute properly
• Unoptimized code that gets executed frequently
• Intensive operations that are not cached when they could be (see the
memoization sketch after this list)
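To illustrate that last bullet, here is a minimal memoization sketch (a hand-rolled
decorator, not taken from the original text) applied to the naive recursive fib used
in this chapter. Caching turns the exponential cascade of repeated sub-calls into
single dictionary lookups:

def memoize(fn):
    cache = {}
    def wrapper(n):
        # compute each argument once; later calls are O(1) lookups
        if n not in cache:
            cache[n] = fn(n)
        return cache[n]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print fib(30)  # near-instant; the uncached run above took ~0.64 seconds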
I/O-bound code (file reads/writes, database queries, and so on) is usually harder
to optimize, because doing so would imply changing the way the program deals
with that I/O (normally using core functions from the language). In contrast, when
optimizing compute-bound code (such as a function using a badly implemented
algorithm), getting a performance improvement is easier (although not necessarily
easy), because it just implies rewriting it.
A general indicator that you're near the end of a performance optimization process is
when most of the bottlenecks left are due to I/O-bound code.
There are some developments, such as embedded systems, that actually require
developers to pay extra attention to the amount of memory they use, because it is a
limited resource in those systems. However, an average developer can expect their
target system to have the amount of RAM they require.
With RAM getting ever cheaper and higher-level languages that come with automatic
memory management (such as garbage collection), developers are less likely to pay
much attention to memory utilization, trusting the platform to do it for them.
With a tool like Linux's top command-line utility, spotting memory leaks can be
easy, but that will depend on the type of software you're monitoring. If your
program constantly loads data, its memory consumption rate will be different
from that of another program that doesn't have to deal much with external resources.
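Besides watching the process from the outside with top, you can sample memory
from inside the program and log it over time, which is one way to build charts like
the ones discussed next. Here is a minimal sketch, assuming a Unix system, using
the standard library's resource module (note that ru_maxrss is reported in kilobytes
on Linux and in bytes on OS X):

import resource

def peak_rss():
    # peak resident set size of the current process so far
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print peak_rss()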
For instance, if we were to chart the memory consumption over time of a program
dealing with lots of external data, it would look like the following chart:
There will be peaks, when these resources get fully loaded into memory, but there
will also be some drops, when those resources are released. Although the memory
consumption numbers fluctuate quite a bit, it's still possible to estimate the average
amount of memory that the program will use when no resources are loaded. Once
you define that area (marked as a green box in the preceding chart), you can spot
memory leaks.
Let's look at how the same chart would look with bad resource handling (not fully
releasing allocated memory):
In the preceding chart, you can clearly see that not all memory is released when a
resource is no longer used, which is causing the line to move out of the green box.
This means the program is consuming more and more memory every second, even
when the resources loaded are released.
The same can be done with programs that aren't resource heavy, for instance, scripts
that execute a particular processing task for a considerable period of time. In those
cases, the memory consumption and the leaks should be easier to spot.
When the processing stage starts, the memory consumption should stabilize within a
clearly defined range. If we spot numbers outside that range, especially if it goes out
of it and never comes back, we're looking at another example of a memory leak.
A very common pitfall developers face when starting to code a new piece of software
is premature optimization: optimizing before having measured where the time is
actually being spent. When this happens, the result is often quite the opposite of the
intended optimized code. It can contain an incomplete version of the required
solution, or it can even contain errors derived from optimization-driven
design decisions.
As a normal rule of thumb, if you haven't measured (profiled) your code, optimizing
it might not be the best idea. First, focus on readable code. Then, profile it and find out
where the real bottlenecks are, and as a final step, perform the actual optimization.
Running time complexity (RTC) helps quantify the execution time of a given
algorithm. It does so by providing a mathematical approximation of the time a piece
of code will take to execute for any given input. It is an approximation precisely
because, by ignoring constant factors, we're able to group similar algorithms
using that value.
In other words, this notation will give us a broad idea of how long our algorithm
will take to process an arbitrarily large input. It will not, however, give us a precise
number for the time of execution, which would require a more in-depth analysis of
the source code.
As I've said earlier, we can use this tendency to group algorithms. Here are some
of the most common groups: constant time O(1), linear time O(n), logarithmic time
O(log n), linearithmic time O(n log n), quadratic time O(n^2), and factorial
time O(n!).
Here are some examples of code that have O(1) execution time:
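For instance, here are a couple of minimal illustrative sketches (not the
original listings):

def get_first(items):
    # indexing a Python list is O(1): direct offset access
    return items[0]

def push(stack, value):
    # appending to the end of a list is O(1) amortized
    stack.append(value)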
Even something more conceptually complex, like finding the value of a key inside
a dictionary (or hash table), if implemented correctly, can be done in constant time.
Technically speaking, accessing an element on the hash takes O(1) amortized time,
which roughly means that the average time each operation takes (without taking into
account edge cases) is a constant O(1) time.
The preceding chart clearly shows that both the blue (3n) line and the red one
(4n + 5) have the same upper limit as the black line (n) when x tends to infinity.
So, to simplify, we can just say that all three functions are O(n).
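In code, linear time means doing a bounded amount of work for each of the n input
elements. A minimal linear-search sketch:

def contains(items, target):
    # the worst case inspects every element exactly once: O(n)
    for item in items:
        if item == target:
            return True
    return False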
The preceding chart shows three different logarithmic functions. You can clearly
see that they all possess a similar shape: they keep growing toward infinity as
x increases, but ever more slowly.
Here are some examples of algorithms that have O(log n) order of execution:
• Binary search (see the sketch after this list)
• Calculating Fibonacci numbers (using matrix multiplications)
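Binary search is the canonical example. Here is a minimal sketch over a sorted list;
halving the remaining range on every pass is exactly what yields the log n behavior:

def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # halve the remaining range each pass
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found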
And here are some examples of algorithms that have O(n log n), or linearithmic,
order of execution:
• Merge sort (a sketch follows below)
• Heap sort
• Quick sort (at least its average time complexity)
Plotted, linearithmic functions all share a similar shape: nearly linear for small
inputs, curving gently upward as n grows.
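To see where the n log n comes from in practice, here is a minimal merge sort
sketch: the input is split for roughly log n levels, and each level does O(n)
work merging:

def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # about log n levels of splitting
    right = merge_sort(items[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):  # each level merges n items
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged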
With factorial time, O(n!), the execution time of our algorithm grows faster than
any exponential, so even modest input sizes quickly become impractical to process.

Finally, here are some examples of algorithms that have quadratic, O(n^2), order
of execution:
• Bubble sort (see the sketch after this list)
• Traversing a 2D array
• Insertion sort
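A minimal bubble sort sketch makes the quadratic cost visible: two nested passes
over the data add up to roughly n * n comparisons:

def bubble_sort(items):
    n = len(items)
    for i in range(n):              # outer pass: n iterations
        for j in range(n - 1 - i):  # inner pass: up to n - 1 comparisons
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items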