Digital Image Processing
with C++

Digital Image Processing with C++: Implementing Reference Algorithms with the CImg Library
presents the theory of digital image processing and implementations of algorithms using a dedi-
cated library. Processing a digital image means transforming its content (denoising, stylizing,
etc.) or extracting information to solve a given problem (object recognition, measurement, mo-
tion estimation, etc.). This book presents the mathematical theories underlying digital image
processing as well as their practical implementation through examples of algorithms implement-
ed in the C++ language using the free and easy-to-use CImg library.

Chapters cover the field of digital image processing in a broad way and propose practical and
functional implementations of each method theoretically described. The main topics covered
include filtering in spatial and frequency domains, mathematical morphology, feature extraction
and applications to segmentation, motion estimation, multispectral image processing and 3D
visualization.

Students or developers wishing to discover or specialize in this discipline and teachers and re-
searchers hoping to quickly prototype new algorithms or develop courses will all find in this
book material to discover image processing or deepen their knowledge in this field.

David Tschumperlé is a permanent CNRS research scientist heading the IMAGE team at the
GREYC Laboratory in Caen, France. He’s particularly interested in partial differential equations
and variational methods for processing multi-valued images in a local or non-local way. He has
authored more than 40 papers in journals or conferences and is the project leader of CImg and
G’MIC, two open-source software/libraries.

Christophe Tilmant is an associate professor in computer science at Clermont-Auvergne University. His research activities include image processing and artificial intelligence, where he has authored more than 30 papers. His teaching includes deep learning, image processing and network security. He participates in or leads several French research programs.

Vincent Barra is a full professor in computer science at Clermont-Auvergne University and associate director of the LIMOS Lab. He teaches artificial intelligence and image processing in engineering schools and master’s programs. His research activities focus on n-dimensional data analysis with methodological and application aspects in various fields. He has authored more than 90 papers in journals or conferences and participates in or leads several French and European research programs.
Digital Image Processing
with C++
Implementing Reference Algorithms
with the CImg Library

David Tschumperlé
Christophe Tilmant
Vincent Barra
First edition published 2023
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742

and by CRC Press


4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

© 2023 David Tschumperlé, Christophe Tilmant and Vincent Barra

CRC Press is an imprint of Taylor & Francis Group, LLC

Title of the original French edition, Le traitement numérique des images en C++. Implémentation
d’algorithmes avec la bibliothèque CImg - published by Ellipses - Copyright 2021, Edition Marketing S. A.

Reasonable efforts have been made to publish reliable data and information, but the author and publisher
cannot assume responsibility for the validity of all materials or the consequences of their use. The authors
and publishers have attempted to trace the copyright holders of all material reproduced in this publica-
tion and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any future
reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, trans-
mitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter
invented, including photocopying, microfilming, and recording, or in any information storage or retrieval
system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or
contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-
8400. For works that are not available on CCC please contact [email protected]

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used
only for identification and explanation without intent to infringe.

ISBN: 978-1-032-34752-3 (hbk)


ISBN: 978-1-032-34753-0 (pbk)
ISBN: 978-1-003-32369-3 (ebk)

DOI: 10.1201/9781003323693

Typeset in Nimbus Roman


by KnowledgeWorks Global Ltd.

Publisher’s note: This book has been prepared from camera-ready copy provided by the author.
Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Preamble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv

I INTRODUCTION TO CImg

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 Getting Started with the CImg Library . . . . . . . . . . . . . . . . . 17


2.1 Objective: subdivide an image into blocks 17
2.2 Setup and first program 18
2.3 Computing the variations 19
2.4 Computing the block decomposition 24
2.5 Rendering of the decomposition 26
2.6 Interactive visualization 31
2.7 Final source code 36

II IMAGE PROCESSING USING CImg

3 Point Processing Transformations . . . . . . . . . . . . . . . . . . . . . . 43


3.1 Image operations 43
3.1.1 Mathematical transformations . . . . . . . . . . . . . . . . . . . . . . . . 43
3.1.2 Bitwise transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.1.3 Contrast enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.2 Histogram operations 48


3.2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2.2 Histogram specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.2.3 Local histogram specification . . . . . . . . . . . . . . . . . . . . . . . . . 50

4 Mathematical Morphology . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.1 Binary images 54
4.1.1 Dilation and erosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.1.2 Opening and closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.2 Gray-level images 58
4.3 Some applications 59
4.3.1 Kramer-Bruckner filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.3.2 Alternating sequential filters . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.3.3 Morphological gradients . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.3.4 Skeletonization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

5 Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.1 Spatial filtering 69
5.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.1.2 Low-pass filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.1.3 High-pass filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.1.4 Adaptive filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.1.5 Adaptive window filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.2 Recursive filtering 84
5.2.1 Optimal edge detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.2.2 Deriche filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.3 Frequency filtering 94
5.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.3.2 The Fourier transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.3.3 Frequency filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.3.4 Processing a Moiré image . . . . . . . . . . . . . . . . . . . . . . . . . . 105
5.4 Diffusion filtering 110
5.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.4.2 Physical basis of diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.4.3 Linear diffusion filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

5.4.4 Non-linear diffusion filter in two dimensions . . . . . . . . . . . . . . 114


5.4.5 Non-linear diffusion filter on a video sequence . . . . . . . . . . 117

6 Feature Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121


6.1 Points of interest 121
6.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.1.2 Harris and Stephens detector . . . . . . . . . . . . . . . . . . . . . . . . 122
6.1.3 Shi and Tomasi algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
6.1.4 Points of interest with sub-pixel accuracy . . . . . . . . . . . . . . . 127
6.2 Hough transform 128
6.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
6.2.2 Line detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
6.2.3 Circle and ellipse detection . . . . . . . . . . . . . . . . . . . . . . . . . 134
6.3 Texture features 137
6.3.1 Texture spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.3.2 Tamura coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
6.3.3 Local binary pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
6.3.4 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

7 Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
7.1 Edge-based approaches 151
7.1.1 Introduction to implicit active contours . . . . . . . . . . . . . . . . 151
7.1.2 Implicit representation of a contour . . . . . . . . . . . . . . . . . . . 156
7.1.3 Evolution equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
7.1.4 Discretization of the evolution equation . . . . . . . . . . . . . . . . 160
7.1.5 Geodesic model propagation algorithm . . . . . . . . . . . . . . . 161
7.2 Region-based approaches 163
7.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
7.2.2 Histogram-based methods . . . . . . . . . . . . . . . . . . . . . . . . . . 163
7.2.3 Thresholding by clustering . . . . . . . . . . . . . . . . . . . . . . . . . . 167
7.2.4 Transformation of regions . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
7.2.5 Super-pixels partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176

8 Motion Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183


8.1 Optical flow: dense motion estimation 183
8.1.1 Variational methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
8.1.2 Lucas and Kanade differential method . . . . . . . . . . . . . . . . 189
8.1.3 Affine flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
8.2 Sparse estimation 195
8.2.1 Displacement field using spatial correlation . . . . . . . . . . . . . 196
8.2.2 Displacement field using phase correlation . . . . . . . . . . . . . 198
8.2.3 Kalman filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

9 Multispectral Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . 209


9.1 Dimension reduction 209
9.1.1 Principal component analysis . . . . . . . . . . . . . . . . . . . . . . . . 210
9.1.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
9.2 Color imaging 213
9.2.1 Colorimetric spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
9.2.2 Median filtering in color imaging . . . . . . . . . . . . . . . . . . . . . 218
9.2.3 Edge detection in color imaging . . . . . . . . . . . . . . . . . . . . . 220

10 3D Visualisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
10.1 Structuring of 3D mesh objects 227
10.2 3D plot of a function z = f (x, y) 229
10.3 Creating complex 3D objects 233
10.3.1 Details on vertex structuring . . . . . . . . . . . . . . . . . . . . . . . . . 233
10.3.2 Details on primitive structuring . . . . . . . . . . . . . . . . . . . . . . . 234
10.3.3 Details on material structuring . . . . . . . . . . . . . . . . . . . . . . . 235
10.3.4 Details on opacity structuring . . . . . . . . . . . . . . . . . . . . . . . . 235
10.4 Visualization of a cardiac segmentation in MRI 236
10.4.1 Description of the input data . . . . . . . . . . . . . . . . . . . . . . . . 236
10.4.2 Extraction of the 3D surface of the ventricle . . . . . . . . . . . . 237
10.4.3 Adding 3D motion vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 238
10.4.4 Adding cutting planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
10.4.5 Final result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240

11 And So Many Other Things. . . . . . . . . . . . . . . . . . . . . . . . . . 243


11.1 Compression by transform (JPEG) 243
11.1.1 Introduction - Compression by transform . . . . . . . . . . . . . . . 243
11.1.2 JPEG Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
11.1.3 Discrete cosine transform and quantization . . . . . . . . . . . . . 245
11.1.4 Simplified JPEG algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
11.2 Tomographic reconstruction 252
11.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
11.2.2 Analytical tomographic reconstruction . . . . . . . . . . . . . . . . 254
11.2.3 Algebraic tomographic reconstruction . . . . . . . . . . . . . . . . 259
11.3 Stereovision 264
11.3.1 Epipolar geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
11.3.2 Depth estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
11.4 Interactive deformation using RBF 273
11.4.1 Goal of the application . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
11.4.2 The RBF interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
11.4.3 RBF for image warping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
11.4.4 User interface for keypoint management . . . . . . . . . . . . . . . 277

List of CImg Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Preface

Rachid Deriche is a senior research director at the Inria center at Université Côte d’Azur in France, where he leads the
Athena research project-team. His research aims to explore
the Central Nervous System (CNS) through mathematical
and computational models of medical imaging, with a focus
on the recovery of the human brain’s structural and func-
tional connectivities. He was awarded the EADS Grand Prize
(Computer Science) by the French Academy of Sciences
in 2013, a prestigious ERC Advanced Grant for his project on Computational Brain Connectivity Mapping in 2016, a Doctorate Honoris Causa from Sherbrooke University in 2014 and a
3IA Université Côte d’Azur Chair in 2019. He has published
over 100 journal articles and over 300 conference papers.
His extensive scientific career includes three main research
areas: i) computational image processing, ii) 3D computer
vision, and iii) computational neuroimaging.

It is with great pleasure that I accepted to preface this book, which is the successful
outcome of several years of research and experience by a trio of authors who are
experts in digital image processing, combining both its theoretical aspects and its
software implementations.

I have known David Tschumperlé since I had him as a trainee in my project-team at Inria Sophia Antipolis-Méditerranée, in March 1999, as part of his Master’s degree at the University of Nice Sophia-Antipolis, in which I was teaching a module about image
processing based on variational approaches, PDE-based and Level Sets techniques
(PDE: Partial Differential Equation).
The internship was about the study and development of diffusion PDE methods
for multi-valued images (more particularly color images), i.e., images with potentially
more than three components per pixel. I selected him among several applicants of
the promotion, because he was also a general engineer in computer science, and his
double background perfectly met the need for a trainee who could program with ease
in C/C++ while mastering the theoretical aspects and those related to an efficient
software implementation of the developed algorithms. PDE-based methods often

require several hundreds or even thousands of complex iterations to be applied to


images, as you will discover in this book, and the need to optimize the machine’s
processor resources is therefore all the more pressing.

After promising results, David continued his work in a PhD thesis, under my su-
pervision. And very quickly, it became obvious that we needed to develop a reference
C/C++ library to process images with more than three channels or volume images
with any kind of values (matrices, tensors, . . . ) to support our research work.

Through the implemented algorithms, tests, successes and failures, David grad-
ually built his own personal C++ library of reusable features, in order to complete
his thesis work. The originality of David’s research work, the need to optimize and
develop software that survives his PhD period and that is “reusable” by the members
of the team constitute in my opinion the basis of the CImg library’s genesis.

At the end of David’s thesis, the ease of use of CImg had already won over the new
PhD students and permanent members of the team. At the end of 2003, we decided,
in agreement with Inria’s development department, to distribute CImg more widely
as free software, naturally using the new French free license CeCILL, which had just
been created jointly by Inria, CEA and CNRS.

More than 20 years after its first lines of code, CImg is now an image processing
library used by thousands of people around the world, at the heart of dozens of free
projects, and just as importantly, continuously and actively maintained.

At the origin of this remarkable success is, first of all, the nature and the quality of
the methodological work carried out throughout the doctoral program, as well as its
implementation guided by the development of processing algorithms that must work
on images of types and modalities from the field of computer vision (cameras, video,
velocity fields) as well as in the satellite or medical fields, in particular neuroimaging,
with magnetic resonance diffusion imaging and its well-known model called the diffu-
sion tensor.

This aspect of data genericity was very quickly a central element in the design and
success of the library. With a focus on simplicity of design and use, and a constant
and coherent development of the library API, the authors have clearly succeeded in
coupling ease of use with the genericity of the processing that the library allows. The
free distribution of the library has allowed the academic world, as well as the research
and industrial world, to discover the prototyping and implementation of efficient image
processing algorithms in a gentle and enjoyable way.

For teachers, researchers, students or engineers, this book will provide you with
an introduction to the vast field of image processing, as well as an introduction to the
CImg library for the development of state-of-the-art algorithms.

This book is expected to spark new passions for image processing, e.g., for begin-
ners or more experienced C++ developers who are interested in getting started in this
discipline. But this book will also shed new light on the field of image processing for
users and readers interested in recent advances in artificial intelligence, deep learning,
and neural networks. You will learn, for example, that it is not necessary to have a
neural network with 500 million weights, nor a million training images, to extract
the edges of an image, to segment it, to detect geometric features as segments and
circles, or objects located in it, to estimate displacement vectors in video sequences,
etc. And even better, you will be able to study the implementations of corresponding
algorithms, disseminated and explained throughout this book, made with the CImg
library, while testing them on your own data.

Exploring an exciting branch of science like image processing, in a reproducible way, with a free library as good as CImg, is a valuable gift that the authors are offering us. I personally dreamed about it; they made it happen! I would like to thank them because, with this book, digital image processing is not only given a new life but also opens new perspectives for a bright future.

Rachid Deriche
Sophia Antipolis, June 22, 2022.
Preamble

WHAT IS IMAGE PROCESSING?


Image processing is a discipline where different scientific fields meet: signal process-
ing, applied mathematics and computer science. As a result, the definition of what
image processing is varies according to the background of the person speaking about it.
Signal processing is a broader discipline that consists in extracting information from
a measurement or an observation that is generally perturbed by noise, distorted and
where the information we are looking for is not directly accessible. The word signal,
coming from electrical engineering, is a generic term that represents an observable
quantity that can be a measurement over time (one dimension = time), an image (two
dimensions = two distances), a video sequence (three dimensions = two distances
+ time), a volume (three dimensions = three distances), a temporal volume (four
dimensions = three distances + time), . . . The objective of signal processing, and thereby of image processing, is to develop methods that efficiently recover this information, for example through a denoising, reconstruction or estimation process. The
design of these processing methods calls upon many fields of mathematics (stochastic
processes, statistics, probability, linear algebra, . . . ) and applied mathematics (infor-
mation theory, optimization, numerical analysis, . . . ).

The practical realization of these methods depends on the nature of the image. In the vast majority of cases, images are in digital form, i.e., sampled and quantized signals. One then performs digital image processing, that is, image processing algorithms executed on digital machines (computers or dedicated circuits).

WHAT IS AN IMAGE?
An image is a d-dimensional signal. In order to process it, we associate this signal with
the notion of abstract signal. In this book we will only consider deterministic signals
to which we associate a function. In the context of random or stochastic signals, we
can use for example random processes.

To present an image processing method, we can switch between a continuous representation of the image for the theory and its numerical representation for its computer realization (Eq. 1 and Fig. 1).

$$
\underbrace{I : \mathcal{Z} \subset \mathbb{R}^d \to \mathbb{R}^c,\;\; (x_1,\dots,x_d) \mapsto I(x_1,\dots,x_d)}_{\text{Continuous image}}
\qquad
\underbrace{I : \Omega \subset \mathbb{N}^d \to \mathbb{Z}^c,\;\; [i_1,\dots,i_d] \mapsto I[i_1,\dots,i_d]}_{\text{Digital (or numerical) image}}
\tag{1}
$$

Figure 1 – Continuous and digital representations of a color image (d = 2, c = 3): the continuous image I : Z ⊂ R² → R³, (x, y) ↦ I(x, y) becomes, after sampling and quantization, the digital image I : Ω ⊂ N² → Z³, [i, j] ↦ I[i, j].

The conversion of a continuous signal (or image) to a digital (or numerical) signal
(or image) is carried out in two stages (Fig. 2):
• Sampling: discretize the evolution parameters (time, distances);
• Quantization: discretize the signal values.

[Figure 2: three panels showing a continuous-time signal, the corresponding discrete-time signal, and the resulting digital signal.]

Figure 2 – Principle of sampling and quantization. Example with 16 samples and a quantization on 2 bits (2² = 4 discrete values).
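To make these two steps concrete, here is a minimal C++ sketch (independent of CImg) that samples an illustrative signal at 16 instants and quantizes each sample on 2 bits, as in Figure 2; the signal formula and all parameter names are purely hypothetical.

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  const int nb_samples = 16;        // sampling: 16 instants
  const int nb_bits = 2;            // quantization: 2 bits, i.e., 4 levels
  const double vmin = 0, vmax = 3;  // assumed dynamic range of the signal
  const int nb_levels = 1 << nb_bits;
  std::vector<double> sampled(nb_samples);
  std::vector<int> quantized(nb_samples);
  for (int i = 0; i < nb_samples; ++i) {
    double t = i;                                   // sampling step = 1
    double value = 1.5 + 1.5*std::sin(0.4*t);       // "continuous" signal, illustrative only
    sampled[i] = value;                             // discrete-time signal
    quantized[i] = (int)std::round((value - vmin)/(vmax - vmin)*(nb_levels - 1)); // digital signal
    std::printf("t = %2d   f(t) = %.3f   level = %d\n", i, sampled[i], quantized[i]);
  }
}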
I- Introduction to CImg
1. Introduction

WHOM IS THIS BOOK FOR?

With this book, we would like to offer you an enchanting, yet pragmatic walk
through the wonderful world of image processing:

• Enchanting, because image processing is a vast and captivating universe, resting on diversified but solid theoretical bases, which are used both to model problems and to propose efficient algorithms to solve them. The number of amusing and/or practical applications that can be imagined and implemented in image processing is potentially infinite.
• Pragmatic, because we do not want to make a simple overview of the different
formalisms. We will try to make each of the techniques discussed tangible,
by systematically “translating” the theoretical points presented in the form of
functional and usable C++ programs.

This intertwining of theory and implementation is the essence of this book, and its
content is therefore intended for a variety of readers:

• C++ Programmers, beginners or experts, amateurs of applied mathematics,


wishing to study the discipline of image processing, to eventually develop
concrete applications in this field.
• Mathematicians, image or signal processing makers, wishing to confront the
practical problem of software implementation of the various methods of the
domain in C++, a language universally recognized as being generic, powerful
and fast at execution.
• Teachers or students in computer science and applied mathematics, who will
find in this book the basic building blocks to develop or perform their practical
work.
• Developers or experienced researchers, for whom the CImg library described
in this book will be an ideal companion for rapid and efficient prototyping of
new innovative image processing algorithms.

It is important to underline that we will only use simple concepts of the C++ lan-
guage and that the proposed programs will therefore be readable enough to be easily
transcribed into other languages if necessary. The CImg library, on which we rely,
has been developed for several years by researchers in computer science and image
processing (from CNRS - French National Centre for Scientific Research, INRIA -
French National Institute for Research in Digital Science and Technology, and universities), mainly to allow rapid prototyping of new algorithms. It is also used as
a development tool in the practical work of several courses given at the bachelor’s,
master’s or engineering school level. Its use is therefore perfectly adapted to the
pedagogical approach that we wish to develop in this book.

The book is structured to allow, on the one hand, a quick appropriation of the
concepts of the CImg library (which motivates the first part of this book), and on
the other hand, its practical use in many fields of image processing, through various
workshops (constituting the second part of the book). The set of examples proposed,
ranging from the simplest application to more advanced algorithms, helps develop joint know-how in theory, algorithmics and implementation in the field of image processing. Note that all the source code published in this book is also available in
digital format, on the following repository:
https://github.com/CImg-Image-Processing-Book.

WHY STILL DO IMAGE PROCESSING TODAY?

Images are everywhere. They are the medium of many types of information and are used in various applications such as medical diagnosis using MRI (Magnetic Resonance Imaging), CT or scintigraphic imaging, the study of deforestation by satellite imagery, the detection of abnormal behaviors in crowds from video acquisitions, photographic retouching and special effects, or handwriting recognition, to name but a few. Today, even without realizing it, we use image processing algorithms on a daily basis: automatic contrast enhancement of our favorite vacation photos, detection of license plates at the entrance of the supermarket parking lot, automatic detection of faces or places in the photographs that we post on social networks, etc.

Image processing is based on a solid theoretical background, derived from signal


processing and Shannon’s information theory [38]. Processing an image aims to
extract one or several relevant pieces of information for a given problem: the size of
an object, its localization, its movement, its color, even its identification. Extracting
this information may require some pre-processing steps, if the image is too noisy or
badly contrasted.

Image processing took off in the 1960s, with the advent of computers and the devel-
opment (or rediscovery) of signal processing techniques (the Fourier transform, for
example). From then on, in all the domains that this book proposes to approach in its
second part, many algorithms have been developed, ever more powerful and precise, able to process images of ever-increasing size and in ever-larger numbers.

In parallel to this development, since the 2000s, machine learning, and more
particularly deep learning, has achieved unequalled performance in computer vision,
even surpassing human capacities in certain areas. A deep neural network is now able
to annotate a scene by identifying all the objects, to realistically colorize a grayscale
image, or to restore highly noisy images.

So why are we still interested in “classical” image processing ? The shift from
image processing to deep learning is accompanied by a paradigm shift in data process-
ing: classical techniques first compute features on the original images (Chapter 6) and
then use them for the actual processing (segmentation: Chapter 7, tracking: Chapter
8, . . . ). The computation of these features is possibly preceded by pre-processing
(Chapters 3, 4 and 5) facilitating their extraction. In contrast, deep learning learns
these features, most often through convolution layers in deep networks, and uses these
learned features to perform processing.
And this is where the major difference comes in: to perform its task, the deep
network must learn. And to do this, it must have a training set made up of several
thousands (or even millions) of examples, telling it what it must do. For example, to
be able to recognize images of cats and dogs, the network must learn on thousands of
pairs (x, y), where x is an image of a cat or a dog and y is the associated label, before
being able to decide on an unknown image (Fig. 1.1).
However, obtaining this data is far from being an easy task. If, in some domains
(such as object recognition), well-established labeled databases are available (for
example, ImageNet1 , composed of more than 14 million different images distributed
in 21,800 categories), it is most often very difficult, if not impossible, to assemble a sufficiently large and well-curated training set to train a neural network.
Moreover, beyond this problem of training data availability, deep neural networks
often require, during their learning phase, significant computing power and hardware
resources (via the use of GPUs - Graphics Processing Units - and TPUs - Tensor
Processing Units). This is power that the student, or the engineer in search of a quick result, will not necessarily have at their disposal.
1 http://www.image-net.org

Learning Inferring

Figure 1.1 – Principle of learning in image processing.

So, even if the field of deep neural network learning has been expanding rapidly
for the last twenty years, and provides impressive results, classical image processing
has certainly not said its last word!

WHY DO IMAGE PROCESSING IN C++?

Among the plethora of existing programming languages, the C++ language has
the following advantages:

• It is a multi-paradigm, well-established, and popular language. It is generally


taught in universities and engineering schools offering computer science related
courses. It therefore reaches a wide audience, who will be able to use it to write
programs addressing a wide range of problems, in order to solve various tasks,
both at “low-level” and “high-level”.
• C++ is a compiled language, which produces highly optimized binaries. In
image processing, the data to be processed is often large: a standard resolution
image has several million values to analyze, and it is therefore important to have
programs that are fast enough to iterate on these values within a reasonable
time, which is not always possible with interpreted languages. In Python, for
example, most of the existing modules for image processing are implemented
in C/C++, for speed reasons (if you have ever tried looping over all the pixels of an image with a “pure” Python loop, you can guess why!).

• The use of C++ templates eases the manipulation of generic image data, for example, when the pixel values of the images you process have different numerical types (Boolean, integer, floating point, etc.), as the sketch below illustrates.
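The following minimal sketch (the gain value, image sizes and the use of noise() to generate content are arbitrary choices) shows such a generic loop with CImg: the same templated function processes an 8-bit color image and a floating-point image without modification.

#define cimg_display 0   // no display needed in this small example
#include "CImg.h"
using namespace cimg_library;

// Generic brightness scaling, valid for any pixel type T (unsigned char, float, ...).
template<typename T>
void scale_brightness(CImg<T>& img, float gain) {
  cimg_forXYC(img, x, y, c)                       // loop over all pixels and channels
    img(x, y, 0, c) = (T)(img(x, y, 0, c)*gain);
}

int main() {
  CImg<unsigned char> photo(640, 480, 1, 3, 0);   // 640x480 RGB image, filled with 0
  photo.noise(50);                                // arbitrary content
  scale_brightness(photo, 1.2f);
  CImg<float> fimg(256, 256, 1, 1, 0.5f);         // scalar float image
  scale_brightness(fimg, 2.0f);
}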

WHY USE AN EXTERNAL LIBRARY?

Classically, programming image processing algorithms requires the ability to import/export images in the form of arrays of values. Images are usually stored on the disk in standardized file formats (JPEG, PNG, TIFF, . . . ). Displaying them on the screen is also often desirable. However, no such functionality is present in the standard C++ library, neither for loading nor saving image files, nor for analyzing, processing, and visualizing images.

One has to realize that writing such features from scratch is actually a tedious
task. Today, classic file formats have indeed a very complex binary structure: images
are mostly stored on disk in compressed form, and each format uses its own compression method, which may or may not be lossy. In practice, each image format is associated
with an advanced third-party library (e.g., libjpeg, libpng, libtiff, . . . ), each
being focused on loading and saving image data in its own file format. Similarly,
displaying an image in a window is a more complex task than it seems, and is al-
ways done through the use of third-party libraries, either specialized in “raw” display
(libX11, Wayland under Unix, or gdi32 under Windows), or in the display of
more advanced graphical interfaces with widgets (GTK, Qt, . . . ).

Finally, basic processing algorithms themselves are not always trivial to implement,
especially when optimized versions are required. For all these reasons, one usually
resorts to a high-level third-party library specialized in image processing, to work
comfortably in this domain in C++.

WHICH C++ LIBRARIES FOR IMAGE PROCESSING?

A relevant image processing library should allow the reading/writing of image


data in the most common file formats, the display of these images, and should propose
a few of the most classical algorithms for image processing. Among the dozens of
existing choices, we propose this purified list of libraries, verifying these minimal
conditions:

CImg, ITK, libvips, Magick++, OpenCV, and VTK.

Why only these six libraries? Because they are well-established ones (all of them
existing for more than 15 years), widely used in the image processing community

and therefore well-proven in terms of performance and robustness. They are also still
under active development, free to use, multi-platform, and extensive enough to allow
the development of complex and diversified image processing programs. We have
voluntarily put aside libraries that are either distributed under a proprietary license,
or that are too young, or not actively maintained, or whose application domain is too restrictive (for example, libraries that can only read/write images in a few file formats, or that offer too limited a set of image processing algorithms).
This diversity of choice actually reflects the various application domains that were
initially targeted by the authors of these different libraries.

Our selection can be summarized as follows:


• CImg (Cool Image) was created in the early 2000s by the French National
Institute for Research in Digital Science and Technology (Inria), a French
public research institute. It is an open source library that was designed in the
context of research in image processing algorithms, in order to allow its users
(initially, mainly researchers, teachers and PhD students) to conceive, easily
implement and test new original image processing algorithms, even from scratch.
CImg can be downloaded from http://cimg.eu.
• ITK (Insight Segmentation and Registration Toolkit) is a library made available
in 2001, initially created for medical image analysis and processing (the project
was initiated by the American National Library of Medicine). This medical
specialization is still relevant today, and ITK is mainly used for visualization,
segmentation and registration of medical images. ITK can be downloaded from
https://itk.org.
• libvips is a library written mainly in C, in the 1990s, specialized in the pro-
cessing of large images. It is part of a larger framework (named VIPS) which
also includes a software offering a graphical interface dedicated to image vi-
sualization and processing. It is a library well adapted when the images to be
analyzed or processed are very large, typically when the set of images is larger
than the memory available on the computer. libvips can be downloaded from
https://libvips.github.io/libvips.
• Magick++ is one of the oldest libraries in our list. It was designed in the
late 1980s. Its original purpose was the conversion of formats between 2D
images in color or grayscale. However, it is not that easy to use for writing
processing algorithms for generic image types (for example, 3D volumetric
images or images with more than 4 channels). Magick++ can be downloaded
from https://imagemagick.org/Magick++.
• OpenCV is a library developed in the early 2000s, which focuses on computer
vision, a field at the crossroads of image processing and artificial intelligence
seeking to imitate human vision. OpenCV is a very popular library, and offers a

large set of already implemented algorithms, often in a very optimized way. It


is ideal for a user wishing to chain together elementary algorithmic blocks in
order to build efficient processing pipelines. On the other hand, using OpenCV
for prototyping new algorithms is less convenient: the already implemented
algorithms act like “black boxes”, which are difficult to modify. The relatively
complex API of the library does not really facilitate writing new algorithms
from scratch. OpenCV can be downloaded from https://opencv.org.
• VTK is a library created in the mid-1990s, specialized in the processing and
visualization of 3D meshes. It is therefore focused on the processing and vi-
sualization of structured data in the form of graphs or meshes, rather than on
the processing of more traditional images defined on regular sampling grids.
This library is distributed by the American company Kitware, Inc., which also
develops the ITK library. VTK and ITK are often used together, mainly for the
analysis and visualization of medical images. VTK can be downloaded from
https://vtk.org.

Figure 1.2 – Logos of the main open-source C++ libraries for image processing: a) CImg, b) ITK, c) VTK, d) OpenCV, e) Magick++ (note that the libvips library does not have an official logo).

WHY DID WE ADOPT CImg FOR THIS BOOK?

CImg is a lightweight C++ library, which has been around for more than 20
years. It is a free library, whose source code is open (distributed under the CeCILL-C
open-source license), and which runs on various operating systems (Windows, Linux,
Mac OSX, FreeBSD, etc.). CImg gives the programmer access to classes and methods
for manipulating images or sequences of images, an image being defined here in the
broadest sense of the term, as a volumetric array with up to three spatial coordinates

(x, y, z) and containing vector values of any size and type. The library allows the
programmer to be relieved of the usual “low-level” tasks of manipulating images on
a computer, such as managing memory allocations and I/O, accessing pixel values,
displaying images, user interaction, etc. It also offers a fairly complete range of
common processing algorithms, in several areas including:

• Arithmetic operations between images, and applications of usual mathematical


functions: abs(), cos(), exp(), sin(), sqrt(),. . .
• Statistical calculation: extraction of the minimum, maximum, mean, variance,
median value, . . .
• Color space manipulation: RGB, HSV, YCbCr, L*a*b*, . . .
• Geometric transformations: rotation, mirror, crop, resize, warp, . . .
• Filtering: convolution/correlation, Fourier transform, Gradient and Laplacian
computation, recursive filters, morphological filters, anisotropic smoothing, . . .
• Feature extraction: interpolated values, connected components labeling, dis-
tance function, . . .
• Matrix calculus: SVD and LU decomposition, eigenvalues/eigenvectors, linear
system solver, least-square solver, . . .
• Graphical primitive drawing: segments, polygons, ellipses, splines, text, 3D
mesh objects, . . .
• Evaluation of mathematical expressions, specified as character strings.
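To give an idea of how these features chain together in practice, here is a minimal, hypothetical sketch (the file names and parameter values are placeholders, and reading a JPEG file may require an external tool or the libjpeg link discussed later in this chapter): it loads a color image, smooths it, computes basic statistics, draws a graphical primitive and saves the result.

#define cimg_display 0                    // command-line example, no window needed
#include "CImg.h"
#include <cstdio>
using namespace cimg_library;

int main() {
  CImg<unsigned char> img("photo.jpg");        // placeholder input file
  CImg<float> smoothed = img.get_blur(2.5f);   // smoothing with standard deviation 2.5
  float mean = (float)smoothed.mean();         // basic statistics
  float maxi = (float)smoothed.max();
  const unsigned char red[] = { 255, 0, 0 };
  img.draw_rectangle(10, 10, 60, 60, red, 1.0f);  // graphical primitive, fully opaque
  img.save("photo_out.png");
  std::printf("mean = %g, max = %g\n", mean, maxi);
}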

Compared to its competitors, the properties of the CImg library make it particularly
interesting in a pedagogical context such as the one we want to develop with this book:

• CImg is a lightweight library, and therefore particularly easy to install and


deploy. CImg as a whole is indeed implemented as a single header file CImg.h,
which you just have to copy on your computer to use the library’s features
immediately.
• CImg is a generic library, able to manipulate indifferently 1D (signals), 2D or
3D images, or sequences of images, whose pixel values are of any type (via the
use of C++ templates). It is therefore adaptable to all kinds of input images
and can be used to tackle a wide variety of problems and applications in image
processing.
• CImg is simple to understand and manipulate, since it is entirely structured
in only four different classes and two namespaces. Its design does not rely
on advanced C++ concepts, which makes it easy to learn and use, even for
C++ beginners. It also makes it easy to re-read and modify algorithms already
implemented in the library. Finally, its syntax, which favors the elaboration of
pipelines, allows the writing of source codes that are both concise and readable.

• CImg is powerful. Most of its algorithms can be run in parallel, using the
different cores of the available processor(s). Parallelization is done through the
use of the OpenMP library, which can be optionally activated when compiling a
CImg-based program.
• CImg is an open source library, whose development is currently led by the
GREYC (Research lab in digital science of the CNRS), a public research lab-
oratory located in Caen, France. This ensures that the development of CImg
is scientifically and financially independent from any private interest. The
source code of CImg is and will remain open, freely accessible, studyable by
anyone, and thus favoring the reproducibility and sharing of image processing
algorithms. Its permissive free license (CeCILL-C) authorizes its use in any
type of computer program (including those with closed source code, intended
to be distributed under a proprietary license).

All these features make it an excellent library for practicing image processing in C++,
either to develop and prototype new algorithms from scratch, or to have a complete and
powerful collection of image processing algorithms already implemented, immediately
usable in one’s own programs.

STRUCTURE OF THE CImg LIBRARY

The CImg API is simple: the library exposes four classes (two of them with a
template parameter) and two namespaces (Fig. 1.3).

Figure 1.3 – Structure of the CImg library.

CImg defines two namespaces:

• cimg_library: this namespace includes all the classes and functions of the
library. A source code using CImg usually starts with the following two lines:

#include "CImg.h"
using namespace cimg_library;

Thus, the programmer will have direct access to the library classes, without
having to prefix them with the namespace identifier cimg_library::.
• cimg: this namespace contains some utility functions of the library, which
are not linked to particular classes, and which can be useful for the devel-
oper. For example, functions cimg::sqr() (returns the square of a number),
cimg::factorial() (returns the factorial of a number), cimg::gcd()
(returns the greatest common divisor between two numbers) or
cimg::maxabs() (computes the maximum absolute value between two numbers) are some of the functions defined in the cimg:: namespace.
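A minimal usage sketch of these utility functions (the numerical values are arbitrary, and the results given in comments are indicative):

#define cimg_display 0   // no display needed for this example
#include "CImg.h"
#include <cstdio>
using namespace cimg_library;

int main() {
  std::printf("sqr(3.5)      = %g\n",  (double)cimg::sqr(3.5));          // 12.25
  std::printf("factorial(5)  = %g\n",  (double)cimg::factorial(5));      // 120
  std::printf("gcd(36,24)    = %ld\n", (long)cimg::gcd(36, 24));         // 12
  std::printf("maxabs(-7,4)  = %g\n",  (double)cimg::maxabs(-7.0, 4.0)); // 7
}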
CImg defines four classes:

• CImg<T>: this is the most essential and populated class of the library. An
instance of CImg<T> represents an “image” that the programmer can manip-
ulate in his C++ program. The numerical type T of the pixel values can be
anything. The default type T is float, so we can write CImg<> instead of
CImg<float>.
• CImgList<T>: this class represents a list of CImg<T> images. It is used for
example to store sequences of images, or sets of images (that may have different
sizes). The default type T is float, so you can write CImgList<> instead of
CImgList<float>.
• CImgDisplay: this class represents a window that can display an image on
the screen, and interact through user events. It can be used to display animations
or to create applications requiring some user interactions (e.g., placement of
key points on an image, moving them, . . . ).
• CImgException: this is the class used to handle library exceptions, i.e.,
errors that occur when classes and functions of the library are misused. The pro-
grammer never instantiates objects of this class, but can catch the corresponding
exceptions raised with this class by the library to manage errors.

This concise design of the library makes it easy to learn, even for novice C++ pro-
grammers.
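As an illustration only, the following sketch involves the four classes at once: it builds an image and a small list, opens a display window until the user closes it, and catches a possible CImg exception (all sizes and values are arbitrary).

#include "CImg.h"
#include <cstdio>
using namespace cimg_library;

int main() {
  try {
    CImg<> img(256, 256, 1, 3, 0);               // CImg<float> by default: 256x256 RGB
    img.noise(30);                               // arbitrary content
    CImgList<> list(img, img.get_mirror('x'));   // a list of two images
    CImgDisplay disp(img, "A first CImg window");
    while (!disp.is_closed()) disp.wait();       // simple event loop
  } catch (CImgException &e) {
    std::fprintf(stderr, "CImg error: %s\n", e.what());
  }
}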

WHY ONLY A SINGLE HEADER FILE?

CImg is a library distributed in a rather particular form, since it is entirely implemented in a single C++ header file, named CImg.h.

At first sight, this conception may seem surprising: in C/C++, the libraries that
one encounters are generally organized in the form of one or more header files (most
often, one header file per different structure or class defined by the library), completed
by a binary file (.a or .so files under Linux, .lib or .dll files under Windows),
which contains the library’s functions in compiled form.

Our teaching experience with CImg has shown that the first question raised by
new users of the library is: “Why is everything put in one file?”. Here we answer this
frequent question, by listing the technical choices justifying this kind of structuring,
and by pointing out the advantages (and disadvantages) of it. The global answer takes
into account several different aspects of the C++ language, and requires consideration
of the following points:

1. Why doesn’t CImg propose a pre-compilation of its functions as .a, .lib, .so or .dll binary file(s)?

Because the library is generic. The CImg<T> image and CImgList<T> image
structures exposed by the library have a template parameter T, which corresponds
to the type of pixels considered for these images. However, the types T that will be
selected by the user of CImg classes are a priori unknown.

Of course, the most commonly used types T are in practice the basic C++ types
for representing numbers, i.e.,: bool (Boolean), unsigned char (unsigned 8-bit
integer), unsigned short (unsigned 16-bit integer), short (signed 16-bit inte-
ger), unsigned int (unsigned 32-bit integer), int (signed 32-bit integer), float
(float value, 32-bit), double (float value, 64-bit), etc. However, it is not uncom-
mon to see source code that uses images of other types, such as CImg<void*>,
CImg<unsigned long long> or CImg<std::complex>.

One might think that pre-compiling the methods of the two classes CImg<T> and
CImgList<T> for these ten or so most common types T would be a good idea. This
is to overlook the fact that many of these methods take as arguments values whose
types are themselves CImg<t> images or CImgList<t> image lists, with another
template parameter t potentially different from T.

For instance, the method


CImg<T>::warp(CImg<t>&, unsigned int, unsigned int, unsigned int)

applies an arbitrary deformation field to an image CImg<T>. It is common to use this method to deform an image of type CImg<unsigned char> (a classic color image, with 8 bits/channel), passing as argument a deformation field of type CImg<float>,

which contains deformation vectors with floating-point precision (sub-pixel).
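As a purely illustrative sketch of such a mixed-type call (the displacement formula, the mode/interpolation/boundary flags and the file names are assumptions, to be checked against the documentation of warp()), one could write:

#define cimg_display 0
#include "CImg.h"
#include <cmath>
using namespace cimg_library;

int main() {
  CImg<unsigned char> img("photo.jpg");                 // 8-bit color image (placeholder file)
  // Displacement field: 2 channels (dx, dy), floating-point (sub-pixel) values.
  CImg<float> field(img.width(), img.height(), 1, 2);
  cimg_forXY(field, x, y) {
    field(x, y, 0, 0) = 8*std::cos(y/20.0f);            // horizontal displacement
    field(x, y, 0, 1) = 8*std::sin(x/20.0f);            // vertical displacement
  }
  // Assumed flags: relative displacements, linear interpolation, Neumann boundary.
  CImg<unsigned char> warped = img.get_warp(field, 1, 1, 1);
  warped.save("warped.png");
}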

One can easily see that the multiplicity of possible combinations of types for the
arguments of the library’s methods makes it unwise to precompile these functions in
the form of binary files. The size of the generated file(s) would simply be huge, and
the functions actually used by the programmer would in practice only represent a tiny
portion of the pre-compiled functions.

The correct approach is therefore to let the compiler instantiate the methods and
functions of the CImg classes only for those combinations of template types that are
actually exploited in the user’s program. In this way, lighter and more optimized
binary objects or executables are generated, compared to what would be done with
a static binding to a large pre-compiled library. The main disadvantage is that the
functions of the CImg library used in the program must be compiled at the same time
as those of the program itself, which leads to an additional compilation overhead.

2. Why isn’t CImg subdivided into multiple header files?

If we follow this principle of usage minimality, why doesn’t CImg propose to include only the classes of the library that the user needs in his program? If he only
needs objects of type CImg<T>, why make him include a header file that also defines
CImgList<T>? After all, that’s what the standard C++ library provides: if you only
want to work with std::vector, you just have to include <vector>. . .

First, because unlike the C++ standard library, CImg defines only four different
classes, which turn out to be strongly interdependent. Moreover, the algorithms
operating on these class instances are defined as methods of the classes, not as external
functions acting on “containers”. This differs a lot from how the C++ standard library
is designed.
In practice, methods of the CImg<T> class need methods of CImgList<T>
(even if this is sometimes invisible to the user), simply because implementations
of CImg<T> methods require the functionality of CImgList<T> (and vice versa).
Similarly, CImgException is a ubiquitous class in CImg, since it is used to handle
errors that occur when library functions are misused. If the programmer does not want
to handle these errors, this class might seem useless to include. However, it is required
during compilation, since it is obviously used by the library core, which is, after all,
compiled at the same time as the user’s program.
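
As an illustration, here is a minimal sketch of explicit error handling with
CImgException (the missing file name is an assumption made for the example):

// error_handling.cpp (illustrative sketch):
#include "CImg.h"
#include <cstdio>
using namespace cimg_library;

int main() {
  try {
    CImg<unsigned char> img("missing_file.bmp"); // throws a CImgException on failure
    img.display();
  } catch (CImgException &e) {
    std::fprintf(stderr,"CImg error: %s\n",e.what());
    return 1;
  }
  return 0;
}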

This class interdependence means that if we wanted to have one header file per
CImg class, the first thing each file would do is probably include the header files for the
other classes. From a purely technical point of view, the gain from such a split would be
null: the four header files would be systematically included as soon as only one of
the classes of the library is used. Consequently, CImg provides only one header file,
rather than one per class, without any real consequence on the compilation time.

3. What are the advantages of a single header file?

But the fact that CImg is distributed as a single header file is not only a matter of
satisfying technical constraints imposed by the C++ language. In practice, it is a real
advantage for the library user:

• Easy to install: copying a single file to a folder to get access to the functions
of a complete image processing library is comfortable (a thing that few current
libraries actually offer).

• Lightness and performance: on-the-fly compilation of CImg functions
means more optimized output binaries. The library code as well as the program
that uses it are compiled as a single entity. As a consequence, some of the
library functions can be inlined in the final binary, bringing more performance
at runtime (typically, the methods for accessing pixel values in images).

• Fine-tuning of dependencies: CImg's on-the-fly compilation also allows the
user to define specific macros that tell the library to use features from a particular
third-party library. For example, writing in your program:

#define cimg_use_tiff
#define cimg_use_jpeg
#include <CImg.h>

will tell CImg to use the functions of the libtiff and libjpeg libraries
when it needs to read or write images in TIFF or JPEG format (it is then of
course necessary to link the generated binary, statically or dynamically, with
these two libraries). There are a lot of such configuration macros that can be set
to activate specific features of CImg when compiling a program.

On the other hand, this means that it is also possible to compile a CImg-based
program without activating any dependencies on external libraries. This flexibil-
ity in the control of dependencies is very important: using CImg does not imply
an automatic dependency on dozens of “low-level” third-party libraries, whose
functionalities might not be used by the programmer. Yet this is what happens
with most of the competing image processing libraries!

• Possibility of internally extending the library: in a similar way, defining
some of these macros allows the library user to insert pieces of code directly
into the CImg classes, while they are being compiled, via an original plug-in
system (a minimal sketch is given after this list). Consequently, the library is
extensible from the outside, without having to explicitly modify its code: users
can, if desired, add their own methods to the CImg<T> and CImgList<T>
classes (for example, new processing algorithms). We will not use this possibility
in this book, but it is an interesting approach when one wishes to develop
functionalities that integrate harmoniously with the rest of the library API.
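
Here is a minimal sketch of this plug-in mechanism. The macro name cimg_plugin is the
one documented by CImg; the file name and the added method are purely illustrative
assumptions, not taken from the book:

// my_plugin.h : inserted verbatim inside the CImg<T> class definition
// when the cimg_plugin macro is defined (hypothetical example method).
CImg<T>& fill_checkerboard(const unsigned int cell_size=8) {
  cimg_forXY(*this,x,y)
    (*this)(x,y) = (T)(((x/cell_size + y/cell_size)%2)*255);
  return *this;
}

// main.cpp :
#define cimg_plugin "my_plugin.h"
#include "CImg.h"
using namespace cimg_library;

int main() {
  CImg<unsigned char> img(256,256,1,1,(unsigned char)0);
  img.fill_checkerboard().display("Plug-in method in action");
  return 0;
}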

But one of the great strengths of the CImg library is its ease of use, and its ability to
express image processing algorithms in a clear and concise way in C++. This is what
we will show you in the rest of this book.
2. Getting Started with the CImg Library

As mentioned in Chapter 1, CImg is structured with a minimum of classes to represent
the relevant and manipulable objects of the library. In practice, the possibilities offered
by the library are expressed by the great diversity of methods available in its classes
(in particular those of the principal class CImg<T>, representing the main “image”
object). This chapter illustrates the actual use of CImg, by detailing the development
of a simple C++ source code example (about 100 lines), to implement a basic image
processing application. This example has been chosen to maximize the use of the
different classes and concepts of the library, without actually requiring advanced
knowledge in image processing. At the end of this tutorial, you will have sufficient
experience with CImg to start building your own applications: you will quickly realize
that CImg is easy to use, not based on complex C++ concepts, and therefore perfectly
suited to discover, learn and teach image processing, as well as to build prototypes or
more elaborate applications in this field.
Let us first explain the purpose of the application, then the details of its C++
implementation.

2.1 Objective: subdivide an image into blocks


A 2D digital image is represented as a 2D array of pixels, usually in grayscale or color.
If we want to analyze the global geometry of an image (e.g., to detect its contours or
to compress its data), it can be interesting to subdivide the image into several distinct
regions of interest, each having its own characteristics (flat areas, texture, contours,
etc.). The image decomposition into blocks is one of the simplest ways to achieve
this subdivision: we try to split the image into several square (or rectangular) areas of
different sizes, such that the large rectangles contain few locally complex structures
(rather flat areas), while the small blocks focus on the contours and textures of the
various elements present in the image.
Our goal here is thus to implement such a decomposition in blocks of a color
image, but also to propose an interactive visualization of this decomposition. The
visualization should let the user explore each extracted block, by visualizing its content
in the original image, as well as its internal variations (Fig. 2.1b).

Figure 2.1 – Goal of our first CImg-based program: decompose an image into blocks
of different sizes, and visualize the result in an interactive way. a) Color input image;
b) Visualization of the image decomposition into blocks.

The most informed readers will notice a parallel with the so-called quadtree
decomposition. The same type of decomposition is proposed here, but by putting
aside the tree structure of the decomposition (so, in practice, we only keep the leaves
of the quadtree).

2.2 Setup and first program


Since the CImg library is written entirely in a single CImg.h header file, it does
not require any special pre-compilation or installation procedure. All you have to do
is copy CImg.h to a folder of your choice (e.g., your project’s folder), to have the
library ready to be used. Note however that some of the features it provides, such as
displaying images in windows, or reading/writing compressed files (typically in .png
or .jpg format), may require the use of third-party libraries. To use these function-
alities, be sure to have these intermediate libraries installed on your system: gdi32
(under Windows) or X11/pthread (under Unix) for display capabilities, libpng
or libjpeg for image format management, etc. CImg has many configuration flags
allowing you to enable or disable the use of these third-party libraries when compiling
programs, which allows you to fine-tune your dependence on external libraries.
In this tutorial, we need to display images in windows, so we need to enable the
display capability in CImg. It is actually enabled by default: being able to display
images when programming in image processing is a feature that seems useful most
of the time!
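
Conversely, for a program that does not need any windowing at all, display support can be
turned off entirely. Here is a hedged sketch using the cimg_display configuration macro
(the file names are illustrative assumptions):

// headless_tool.cpp : no window, no X11/gdi32 dependency (illustrative sketch).
#define cimg_display 0   // disable all display-related code in CImg
#include "CImg.h"
using namespace cimg_library;

int main() {
  CImg<unsigned char> img("kingfisher.bmp");
  img.mirror('x').save("kingfisher_mirrored.bmp"); // pure processing, no display
  return 0;
}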

All the necessary third-party libraries are expected to be installed, so let’s write
our first code:
Code 2.1 – A first code using CImg.

// first_code.cpp:
// My first code using CImg.
#include "CImg.h"
using namespace cimg_library;

int main() {
  CImg<unsigned char> img("kingfisher.bmp");
  img.display("Hello World!");
  return 0;
}

As you can guess, the purpose of this first program is to instantiate an image object of
type CImg<unsigned char> (i.e., each pixel channel stored as an 8-bit unsigned
integer), by reading the image from the file kingfisher.bmp (the bmp format is
generally uncompressed, so it doesn’t require any additional external dependencies),
and displaying this image in a window.

In order to compile this program, we must specify that the program has to be
linked with the necessary libraries for display. Under Linux, with the g++ compiler,
we will for instance write the following minimal compilation command:
$ g++ -o first_code first_code.cpp -lX11 -lpthread

Under Windows, with the Visual C++ compiler, we will write in a similar way:
> cl.exe /EHsc first_code.cpp /link gdi32.lib user32.lib
shell32.lib

Running the corresponding binary does indeed display the image kingfisher.bmp
in an interactive window, and it allows us to explore the pixel values and zoom in
to see the details of the image (Fig. 2.2). At this point, we are ready to use more
advanced features of CImg to decompose this image into several blocks.

2.3 Computing the variations


Figure 2.2 – Result of our program first_code.cpp.

The principle of image decomposition into blocks is based on a statistical analysis of
the local variations of pixel values. We try to separate areas that locally have strong
contrast variations (generally corresponding to contours or textures) from areas that
have none (flat areas). Mathematically speaking, the measure of the local geometric
variations of a scalar (grayscale) image I is classically given by ‖∇I‖, the scalar
image of the gradient norm, where ∇I = (∂I/∂x, ∂I/∂y)ᵀ is the vector of the first
directional derivatives of the image intensities along the horizontal and vertical axes
respectively, estimated at each point (x, y) of the image. For color images, more or less
complex extensions of the gradient exist (see Section 9.2.3), but for the sake of simplicity
we will not use them here. We will just estimate ‖∇I‖ from an image of the color
brightness, which is simply computed as the L2 norm of each (R, G, B) vector composing
our input image. More precise definitions of the color brightness exist and could be
calculated, but the L2 norm will be more than sufficient in the context of our tutorial.

With CImg, obtaining such an image of smoothed and normalized brightness can
be written as:
CImg<> lum = img.get_norm().blur(sigma).normalize(0,255);

Here, we notice that the calculation of the lum image is realized by pipelining three
calls to methods of the CImg<T> class:

1. First, CImg<unsigned char>::get_norm() computes the image whose
pixels are the L2 norms of the colors of the instance image (here, the original
image img), and returns its result as a new image of type CImg<float>.
Note the change of pixel type of the returned image: the norm of a vector (here,
an RGB color) whose components are included in the range ⟦0, 255⟧ (since it
is encoded as an unsigned char) is indeed a non-integer (floating-point)
value that could potentially be greater than 255, and CImg therefore adapts the
type of the pixels of the returned image to avoid possible arithmetic overflows
or truncations of floating-point values into integers.

2. This norm image is then spatially smoothed, using the CImg<float>::blur()
method, which implements an efficient recursive filter for this task. Smoothing
the image with a small standard deviation σ ≈ 1 makes the subsequent
calculation of variations more robust and accurate. So why is this method
not called CImg<float>::get_blur()? Simply because, unlike the pre-
vious CImg<float>::get_norm() method, CImg<float>::blur()
directly modifies the pixels of the instance image, instead of returning its result
as a new image. In practice, the vast majority of CImg methods are thus avail-
able in two versions, get and non-get. Note that it could be possible to use
CImg<float>::get_blur() here, but it would be less appropriate: the
considered image instance (which was the one returned by CImg<unsigned
char>::get_norm()) is a temporary image in our pipeline, which we can
then afford to smooth “on the fly”. Using the CImg<float>::get_blur()
version would imply the creation of a new image, with the memory allocation
that goes along. In image processing, one deals with large arrays of numbers
and not allocating more memory than necessary is a good practice. Here, the
CImg<float>::blur() method does not return a new image, but a refer-
ence to the instance that has just been blurred, which allows to continue writing
our pipeline by adding other methods afterwards if necessary.

3. The CImg<float>::normalize() method concludes our sequence of
operators. It linearly normalizes the pixel values of the resulting image
in the range ⟦0, 255⟧ (keeping, of course, floating-point values). This way, we
can control the order of magnitude of the different values of variations expected
in the lum image, depending on the type of geometry that will be found there.
Here again, the non-get version of the method is called, acting in place for
reasons of memory efficiency.

Thus, with a single line of code, we have defined a processing pipeline that returns
an image of type CImg<> (i.e., CImg<float>), providing information about the
luminosity of the colors in the original image (Fig. 2.3). The CImg architecture makes
it very easy to write this kind of pipeline, which is often found in source code
based on this library (a short sketch contrasting the chained and step-by-step forms
is given below).
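
The following minimal sketch contrasts the chained pipeline with an explicit, step-by-step
version; the file name and the value of sigma are illustrative assumptions:

// pipeline_sketch.cpp (illustrative):
#include "CImg.h"
using namespace cimg_library;

int main() {
  CImg<unsigned char> img("kingfisher.bmp");
  const float sigma = 1.0f;

  // Chained version, as written in the text:
  CImg<> lum = img.get_norm().blur(sigma).normalize(0,255);

  // Step-by-step equivalent: get_norm() allocates one new CImg<float>,
  // then blur() and normalize() modify that same image in place.
  CImg<> lum2 = img.get_norm();
  lum2.blur(sigma);
  lum2.normalize(0,255);

  // Both images should hold the same values.
  (lum,lum2).display("lum (chained) vs lum2 (step by step)");
  return 0;
}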

Figure 2.3 – Computation of the lum image of color brightness, from an input image.
a) Color input image; b) Resulting image lum.

Now let’s look at the variations of this brightness image. Since the calculation
of the gradient ∇I is a basic operation in image processing, it is already implemented
in CImg via the method CImg<>::get_gradient(), that we are going to use
here:
CImgList<> grad = lum.get_gradient("xy");

This method returns an instance of CImgList<float>, a CImg class that represents
a list of images. In our case, the returned list contains two distinct images,
corresponding to the estimates of the two first derivatives along x (grad[0] = ∂I/∂x)
and y (grad[1] = ∂I/∂y) of the image lum. Computing the gradient norm
‖∇I‖ = √((∂I/∂x)² + (∂I/∂y)²) from these two gradient (scalar) images can then be
done as follows:

CImg<> normGrad = (grad[0].get_sqr() + grad[1].get_sqr()).sqrt();

CImg has many methods for applying mathematical functions to pixel values, and
the usual arithmetic operators are redefined to allow writing such expressions. Here,
calls to the CImg<float>::get_sqr() method return images where each pixel
value has been squared. These two images are then summed via the CImg method
CImg<float>::operator+() which returns a new image of the same size. Fi-
nally, CImg<float>::sqrt() replaces each value of this summed image by its
square root.

Here again, we chose to use the get and non-get versions of the methods in
order to minimize the number of image copies. With this in mind, we can even use
CImg<float>::operator+=(), which can be seen as the non-get version of
CImg<float>::operator+(), and which avoids an additional creation of an
image in memory:

CImg<> normGrad = (grad[0].get_sqr() += grad[1].get_sqr()).sqrt();

Figure 2.4 shows a detail of the different images of variations that we obtain. Re-
member: at any time, in a CImg program, the content of an image or a list of images can
be displayed by calling the CImg<T>::display() and CImgList<T>::display()
methods, which proves to be very useful for checking the correct step-by-step imple-
mentation of a program.
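
For instance, inserting a single line such as the following sketch anywhere after the
gradient computation pops up an interactive window showing both derivative images
(the window title is an arbitrary string):

grad.display("Derivatives along x and y");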

Figure 2.4 – Computation of the gradient image and its norm, from image lum (detail).
a) lum; b) grad[0] = ∂I/∂x; c) grad[1] = ∂I/∂y; d) normGrad = ‖∇I‖.