Katsushi Ikeuchi
Editor

Computer Vision
A Reference Guide

Katsushi Ikeuchi
Editor

Computer Vision
A Reference Guide

With 433 Figures and 16 Tables


Editor
Katsushi Ikeuchi
Institute of Industrial Science
The University of Tokyo
Tokyo, Japan

ISBN 978-0-387-30771-8 ISBN 978-0-387-31439-6 (eBook)


ISBN 978-0-387-49138-7 (print and electronic bundle)
DOI 10.1007/978-0-387-31439-6
Springer New York Heidelberg Dordrecht London

Library of Congress Control Number: 2014936288

© Springer Science+Business Media New York 2014


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and
executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this pub-
lication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s
location, in its current version, and permission for use must always be obtained from Springer. Permis-
sions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable
to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publica-
tion, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors
or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the
material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Preface

Computer vision is a field of computer science and engineering; its goal is to make
a computer that can see its outer world and understand what is happening. As David
Marr defined, computer vision is “an information processing task that constructs effi-
cient symbolic descriptions of the world from images.” Computer vision aims to
create an alternative to the human visual system on computers.
Takeo Kanade says, “computer vision looks easy, but is difficult. But, it is fun.”
Computer vision looks easy because each human uses vision in daily life without any
effort. Even a newborn baby uses its visual capability to recognize its mother. It is
computationally difficult, however, because the original outer world is made up of
three-dimensional objects, while their projections on the retina or an image plane
are only two-dimensional images. This dimensional reduction from 3D to 2D
occurs along the projection from the outer world to images. “Common sense” needs
to be used to augment the descriptions of the original 3D world from the 2D images.
Computer vision is fun, because we have to discover this common sense. This search
for common sense attracts the interest of vision researchers.
The origin of computer vision can be traced back to Lawrence Roberts’ research,
“Machine Perception of Three-Dimensional Solids.” Later, this line of research was
extended through Project MAC at MIT. Professor Marvin Minsky, the then
director of Project MAC, initially believed that computer vision could be solved as a
summer project by an MIT graduate student. His original estimate was wrong, and
for more than 40 years we have been investigating various aspects of computer vision.
This 40-year effort proved that computer vision is one of the fundamental sci-
ences, and the field is rich enough for researchers to devote their entire research lives
to it. This period also revealed that the field contains a wide variety of topics from
low-level optics to high-level recognition problems. This richness and diversity were
an important motivation for us to decide to compile a reference book on computer
vision.
Lawrence Roberts’ research contains all of the essential components of the
computer vision paradigm, which modern computer vision still follows: a homogeneous
coordinate system to define the coordinates, the cross operator for edge detection,
and object models represented as a combination of edges. David Marr defined his
paradigm of computer vision: shape-from-X low-level vision, interpolation and fusion
of such fragmentary representations, a 2-1/2D viewer-centered representation as the
result of interpolation and fusion, and a 3D object-centered representation. Roughly,
this reference guide follows these paradigms and defines its sections accordingly.


The online version of the reference guide is intended to be developed continuously,
both by the updates of existing entries and by the addition of new entries. In this way,
it will provide the resources to help both vision researchers and newcomers to the
field be on the same page with the continuing and exciting developments in computer
vision.
This reference guide has been completed through a team effort. We are most grate-
ful for all the contributors and section editors who have made this project possible.
Our special thanks go to Ms. Neha Thapa and other Springer colleagues for their
assistance in the development and editing of this reference guide.
March 2014 Katsushi Ikeuchi, Editor in Chief
Yasuyuki Matsushita, Associate Editor in Chief
Rei Kawakami, Assistant Editor in Chief
Editor’s Biography

Katsushi Ikeuchi is a professor at the University of Tokyo. He received his
B.E. degree in mechanical engineering from Kyoto University in 1973 and Ph.D. in
information engineering from the University of Tokyo in 1978. After working at the
MIT Artificial Intelligence Laboratory as a postdoctoral fellow for 3 years, the MITI
Electro-Technical Laboratory as a research fellow for 5 years, and the CMU Robotics
Institute as a faculty member for 10 years, he joined the Institute of Industrial Science,
University of Tokyo, as a professor in 1996.
His research activities span computer vision, computer graphics, robotics, and
intelligent transportation systems.
In the computer vision area, he is considered one of the founders of physics-
based vision: modeling image forming processes, using physics and optics laws,
and applying the inverse models in recovering shape and reflectance from observed
brightness in a rigorous manner. He developed the “smoothness constraint,” a
constraint to force neighboring points to have similar surface orientations. The
constraint optimization method based on the smoothness constraint, later referred
to as the regularization method, has evolved into one of the fundamental paradigms,
commonly employed in various low-level vision algorithms. His paper with
Prof. B.K.P. Horn, “Numerical Shape from Shading with Occluding Boundaries”
[Ikeuchi K, Horn BKP (1981) Numerical shape from shading and occluding
boundaries. Artif Intell 17(1–3):141–184], was the original paper to describe the
constraint-minimization algorithm with the smoothness constraint, and in 1992 it was
selected as one of the most influential papers to have appeared in the Artificial
Intelligence Journal within the past 10 years.


Dr. Ikeuchi and his students developed a technique to automatically generate a
virtual reality model by observing actual objects along the line of the physics-based
vision paradigm [Sato Y, Wheeler MD, Ikeuchi K (1997) Object shape and reflectance
modeling from observation. Computer graphics proceedings, SIGGRAPH97, Los
Angeles, pp 379–387]. This early work, which appeared in SIGGRAPH1997, is
considered one of the starting points of the area later referred to as “image-based
modeling.” After returning to Japan, he and his team began to apply the image-
based modeling technique to model various cultural heritage sites. This project has
become to be known as the e-Heritage project. They succeeded in modeling all of the
three big Buddha statues in Japan as well as the complicated stone temple, Bayon,
in Angkor Ruin, to name a few [Ikeuchi K, Miyazaki D (2008) Digitally archiving
cultural objects. Springer, New York]. Through these efforts, Dr. Ikeuchi received the
IEICE Distinguished Achievement Award in 2012.
Dr. Ikeuchi has also been working on robot vision. In this area, he has been
concentrating research on how to reduce the cost of production by using robot vision
technologies. This includes how to make efficient production lines and, more impor-
tantly, how to reduce the cost of making robot programs to be used in such production
lines. In the early 1980s, and even today, one of the obstacles to introducing robot
technologies into production lines is the so-called bin-picking problem: how to pick
up one part from a stack of similar parts. Using shape-from-shading techniques, he
was successful in making a robot system that could pick up a mechanical part from
a stack [Horn BKP, Ikeuchi K (1984) The mechanical manipulation of randomly
oriented parts. Sci Am 251(2):100–111].
It was evident that the next obstacle was the cost of programming after completing
the bin-picking system. In the early 1990s, he began a project to make a robot pro-
gram which learns motions from observing human operators’ movements [Ikeuchi K,
Suehiro T (1994) Toward an assembly plan from observation, Part I: Task recogni-
tion with polyhedral objects. IEEE Trans Robot Autom 10(3):368–385]. He and his
team demonstrated that this method, programming-by-demonstration, can be applied
not only to assembling block-world objects, but also to machine parts as well as
flexible objects, such as rope-knotting tasks [Takamatsu J et al (2006) Representation
for knot-tying tasks. IEEE Trans Robot 22(1):65–78]. Along with his students, he fur-
ther extended the method in the domain of whole-body motions by a humanoid robot
[Nakaoka S et al (2007) Learning from observation paradigm: leg task models for
enabling a biped humanoid robot to imitate human dance. Int J Robot Res 26(8):829–
844]. They succeeded in making a dancing robot, which was capable of learning
and mimicking Japanese folk dance from observation. He received several best paper
awards in this line of work, including IEEE KS Fu memorial best transaction paper
award and RSJ best transaction paper award.
Besides these research activities, he has also devoted his time to community
service. He chaired a dozen major conferences, including 1995 IEEE-IROS
(General), 1996 IEEE-CVPR (Program), 1999 IEEE-ITSC (General), 2001 IEEE-IV
(General), 2003 IEEE-ICCV (Program), 2009 IEEE-ICRA (program), and 2010
IAPR-ICPR (program). His community service also includes IEEE RAS Adcom
(98-04, 06-08), IEEE ITSS BOG, IEEE Fellow Committee (2010–2012), and 2nd
VP of IAPR. He is an editor in chief of the International Journal of Computer Vision,
and a fellow of IEEE, IEICE, IPSJ, and RSJ.
Through these research activities and community services, Dr. Ikeuchi received
the IEEE PAMI-TC Distinguished Researcher Award (2011) and the Shiju Hou Sho
(Medal of Honor with purple ribbon) from the Japanese Emperor (2012).

Yasuyuki Matsushita received his B.S., M.S., and Ph.D. in Electrical Engineering
and Computer Science (EECS) from the University of Tokyo in 1998, 2000, and 2003,
respectively. He joined Microsoft Research Asia in April 2003, where he is a senior
researcher in the Visual Computing Group. His interests are in physics-based com-
puter vision (photometric techniques, such as radiometric calibration, photometric
stereo, shape-from-shading), computational photography, and general 3D reconstruc-
tion methodologies. He is on the editorial board of International Journal of Computer
Vision (IJCV), IEEE Transactions on Pattern Analysis and Machine Intelligence
(TPAMI), The Visual Computer, and associate editor in chief of the IPSJ Transactions
on Computer Vision and Applications
(CVA). He served/is serving as a program co-chair of Pacific-Rim Symposium on
Image and Video Technology (PSIVT) 2010, The first joint 3DIM/3DPVT confer-
ence (3DIMPVT, now called 3DV) 2011, Asian Conference on Computer Vision
(ACCV) 2012, International Conference on Computer Vision (ICCV) 2017, and a
general co-chair of ACCV 2014. He has been serving as a guest associate professor
at Osaka University (April 2010–), visiting associate professor at National Institute
of Informatics (April 2011–) and Tohoku University (April 2012–), Japan.

Rei Kawakami is an assistant professor at The University of Tokyo, Tokyo, Japan.


She received her B.S., M.S., and Ph.D. degrees in information science and technology
from the University of Tokyo in 2003, 2005, and 2008, respectively. After work-
ing as a researcher at the Institute of Industrial Science, The University of Tokyo,
for 3 years, at the University of California, Berkeley, for 2 years, and at Osaka
University for a year, she joined the University of Tokyo as an assistant professor
in 2014. Her research interests are in color constancy, spectral analysis, and physics-
based computer vision. She has been serving as a program committee member and
a reviewer for International Conference on Computer Vision (ICCV), International
Conference on Computer Vision and Pattern Recognition (CVPR), European Confer-
ence on Computer Vision (ECCV), Asian Conference on Computer Vision (ACCV),
and as a reviewer of the International Journal of Computer Vision (IJCV) and the
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
Senior Editors

Peter N. Belhumeur Professor, Department of Computer Science, Columbia University, New York, New York, USA

Larry S. Davis Professor, Computer Vision Laboratory, Center for Automation Research, University of Maryland, College Park, MD, USA


Martial Hebert Professor, The Robotics Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA

Jitendra Malik Arthur J. Chick Professor of EECS, University of California, Berkeley, CA, USA

Tomas Pajdla Assistant Professor, Center for Machine Perception, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University, Prague, Czech Republic

James M. Rehg Professor, School of Interactive Computing, College of Computing, Georgia Institute of Technology, Atlanta, GA, USA

Zhengyou Zhang Research Manager/Principal Researcher, Microsoft Research, Redmond, WA, USA
Section Editors

Rama Chellappa Minta Martin Professor of Engineering, Chair, Department of Electrical Engineering and Computer Engineering and UMIACS, University of Maryland, College Park, MD, USA

Daniel Cremers Professor for Computer Science and Mathematics, Chair for
Computer Vision and Pattern Recognition, Managing Director, Department of Com-
puter Science, Technische Universität München, Garching, Germany


Koichiro Deguchi Professor Emeritus of Tohoku University, Katahira, Aoba-ku, Sendai, Japan

Hervé Delingette Research Director, Project Asclepios, INRIA, Sophia-Antipolis, France

Andrew Fitzgibbon Principal Researcher, Microsoft Research, Cambridge, England



Luc Van Gool Professor of Computer Vision, Computer Vision Laboratory, ETH,
Zürich, Switzerland

Kenichi Kanatani Professor Emeritus of Okayama University, Okayama, Japan

Kyros Kutulakos Professor, Department of Computer Science, University of Toronto, Toronto, ON, Canada

In So Kweon Professor, Robotics and Computer Vision Laboratory, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea

Sang Wook Lee Professor, Department of Media Technology, Sogang University, Seoul, Korea

Atsuto Maki Associate Professor, Computer Vision and Active Perception Labora-
tory (CVAP), School of Computer Science and Communication, Kungliga Tekniska
Högskolan (KTH), Stockholm, Sweden

Yasuyuki Matsushita Senior Researcher, Microsoft Research, Beijing, China

Gerard G. Medioni Professor, Institute for Robotics and Intelligent Systems, University of Southern California, Los Angeles, CA, USA

Ram Nevatia Director and Professor, Institute for Robotics and Intelligent Systems, University of Southern California, Los Angeles, CA, USA

Ko Nishino Associate Professor, Department of Computing, College of Computing and Informatics, Drexel University, Philadelphia, PA, USA

Nikolaos Papanikolopoulos Distinguished McKnight University Professor, Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, USA

Shmuel Peleg Professor, School of Computer Science and Engineering, The Hebrew
University of Jerusalem, Jerusalem, Israel

Marc Pollefeys Professor and the Head, Computer Vision and Geometry Lab
(CVG) – Institute of Visual Computing, Department of Computer Science, ETH
Zürich, Zürich, Switzerland

Jean Ponce Professor, Departement d’Informatique, Ecole Normale Supérieure [ENS], Paris, France

Long Quan Professor, The Department of Computer Science and Engineering, The
Hong Kong University of Science and Technology, Kowloon, Hong Kong, China

Yoav Y. Schechner Associate Professor, Department of Electrical Engineering, Technion – Israel Institute of Technology, Haifa, Israel

Cordelia Schmid INRIA Research Director, Head of the LEAR Project, Montbonnot, France

Jun Takamatsu Associate Professor, Graduate School of Information Science, Nara Institute of Science and Technology (NAIST), Ikoma, Japan

Xiaoou Tang Professor, Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong SAR

Song-Chun Zhu Professor, Statistics Department and Computer Science Department, University of California, Los Angeles, CA, USA

Todd Zickler William and Ami Kuan Danoff Professor of Electrical Engineering and
Computer Science, School of Engineering and Applied Sciences, Harvard University,
Cambridge, MA, USA
Contributors

Hanno Ackermann Leibniz University Hannover, Hannover, Germany


J. K. Aggarwal Department of Electrical and Computer Engineering, The University
of Texas at Austin, Austin, TX, USA
Amit Agrawal Mitsubishi Electric Research Laboratories, Cambridge, MA, USA
Daniel C. Alexander Centre for Medical Image Computing, Department of
Computer Science, University College London, London, UK
Marina Alterman Department of Electrical Engineering, Technion – Israel Institute
of Technology, Haifa, Israel
Yali Amit Department of Computer Science, University of Chicago, Chicago,
IL, USA
Gary A. Atkinson Machine Vision Laboratory, University of the West of England,
Bristol, UK
Ruzena Bajcsy Department of Electrical and Computer Sciences, College of
Engineering University of California, Berkeley, CA, USA
Simon Baker Microsoft Research, Redmond, WA, USA
Dana H. Ballard Department of Computer Sciences, University of Texas at Austin,
Austin, TX, USA
Richard G. Baraniuk Department of Electrical and Computer Engineering, Rice
University 2028 Duncan Hal, Houston, TX, USA
Adrian Barbu Department of Statistics, Florida State University, Tallahassee,
FL, USA
Nick Barnes Australian National University, Canberra, Australia
Ronen Basri Department of Computer Science And Applied Mathematics,
Weizmann Institute of Science, Rehovot, Israel
Anup Basu Department of Computing Science, University of Alberta, Edmonton,
AB, Canada
Rodrigo Benenson K.U. Leuven departement Elektrotechniek – ESAT, Centrum
voor beeld- en spraakverwerking – PSI/VISICS, Heverlee, Belgium
Marcelo Bertalmío Universitat Pompeu Fabra, Barcelona, Spain


Jürgen Beyerer Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, Germany
Irving Biederman Department of Computer Science, University of Toronto,
Toronto, ON, Canada
Departments of Psychology, Computer Science, and the Neuroscience Program,
University of Southern California, Los Angeles, CA, USA
Tom Bishop Image Algorithms Engineer, Apple, Cupertino, CA, USA
James Bonaiuto California Institute of Technology, Pasadena, CA, USA
Yuri Boykov Department of Computer Science, University of Western Ontario,
London, ON, Canada
Christoph Bregler Courant Institute, New York University, New York, NY, USA
Michael H. Brill Datacolor, Lawrenceville, NJ, USA
Matthew Brown Dept of Computer Science, University of Bath, Bath, UK
Thomas Brox Department of Computer Science, University of Freiburg, Freiburg,
Germany
Frank Michael Caimi IEEE OES, Vero Beach, FL, USA
Fabio Camilli Dipartimento SBAI, “Sapienza”, Università di Roma, Rome, Italy
John N. Carter School of Electronics and Computer Science, University of
Southampton, Southampton, Hampshire, UK
Vicent Caselles Universitat Pompeu Fabra, Barcelona, Spain
Shing Chow Chan Department of Electrical and Electronic Engineering, The Uni-
versity of Hong Kong, Hong Kong, China
Manmohan Chandraker NEC Labs America, Cupertino, CA, USA
Visesh Chari Institut National de Recherche en Informatique et en Automatique
(INRIA), Le Chesnay Cedex, France
François Chaumette Inria, Rennes, France
Rama Chellappa Department of Electrical Engineering and Computer Engineering
and UMIACS, University of Maryland, College Park, MD, USA
Liang Chen Department of Computer Science, University of Northern British
Columbia, Prince George, Canada
Irene Cheng University of Alberta, Edmonton, AB, Canada
James J. Clark Department of Electrical and Computer Engineering, McGill
University, Montreal, QC, Canada
Michael F. Cohen Microsoft Research, One Microsoft Way, Redmond, WA, USA
Kristin Dana Department of Electrical and Computer Engineering, Rutgers Univer-
sity, The State University of New Jersey, Piscataway, NJ, USA

Larry S. Davis Computer Vision Laboratory, Center for Automation Research, University of Maryland, College Park, MD, USA
Fatih Demirci Department of Computer Engineering, TOBB University of
Economics and Technology, Sogutozu, Ankara, Turkey
Sven J. Dickinson Department of Computer Science, University of Toronto,
Toronto, ON, Canada
Leo Dorst Intelligent Systems Laboratory Amsterdam, Informatics Institute, Univer-
sity of Amsterdam, Amsterdam, The Netherlands
Mark S. Drew School of Computing Science, Simon Fraser University, Vancouver,
BC, Canada
Marc Ebner Institut für Mathematik und Informatik, Ernst Moritz Arndt Universität
Greifswald, Greifswald, Germany
Jan-Olof Eklundh School of Computer Science and Communication, KTH - Royal
Institute of Technology, Stockholm, Sweden
James H. Elder Centre for Vision Research, York University, Toronto, ON, Canada
Francisco J. Estrada Department of Computer and Mathematical Sciences, Univer-
sity of Toronto at Scarborough, Toronto, ON, Canada
Paolo Favaro Department of Computer Science and Applied Mathematics, Univer-
sität Bern, Switzerland
Pedro Felzenszwalb School of Engineering, Brown University, Providence,
RI, USA
Robert B. Fisher School of Informatics, University of Edinburgh, Edinburgh, UK
Boris Flach Department of Cybernetics, Czech Technical University in Prague,
Faculty of Electrical Engineering, Prague 6, Czech Republic
Gian Luca Foresti Department of Mathematics and Computer Science, University
of Udine, Udine, Italy
Wolfgang Förstner ETH Zürich, Zürich, Switzerland
Universität Bonn, Bonn, Germany
Kazuhiro Fukui Department of Computer Science, Graduate School of Systems and
Information Engineering, University of Tsukuba, Tsukuba, Japan
Yasutaka Furukawa Google Inc., Seattle, WA, USA
Juergen Gall Max Planck Institute for Intelligent Systems, Tübingen, Germany
David Gallup Google Inc., Seattle, WA, USA
Abhijeet Ghosh Institute for Creative Technologies, University of Southern
California, Playa Vista, CA, USA
Michael Goesele GCC - Graphics, Capture and Massively Parallel Computing, TU
Darmstadt, Darmstadt, Germany

Bastian Goldluecke Department of Computer Science, Technische Universität München, München, Germany
Gaopeng Gou Beihang University, Beijing, China
Mohit Gupta Department of Computer Science, Columbia University, New York,
NY, USA
Bohyung Han Department of Computer Science and Engineering, Pohang Univer-
sity of Science and Technology (POSTECH), Pohang, South Korea
Richard Hartley Department of Engineering, Australian National University, ACT,
Australia
Samuel W. Hasinoff Google, Inc., Mountain View, CA, USA
Nils Hasler Graphics, Vision & Video, MPI Informatik, Saarbrücken, Germany
Vaclav Hlavac Department of Cybernetics, Czech Technical University in Prague,
Faculty of Electrical Engineering, Prague 6, Czech Republic
Andrew Hogue Faculty of Business and Information Technology, University of
Ontario Institute of Technology, Oshawa, ON, Canada
Takahiko Horiuchi Graduate School of Advanced Integration Science, Chiba
University, Inage-ku, Chiba, Japan
Berthold K. P. Horn Department of Electrical Engineering and Computer Science,
MIT, Cambridge, MA, USA
Zhanyi Hu National Laboratory of Pattern Recognition, Institute of Automation,
Chinese Academy of Sciences, Beijing, China
Ivo Ihrke MPI Informatik, Saarland University, Saarbrücken, Germany
Michael R. M. Jenkin Department of Computer Science and Engineering, York
University, Toronto, ON, Canada
Jiaya Jia Department of Computer Science and Engineering, The Chinese University
of Hong Kong, Shatin, N.T., Hong Kong, China
Zhuolin Jiang Noah’s Ark Lab, Huawei Tech. Investment Co., LTD., Shatin,
Hong Kong, China
Micah K. Johnson Computer Science and Artificial Intelligence Laboratory,
Massachusetts Institute of Technology, Cambridge, MA, USA
Neel Joshi Microsoft Corporation, Redmond, WA, USA
Avinash C. Kak School of Electrical and Computer Engineering, Purdue University,
West Lafayette, IN, USA
Kenichi Kanatani Professor Emeritus of Okayama University, Okayama, Japan
Sing Bing Kang Microsoft Research, Redmond, WA, USA
Peter Karasev Schools of Electrical and Computer and Biomedical Engineering,
Georgia Institute of Technology, Atlanta, GA, USA

Jun-Sik Kim Korea Institute of Science and Technology, Seoul, Republic of Korea
Ron Kimmel Department of Computer Science, Technion – Israel Institute of
Technology, Haifa, Israel
Eric Klassen Ohio State University, Columbus, OH, USA
Reinhard Koch Institut für Informatik Christian-Albrechts-Universität, Kiel,
Germany
Jan J. Koenderink Faculty of EEMSC, Delft University of Technology, Delft,
The Netherlands
The Flemish Academic Centre for Science and the Arts (VLAC), Brussels, Belgium
Laboratory of Experimental Psychology, University of Leuven (K.U. Leuven),
Leuven, Belgium
Pushmeet Kohli Department of Computer Science And Applied Mathematics,
Weizmann Institute of Science, Rehovot, Israel
Ivan Kolesov Schools of Electrical and Computer and Biomedical Engineering,
Georgia Institute of Technology, Atlanta, GA, USA
Sanjeev J. Koppal Harvard University, Cambridge, MA, USA
Kevin Köser Institute for Visual Computing, ETH Zurich, Zürich, Switzerland
Sanjiv Kumar Google Research, New York, NY, USA
Takio Kurita Graduate School of Engineering, Hiroshima University, Higashi-
Hiroshima, Japan
Sebastian Kurtek Ohio State University, Columbus, OH, USA
Annika Lang Seminar für Angewandte Mathematik, ETH Zürich, Zürich,
Switzerland
Michael S. Langer School of Computer Science, McGill University, Montreal,
QC, Canada
Fabian Langguth GCC - Graphics, Capture and Massively Parallel Computing, TU
Darmstadt, Darmstadt, Germany
Longin Jan Latecki Department of Computer and Information Sciences, Temple
University, Philadelphia, PA, USA
Denis Laurendeau Faculty of Science and Engineering, Department of Electrical
and Computer Engineering, Laval University, QC, Canada
Jason Lawrence Department of Computer Science, School of Engineering and
Applied Science University of Virginia, Charlottesville, VA, USA
Svetlana Lazebnik Department of Computer Science, University of Illinois at
Urbana-Champaign, Urbana, IL, USA
Longzhuang Li Department of Computing Science, Texas A and M University at
Corpus Christi, Corpus Christi, TX, USA

Wanqing Li University of Wollongong, Wollongong, NSW, Australia


Stephen Lin Microsoft Research Asia, Beijing Sigma Center, Beijing, China
Zhe Lin Advanced Technology Labs, Adobe Systems Incorporated, San Jose,
CA, USA
Tony Lindeberg School of Computer Science and Communication, KTH Royal
Institute of Technology, Stockholm, Sweden
Yanxi Liu Computer Science and Engineering, Penn State University, University
Park, PA, USA
Yonghuai Liu Department of Computer Science, Aberystwyth University, Ceredi-
gion, Wales, UK
Zhi-Yong Liu Institute of Automation, Chinese Academy of Sciences, Beijing,
P. R. China
Zicheng Liu Microsoft Research, Microsoft Corporation, Redmond, WA, USA
Songde Ma Ministry of Science & Technology, Beijing, China
Eisaku Maeda NTT Communication Science Laboratories, Soraku-gun, Kyoto,
Japan
Michael Maire California Institute of Technology, Pasadena, CA, USA
Pascal Mamassian Laboratoire Psychologie de la Perception, Université Paris
Descartes, Paris, France
Ralph R. Martin School of Computer Science and Informatics Cardiff University,
Cardiff, UK
Simon Masnou Institut Camille Jordan, Universitè Lyon 1, Villeurbanne, France
Darko S. Matovski School of Electronics and Computer Science, University of
Southampton, Southampton, Hampshire, UK
Yasuyuki Matsushita Microsoft Research, Beijing, China
Larry Matthies Jet Propulsion Laboratory, California Institute of Technology,
Pasadena, CA, USA
Stephen J. Maybank Department of Computer Science and Information Systems,
Birkbeck College University of London, London, UK
Peter Meer Department of Electrical and Computer Engineering, Rutgers Univer-
sity, Piscataway, NJ, USA

Christian Micheloni Department of Mathematics and Computer Science, University of Udine, Udine, Italy

Sushil Mittal Department of Statistics, Columbia University, New York, NY, USA

Daisuke Miyazaki Graduate School of Information Sciences, Hiroshima City University, Asaminami-ku, Hiroshima, Japan

Vlad I. Morariu Computer Vision Laboratory, University of Maryland, College Park, MD, USA
Joseph L. Mundy Division of Engineering, Brown University Rensselaer
Polytechnic Institute, Providence, RI, USA
Hiroshi Murase Faculty of Economics and Information, Gifu Shotoku Gakuen
University, Gifu, Japan
Kazuo Murota Department of Mathematical Informatics, University of Tokyo,
Bunkyo-ku, Tokyo, Japan
Bernd Neumann University of Hamburg, Hamburg, Germany
Mark S. Nixon School of Electronics and Computer Science, University of
Southampton, Southampton, Hampshire, UK
Takayuki Okatani Graduate School of Information Sciences, Tohoku University,
Sendai-shi, Japan
Vasu Parameswaran Microsoft Corporation, Sunnyvale, CA, USA
Johnny Park School of Electrical and Computer Engineering, Purdue
University, West Lafayette, IN, USA
Samunda Perera Canberra Research Laboratory, NICTA, Canberra, Australia
Matti Pietikäinen Department of Computer Science and Engineering, University of
Oulu, Oulu, Finland
Tomaso Poggio Department of Brain and Cognitive Sciences, McGovern Institute,
Massachusetts Institute of Technology, Cambridge, MA, USA
Marc Pollefeys Computer Vision and Geometry Lab (CVG) – Institute of Visual
Computing, Department of Computer Science, ETH Zürich, Zürich, Switzerland
S. C. Pont Industrial Design Engineering, Delft University of Technology, Delft, The
Netherlands
Emmanuel Prados INRIA Rhône-Alpes, Montbonnot, France
Jerry L. Prince Electrical and Computer Engineering, Johns Hopkins University,
Baltimore, MD, USA
Srikumar Ramalingam Mitsubishi Electric Research Laboratories, Cambridge,
MA, USA
Rajeev Ramanath DLP® Products, Texas Instruments Incorporated, Plano, TX, USA
Dikpal Reddy Nvidia Research, Santa Clara, CA, USA
Xiaofeng Ren Intel Science and Technology Center for Pervasive Computing, Intel
Labs, Seattle, WA, USA
Xuejun Ren School of Engineering Liverpool John Moores University,
Liverpool, UK

Szymon Rusinkiewicz Department of Computer Science, Princeton University,


Princeton, NJ, USA
Aswin C. Sankaranarayanan ECE Department, Rice University, Houston,
TX, USA
Guillermo Sapiro Electrical and Computer Engineering, Computer Science, and
Biomedical Engineering, Duke University, Durham, NC, USA
Silvio Savarese Department of Electrical and Computer Engineering, University of
Michigan, Ann Arbor, MI, USA
Davide Scaramuzza Artificial Intelligence Lab - Robotics and Perception Group,
Department of Informatics, University of Zurich, Zurich, Switzerland
Konrad Schindler ETH Zürich, Zürich, Switzerland
David C. Schneider Image Processing Department, Fraunhofer Heinrich Hertz Insti-
tute, Berlin, Germany
William Robson Schwartz Department of Computer Science, Universidade Federal
de Minas Gerais, Belo Horizonte, MG, Brazil
Guna Seetharaman Air Force Research Lab RITB, Rome, NY, USA
Chunhua Shen School of Computer Science , The University of Adelaide, Adelaide,
SA, Australia
Rui Shen University of Alberta, Edmonton, AB, Canada
Ali Shokoufandeh Department of Computer Science, Drexel University Philadel-
phia, PA, USA
Jamie Shotton Microsoft Research Ltd, Cambridge, UK
Kaleem Siddiqi School of Computer Science, McGill University, Montreal,
PQ, Canada
Leonid Sigal Disney Research, Pittsburgh, PA, USA
Manish Singh Department of Psychology, Rutgers University, Piscataway,
NJ, USA
Sudipta N. Sinha Microsoft Research, Redmond, WA, USA
Arnold W. M. Smeulders Centre for Mathematics and Computer Science (CWI),
University of Amsterdam, Amsterdam, The Netherlands
Intelligent Systems Lab Amsterdam, Informatics Institute University of Amsterdam,
Amsterdam, The Netherlands
Cees G. M. Snoek University of Amsterdam, Amsterdam, The Netherlands
Intelligent Systems Lab Amsterdam, Informatics Institute University of Amsterdam,
Amsterdam, The Netherlands
Gunnar Sparr Centre for Mathematical Sciences, Lund University, Lund,
Sweden
Anuj Srivastava Florida State University, Tallahassee, FL, USA

Gaurav Srivastava School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
Peer Stelldinger Computer Science Department, University of Hamburg, Hamburg,
Germany
Peter Sturm INRIA Grenoble Rhône-Alpes, St Ismier Cedex, France
Kokichi Sugihara Graduate School of Advanced Mathematical Sciences, Meiji
University, Kawasaki, Kanagawa, Japan
Min Sun Department of Electrical and Computer Engineering, University of Michi-
gan, Ann Arbor, MI, USA
Richard Szeliski Microsoft Research, One Microsoft Way, Redmond, WA, USA
Yu-Wing Tai Department of Computer Science, Korean Advanced Institute of
Science and Technology (KAIST), Yuseong-gu, Daejeon, South Korea
Tomokazu Takahashi Faculty of Economics and Information, Gifu Shotoku Gakuen
University, Gifu, Japan
Birgi Tamersoy Department of Electrical and Computer Engineering, The Univer-
sity of Texas at Austin, Austin, TX, USA
Ping Tan Department of Electrical and Computer Engineering, National University
of Singapore, Singapore, Singapore
Robby T. Tan Department of Information and Computing Sciences, Utrecht Univer-
sity, Utrecht, CH, The Netherlands
Allen Tannenbaum Schools of Electrical and Computer and Biomedical Engineer-
ing, Georgia Institute of Technology, Atlanta, GA, USA
Marshall F. Tappen University of Central Florida, Orlando, FL, USA
Federico Tombari DEIS, University of Bologna, Bologna, Italy
Shoji Tominaga Graduate School of Advanced Integration Science, Chiba Univer-
sity, Inage-ku, Chiba, Japan
Lorenzo Torresani Computer Science Department, Dartmouth College, Hanover,
NH, USA
Tali Treibitz Department of Computer Science and Engineering, University of
California, San Diego, La Jolla, CA, USA
Yanghai Tsin Corporate R&D, Qualcomm Inc., San Diego, CA, USA
Pavan Turaga Department of Electrical and Computer Engineering, Center for
Automation Research University of Maryland, College Park, MD, USA
Matthew Turk Computer Science Department and Media Arts and Technology
Graduate Program, University of California, Santa Barbara, CA, USA
Tinne Tuytelaars KULEUVEN Leuven, ESAT-PSI, iMinds, Leuven, Belgium
Shimon Ullman Department of Brain and Cognitive Sciences, McGovern Institute,
Massachusetts Institute of Technology, Cambridge, MA, USA

Anton van den Hengel School of Computer Science, The University of Adelaide,
Adelaide, SA, Australia
Pramod K. Varshney Department of Electrical Engineering and Computer Science,
Syracuse University, Syracuse, NY, USA
Andrea Vedaldi Oxford University, Oxford, UK
Ashok Veeraraghavan Department of Electrical and Computer Engineering, Rice
University, Houston, TX, USA
David Vernon Informatics Research Centre, University of Skövde, Skövde, Sweden
Ramanarayanan Viswanathan Department of Electrical Engineering, University of
Mississippi, MS, USA
Xiaogang Wang Department of Electronic Engineering, Chinese University of Hong
Kong, Shatin, Hong Kong
Isaac Weiss Center for Automation Research, University of Maryland at College
Park, College Park, MD, USA
Gregory F. Welch Institute for Simulation & Training, The University of Central
Florida, Orlando, FL, USA
Michael Werman The Institute of Computer Science, The Hebrew University of
Jerusalem, Jerusalem, Israel
Tien-Tsin Wong Department of Computer Science and Engineering, The Chinese
University of Hong Kong, Hong Kong SAR, China
Robert J. Woodham Department of Computer Science, University of British
Columbia, Vancouver, BC, Canada
John Wright Visual Computing Group, Microsoft Research Asia, Beijing, China
Ying Nian Wu Department of Statistics, UCLA, Los Angeles, CA, USA
Chenyang Xu Siemens Technology-To-Business Center, Berkeley, CA, USA
David Young School of Informatics, University of Sussex, Falmer, Brighton, UK
Guoshen Yu Electrical and Computer Engineering, University of Minnesota,
Minneapolis, MN, USA
Christopher Zach Computer Vision and Geometry Group, ETH Zürich, Zürich,
Switzerland
Alexander Zelinsky CSIRO, Information Sciences, Canberra, Australia
Zhengyou Zhang Microsoft Research, Redmond, WA, USA
Bo Zheng Computer Vision Laboratory, Institute of Industrial Science, The Univer-
sity of Tokyo, Meguro-ku, Tokyo, Japan
Zhigang Zhu Computer Science Department, The City College of New York,
New York, NY, USA
Todd Zickler School of Engineering and Applied Science, Harvard University,
Cambridge, MA, USA
A

Action Prototype Trees

Prototype-Based Methods for Human Movement Modeling

Action Recognition

Activity Recognition
Affordances and Action Recognition

Active Calibration

Rui Shen1, Gaopeng Gou2, Irene Cheng1 and Anup Basu3
1University of Alberta, Edmonton, AB, Canada
2Beihang University, Beijing, China
3Department of Computing Science, University of Alberta, Edmonton, AB, Canada

Synonyms

Active camera calibration; Pan-tilt camera calibration; Pan-tilt-zoom camera calibration; PTZ camera calibration

Related Concepts

Camera Calibration

Definition

Active calibration is a process that determines the geometric parameters of a camera (or cameras) using the camera's controllable movements.

Background

Camera calibration aims to establish the best possible correspondence between the used camera model and the realized image acquisition with a given camera [12], i.e., accurately recover a camera's geometric parameters, such as focal length and image center/principal point, from the captured images. The classical calibration techniques (e.g., [10, 16]) require predefined patterns and static cameras and often involve solving complicated equations. Taking advantage of a camera's controllable movements (e.g., pan, tilt, and roll), active calibration techniques can automatically calibrate the camera.

Theory

The pinhole camera model is one of the most commonly used models, as shown in Fig. 1. p = (x, y)^T is the 2D projection of the 3D point P = (X, Y, Z)^T on the image plane. Using homogeneous coordinates, \tilde{p} and \tilde{P} have the following relationship:

\lambda \tilde{p} = K (R \mid t) \tilde{P}    (1)

Active Calibration, Fig. 1 The pinhole camera model

where

K = \begin{pmatrix} f_x & s & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{pmatrix}

is the camera calibration matrix; \lambda is the depth (when R is the identity matrix and t is a zero-vector, \lambda = Z); and R and t are the 3 × 3 rotation matrix and the 3 × 1 translation vector, respectively. fx and fy are the focal lengths in pixels in the x- and y-directions, respectively. They are proportional to the focal length f shown in Fig. 1, which is normally measured in millimeters. (x0, y0) are the coordinates of the image center/principal point on the image plane, i.e., the intersection of the lens’ optical axis and the image plane. s is the skew. Normally, the sensor element is assumed rectangular; therefore, s becomes 0. The ratio fx/fy is the aspect ratio; if the sensor element is quadratic, the aspect ratio becomes 1. The five parameters in K are called the camera intrinsic parameters; the three Euler angles in R and the three offsets in t are called the camera extrinsic parameters.
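
As a concrete illustration of Eq. 1, the following minimal Python/NumPy sketch projects a single 3D point with an assumed calibration matrix and pose; all numerical values are purely illustrative.

```python
import numpy as np

# Assumed intrinsic parameters (illustrative values only).
fx, fy = 800.0, 800.0          # focal lengths in pixels
x0, y0 = 320.0, 240.0          # principal point
s = 0.0                        # skew (rectangular sensor element)
K = np.array([[fx, s, x0],
              [0., fy, y0],
              [0., 0., 1.]])

# Extrinsic parameters: identity rotation, zero translation (camera frame = world frame).
R = np.eye(3)
t = np.zeros((3, 1))

# A 3D point in front of the camera, in homogeneous coordinates.
P = np.array([[0.1], [0.2], [2.0]])      # X, Y, Z
P_h = np.vstack([P, [[1.0]]])

# Eq. 1: lambda * p_tilde = K (R | t) P_tilde
p_h = K @ np.hstack([R, t]) @ P_h
lam = p_h[2, 0]                          # the depth (equals Z here)
x, y = p_h[:2, 0] / lam
print(x, y)                              # pixel coordinates of the projection
```
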
Active Calibration, Fig. 2 Canon VC-C50i PTZ camera

Figure 2 shows an active camera. When a camera rotates (no translation), the new image points can be obtained as transformations of the original image points [11]. The camera parameters can be related to the image points before and after rotation and to the angles of rotation. The equations describing these relations are simple and easy to solve. Thus, by considering different movements of the camera (pan, tilt, and roll with measurement of the angles), the intrinsic parameters of a camera can be estimated.

Basu [1] proposed an active calibration technique utilizing the edge information of a static scene. This technique was extended in [2–4] to give more robust estimation of the intrinsic parameters. No special patterns are required, but the observed scene should have strong and stable edges. Four strategies are introduced and validated through experiments [4]. Strategies A and B only utilize pan and tilt movements; strategy C utilizes pan, tilt, and roll movements; strategy D is a special case of strategy C when the roll angle is equal to 180°. The procedure of strategy C is outlined as follows:
– Using the pan and tilt movement of the camera and a single image contour, obtain the values of fx and fy by solving Eqs. 2 and 3.
– Using the roll movement of the camera and a single image contour, obtain the values of δx and δy by Eqs. 4 and 5.

f_y = \frac{y_t - y(1 + \theta_t^2)}{2\theta_t} + \frac{1}{2}\sqrt{\left(\frac{y(1 + \theta_t^2) - y_t}{\theta_t}\right)^2 - 4y^2}    (2)

f_x = \frac{x_p - x(1 + \theta_p^2)}{2\theta_p} + \frac{1}{2}\sqrt{\left(\frac{x(1 + \theta_p^2) - x_p}{\theta_p}\right)^2 - 4x^2}    (3)

where (x_p, y_p) and (x_t, y_t) denote image coordinates after pan and tilt movements, respectively; θt and θp are the tilt angle and pan angle, respectively.

\delta_x (1 - \cos\theta_r) + \delta_y \frac{f_x}{f_y} \sin\theta_r = x_r - \cos(\theta_r)\, x - \frac{f_x}{f_y} \sin(\theta_r)\, y    (4)

-\delta_x \frac{f_x}{f_y} \sin\theta_r + \delta_y (1 - \cos\theta_r) = y_r - \cos(\theta_r)\, y + \frac{f_x}{f_y} \sin(\theta_r)\, x    (5)

where (x_r, y_r) denotes image coordinates after roll movement; (δx, δy) denotes the error in the estimated principal point (e.g., taking the geometric center of the image plane as the estimated principal point); and θr is the roll angle.
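
For illustration, the following Python sketch evaluates Eqs. 2 and 3 for one contour point and solves Eqs. 4 and 5, which form a 2 × 2 linear system in (δx, δy). It is a minimal sketch based on the equations as reconstructed above; the function names and the use of NumPy are assumptions made for this sketch, and in practice such per-point estimates would be combined (e.g., averaged) over the whole image contour.

```python
import math
import numpy as np

def focal_from_tilt(y, y_t, theta_t):
    """Eq. 2 (as reconstructed): estimate fy from the y-coordinate of a contour
    point before (y) and after (y_t) a tilt by theta_t radians."""
    a = (y_t - y * (1.0 + theta_t**2)) / (2.0 * theta_t)
    d = ((y * (1.0 + theta_t**2) - y_t) / theta_t)**2 - 4.0 * y**2
    return a + 0.5 * math.sqrt(d)

def focal_from_pan(x, x_p, theta_p):
    """Eq. 3 (as reconstructed): estimate fx from the x-coordinate of a contour
    point before (x) and after (x_p) a pan by theta_p radians."""
    a = (x_p - x * (1.0 + theta_p**2)) / (2.0 * theta_p)
    d = ((x * (1.0 + theta_p**2) - x_p) / theta_p)**2 - 4.0 * x**2
    return a + 0.5 * math.sqrt(d)

def principal_point_error(x, y, x_r, y_r, theta_r, fx, fy):
    """Eqs. 4-5 (as reconstructed): solve the 2x2 linear system for the
    principal-point error (delta_x, delta_y), given one point observed before
    (x, y) and after (x_r, y_r) a roll by theta_r radians (theta_r != 0)."""
    c, s = math.cos(theta_r), math.sin(theta_r)
    k = fx / fy
    A = np.array([[1.0 - c,  k * s],
                  [-k * s,   1.0 - c]])
    b = np.array([x_r - c * x - k * s * y,
                  y_r - c * y + k * s * x])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy
```
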
Davis and Chen [9] introduced a new pan-tilt camera motion model, in which the pan and tilt axes are not necessarily orthogonal or aligned to the image plane. A tracked object is used to form a large virtual calibration object that covers the whole working volume. A set of pre-calibrated static cameras is needed to record the trajectory. The intrinsic parameters are recovered by minimizing the projection errors between the observed 2D data and the calculated 2D locations of the tracked object using the proposed camera model. McLauchlan and Murray [13] applied the variable state-dimension filter to calibrating a single camera mounted on a robot by tracking the trajectories of an arbitrary number of tracked corner features and utilizing accurate knowledge of the camera rotation. The camera's intrinsic parameters are updated in real time.

Different zoom settings (focal lengths) can also be employed in active calibration. Seales and Eggert [14] calibrate a camera via a fully automated 4-stage global optimization process using a sequence of images of a known calibration target obtained at different mechanical zoom settings. Collins and Tsin [8] proposed a parametric camera model and calibration procedures for an outdoor active camera system with pan, tilt, and zoom control. Intrinsic parameters are recovered by fitting the camera model with the optic flow produced by the camera's movements. Extrinsic parameters are estimated as a pose estimation problem using sparsely deployed landmarks. Borghese et al. [5] proposed a technique to compute camera focal lengths by zooming a single point, assuming the principal point is in a fixed and known position. Sinha and Pollefeys [15] propose a camera model that incorporates the variation of radial distortion with camera zoom. The intrinsic parameters are first computed at the lowest zoom level from a captured panorama. Then, the intrinsic and radial distortion parameters are estimated at sequentially increased zoom levels, taking into account the influence of camera zoom.

Application

Active calibration has been receiving more and more attention with the increasing use of active systems in various applications, such as object tracking, surveillance, and video conference. More generally, camera calibration is important for any application that involves relating a 2D image to the 3D world. Such applications include pose estimation, 3D motion estimation, automated assembly, close-range photogrammetry, and so on.

Open Problems

Recent research is more focused on automatic active calibration of a multi-camera system without using a predefined calibration pattern/object. Chippendale and Tobia [7] presented an autocalibration system for the estimation of extrinsic parameters of active cameras in indoor environments. One constraint of the camera deployment is that each camera must be able to observe at least one other camera to form an observation chain. The extrinsic parameters are estimated using the circular shape of the camera lenses and a predetermined moving pattern of a particular camera. The accuracy of the algorithm is mainly affected by the distance between cameras. Brückner and Denzler [6] proposed a three-step multi-camera calibration algorithm. The extrinsic parameters of each camera are first roughly estimated using a probability distribution based on the captured images. Then, each camera pair rotates and zooms in a way that maximizes image similarity, and the extrinsic parameters are reestimated based on point correspondence. A final calibration is carried out using the probabilities and the reestimated extrinsic parameters.
This method achieves relatively high accuracy and robustness, but one drawback is the high computational cost.

Experimental Results

Some experimental results using Strategies C and D from [4] are presented below.

Tables 1 and 2 summarize the results of computing the image center and focal lengths using simulated data, along with the ground truths. The simulated data contain an image contour consisting of 50 points. The pan and tilt angles were fixed at 3°. For the experiment on image center calculation, additive Gaussian noise with standard deviation σ of 0 (no noise), 5, and 10 pixels was added to test the robustness of the algorithms. It can be seen that strategy C performs reasonably in determining the image center even when σ is as large as 15. The results of strategy D are similar to those produced by strategy C when the roll angle is 180°.

Active Calibration, Table 1 Results of image center estimation

Angle (strategy C)   Ground truth (δx, δy)   σ = 0 (δx, δy)   σ = 5 (δx, δy)   σ = 10 (δx, δy)
20°                  10, 20                  11, 22           12, 23           12, 23
40°                  10, 20                  11, 21           12, 22           12, 19
60°                  10, 20                  11, 21           12, 21           13, 21
80°                  10, 20                  11, 21           11, 21           11, 22
100°                 10, 20                  10, 21           10, 21           11, 21
120°                 10, 20                  11, 21           11, 21           11, 21
140°                 10, 20                  11, 21           11, 21           11, 21
160°                 10, 20                  10, 21           11, 21           11, 21
180°                 10, 20                  10, 20           10, 21           10, 21
Strategy D           10, 20                  10, 20           11, 20           11, 20

For the experiment on focal length calculation, additive Gaussian noise with standard deviation σ of 0 (no noise) and 5 pixels was added. Strategy D is a little more accurate as the equations are obtained directly without using the estimates of fx and fy. The focal lengths obtained by strategy D are similar to those produced by strategy C. But when the noise is increased, strategy D produces more reliable results than strategy C.

Active Calibration, Table 2 Results of focal length estimation

Strategy     Ground truth (fx, fy)   σ = 0 (fx, fy)   σ = 5 (fx, fy)
Strategy C   400, 600                403, 602         396, 603
Strategy D   400, 600                401, 601         403, 599

Strategies C and D were also tested on a real camera in an indoor environment. The estimates for δx and δy obtained by strategy C (90° roll) were 3 and 29 pixels, while the values obtained by strategy D were 2 and 30 pixels, which demonstrates the stability of the active calibration algorithms in real situations. The estimated values of fx and fy were 908 and 1,126, respectively.

References

1. Basu A (1993) Active calibration. In: ICRA'93: proceedings of the 1993 IEEE international conference on robotics and automation, Atlanta, vol 2, pp 764–769
2. Basu A (1993) Active calibration: alternative strategy and analysis. In: CVPR'93: proceedings of the 1993 IEEE computer society conference on computer vision and pattern recognition (CVPR), New York, pp 495–500
3. Basu A (1995) Active calibration of cameras: theory and implementation. IEEE Trans Syst Man Cybern 25(2):256–265
4. Basu A, Ravi K (1997) Active camera calibration using pan, tilt and roll. IEEE Trans Syst Man Cybern B 27(3):559–566
5. Borghese NA, Colombo FM, Alzati A (2006) Computing camera focal length by zooming a single point. Pattern Recognit 39(8):1522–1529
6. Brückner M, Denzler J (2010) Active self-calibration of multi-camera systems. In: Proceedings of the 32nd DAGM conference on pattern recognition, Darmstadt, pp 31–40
7. Chippendale P, Tobia F (2005) Collective calibration of active camera groups. In: AVSS'05: proceedings of the IEEE conference on advanced video and signal based surveillance, Como, pp 456–461
8. Collins RT, Tsin Y (1999) Calibration of an outdoor active camera system. In: CVPR'99: proceedings of the 1999 IEEE computer society conference on computer vision and pattern recognition (CVPR), Ft. Collins, pp 528–534
9. Davis J, Chen X (2003) Calibrating pan-tilt cameras in wide-area surveillance networks. In: ICCV'03: proceedings of the 9th IEEE international conference on computer vision, Nice, pp 144–149
10. Horaud R, Mohr R, Lorecki B (1992) Linear camera calibration. In: ICRA'92: proceedings of the IEEE international conference on robotics and automation, Nice, vol 2, pp 1539–1544
11. Kanatani K (1987) Camera rotation invariance of image characteristics. Comput Vis Graph Image Process 39(3):328–354
12. Klette R, Schlüns K, Koschan A (1998) Computer vision: three-dimensional data from images, 1st edn. Springer, New York/Singapore
13. McLauchlan PF, Murray DW (1996) Active camera calibration for a head-eye platform using the variable state-dimension filter. IEEE Trans Pattern Anal Mach Intell 18(1):15–22
14. Seales WB, Eggert DW (1995) Active-camera calibration using iterative image feature localization. In: CAIP'95: proceedings of the 6th international conference on computer analysis of images and patterns, Prague, pp 723–728
15. Sinha SN, Pollefeys M (2006) Pan-tilt-zoom camera calibration and high-resolution mosaic generation. Comput Vis Image Underst 103(3):170–183
16. Tsai R (1987) A versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf tv cameras and lenses. IEEE J Robot Autom 3(4):323–344

Active Camera Calibration

Active Calibration

Active Contours

Numerical Methods in Curve Evolution Theory

Active Sensor (Eye) Movement Control

James J. Clark
Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada

Synonyms

Gaze control

Related Concepts

Evolution of Robotic Heads; Visual Servoing

Definition

Active sensors are those whose generalized viewpoint (such as sensor aperture, position, and orientation) is under computer control. Control is done so as to improve information gathering and processing.

Background

The generalized viewpoint [1] of a sensor is the vector of values of the parameters that are under the control of the observer and which affect the imaging process. Most often, these parameters will be the position and orientation of the image sensor, but may also include such parameters as the focal length, aperture width, and the nodal point to image plane distance of the camera. The definition of generalized viewpoint can be extended to include illuminant degrees of freedom, such as the illuminant position, wavelength, intensity, spatial distribution (for structured light applications), and angular distribution (e.g., collimation) [2].

Changes in observer viewpoint are used in active vision systems for a number of purposes. Some of the more important uses are:
– Tracking a moving object to keep it in the field of view of the sensor
– Searching for a specific item in the observer's environment
– Inducing scene-dependent optical flow to aid in the extraction of 3D structure of objects and scenes
– Avoiding “accidental” or nongeneric viewpoints, which can result in sensor-saturating specularities or information-hiding occlusions
– Minimizing sensor noise and maximizing novel information content
– Increasing the dynamic range of the sensor, through adjustment of parameters such as sensor sensitivity, aperture, and focus
– Mapping the observer's environment

Theory

Low-Level Camera Motion Control Systems Most robotic active vision control systems act mainly to produce either smooth pursuit motions or rapid saccadic motions. Pursuit motions cause the camera to move so as to smoothly track a moving object, maintaining the image of the target object within a small region (usually in the center) of the image frame. Saccadic motions are rapid, usually large, jumps in the position of the camera, which center the camera field of view on different parts of the scene being imaged. This type of motion is used when scanning a scene,

searching for objects or information, but can also be There are a number of approaches to dealing with
used to recover from a loss of tracking of an object delay. PID or PD systems can be made robust to delays
during visual pursuit. simply by increasing system damping by reducing the
Much has been learned about the design of pursuit proportional feedback gain to a sufficiently low value
and saccadic motion control systems from the study [7]. This results in a system that responds to changes
of primate oculomotor systems. These systems have in target position very slowly, however, and is unac-
a rather complicated architecture distributed among ceptable for most applications. For control of saccadic
many brain areas, the details of which are still sub- motion, a sample/hold can be used, where the posi-
ject to vigorous debate [3]. The high-level structure, tion error is sampled at the time a saccade is triggered,
however, is generally accepted to be that of a feedback and held in a first-order hold (integrator) [8]. In this
system. A very influential model of the human oculo- way, the position error seen by the controller is held
motor control system is that of Robinson [4], and many constant until the saccadic motion is completed. The
robotic vision control systems employ aspects of the controller is insensitive to any changes in the actual
Robinson model. target position until the end of this refractory period.
The control of an active camera system is both sim- This stabilizes the controller, but has the drawback that
ple and difficult at the same time. Simplicity arises if the target moves during the refractory period, the
from the relatively unchanging characteristics of the position error at the end of the refractory period can
load or “plant” being controlled. For most systems the be large. In this case, another, corrective or secondary,
moment of inertia of the camera changes only mini- saccadic motion may need to be triggered. For sta-
mally over the range of motion, with slight variations bilization of pursuit control systems in the presence
arising when zoom lenses are used. The mass of the of delay, an internal positive feedback loop can be
camera and associated linkages does not change. Iner- employed [4, 8]. This positive feedback compensates
tial effects become more important for control of the for delays in the negative feedback servo loop created
“neck” degrees of freedom due to the changing ori- by the time taken to acquire an image and compute
entation and position of the camera bodies relative to the target velocity error. The positive feedback loop
the neck. The specifications on the required veloci- sends a delayed efference copy of the current com-
ties and control bandwidth for the neck motions are manded camera velocity (which is the output of the
typically much less stringent than those for the cam- pursuit controller) back to the velocity error compara-
era motions, so that the inertial effects for the neck tor where it is added to the measured velocity error.
are usually neglected. The relatively simple nature The positive feedback delay is set so that it arrives at
of the oculomotor plant means that straightforward the velocity error comparator at the same time as the
proportional-derivative (PD) or proportional-integral- measurement of the effect of the current control com-
derivative (PID) control systems are often sufficient for mand, effectively canceling out the negative feedback
implementing tracking or pursuit motion. Some sys- and producing a new target velocity for the controller.
tems have employed more complex optimal control Another delay handling technique is to use predictive
systems (e.g., [5]) which provide improved disturbance control, such as the Smith Predictor, where the camera
rejection and trajectory following accuracy compared position and controller states are predicted for a time T
to the simpler approaches. in the future, where T is the controller delay, and con-
There is a serious difficulty in controlling cam- trol signals appropriate for those states are computed
era motion systems, however, caused by delays in the and applied immediately [6, 7]. Predictive methods
control loops. Such delays include the measurement make strong assumptions on changes in the external
delay due to the time needed to acquire and digitize environment (e.g., that all objects in the scene are static
the camera image and subsequent computations, such or traversing known smooth trajectories). Such meth-
as feature extraction and target localization. There is ods can perform poorly when these assumptions are
also a delay or latency arising from the time needed violated.
to compute the controller output signal [6]. If these The Next-Look Problem and Motion Planning The con-
delays are not dealt with, a simple PD or PID controller trol of pursuit and saccadic motions are usually han-
can become unstable, leading to excessive vibration or dled by different controllers. While pursuit or tracking
shaking and loss of target tracking. behavior can be implemented using frequent small
Active Sensor (Eye) Movement Control 7 A
saccade-like motions, this can produce jumpy images cameras, and the pan actions were sometimes linked
which may degrade subsequent processing operations. together to provided vergence and/or version motions
With multiple controllers, there needs to be a way for only. Examples include the UPenn head [11], generally A
the possibly conflicting commands from the controllers recognized as the first of its kind, the Harvard head
to be integrated and arbitrated. The simplest approach [12], the KTH head [13], the TRISH head from the
uses the output of the pursuit control system by default, University of Toronto [14], the Rochester head [15],
with a switch over to the output of the saccade con- the SAGEM-GEC-Inria-Oxford head [16], the Surrey
trol system whenever the position error is greater than head [17], the LIFIA head [18], the LIA/AUC head
some threshold and switching back to pursuit control [19], and the Technion head [5]. These early robotic
when the position error drops below another (lower) heads generally used PD servo loops, some with delay
threshold. compensation mechanisms as described above, and
Pursuit or tracking of visual targets is just one type were capable of speeds up to 180 degrees per second.
of motor activity. Activities such as visual search may The pan axis maximum rotational velocities were usu-
require large shifts of camera position to be executed ally higher than those of the tilt and vergence speeds.
based on a complex analysis of the visual input. The The axes were most often driven either by DC motors
process of determining the active vision system con- or by stepper motors.
troller set point is often referred to as sensor planning A more recent example of a research system is the
[1] or the next-look problem [9]. The next-look prob- head of the iCub humanoid robot [20]. Unlike the early
lem can be interpreted as determining sensor positions robotic heads, which were one-off systems limited to
which increase or maximize the information content use in a single laboratory, this robot was developed
of subsequent measurements. In a visual search task, by a consortium of European institutions and is used
for example, the next-look may be specified to be a in many different research laboratories. It has inde-
location which is expected to maximally discriminate pendent pan and common tilt for two cameras as well
between target and distractor. One principle that has as three neck degrees of freedom. The maximum pan
been successfully employed in next-look processes is speed is 180 degrees per second, and the maximum tilt
that of entropy minimization over viewpoints. In an speed is 160 degrees per second.
object recognition or visual search task, this approach Currently, most robotic active vision systems are
takes as the next viewpoint that which is maximally based on commercially available monocular pan-tilt
informative relative to the most probable hypotheses platforms. The great majority of commercial platforms
[10]. A common approach to the next-view problem are designed for surveillance applications and are rela-
in robotic systems is to employ an attention mecha- tively slow. There are a few systems with specifications
nism to provide the location of the next view. Based that are suitable for robotic active vision systems.
on models of mammalian vision systems, attention Perhaps the most commonly used of these fast plat-
mechanisms determine salient regions in visual input, forms are made by FLIR Motion Control Systems, Inc.
which compete or interact in winner-takes-all fash- (formerly Directed Perception). These are capable of
ion to select a single location as the target for the speeds up to 120 degrees per second and can handle
subsequent motion [8]. loads of up to 90 lbs. Commercial systems generally
lack torsional motion and hence are not suitable for
precise stereo vision applications.
Application The fastest current commercial pan/tilt units, as
well as the early research platforms, only reach max-
In the late 1980s and early 1990s, commercial camera imum speeds of around 200 degrees per second. This
motion platforms lacked the performance needed by is sufficient to match the speeds of human pursuit eye
robotics researchers and manufacturers. This led many movements, which top out around 100 degrees per sec-
universities to construct their own platforms and ond. However, if these speeds are compared to the
develop control systems for them. These were gener- maximum speed of 800 degrees per second for human
ally binocular camera systems with pan and tilt degrees saccadic motions, it can be seen that the performance
of freedom for each camera. Often, to simplify the of robotic active vision motion platforms still has room
design, a common tilt action was employed for both for improvement.
A 8 Active Stereo Vision

References
Active Stereo Vision
1. Tarabanis K, Tsai RY, Allen PK (1991) Automated sen-
sor planning for robotic vision tasks. In: Proceedings of Andrew Hogue1 and Michael R. M. Jenkin2
the 1991 IEEE conference on robotics and automation, 1
Faculty of Business and Information Technology,
Sacramento, pp 76–82
University of Ontario Institute of Technology,
2. Yi S, Haralick RM, Shapiro LG (1990) Automatic sensor
and light source positioning for machine vision. In: Pro- Oshawa, ON, Canada
2
ceedings of the computer vision and pattern recognition Department of Computer Science and Engineering,
conference (CVPR), Atlantic City, June 1990, pp 55–59 York University, Toronto, ON, Canada
3. Kato R, Grantyn A, Dalezios Y, Moschovakis AK (2006)
The local loop of the saccadic system closes downstream of
the superior colliculus. Neuroscience 143(1):319–337
4. Robinson DA (1968) The oculomotor control system: a Related Concepts
review. Proc IEEE 56(6):1032–1049
5. Rivlin E, Rotstein H (2000) Control of a camera for active
Camera Calibration
vision: foveal vision, smooth tracking and saccade. Int J
Comput Vis 39(2):81–96
6. Brown C (1990) Gaze controls with interactions and delays.
IEEE Trans Syst Man Cybern 20(1):518–527
7. Sharkey PM, Murray DW (1996) Delays versus perfor- Definition
mance of visually guided systems. IEE Proc Control Theory
Appl 143(5):436–447 Active stereo vision utilizes multiple cameras for 3D
8. Clark JJ, Ferrier NJ (1992) Attentive visual servoing. In:
Blake A, Yuille AL (eds) An introduction to active vision.
reconstruction, gaze control, measurement, tracking,
MIT, Cambridge, pp 137–154 and surveillance. Active stereo vision is to be con-
9. Swain MJ, Stricker MA (1993) Promising directions in trasted with passive or dynamic stereo vision in that
active vision. Int J Comput Vis 11(2):109–126 passive systems treat stereo imagery as a series of inde-
10. Arbel T, Ferrie FP (1999) Viewpoint selection by naviga-
tion through entropy maps. In: Proceedings of the seventh
pendent static images while active and dynamic sys-
IEEE international conference on computer vision, Kerkyra, tems employ temporal constraints to integrate stereo
pp 248–254 measurements over time. Active systems utilize feed-
11. Krotkov E, Bajcsy R (1988) Active vision for reliable rang- back from the image streams to manipulate camera
ing: cooperating, focus, stereo, and vergence. Int J Comput
Vis 11(2):187–203
parameters, illuminants, or robotic motion controllers
12. Ferrier NJ, Clark JJ (1993) The Harvard binocular head. Int in real time.
J Pattern Recognit Artif Intell 7(1):9–31
13. Pahlavan K, Eklundh J-O (1993) Heads, eyes and head-eye
systems. Int J Pattern Recognit Artif Intell 7(1):33–49
14. Milios E, Jenkin M, Tsotsos J (1993) Design and perfor- Background
mance of TRISH, a binocular robot head with torsional eye
movements. Int J Pattern Recognit Artif Intell 7(1):51–68 Stereo vision uses two or more cameras with over-
15. Coombs DJ, Brown CM (1993) Real-time binocular smooth
lapping fields of view to estimate 3D scene structure
pursuit. Int J Comput Vis 11(2):147–164
16. Murray DW, Du F, McLauchlan PF, Reid ID, Sharkey PM, from 2D projections. Binocular stereo vision – the
Brady M (1992) Design of stereo heads. In: Blake A, most common implementation – uses exactly two cam-
Yuille A (eds) Active vision. MIT, Cambridge, eras, yet one can utilize more than two at the expense
Massachusetts, USA, pp 155–172
of computational speed within the same algorithmic
17. Pretlove JRG, Parker GA (1993) The Surrey attentive robot
vision system. Int J Pattern Recognit Artif Intell 7(1): framework.
89–107 The “passive” stereo vision problem can be
18. Crowley JL, Bobet P, Mesrabi M (1993) Layered control of described as a system of at least two cameras attached
a binocular camera head. Int J Pattern Recognit Artif Intell
rigidly to one another with constant intrinsic calibra-
7(1):109–122
19. Christensen HI (1993) A low-cost robot camera head. Int J tion parameters (assumed), and the stereo pairs are
Pattern Recognit Artif Intell 7(1):69–87 considered to be temporally independent. Thus no
20. Beira R, Lopes M, Praga M, Santos-Victor J, Bernardino A, assumptions are made, nor propagated, about cam-
Metta G, Becchi F, Saltaren R (2006) Design of the robot-
era motion within the algorithmic framework. Passive
cub (iCub) head. In: Proceedings of the 2006 IEEE inter-
national conference on robotics and automation, Orlando, vision systems are limited to the extraction of metric
Florida, USA, pp 94–100 information from a single set of images taken from
Active Stereo Vision 9 A
different locations in space (or at different times) and environment [7, 8]. Examples of the output of such a
treat individual frames in stereo video sequences inde- system is shown in Fig. 2, and [9] provides an exam-
pendently. Dynamic stereo vision systems are charac- ple of an active system that interleaves the vergence A
terized by the extraction of metric information from and focus control of the cameras with surface esti-
sequences of imagery (i.e., video) and employ tempo- mation. The system uses an adaptive self-calibration
ral constraints or consistency on the sequence (e.g., method that integrates the estimation of camera param-
optical flow constraints). Thus, dynamic stereo sys- eters with surface estimation using prior knowledge
tems place assumptions on the camera motion such of the calibration, the motor control, and the previ-
as its smoothness (and small motion) between subse- ous estimate of the 3D surface properties. The result-
quent frames. Active stereo vision systems subsume ing system is able to automatically fixate on salient
both passive and dynamic stereo vision systems and visual targets in order to extend the surface estimation
are characterized by the use of robotic camera systems volume.
(e.g., stereo heads) or specially designed illuminant Although vision is a powerful sensing modality, it
systems (e.g., structured light) coupled with a feedback can fail. This is a critical issue for active stereo vision
system (see Fig. 1) for motor control. Although sys- where data is integrated over time. The use of comple-
tems can be designed with more modest goals – object mentary sensors – traditionally Inertial Measurement
tracking, for example – the common computational Units (see [10]) – augment the camera hardware
goal is the construction of large-scale 3D models of system with the capability to estimate the system
extended environments. dynamics using real-world constraints. Accelerome-
ters, gyroscopes, and compasses can provide timely
and accurate information either to assist in temporal
Theory correspondences and ego-motion estimation or as a
replacement when visual information is unreliable or
Fundamentally, active stereo systems (see [1]) must absent (i.e., dead reckoning).
solve three rather complex problems: (1) spatial
correspondence, (2) temporal correspondence, and Relation to Robotics and Mapping
(3) motor/camera/illuminant control. Spatial corre- A wide range of different active and dynamic stereo
spondence is required in order to infer 3D depth systems have been built (e.g., [7, 8, 11, 12]). Active
information from the information available in camera systems are often built on top of mobile systems
images captured at one time instant, while temporal (e.g., [7]) blurring the distinction between active and
correspondence is necessary to integrate visual infor- dynamic systems. In robotics, active stereo vision has
mation over time. The spatial and temporal correspon- been used for vehicle control in order to create 2D or
dences can either be treated as problems in isolation or 3D maps of the environment. Commonly the vision
integrated within a common framework. For example, system is complemented by other sensors. For instance
stereo correspondence estimation can be seeded using in [13], active stereo vision is combined with sonar
an ongoing 3D representation using temporal coher- sensors to create 2D and 3D models of the environ-
ence (e.g., [2, 3]) or considered in isolation using ment. Murray and Little [14] use a trinocular stereo
standard disparity estimation algorithms (see [4]). system to create occupancy maps of the environment
Motor or camera control systems are necessary for in-the-loop path planning and robot navigation.
to move (rotate and translate) the cameras so they Diebel et al. [15] employ active stereo vision for simul-
look in the appropriate direction (i.e., within a track- taneous estimation of the robot location and 3D map
ing or surveillance application), change their intrinsic construction, and [7], describes a vision system used
camera parameters (e.g., focal length or zoom), or for in-the-loop mapping and navigational control for
to tune the image processing algorithm to achiev- an aquatic robot. Davison, in [6], was one of the first to
ing higher accuracy for a specific purpose. Solving effectively demonstrate the use of active stereo vision
these three problems in an active stereo system enables technology as part of the navigation loop. The system
one to develop “intelligent” algorithms that infer ego- used a stereo head to selectively fixate scene fea-
motion [5], autonomously control vehicles throughout tures that improve the quality of the estimated map
the world [6], and/or reconstruct 3D models of the and trajectory. This involved using knowledge of the
A 10 Active Stereo Vision

a b

Uncontrolled
Movement

Object in Scene
Object in Scene (Dynamic)
(Static)

Left Image Right Image Left Sequence Right Sequence


Passive Stereo Dynamic Stereo

c Visual Feedback d
Pan

Motor Control T
Tilt

Cyclotorsion

Controlled
Movement

Object in Scene Base Frame


(Dynamic)

Translational Control (Mount to vehicle)

Left Sequence Right Sequence Active Stereo Configuration


Active Stereo

Active Stereo Vision, Fig. 1 Different types of stereo systems

Active Stereo Vision, Fig. 2 Point cloud datasets obtained by the active stereo system described in [7]
Active Stereo Vision 11 A
current map of the environment to point the camera up to a rigid rotation/translation/scale) by noting these
system in the direction where it should find salient features should match in 3D space as well as in 2D
features that it had seen before, move the robot to a space. A transformation can be linearly estimated that A
location where the features are visible, and then search- constrains the projective solution to an affine recon-
ing visually to find image locations corresponding to struction. Once the plane at infinity is known, the
these features. affine solution may be upgraded to a metric solution.
In order to achieve the desired accuracy in the intrin-
sics, a nonlinear minimization scheme is employed to
Active Stereo Heads
improve the solution. If one trusts the accuracy of the
The development of hardware platforms to mimic
camera motion control system, the extrinsics can be
human biology has resulted in a variety of differ-
seeded with this information in a nonlinear optimiza-
ent designs and methods for controlling binocular
tion scheme that minimizes the reprojection error of
sets of cameras. These result in what is known as
the image matching points and their 3D triangulated
“stereo heads” (see entry on the evolution of stereo
counterparts. This nonlinear optimization is known as
heads in this volume). These hardware platforms all
bundle adjustment [20] and is used in a variety of forms
have a common set of constraints, i.e., the systems
in the structure-from-motion literature (see [17, 18]).
consist of two cameras (binocular) with camera intrin-
sics/extrinsics that may be controlled. In [16], an active
stereo vision system is developed that mimics human
Relation to Other Types of Stereo Systems
biology that uses a bottom-up saliency map model cou-
Since active stereo systems are characterized by the
pled with a selective attention function to select salient
use of visual feedback to inform motor control sys-
image regions. If both left and right cameras estimate
tems (or higher-level vehicular navigational systems),
similar amounts of saliency in the same areas, the ver-
they are related to a wide range of research areas
gence of the cameras are controlled so that the cameras
and hardware systems. Mounting a stereo system to
are focused on this particular landmark.
a robotic vehicle is common in the robotics literature
to inform the navigation system about the presence
Autocalibration of obstacles [21] and to provide input to mapping
A fundamental issue with active stereo vision is the algorithms [22]. The use of such active systems are
need to establish and maintain calibration parame- applicable directly to autonomous systems as they pro-
ters online. Intrinsics and extrinsics are necessary to vide a high amount of controllable accuracy and dense
the 3D estimation process as they define the epipolar measurements at relatively low computational cost.
constraints which enable efficient disparity estimation One significant example is the use of active stereo in
algorithms [17, 18]. Each time the camera parameters the Mars Rover autonomous vehicles [12].
are modified (e.g., vergence of the cameras, change Estimating 3D information from stereo views is
of focus), the epipolar geometry must be re-estimated. problematic due to the lack of (or ambiguous) tex-
Although kinematic modeling of motor systems pro- ture in man-made environments. This can be alleviated
vide good initial estimates of changes in camera pose, with the use of active illumination [23]. Projecting a
this is generally insufficiently accurate to be used by known pattern, rather than uniform lighting, into the
itself to update camera calibration. Thus, autocalibra- scene enables the estimation of a more dense disparity
tion becomes an important task within active stereo field using standard stereo disparity estimation algo-
vision. Approaches to autocalibration are outlined in rithms due to the added texture in textureless regions
[17, 19]. In [17], the autocalibration algorithm operates (see [24]). The illumination may be controlled actively
on pairs of stereo images taken in sequence. A pro- depending on perceived scene texture, the desired
jective reconstruction for motion and structure of the range, or the ambient light intensity of the environ-
scene is constructed. This is performed for each pair ment. The illumination may be within the visible light
of stereo images individually for the same set of fea- spectrum or in the infrared spectrum as most camera
tures (thus they must be matched in the stereo pairs sensors are sensitive to IR light. This has the added
as well as tracked temporally). The projective solu- advantage that humans in the environment are not
tions can be upgraded to an affine solution (ambiguous affected by the additional illumination.
A 12 Active Stereo Vision

Application 11. Grosso E, Tistarelli M, Sandini G (1992) Active/dynamic


stereo for navigation. In: Second European conference on
computer vision, Santa Margherita Ligure, pp 516–525
Active stereo vision is characterized by the use of 12. Maimone MW, Leger PC, Biesiadecki JJ (2007) Overview
visual feedback in multi-camera systems to control of the mars exploration rovers autonomous mobility and
the intrinsics and extrinsics of the cameras (or vehic- vision capabilities. In: IEEE international conference on
ular platforms). Active stereo vision systems find a robotics and autonomsou (ICRA) space robotics workshop,
Rome
wide range of application in autonomous vehicle nav- 13. Wallner F, Dillman R (1995) Real-time map refinement by
igation, gaze tracking, and surveillance. A host of use of sonar and active stereo-vision. Robot Auton Syst
hardware systems exist and commonly utilize two 16(1):47–56. Intelligent robotics systems SIRS’94
cameras for binocular stereo and motors to control the 14. Murray D, Little JJ (2000) Using real-time stereo
vision for mobile robot navigation. Auton Robot 8(2):
gaze/orientation of the system. Visual attentive pro- 161–171
cesses (e.g., [25, 26]) may be used to determine the 15. Diebel J, Reutersward K, Thrun S, Davis J, Gupta
next viewpoint for a particular task, and dense stereo R (2004) Simultaneous localization and mapping with
algorithms can be used for estimating 3D structure of active stereo vision. In: IEEE/RSJ international confer-
ence on intelligent robots and systems, vol 4, Sendai,
the scene. Fundamental computational issues include pp 3436–3443
autocalibration of the sensor with changes in its con- 16. Jung BS, Choi SB, Ban SW, Lee M (2004) A biologi-
figuration and the development of active stereo control cally inspired active stereo vision system using a bottom-
and reconstruction algorithms. up saliency map model. In: Rutkowski L, Siekmann J,
Tadeusiewicz R, Zadeh LA (eds) Artificial intelligence
and soft computing – ICAISC 2004. Volume 3070 of lec-
ture notes in computer science. Springer, Berlin/Heidelberg,
References pp 730–735
17. Hartley RI, Zisserman A (2000) Multiple view geom-
1. Vieville T (1997) A few steps towards 3d active vision. etry in computer vision. Cambridge University Press,
Springer, New York/Secaucus Cambridge
2. Leung C, Appleton B, Lovell B, Sun C (2004) An energy 18. Faugeras O, Luong QT, Papadopoulou T (2001) The geom-
minimisation approach to stereo-temporal dense reconstruc- etry of multiple images: the laws that govern the formation
tion. In: Proceedings of the 17th international conference on of images of a scene and some of their applications. MIT,
pattern recognition, vol 4, Cambridge, pp 72–75 Cambridge
3. Min D, Yea S, Vetro A (2010) Temporally consistent stereo 19. Horaud R, Csurka G (1998) Self-calibration and euclidean
matching using coherence function. In: 3DTV-conference: reconstruction using motions of a stereo rig. In: Sixth
the true vision – capture, transmission and display of 3D international conference on computer vision, Bombay,
video (3DTV-CON), 2010, Tampere, pp 1–4 pp 96–103
4. Scharstein D, Szeliski R (2002) A taxonomy and evaluation 20. Triggs B, McLauchlan P, Hartley R, Fitzgibbon A (2000)
of dense two-frame stereo correspondence algorithms. Int J Bundle adjustment – a modern synthesis. In: Triggs B,
Comput Vis 47(1–3):7–42 Zisserman A, Szeliski R (eds) Vision algorithms: theory and
5. Olson CF, Matthies LH, Schoppers M, Maimone MW practice. Volume 1883 of lecture notes in computer science.
(2001) Stereo ego-motion improvements for robust rover Springer, New York, pp 298–372
navigation. In: Proceedings IEEE international conference 21. Williamson T, Thorpe C (1999) A trinocular stereo sys-
on robotics and automation, vol 2, Seoul, pp 1099–1104 tem for highway obstacle detection. In: IEEE interna-
6. Davison AJ (1998) Mobile robot navigation using active tional conference on robotics and automation, Detroit,
vision. PhD thesis, University of Oxford pp 2267–2273
7. Hogue A, Jenkin M (2006) Development of an underwater 22. Schleicher D, Bergasa LM, Ocaña M, Barea R, López E
vision sensor for 3d reef mapping. In: IEEE/RSJ interna- (2010) Real-time hierarchical stereo visual slam in large-
tional conference on intelligent robots and systems, Beijing, scale environments. Robot Auton Syst 58:991–1002
pp 5351–5356 23. Se S, Jasiobedzki P, Wildes R (2007) Stereo-vision based 3d
8. Se S, Jasiobedzki P (2005) Instant scene modeler for crime modeling of space structures. In: Proceedings of the SPIE
scene reconstruction. In: 2005 IEEE computer society con- conference on sensors and systems for space applications,
ference on computer vision and pattern recognition work- vol 6555, Orlando
shop on safety and security applications, vol III, San Diego, 24. Rusinkiewicz S, Hall-Holt O, Levoy M (2002) Real-time 3d
pp 123–123 model acquisition. ACM Trans Graph 21(3):438–446
9. Ahuja N, Abbott A (1993) Active stereo: integrating dis- 25. Frintrop S, Rome E, Christensen HI (2010) Computational
parity, vergence, focus, aperture and calibration for surface visual attention systems and their cognitive foundations: a
estimation. IEEE Trans Pattern Anal Mach Intell 15(10): survey. ACM Trans Appl Percept 7(1):1–39
1007–1029 26. Tsotsos JK (2001) Motion understanding: task-directed
10. Everett HR (1995) (ECCV) Sensors for mobile robots: attention and representations that link perception with
theory and application. A. K. Peters, Ltd., Natick action. Int J Comput Vis 45(3):265–280
Activity Recognition 13 A
usually works under controlled environment to capture
Active Vision the three-dimensional (3D) joint locations or angles
of human bodies; multiple camera systems provide A
Animat Vision a way to reconstruct 3D body models from multiple
viewpoint images. Both MOCAP and multiple cam-
Activity Analysis era systems have physical limitations on their use, and
single camera systems are probably more practical for
Multi-camera Human Action Recognition many applications. The latter, however, captures least
visual information and, hence, is the most challeng-
ing setting for activity recognition. In the past decade,
research in activity recognition has mainly focused
Activity Recognition on single camera systems. Recently, the release of
commodity depth cameras, such as Microsoft Kinect
Wanqing Li1 , Zicheng Liu2 and Zhengyou Zhang3 Sensors, provides a new feasible and economic way
1
University of Wollongong, Wollongong, NSW, to capture simultaneously two-dimensional color infor-
Australia mation and depth information of the human move-
2
Microsoft Research, Microsoft Corporation, ment and, hence, could potentially advance the activity
Redmond, WA, USA recognition significantly.
3
Microsoft Research, Redmond, WA, USA Regardless of which capturing device is used, a use-
ful activity recognition system has to be independent
of anthropometric differences among the individuals
Synonyms who perform the activities, independent of the speed at
which the activities are performed, robust against vary-
Action recognition ing acquisition settings and environmental conditions
(for instance, different viewpoints and illuminations),
scalable to a large number of activities, and capa-
Related Concepts
ble of recognizing activities in a continuous manner.
Since a human body is usually viewed as an articu-
Gesture Recognition
lated system of the rigid links or segments connected
by joints, human motion can be considered as a con-
Definition tinuous evolution of the spatial configuration of the
segments or body posture, and effective representation
Activity recognition refers to the process of identifying of the body configuration and its dynamics over time
the types of movement performed by humans over has been the central to the research of human activity
a certain period of time. It is also known as action recognition.
recognition when the period of time is relatively short.

Theory
Background
Let O D fo1 ; o2 ;    ; on g be a sequence of obser-
The classic study on visual analysis of biologi- vations of the movement of a person over a period
cal motion using moving light display (MLD) [1] of time. The observations can be a sequence of joint
has inspired tremendous interests among the com- angles, a sequence of color images or silhouettes, a
puter vision researchers in the problem of recog- sequence of depth maps, or a combination of them. The
nizing human motion through visual information. task of activity recognition is to label O into one of the
The commonly used devices to capture human move- L classes C D fc1 ; c2 ;    ; cL g. Therefore, solutions to
ment include human motion capture (MOCAP) with the problem of activity recognition are often based on
or without markers, multiple video camera systems, machine learning and pattern recognition approaches,
and single video camera systems. A MOCAP device and an activity recognition system usually involves
A 14 Activity Recognition

extracting features from the observation sequence O, by classifying the STIPs into a set of vocabulary (i.e.,
learning a classifier from training samples, and clas- a bag of visual words) and calculating the histogram
sifying O using the trained classifier. However, the of the occurrence of the vocabulary within the obser-
spatial and temporal complexity of human activities vation sequence O. There are two commonly used
has led researchers to cast the problem from differ- STIP extraction techniques. One extends Harris corner
ent perspectives. Specifically, the existing techniques detection and automatic scale selection in 2D space to
for activity recognition can be divided into two cate- 3D space and time [6] and the other is based on a pair
gories based on whether the dynamics of the activities of one-dimensional (1D) Gabor filters applied tempo-
is implicitly or explicitly modeled. rally and spatially [7]. Recently, another STIP detector
In the first category [2–9], the problem of activity has been proposed by decomposing an image sequence
recognition is cast from a temporal classification prob- into spatial components and motion components using
lem to a static classification one by representing activ- nonnegative matrix factorization and detecting STIPs
ities using descriptors. A descriptor is extracted from in 2D spatial and 1D motion space using difference of
the observation sequence O, which intends to capture Gaussian (DoG) detectors [8]. In terms of the classifier
both spatial and temporal information of the activ- for STIP-based descriptors, besides SVM and KNN,
ity and, hence, to model the dynamics of the activity latent topic models such as the probabilistic latent
implicitly. Activity recognition is achieved by a con- semantic analysis (pLSA) model and latent Dirich-
ventional classifier such as Support Vector Machines let allocation (LDA) were used in [9]. STIP-based
(SVM) or K-nearest neighborhood (KNN). There are descriptors have a few practical advantages including
three commonly used approaches to extract activity being applicable to image sequences in realistic condi-
descriptors. tions, not requiring foreground/background separation
The first approach builds motion energy images or human tracking, and having the potential to deal
(MEI) and motion history images (MHI), proposed with partial occlusions [10]. In many realistic appli-
by Bobick and Davis [2], by stacking a sequence cations, an activity may occupy only a small portion
of silhouettes to capture where and how the motion of the entire space-time volume of a video sequence.
is performed. Activity descriptors are extracted from In such situations, it does not make sense to clas-
the MEI and MHI. For instance, seven Hu moments sify the entire video. Instead, one needs to locate the
were extracted in [2] to serve as action descrip- activity in space and time. This is commonly known
tors and recognition was based on the Mahalanobis as the activity detection or action detection problem.
distance between the moment descriptors of the trained Techniques have been developed for activity detection
activities and the input activity. using interest points [11].
The second approach considers a sequence of In the second category [12–17], the proposed meth-
silhouettes as a spatiotemporal volume, and an activ- ods usually follow the concept that an activity is
ity descriptor is computed from the volume. Typi- a temporal evolution of the spatial configuration of
cal examples are the work by Yilmaz and Shah [3] the body parts and, hence, emphasize more on the
which computes the differential geometric surface dynamics of the activities than the methods in the
properties (i.e., Gaussian curvature and mean curva- first category. They usually extract a sequence of fea-
ture); the work by Gorelick et al. [4] which extracts ture vectors, each feature vector being extracted from
space-time saliency, action dynamics, and shape struc- a frame, or a small neighborhood, of the observation
ture and orientation; and the work by Mokhber sequence O. The two commonly used approaches are
et al. [5] which calculates the 3D moments of the temporal templates and graphical models.
volume. The temporal-template-based approach, typically,
The third approach describes an activity using a set directly represents the dynamics through exem-
of spatiotemporal interest points (STIPs). The general plar sequences and adopts dynamic time warp-
concept is first to detect STIPs from the observations ing (DTW) to compare an input sequence with
O which is usually a video sequence. Features are then the exemplar sequences. For instance, Wang and
extracted from a local volume around each STIP, and Suter [18] employed locality preserving projection
a descriptor can be formed by simply aggregating the (LPP) to project a sequence of silhouettes into a low-
local features together to become a bag of features or dimensional space to characterize the spatiotemporal
Activity Recognition 15 A
property of the activity and used DTW and temporal features extracted at a variety of scales on the sil-
Hausdorff distance for similarity matching. houettes and 3D joint angles. The results have shown
In the graphical model-based approach, both gener- that CRFs outperform the HMM and are also robust A
ative and discriminative models have been extensively against the variability of the test sequences with respect
studied for activity recognition. The most prominent to the training samples. More recently, Wang and
generative model is the hidden Markov model (HMM), Mori [17] modeled a human action by a flexible
where sequences of observed features are grouped into constellation of parts conditioned on image observa-
similar configuration, i.e., states, and both the prob- tions using hidden conditional random fields (HCRF)
ability distribution of the observations at each state and achieved highly accurate frame-based action
and the temporal transitional functions between these recognition.
states are learned from training samples. The first Despite the extensive effort and progress in activ-
work on action recognition using HMM is probably by ity recognition research in the past decade, continu-
Yamato et al. [12], where a discrete HMM is used to ous recognition of activities under realistic conditions,
represent sequences over a set of vector-quantized sil- such as with viewpoint invariance and large number of
houette features of tennis footage. HMM is a powerful activities, remains challenging.
tool to model a small number of short-term activi-
ties since a practical HMM is usually a fixed- and
low-order Markov chain. Notable early extensions to Application
overcome this drawback of the HMM are the variable-
length Markov models (VLMM) and layered HMM. Activity recognition has many potential applications.
For details, the readers are referred to [13, 14], respec- It is one of the key enabling technologies in security
tively. Recently, a more general generative graphi- and surveillance for automatic monitoring of human
cal model, referred to as an action graph, has been activities in a public space and of activities of daily
established in [15], where nodes of the action graph living of elderly people at home. Robust understand-
represents salient postures that are used to character- ing and interpretation of human activities also allow
ize activities and shared by different activities, and a natural way for humans to interact with machines.
weight between two nodes measures the transitional A proper modeling of the spatial configuration and
probability between the two postures represented by dynamics of human motion would enable realistic
the two nodes. An activity is encoded by one or mul- synthesis of human motion for gaming and movie
tiple paths in the action graph. Due to the sharing industry and help train humanoid robots in a flexi-
mechanism, the action graph can be trained and also ble and economic way. In sports, activity recognition
easily expanded to new actions with a small number of technology has also been used in training and in the
training samples. In addition, the action graph does not retrieval of video sequences.
need special nodes representing beginning and ending
postures of the activities and, hence, allows continuous
recognition. References
The generative graphical models often rely on an
assumption of statistical independence of observations 1. Johansson G (1973) Visual perception of biological motion
and a model for its analysis. Percept Psychophys 14(2):
to compute the joint probability of the states and the
201–211
observations. This makes it hard to model the long- 2. Bobick A, Davis J (2001) The recognition of human move-
term contextual dependencies which is important to ment using temporal templates. IEEE Trans Pattern Anal
the recognition of activities over a long period of time. Mach Intell 23(3):257–267
3. Yilmaz A, Shah M (2008) A differential geometric approach
The discriminative models, such as conditional random
to representing the human actions. Comput Vision Image
fields (CRF), offer an effective way to model long- Underst 109(3):335–351
term dependency and compute the conditional proba- 4. Gorelick L, Blank M, Shechtman E, Irani M, Basri R (2007)
bility that maps the observations to the motion class Actions as space-time shapes. IEEE Trans Pattern Anal
Mach Intell 29(12):2247–2253
labels. The linear chain CRF was employed in [16]
5. Mokhber A, Achard C, Milgram M (2008) Recognition of
to recognize ten different human activities using fea- human behavior by space-time silhouette characterization.
tures of combined shape-context and pair-wise edge Pattern Recogn 29(1):81–89
Another Random Document on
Scribd Without Any Related Topics
BOBBINS.
EUROPE.
We have already seen how an increase in the number of
correspondences between objects from distant countries increases the
weight of their evidence in favor of contact or communication between
the peoples. If it should be found upon comparison that the bobbins on
which thread is to be wound, as well as the spindle-whorls with which it
is made, had been in use during prehistoric times in the two
hemispheres, it would add to the evidence of contact or communication.
The U. S. National Museum possesses a series of these bobbins, as they
are believed to have been, running from large to small, comprising about
one dozen specimens from Italy, one from Corneto and the others from
Bologna, in which places many prehistoric spindle whorls have been
found (figs. 367 and 368). These are of the type Villanova. The end as
well as the side view is represented. The former is one of the largest,
the latter of middle size, with others smaller forming a graduating series.
The latter is engraved on the end by dotted incisions in three parallel
lines arranged in the form of a Greek cross. A similar bobbin from
Bologna bears the sign of the Swastika on its end (fig. 193).[314] It was
found by Count Gozzadini and forms part of his collection in Bologna.

Fig. 367.
BOBBIN OR SPOOL FOR WINDING THREAD (?).
Type Villanova. Corneto, Italy.
U. S. National Museum.
Fig. 368.
TERRA-COTTA BOBBIN OR SPOOL FOR WINDING THREAD (?).
Type Villanova. Bologna, Italy.
Cat. No. 101771, U. S. N. M.

UNITED STATES.
The three following figures represent clay and stone bobbins, all from
the State of Kentucky. Fig. 369 shows a bobbin elaborately decorated,
from a mound near Maysville, Ky. It has a hole drilled longitudinally
through the center. The end shows a cross of the Greek form with this
hole in the center of the cross. Fig. 370 shows a similar object from
Lexington, Ky., sent by the Kentucky University. It is of fine-grained
sandstone, is drilled longitudinally through the center and decorated as
shown. The end view shows a series of concentric circles with rows of
dots in the intervals. Fig. 371 shows a similar object of fine-grained
sandstone from Lewis County, Ky. It is also drilled longitudinally, and is
decorated with rows of zigzag lines as shown. The end view represents
four consecutive pentagons laid one on top of the other, which increase
in size as they go outward, the hole through the bobbin being in the
center of these pentagons, while the outside line is decorated with
spikes or rays extending to the periphery of the bobbin, all of which is
said to represent the sun. The specimen shown in fig. 372, of fine-
grained sandstone, is from Maysville, Ky. The two ends are here
represented because of the peculiarity of the decoration. In the center is
the hole, next to it is a rude form of Greek cross which on one end is
repeated as it goes farther from the center; on the other, the decoration
consists of three concentric circles, one interval of which is divided by
radiating lines at regular intervals, each forming a rectangle. Between
the outer lines and the periphery are four radiating rays which, if
completed all around, might form a sun symbol. Bobbins of clay have
been lately discovered in Florida by Mr. Clarence B. Moore and noted by
Professor Holmes.

Fig. 369.
BOBBIN (?) FROM A MOUND NEAR MAYSVILLE, KENTUCKY.
Cat. No. 16748, U. S. N. M.

Fig. 370.
BOBBIN (?) FROM LEXINGTON, KENTUCKY.
Cat. No. 16691, U. S. N. M.

Fig. 371.
BOBBIN (?) OF FINE-GRAINED SANDSTONE.
Lewis County, Kentucky.
Cat. No. 59681, U. S. N. M.
Thus we find some of the same objects which in Europe were made and
used by prehistoric man and which bore the Swastika mark have
migrated to America, also in prehistoric times, where they were put to
the same use and served the same purpose. This is certainly no
inconsiderable testimony in favor of the migration of the sign.
VIII.—Similar Prehistoric Arts,
Industries, and Implements in Europe and
America as Evidence of the Migration of
Culture.

The prehistoric objects described in the foregoing chapter are not the
only ones common to both Europe and America. Related to the spindle-
whorls and bobbins is the art of weaving, and it is perfectly susceptible
of demonstration that this art was practiced in the two hemispheres in
prehistoric times. Woven fabrics have been found in the Swiss lake
dwellings, in Scandinavia, and in nearly all parts of Europe. They
belonged to the Neolithic and Bronze ages.

Fig. 372.
VIEW SHOWING BOTH ENDS OF A BOBBIN(?) OF FINE-GRAINED SANDSTONE.
Maysville, Kentucky. Cat. No. 16747, U. S. N. M.
Figs. 373 and 374 illustrate textile fabrics in the Bronze Age. Both
specimens are from Denmark, and the National Museum possesses
another specimen (Cat. No. 136615) in all respects similar. While
prehistoric looms may not have been found in Europe to be compared
with the looms of modern savages in America, yet these specimens of
cloth, with the hundreds of others found in the Swiss lake dwellings,
afford the most indubitable proof of the use of the looms in both
countries during prehistoric times.
Complementary to this, textile fabrics have been found in America, from
the Pueblo country of Utah and Colorado, south through Mexico, Central
and South America, and of necessity the looms with which they were
made were there also. It is not meant to be said that the looms of the
two hemispheres have been found, or that they or the textile fabrics are
identical. The prehistoric looms have not been found in Europe, and
those in America may have been affected by contact with the white
man. Nor is it meant to be said that the textile fabrics of the two
hemispheres are alike in thread, stitch, or pattern. But these at best are
only details. The great fact remains that the prehistoric man of the two
hemispheres had the knowledge to spin fiber into thread, to wind it on
bobbins, and to weave it into fabrics; and whatever differences there
may have been in pattern, thread, or cloth, they were finally and
substantially the same art, and so are likely to have been the product of
the same invention.
While it is not the intention to continue this examination among the
prehistoric objects of the two hemispheres in order to show their
similarity and thus prove migration, contact, or communication, yet it
may be well to mention some of them, leaving the argument or proof to
a future occasion.
The polished stone hatchets of the two hemispheres are substantially
the same. There are differences of material, of course, for in each
country the workman was obliged to use such material as was
obtainable. There are differences in form between the polished stone
hatchets of the two hemispheres, but so there are differences between
different localities in the same hemisphere. Some hatchets are long,
others short, some round, others flat, some have a pointed end, others a
square or nearly square or unfinished end; some are large, others small.
But all these differences are to be found equally well pronounced within
each hemisphere.
Scrapers have also been found in both
hemispheres and in all ages. There are the
same differences in material, form, and
appearance as in the polished stone hatchet.
There is one difference to be mentioned of
this utensil—i. e., in America the scraper has
been sometimes made with a stem and with
notches near the base, after the manner of
arrow- and spear-heads, evidently intended
to aid, as in the arrow- and spear-head, in
fastening the tool in its handle. This
peculiarity is not found in Europe, or, if
found, is extremely rare. It is considered
that this may have been caused by the use
of a broken arrow- or spear-head, which
seems not to have been done in Europe. But
this is still only a difference in detail, a
difference slight and insignificant, one which
occurs seldom and apparently growing out
of peculiar and fortuitous conditions.

Fig. 373. The art of drilling in stone was known over


WOMAN’S WOOLEN DRESS an extended area in prehistoric times, and
FOUND IN AN OAK COFFIN we find innumerable examples which must
AT BORUM-ESHOI, DENMARK. have been performed in both hemispheres
Bronze Age. Report of the substantially in the same manner and with
Smithsonian Institution the same machine.
(U. S. National Museum), The art of sawing stone was alike practiced
1892, pl. ci, fig 2. during prehistoric times in the two
hemispheres. Many specimens have been
found in the prehistoric deposits of both.
The aboriginal art of making pottery was also carried on in the same or
a similar manner in both hemispheres. The examples of this art are as
numerous as the leaves on the trees. There were differences in the
manipulation and treatment, but the principal fact remains that the art
was the same in both countries. Not only were the products greatly
similar, but the same style of geometric decoration by incised lines is
common to both. Greater progress in making
pottery was made in the Western than in the
Eastern Hemisphere during prehistoric times.
The wheel was unknown in both hemispheres,
and in both the manipulation of clay was by
hand. True, in the Western Hemisphere there
was greater dexterity and a greater number of
methods employed. For example, the vase might
be built up with clay inside a basket, which
served to give both form and decoration; it was
coiled, the damp clay being made in a string and
so built up by a circular movement, drawing the
side in or out as the string of clay was laid
thereon, until it reached the top; it may have
been decorated by the pressure of a textile
fabric, real or simulated, into the damp clay. A
few years ago it would have been true to have Fig. 374.
said that pottery decorated in this manner was DETAIL OF DRESS SHOWN
peculiar to the Western Hemisphere, and that it IN THE PRECEDING FIGURE.
had never been found in the Eastern
Hemisphere, but Prince Poutjatine has lately found on his property,
Bologoje, in the province of Novgorod, midway between Moscow and St.
Petersburg, many pieces of prehistoric pottery which bear evidence of
having been made in this manner, and while it may be rare in the
Eastern Hemisphere, it is similar in these respects to thousands of pieces
of prehistoric pottery in North America.
One of the great puzzles for archæologists has been the prehistoric jade
implements found in both countries. The raw material of which these
were made has never been found in sufficient quantities to justify
anyone in saying that it is indigenous to one hemisphere and not to the
other. It may have been found in either hemisphere and exported to the
other. But of this we have no evidence except the discovery in both of
implements made of the same material. This material is dense and hard.
It is extremely difficult to work, yet the operations of sawing, drilling,
carving, and polishing appear to have been conducted in both
hemispheres with such similarity that the result is practically the same.
Prehistoric flint-chipping was also carried on in both hemispheres with
such similarity of results, even when performing the most difficult and
delicate operations, as to convince one that there must have been some
communication between the two peoples who performed them.
The bow and arrow is fairly good evidence of prehistoric migration,
because of the singularities of the form and the intricacies of the
machinery, and because it is probably the earliest specimen of a
machine of two separate parts, by the use of which a missile could be
sent at a greater distance and with greater force than if thrown by hand.
It is possible that the sling was invented as early as the bow and arrow,
although both were prehistoric and their origin unknown.
The bow and arrow was the greatest of all human inventions—greatest
in that it marked man’s first step in mechanics, greatest in adaptation of
means to the end, and as an invented machine it manifested in the most
practical and marked manner the intellectual and reasoning power of
man and his superiority over the brute creation. It, more than any other
weapon, demonstrated the triumph of man over the brute, recognizing
the limitations of human physical capacity in contests with the brute.
With this machine, man first successfully made up for his deficiency in
his contests with his enemies and the capture of his game. It is useless
to ask anything of history about the beginnings of the bow and arrow;
wherever history appears it records the prior existence, the almost
universal presence, and the perfected use of the bow and arrow as a
weapon. Yet this machine, so strange and curious, of such intricacy of
manufacture and difficulty of successful performance, had with all its
similarities and likenesses extended in prehistoric times almost
throughout the then inhabited globe. It is useless to specify the time, for
the bow and arrow existed earlier than any time of which we know; it is
useless for us to specify places, for it was in use throughout the world
wherever the world was occupied by neolithic man.
Imitative creature as was man, and slow and painful as were his steps in
progress and in invention during his infancy on earth, when he knew
nothing and had everything yet to learn, it is sufficiently wonderful that
he should have invented the bow and arrow as a projectile machine for
his weapons; but it becomes doubly and trebly improbable that he
should have made duplicate and independent inventions thereof in the
different hemispheres. If we are to suppose this, why should we be
restricted to a separate invention for each hemisphere, and why may we
not suppose that he made a separate invention for each country or each
distant tribe within the hemisphere? Yet we are met with the astonishing
but, nevertheless, true proposition that throughout the entire world the
bow and arrow existed in the early times mentioned, and was
substantially the same machine, made in the same way, and serving the
same purpose.
CONCLUSION.

The argument in this paper on the migration of arts or symbols, and
with them of peoples in prehistoric times, is not intended to be
exhaustive. At best it is only suggestive.
There is no direct evidence available by which the migration of symbols,
arts, or peoples in prehistoric times can be proved, because the events
are beyond the pale of history. Therefore we are, everybody is, driven to
the secondary evidence of the similarity of conditions and products, and
we can only subject them to our reason and at last determine the truth
from the probabilities. In proportion as the probabilities of migration
increase, it more nearly becomes a demonstrated fact. It appears to the
author that the probability of the migration of the Swastika to America
from the Old World is infinitely greater than the probability that it was
an independent invention.
The Swastika is found in America in such widely separated places,
among such different civilizations, as much separated by time as by
space, that if we have to depend on the theory of separate inventions to
explain its introduction into America we must also depend upon the
same theory for its introduction into the widely separated parts of
America. The Swastika of the ancient mound builders of Ohio and
Tennessee is similar in every respect, except material, to that of the
modern Navajo and Pueblo Indian. Yet the Swastikas of Mississippi and
Tennessee belong to the oldest civilization we know in America, while
the Navajo and Pueblo Swastikas were made by men still living. A
consideration of the conditions brings out these two curious facts: (1)
That the Swastika had an existence in America prior to any historic
knowledge we have of communication between the two hemispheres;
but (2) we find it continued in America and used at the present day,
while the knowledge of it has long since died out in Europe.
The author is not unaware of the new theories concerning the
parallelism of human development by which it is contended that
absolute uniformity of man’s thoughts and actions, aims and methods, is
produced when he is in the same degree of development, no matter in
what country or in what epoch he lives. This theory has been pushed
until it has been said that nothing but geographical environment seems to
modify the monotonous sameness of man’s creations. The author does
not accept this theory, yet he does not here controvert it. It may be true
to a certain extent, but it surely has its limitations, and it is only
applicable under special conditions. As a general proposition, it might
apply to races and peoples but not to individuals. If it builds on the
hereditary human instincts, it does not take into account the will, energy,
and reasoning powers of man. Most of all, it leaves out the egoism of
man and his selfish desire for power, improvement, and happiness, and
all their effects, through the individual, on human progress. In the
author’s opinion the progress of peoples through consecutive stages of
civilization is entirely compatible with his belief that knowledge of
specific objects, the uses of material things, the performance of certain
rites, the playing of certain games, the possession of certain myths and
traditions, and the carrying on of certain industries, passed from one
country to another by migration of their peoples, or by contact or
communication between them; and that the knowledge, by separate
peoples, of the same things, within reasonable bounds of similarity of
action and purpose, and with corresponding difficulty of performance,
may well be treated as evidence of such migration, contact, or
communication. Sir John Lubbock expresses the author’s belief when he
says,[315] “There can be no doubt but that man originally crept over the
earth’s surface, little by little, year by year, just, for instance, as the
weeds of Europe are now gradually but surely creeping over the surface
of Australia.” The word migration has been used by the author in any
sense that permitted the people, or any number thereof, to pass from
one country to another country, or from one section of a country to
another section of the same country, by any means or in any numbers
as they pleased or could.
The theory (in opposition to the foregoing) is growing in the United
States that any similarity of culture between the two hemispheres is
proof of migration of peoples. It appears to the author that these
schools both run to excess in propagating their respective theories, and
that the true condition of affairs lies midway between them. That is to
say, there was certain communication between the two hemispheres, as
indicated by the similarities in culture and industry, the objects of which
could scarcely have been the result of independent invention; while
there are too many dissimilar arts, habits, customs, and modes of life
belonging to one hemisphere only, not common to both, to permit us to
say there was continuous communication between them. These
dissimilarities were inventions of each hemisphere independent of the
other.
An illustration of such migration to America is the culture of Greece. We
know that Greek art and architecture enter into and form an important
part of the culture of Americans of the present day; yet the people of
America are not Greek, nor do they possess any considerable share of
Greek culture or civilization. They have none of the blood of the Greeks,
nor their physical traits, nor their manners, habits, customs, dress,
religion, nor, indeed, anything except their sculpture and architecture.
Now, there was undoubtedly communication between the two countries
in so far as pertains to art and architecture; but it is equally true that
there has been no migration of the other elements of civilization
mentioned.
The same thing may be true with regard to the migrations of prehistoric
civilization. There may have been communication between the countries
by which such objects as the polished stone hatchet, the bow and arrow,
the leaf-shaped implement, chipped arrow- and spear-heads, scrapers,
spindle-whorls, the arts of pottery making, of weaving, of drilling and
sawing stone, etc., passed from one to the other, and the same of the
Swastika; yet these may all have been brought over in sporadic and
isolated cases, importing simply the germ of their knowledge, leaving
the industry to be independently worked out on this side. Certain
manifestations of culture, dissimilar to those of the Old World, are found
in America; we have the rude notched ax, the grooved ax, stemmed
scraper, perforator, mortar and pestle, pipes, tubes, the ceremonial
objects which are found here in such infinite varieties of shape and form,
the metate, the painted pottery, etc., all of which belong to the American
Indian civilization, but have no prototype in the prehistoric Old World.
These things were never brought over by migration or otherwise. They
are indigenous to America.
Objects common to both hemispheres exist in such numbers, of such
infinite detail and difficulty of manufacture, that the probability of their
migration or passage from one country to another is infinitely greater than
the probability that they were the result of independent invention. These common
objects are not restricted to isolated cases. They are great in number
and extensive in area. They have been the common tools and utensils
such as might have belonged to every man, and no reason is known why
they might not have been used by, and so represent, the millions of
prehistoric individuals in either hemisphere. This great number of
correspondences between the two hemispheres, and their similarity as
to means and results, are good evidence of migration, contact, or
communication between the peoples; while the extent to which the
common industries were carried in the two continents, their delicacy and
difficulty of operation, completes the proof and forces conviction.
It is not to be understood in the few foregoing illustrations that the
number is thereby exhausted, or that all have been noted which are
within the knowledge of the author. These have been cited as illustrative
of the proposition and indicating possibilities of the argument. If a
completed argument in favor of prehistoric communication should be
prepared, it would present many other illustrations. These could be
found, not only among the objects of industry, utensils, etc., but in the
modes of manufacture and of use which, owing to their number and the
extent of territory which they cover, and the difficulty of
accomplishment, would add force to the argument.
BIBLIOGRAPHY OF THE SWASTIKA.
ABBOTT, Charles C. Primitive Industry: | or | Illustrations of the
Handiwork, | in stone, bone and clay, | of the | Native Races | of |
the Northern Atlantic Seaboard of America. | By Charles C. Abbott,
M. D. | Cor. Member Boston Soc. Nat. Hist., | Fellow Royal Soc. | of
Antiq. of the North. Copenhagen. etc., etc., | Salem, Mass.: |
George A. Bates. | 1881.
8º, pp. v-vi, 1-560, fig. 429.
Grooved ax, Pemberton, N. J. Inscription of Swastika denounced as a
fraud, p. 32.
ALLEN, E. A. The | Prehistoric World | or | Vanished Races | by | E. A.
Allen, | author of “The Golden Gems of Life.” | Each of the Following
well-known Scholars reviewed one or more | Chapters, and made
valuable suggestions: | C. C. Abbott, M. D., | Prof. F. W. Putnam, |
A. F. Bandelier, | Prof. Chas. Rau, | Alexander Winchell, LL. D., |
Cyrus Thomas, Ph. D. | G. F. Wright. | Cincinnati: | Central
Publishing House. | 1885.
8º, pp. i-vi, 1-820.
Swastika regarded as an ornament in the Bronze Age, p. 233.
AMERICAN ANTIQUARIAN and Oriental Journal.
Vol. vi, Jan., 1884, p. 62.
Swastika found in a tessellated Mosaic pavement of Roman ruins at
Wivelescombe, England; reported by Cornelius Nicholson, F. G. S., cited
in Munro’s “Ancient Scottish Lake Dwellings,” note, p. 132.
AMERICAN ENCYCLOPEDIA.
Title, Cross.
AMERICAN JOURNAL of Archæology and of the History of Fine Arts.
Vol. xi. No. 1, Jan.-March, 1896, p. 11, fig. 10. Andokides, a Greek vase
painter (525 B. C.), depicted Athena on an amphora with her dress
decorated with many ogee and meander Swastikas. The specimen is in
the Berlin Museum.
ANDERSON, Joseph. Scotland in Early Christian Times.
The Swastika, though of Pagan origin, became a Christian symbol from
the fourth to the fourteenth century, A. D. Vol. ii, p. 218.
Cited in “Munro’s Ancient Scottish Lake Dwellings,” note, p. 132.
BALFOUR, Edward. Cyclopædia of India | and of | Eastern and Southern
Asia, | Commercial, Industrial, and Scientific: | Products of the |
Mineral, Vegetable and Animal Kingdoms, | Useful Arts and
Manufactures; | edited by | Edward Balfour, L. R. C. S. E., |
Inspector General of Hospitals, Madras Medical Department, |
Fellow of the University of Madras, | Corresponding Member of the
Imperial Geologic Institute, Vienna. | Second Edition. | Vol. V. |
Madras: | Printed at the Lawrence and Adelphi Presses, | 1873. |
Copyright.
8º, pp. 1-956.
Title, Swastika, p. 656.
BARING-GOULD, S. Curious Myths | of | the Middle Ages. | By | S.
Baring-Gould, M. A., | New York: | Hurst & Co., Publishers, | No.
122 Nassau street.
12º, pp. 1-272.
Title, “Legends of the Cross,” pp. 159-185.
BERLIN SOCIETY for Anthropology, Ethnology, and Prehistoric
Researches, Sessional report of—.
iii, 1871; viii, July 15, 1876, p. 9.
BLAKE, Willson W. The Cross, | Ancient and Modern. | By | Willson W.
Blake. | (Design) | New York: | Anson D. F. Randolph and Company.
| 1888.
8º, pp. 1-52.
BRASH, Richard Rolt. The | Ogam Inscribed Monuments | of the |
Gaedhil | in the | British Islands | with a dissertation on the Ogam
character, &c. | Illustrated with fifty Photolithographic plates | by
the late | Richard Rolt Brash, M. R. I. A., F. S. A. Scot. | Fellow of
the Royal Society of | Ireland; and author of “The Ecclesiastical |
Architecture of Ireland.” | Edited by George M. Atkinson | London: |
George Bell & Sons, York street, Covent Garden | 1879.
4º, pp. i-xvi, 1-425.
Swastikas on Ogam stone at Aglish (Ireland), pl. xxiv, pp. 187-189; on
Newton stone Aberdeenshire, (Scot.), pl. xlix, p. 359; Logie stone,
(Scot.), pl. xlviii, p. 358; Bressay, (Scot.), pl. xlvii.
BRINTON, Daniel G. The Ta Ki, the Swastika, and the Cross in America.
Proceedings American Philosophical Society, xxvi, 1889, pp. 177-187.
—— The | Myths of the New World: | A treatise | on the | Symbolism
and Mythology | of the | Red Race of America. | By | Daniel G.
Brinton, A. M., M. D., | Member of the Historical Society of
Pennsylvania, of the Numismatic | and Antiquarian Society of
Philadelphia; Corresponding Member | of the American Ethnological
Society; Author of “Notes | on the Floridian Peninsula,” etc. |
(Design) | New York: | Leypoldt & Holt. | 1868.
8º, pp. i-viii, 1-307.
The cross of Mexico, pp. 95-97, 183-188.
—— American | Hero-Myths. | A study of the Native Religions | of the
Western Continent. | By | Daniel G. Brinton, M. D., | Member of the
American Philosophical Society; the American | Antiquarian Society;
the Numismatic and Antiquarian | Society of Phila., etc.; Author of
“The Myths of | the New World;” “The Religious Senti- | ment,” etc.
| Philadelphia: | H. C. Watts & Co., | 506 Minor Street, | 1882.
8º, pp. i-xvi, 1-251.
Symbol of the cross in Mexico. The rain god, the tree of life, and the god
of strength, p. 122; in Palenque, the four rain gods, p. 155; the
Muscayas, light, sun, p. 222.
BROWNE, G. F. Basket-work figures of men on sculptured stones.
Triquetra.
Archæologia, Vol. l, 1887, pt. 2, p. 291, pl. xxiii, fig. 7.
BURGESS, James. Archæological Survey of Western India. Vol. iv. | Report
| on the | Buddhist Cave Temples | and | Their Inscriptions | Being
Part of | The Results of the Fourth, Fifth, and Sixth Seasons’
Operations | of the Archæological Survey of Western India, | 1876-
77, 1877-78, 1878-79. | Supplementary to the Volume on “Cave
Temples of India.” | By | Jas. Burgess, LL. D., F. R. G. S., | Member
of the Royal Asiatic Society, of the Société Asiatique, &c. |
Archæological Surveyor and Reporter to Government | for Western
and Southern India, | London: | Trübner & Co., Ludgate Hill. | 1883.
| (All rights reserved.)
Folio, pp. 140.
Inscriptions with Swastika, vol. iv, pls. xliv, xlvi, xlvii, xlix, l, lii, lv; vol. v,
pl. li.
—— The | Indian Antiquary, | A Journal of Oriental Research | in |
Archæology, History, Literature, Languages, Folk-Lore, &c., &c., |
Edited by | Jas. Burgess, M. R. A. S., F. R. G. S. | 3 vols., 1872-74, |
Bombay: | Printed at the “Times of India” Office. | London: Trübner
& Co. Paris: E. Leroux. Berlin: Asher & Co. Leipzig: F. A. Brockhaus.
| New York: Westermann & Co. Bombay: Thacker, Vining & Co.
4º, Vols. i-iii.
Twenty-four Jain Saints, Suparsva, son of Pratishtha by Prithoi, one of
which signs was the Swastika. Vol. ii, p. 135.
BURNOUF, Emile. Le | Lotus de la Bonne Loi, | Traduit du Sanscrit, |
Accompagné d’un Commentaire | et de Vingt et un Mémoires
Relatifs au Buddhisme, | par M. E. Burnouf, | Secrétaire Perpétuel
de l’Académie des Inscriptions et Belles Lettres. | (Picture) | Paris. |
Imprimé par Autorisation du Gouvernement | à l’Imprimerie
Nationale. | MDCCCLII.
Folio, pp. 1-897.
Svastikaya, Append. viii, p. 625.
Nandavartaya, p. 626.
—— The | Science of Religions | by Emile Burnouf | Translated by Julie
Liebe | with a preface by | E. J. Rapson, M. A., M. R. A. S. | Fellow
of St. John’s College, Cambridge | London | Swan, Sonnenschein,
Lowrey & Co., | Paternoster Square. | 1888.
Swastika, its relation to the myth of Agni, the god of fire, and its alleged
identity with the fire-cross, pp. 165, 253-256, 257.
BURTON, Richard F. The | Book of the Sword | by | Richard F. Burton |
Maître d’Armes (Brevette) | (Design) | With Numerous Illustrations |
London | Chatto and Windus, Piccadilly | 1884 | (All rights
reserved).
4º, pp. 299.
Swastika sect, p. 202, note 2.
CARNAC, H. Rivett. Memorandum on Clay Disks called “Spindle-whorls”
and votive Seals found at Sankisa, Behar, and other Buddhist ruins
in the Northwestern provinces of India. (With three plates).
Journal Asiatic Society of Bengal, Vol. xlix, pt. 1, 1880, pp. 127-137.
CARTAILHAC, Émile. Résultats d’Une Mission Scientifique | du | Ministère
de l’Instruction Publique | Les | âges Préhistoriques | de | l’Espagne
et du Portugal | par | M. Émile Cartailhac, | Directeur des Matériaux
pour l’Histoire primitive de l’homme | Préface par M. A. De
Quatrefages, de l’Institut | Avec Quatre Cent Cinquante Gravures et
Quatre Planches | Paris | Ch. Reinwald, Libraire | 15, Rue des Saints
Pères, 15 1886 | Tous droits réservés.
4º, pp. i-xxxv, 1-347.
Swastika, p. 285.
Triskelion, p. 286.
Tetraskelion, p. 286.
Swastika in Mycenæ and Sabraso.—Are they of the same antiquity?, p.
293.
CENTURY DICTIONARY.
Titles, Swastika, Fylfot.
CESNOLA, Louis Palma Di. Cyprus: | Its Ancient Cities, Tombs, and
Temples. | A Narrative of Researches and Excavations During | Ten
Years’ Residence in that Island. | By | General Louis Palma Di
Cesnola, | * * * | * * | With Maps and Illustrations. * * | New York:
| Harper Brothers, Publishers, | Franklin Square. | 1877.
8º, pp. 1-456.
Swastika on Cyprian pottery, pp. 210, 300, 404, pls. xliv, xlv, xlvii.

CHAILLU, Paul B. Du. The Viking Age | The Early History | Manners and
Customs of the Ancestors | of the English-Speaking Nations |
Illustrated from | The Antiquities Discovered in Mounds, Cairns, and
Bogs, | As Well as from the Ancient Sagas and Eddas. | By | Paul B.
Du Chaillu | Author of “Explorations in Equatorial Africa,” “Land of
the Midnight Sun,” etc. | With 1366 Illustrations and Map. | In Two
Volumes * * | New York: | Charles Scribner’s Sons. | 1889.
8º, i, pp. i-xx, 1-591; ii, pp. i-viii, 1-562.
Swastika in Scandinavia. Swastika and triskelion, Vol. i, p. 100, and note
1; Vol. ii, p. 343. Swastika, Cinerary urn, Bornholm, Vol. i, fig. 210, p.
138. Spearheads with runes, Swastika and Triskelion, Torcello, Venice,
fig. 335, p. 191. Tetraskelion on silver fibula, Vol. i, fig. 567, p. 257, and
Vol. ii, fig. 1311, p. 342. Bracteates with Croix swasticale, Vol. ii, p. 337,
fig. 1292.
CHANTRE, Ernest. Études Paléoethnologiques | dans le Bassin du Rhône
| Âge du Bronze | Recherches | sur l’Origine de la Métallurgie en
France | Par | Ernest Chantre | Première Partie | Industrie de l’Âge
du Bronze | Paris, | Librairie Polytechnique de J. Baudry | 15, Rue
Des Saints-Pères, 15 | mdccclxxv.
Folio, pp. 1-258.
—— Deuxième Partie. Gisements de l’Âge du Bronze. pp. 321.
—— Troisième Partie. Statistique. pp. 245.
Swastika migration, p. 206. Oriental origin of the prehistoric Sistres or
tintinnabula found in Swiss lake dwellings, Vol. i, p. 206.
Spirals, Vol. ii, fig. 186, p. 301.
—— Notes Anthropologiques: De l’Origine Orientale de la Métallurgie. In-
8, avec planches. Lyon, 1879.
—— Notes Anthropologiques. Relations entre les Sistres Bouddhiques et
certains Objets Lacustres de l’Age du Bronze. In-8. Lyon, 1879.
—— L’Âge de la Pierre et l’Âge du Bronze en Troade et en Grèce. In-8.
Lyon, 1874.
—— L’Âge de la Pierre et l’Âge du Bronze dans l’Asie Occidentale. (Bull.
Soc. Anth., Lyon, t. I, fasc. 2, 1882.)
—— Prehistoric Cemeteries in Caucasus. (Nécropoles préhistoriques du
Caucase, renferment des crânes macrocéphales.)

Matériaux, seizième année (16), 2e série, xii, 1881.
Swastika, p. 166.
CHAVERO, D. Alfredo. Mexico | A Través de los Siglos | Historia General
y Completa del Desenvolvimiento Social, | Político, Religioso, Militar,
Artistico, Científico, y Literario de México desde la Antigüedad | Más
Remota hasta la Época Actual | * * | Publicada bajo la Dirección del
General | D. Vicente Riva Palacio | * | * | * | * | * | Tomo Primero |
Historia Antigua y de la Conquista | Escrita por el Licenciado | D.
Alfredo Chavero. | México | Ballesca y Comp.a, Editores | 4, Amor
de Dios, 4.
Folio, pp. i-lx, 1-926.
Ciclo de 52 años. (Atlas del P. Diego Duran, p. 386.) Swastika worked on
shell (Fains Island), “labrado con los cuatro puntos del Nahui Ollin.” p.
676.
CLAVIGERO, C. F. Storia Antica del Messico. Cesena, 1780.
Swastika, ii, p. 192, fig. A. Cited in Hamy’s Decades Américanæ,
Première Livraison, 1884, p. 67.
CONDER, Maj. C. R. Notes on Herr Schick’s paper on the Jerusalem
Cross.
Palestine Exploration Fund, Quarterly Statement, London, July, 1894, pp.
205, 206.
CROOKE, W. An Introduction | to the | Popular Religion and Folk-lore | of
| Northern India | By W. Crooke, B. A. | Bengal Civil Service. |
Honorary Director of the Ethnographical Survey, Northwestern |
Provinces and Oudh | Allahabad | Government Press | 1894.
8º, pp. i-ii, 1-420.
Swastika, pp. 7, 58, 104, 250.
CROSS, The. The Masculine Cross, or History of Ancient and Modern
Crosses, and their Connection with the Mysteries of Sex Worship;
also an account of the Kindred Phases of Phallic Faiths and
Practices.
In Cat. 105 of Ed. Howell, Church street, Liverpool.
D’ALVIELLA, le Comte Goblet. La | Migration des Symboles | par | Le
Comte Goblet d’Alviella, | Professeur d’Histoire des Religions à
l’Université de Bruxelles, | Membre de l’Académie Royale de
Belgique, | Président de la Société d’Archéologie de Bruxelles |
(Design, Footprint of Buddha) | Paris | Ernest Leroux, Editeur | Rue
Bonaparte, 28 | 1891.
8º, pp. 1-343.
Cross, pp. 16, 110, 113, 164, 250, 264, 330, 332.
Crux ansata, pp. 22, 106, 107, 114, 186, 221, 229, 250, 265, 332.
Cross of St. Andrew, p. 125.
Swastika cross, Cap. II, passim, pp. 41-108, 110, 111, 225, 271, 339.
Tetraskelion. Same references.
Triskele, triskelion, or triquetrum, pp. 27, 28, 61, 71, 72, 83, 90, 100,
221-225, 271, 339.
Reviewed in Athenæum, No. 3381, Aug. 13, 1892, p. 217.
Favorably criticised in Reliquary Illustrated Archæologist (Lond.), Vol. i,
No. 2, Apr. 1895, p. 107.
DAVENPORT.——Aphrodisiacs.
The author approves Higgins’ views of the Cross and its Relation to the
Lama of Tibet.
DENNIS, G. The | Cities and Cemeteries | of | Etruria. | Parva Tyrrhenum
per aequor vela darem. Horat. | (Picture) | By George Dennis. |
Third Edition. | In two volumes | * * * | With maps, plans, and
illustrations. | London: | John Murray, Albemarle Street. | 1883.
8º, two vols.: (1), pp. i-cxxviii, 1-501; (2) pp. i-xv, 1-579.
Archaic Greek vase, British Museum. Four different styles of Swastikas
together on one specimen. Vol. i, p. xci.
Swastika, common form of decoration, p. lxxxix.
Primitive Greek Lebes, with Swastika in panel, left, p. cxiii, fig. 31.
Swastika on bronze objects in Bologna foundry. Vol. ii, p. 537.
D’EICHTAL, G. Etudes sur les origines bouddhiques de la civilization
américaine, 1re partie. Paris, Didier, 1862.
Swastika, p. 36 et suiv. Cited in Hamy’s Decades Américanæ, Première
Livraison, 1884, p. 59.
DICTIONNAIRE DES SCIENCES Anthropologiques. Anatomie, Crâniologie,
Archéologie Préhistorique, Ethnographie (Mœurs, Arts, Industrie),
Démographie, Langues, Religions. Paris, Octave Doin, Éditeur, 8,
Place de l’Odéon, Marpon et Flammarion, Libraires 1 à 7, Galeries de
l’Odéon.
4º, pp. 1-1128.
Title, Swastika, Philippe Salmon, p. 1032.
DORSEY, J. Owen. Swastika, Ogee (tetraskelion), symbol for wind-song
on Sacred Chart of Kansa Indians.
Am. Naturalist, xix (1885), p. 676, pl. xx, fig. 4.
DULAURE, J. A. Histoire Abrégée | de | Différens Cultes. | Des Cultes |
qui ont précédé et amené l’Idolatrie | ou | l’Adoration des figures
humaines | par J. A. Dulaure; seconde édition | revue, corrigée et
augmentée | Paris | Guillaume, Libraire-Editeur | rue Hautefeuille
14. | 1825.
Two vols.: (1), pp. i-x, 11-558; (2), pp. i-xvi, 17-464.
Origin of symbols, works of art and not natural things, Vol. i, pp. 25, 26.
Another result of a combination of ideas, p. 45.
The cross represents the phallus, Vol. ii, pp. 58, 59, 167, 168.
DUMOUTIER, Gustave. Le Swastika et la roue solaire en Chine.
Revue d’ Ethnographie, Paris, iv, 1885, pp. 327-329.
Review by G. De Mortillet, Matériaux pour l’Histoire Primitive et Naturelle
de L’Homme, ii, p. 730.
EMERSON, Ellen Russell. Indian Myths | or | Legends, Traditions, and
Symbols of the | Aborigines of America | Compared with those of
other Countries, including Hindostan, Egypt, Persia | Assyria and
China | by Ellen Russell Emerson | Member of the Société
Américaine de France | illustrated | Second Edition | London |
Trübner & Company | Ludgate Hill | Printed in the U. S. A.
8º, pp. i-x, 1-425.
ENCYCLOPÆDIC DICTIONARY.
Titles, Ansated Cross (Crux ansata), p. 230, Vol. I; Cross, p. 1302, Vol.
II; Crux, p. 1378, Vol. II; Fylfot, p. 2240, Vol. II; Gammadion, p. 2256,
Vol. II.
ENCYCLOPEDIA BRITANNICA.
Title, Cross, 4º, pp. 539-542.
ENGLEHARDT, C. Influence Classique sur | le Nord Pendant l’Antiquité |
par | C. Englehardt. | Traduit par | E. Beauvois. | Copenhague, |
Imprimerie de Thiele. | 1876.
8º, pp. 199-318.
Solar disks, fig. 44, p. 240. Crosses, figs. 64, 65, p. 252.
ETHNOLOGY, Reports of the Bureau of. Second Annual Report, 1880-81.
Art in Shell of the Ancient Americans, by W. H. Holmes. pp. 179-305, pls.
xxi-lxxvii.

Collections made in New Mexico and Arizona in 1879, by James
Stevenson. pp. 307-422, figs. 347-697.
Third Annual Report, 1881-82.
Catalogue of Collections made in 1881, by W. H. Holmes. pp.
427-510, figs. 116-200.
Fourth Annual Report, 1882-83.
Ancient Pottery of the Mississippi Valley, by W. H. Holmes. pp.
361-436, figs. 361-463.
Fifth Annual Report, 1883-84.
Burial Mounds of Northern Sections of the United States, by
Cyrus Thomas. pp. 3-119, pls. i-vi, figs. 1-49.
The Mountain Chant, by Washington Matthews. pp. 379-407,
pls. x-xviii, figs. 50-59.
Sixth Annual Report, 1884-85.
Ancient Art in the Province of Chiriqui, by W. H. Holmes. pp.
3-187, pl. i, figs. 1-285.
Tenth Annual Report, 1888-89.
Picture writing of the American Indians, by Garrick Mallery.
pp. 3-807, pls. i-liv, figs. 1-1290.
Twelfth Annual Report, 1890-91.
Mound Explorations, by Cyrus Thomas. pp. 3-730, pls. i-xlii,
figs. 1-344.
EVANS, John. The Ancient | Bronze Implements, | Weapons, and
Ornaments, | of | Great Britain | and | Ireland. | By | John Evans, D.
C. L., LL. D., F. R. S., | F. S. A., F. G. S., Pres. Num. Soc., &c., |
London: | Longmans, Green & Co. | 1881. | (All rights reserved.)
8º, pp. i-xix. 1-509.
—— The Ancient | Stone Implements, | Weapons, and Ornaments, | of |
Great Britain, | by | John Evans, F. R. S., F. S. A. | Honorary
Secretary of the Geological and Numismatic Societies of | London,
etc., etc., etc. | London: | Longmans, Green, Reader, and Dyer. |
1872. | (All rights reserved.)
8º, pp. l-xvi, 1-640.
FAIRHOLT, F. W. A Dictionary | of | Terms in Art. | Edited and Illustrated
by | F. W. Fairholt, F. S. A. | with | Five Hundred Engravings | On
Wood | (Design) | Daldy, Isbister & Co. | 56, Ludgate Hill, London.
12º, pp. i-vi, 1-474.
Titles, Cross, Fret, Fylfot, Symbolism.
FERGUSSON, James. Rude Stone Monuments | in | All Countries; | Their
Ages and Uses. | By James Fergusson, D. C. L., F. R. S, | V. P. R. A.
S., F. R. I. B. A., &c. | (Picture.) | With Two Hundred and Thirty-four
Illustrations. |London: | John Murray, Albemarle Street. | 1872. |
The Right of translation reserved.
8º, pp. i-xix, 1-559.
Crosses, Celtic and Scottish, pp. 270-273.
FORRER, R. Die | Graeber- und Textilfunde | von | Achmim-Panopolis |
von | R. Forrer | mit 16 Tafeln: 250 Abbildungen | in Photographie,
Autographie, Farbendruck und theilweisem Handcolorit, nebst
Clinché-Abbildungen | im Text; Text und Tafeln auf Cartonpapier. |
Nur in wenigen nummerirten Exemplaren hergestellt. | (Design.) |
Strassburg, 1891 | Druck von Emil Birkhäuser, Basel. | Photographie
von Mathias Gerschel, Strassburg. | Autographie und Farbendruck
von R. Fretz, Zürich. | Nicht im Buchhandel.
Folio, pp. 1-27.
Swastika, ornament at Achmin-Panopolis, Egypt, p. 20, pl. xi, fig. 3.
FRANKLIN, Colonel. [Swastika an emblem used in the worship of
specified sects in India.]
The Jeyrees and Boodhists, p. 49, cited in “Ogam Monuments,” by
Brash, p. 189.
FRANKS, Augustus W. Horæ ferales. Pl. 30, fig. 19.
GARDNER, Ernest A. Naukratis. Part II. | By | Ernest A. Gardner, M. A., |
Fellow of Gonville and Caius College, Craven student and formerly
Worts student of the University of Cambridge; | Director of the
British School of Archæology at Athens. | With an Appendix | by | F.
L. L. Griffith, B. A., | of the British Museum, formerly student of the
Egyptian Exploration Fund. | Sixth Memoir of | the Egypt Exploration
Fund. | Published by order of the committee. | London: etc.
Folio, pls. 1-24, pp. 1-92. Swastika in Egypt, Pottery, Aphrodite. Pl. v,
figs. 1, 7; pl. vi, fig. 1; pl. viii, fig. 1.
GREG, P. R. Fret or Key Ornamentation in Mexico and Peru.
Archæologia, Vol. xlvii, 1882, pt. 1, pp. 157-160, pl. vi.
—— Meaning and Origin of Fylfot and Swastika.
Archæologia, Vol. xlviii, 1885, pt. 2, pp. 293, 326, pls. xix, xx, xxi.

GOODYEAR, William H. The Grammar of | the Lotus | A new History of
Classic Ornament | as a | development of Sun Worship | with
Observations on the Bronze Culture of Prehistoric Europe as derived
| from Egypt; based on the study of Patterns | by | Wm. H.
Goodyear, M. A. (Yale, 1867) | Curator Department of Fine Arts in
the Brooklyn Institute of Arts and Sciences | * * * | London: |
Sampson, Low, Marston & Company | Limited | St. Dunstan’s
House, Fetter Lane, Fleet Street, E. C., | 1891.
Chapters on Lotus and Swastika.
GOULD, S. C. The Master’s Mallet or the Hammer of Thor.
Notes and Queries, (Manchester, N. H.), Vol. iii (1886), pp. 93-108.
HADDON, Alfred C. Evolution in Art: | As Illustrated by the | Life-
Histories of Designs. | By | Alfred C. Haddon, | Professor of Zoology,
Royal College of Science, Corresponding | Member of the Italian
Society of Anthropology, etc. | With 8 Plates, and 130 Figures in the
Text. | London: | Walter Scott, Ltd., Paternoster Square. | Charles
Scribner’s Sons, | 153-157 Fifth Avenue, New York. | 1895.
The meaning and distribution of the Fylfot, pp. 282-399.
HAMPEL, Joseph. Antiquités préhistoriques de la Hongrie; Esztergom,
1877. No. 3, pl. xx.
—— Catalogue de l’Exposition préhistorique des Musées de Province;
Budapest, 1876, p. 17.
HAMY, Dr. E. T. Decades Américanæ | Mémoires | d’Archéologie et
d’Ethnographie | Américaines | par | le Dr. E.-T. Hamy |
Conservateur du Musée d’Ethnographie du Trocadéro. | Première
Livraison | (Picture) | Paris | Ernest Leroux, Editeur | Libraire de la
Société Asiatique | de l’École des Langues Orientales Vivantes, etc. |
28, Rue Bonaparte, 28 | 1884.
8º, pp. 1-67.
Le Svastika et la roue solaire en Amérique, pp. 59-67.
HEAD, Barclay V. Synopsis of the Contents | of the | British Museum. |
Department of | Coins and Medals. | A Guide | to the principal gold
and silver | Coins of the Ancients, | from circa B. C. 700 to A. D. 1. |
With 70 Plates. | By | Barclay V. Head, Assistant Keeper of Coins. |
Second Edition. | London: | Printed by order of the Trustees. |
Longmans & Co., Paternoster Row; B. Quaritch, 15, Piccadilly; | A.
Asher & Co., 13, Bedford Street, Covent Garden, and at Berlin; |
Trübner & Co., 57 and 59, Ludgate Hill. | C. Rollin & Feuardent, 61,
Great Russell Street, and 4, Rue de Louvois, Paris. | 1881.
8º, pp. i-viii, 1-128, pl. 70.
Triskelion, (Lycian coins), three cocks’ heads, pl. 3, fig. 35.
Punch-marks on ancient coins representing squares, etc., and not
Swastika. Pl. 1, figs. 1, 3; pl. 4, fig. 24; pl. 4, figs. 7, 8, 10; pl. 5, fig. 16;
pl. 6, figs. 30, 31; pl. 12, figs. 1, 3, 6.
HIGGINS, Godfrey. Anacalypsis | or | attempts to draw aside the veil | of
| the Saitic Isis | or, | an inquiry into the origin | of | Languages,
Nations, and Religions | by | Godfrey Higgins, Esq. | F. S. A., F. R.
Asiat. Soc., F. R. Ast. S. | of Skellow Grange, near Doncaster. |
London | Longman, &c., &c., Paternoster Row | 1836.
Vols. I, II.
Origin of the Cross, Lambh or Lama; official name for Governor is
Ancient Tibetan for Cross. Vol. I, p. 230.
HIRSCHFELD, G. Vasi arcaici Ateniesi. Roma, 1872. Tav. xxxix and xl.

HOLMES, W. H. Art in Shell of the Ancient Americans.
Second Ann. Rep. Bureau of Ethnology, 1880-81.
The cross, pls. xxxvi, lii, liii. Spirals, pls. liv, lv, lvi. Swastika, (shell
gorget, the bird,) pls. lviii, lix. Spider, pl. lxi. Serpent, pls. lxiii, lxiv.
Human face, pl. lxix. Human figure, pls. lxxi, lxxii, lxxiii. Fighting figures,
pl. lxxiv.
—— Catalogue of Bureau Collections made in 1881.
Third Ann. Rep. Bureau of Ethnology, 1881-82.
Fighting figures, fig. 128, p. 452.
Swastika in shell, from Fains Island, fig. 140, p. 466.
Spider, same, fig. 141.
Spirals on pottery vase, fig. 165, p. 484.
—— Ancient Pottery of the Mississippi Valley.
Fourth Ann. Rep. Bureau of Ethnology, 1882-83.
Spirals on pottery, figs. 402, p. 396; 413, p. 403; 415, 416, p. 404; 435,
p. 416; 442, p. 421; in basketry, fig. 485, p. 462.
Maltese cross, fig. 458, p. 430.
—— Ancient Art in the Province of Chiriqui.
Sixth Ann. Rep. Bureau of Ethnology, 1884-85.
Conventional alligator, series of derivations showing stages of
simplification of animal characters, figs. 257 to 528, pp. 173-181.
Spindle-whorls, Chiriqui, figs. 218-220, p. 149.
—— The Cross used as a Symbol by the Ancient Americans.
Trans. Anthrop. Soc., Washington, D. C., ii, 1883.
HUMPHREYS, H. Noel. The | Coin Collector’s Manual, | or guide to the
numismatic student in the formation of | A Cabinet of Coins: |
Comprising | An Historical and Critical Account of the Origin and
Progress | of Coinage from the Earliest Period to the | Fall of the
Roman Empire; | with | Some Account of the Coinages of Modern
Europe, | More especially of Great Britain. | By H. Noel Humphreys,
| Author of “The Coins of England,” “Ancient Coins and Medals,” |
etc., etc. | With above one hundred and fifty illustrations | on Wood
and Steel. | In two volumes. | London: | H. G. Bohn, York Street,
Covent Garden. | 1853.
12º, (1), pp. i-xxiv, 1-352; (2), pp. 353-726.
Punch-marks on ancient coins, Vol. i. pls. 2, 3, 4. Triquetrum, triskele or
triskelion on coins of Sicily, Vol. i, p. 57, and note.
KELLER, Ferdinand. The | Lake Dwellings | of | Switzerland and Other
Parts of Europe. | By | Dr. Ferdinand Keller | President of the
Antiquarian Association of Zürich | Second Edition, Greatly Enlarged
| Translated and Arranged | by | John Edward Lee, F. S. A., F. G. S. |
Author of Isca Silurum etc. | In Two Volumes | Vol. I. (Vol. II) |
London | Longmans, Green and Co. | 1878 | All rights reserved.
8º, Vol. i, text, pp. i-xv, 1-696; Vol. ii, pls. ccvi.

Swastika, Lake Bourget, pattern-stamp and pottery imprint, p. 339, note
1, pl. clxi, figs. 3, 4.
LANGDON, Arthur G. Ornaments of Early Crosses of Cornwall.
Royal Institute of Cornwall, Vol. x, pt. 1, May, 1890, pp. 33-96.
LE PLONGEON, Augustus. Sacred Mysteries | Among | the Mayas and the
Quiches, | 11,500 Years Ago. | Their Relation to the Sacred
Mysteries | of Egypt, Greece, Chaldea and India. | Free Masonry |
In Times Anterior to The Temple of Solomon. | Illustrated. | By
Augustus Le Plongeon, | Author of “Essay on | the Causes of
Earthquakes;” “Religion of Jesus Compared with the | Teachings of
the Church;” “The Monuments of Mayas and | their Historical
Teachings.” | New York: | Robert Macoy, 4 Barclay Street. | 1886.
8º, pp. 163.
Cross and Crux ansata, p. 128.
—— Mayapan and Maya Inscriptions.
Proc. Am. Antiq. Soc., Worcester, Mass., April 21, 1881.
Also printed as a separate. See pp. 15, 17, and figs. 7, 13, and
frontispiece.
LITTRÉ’S FRENCH DICTIONARY. Title, Svastika.
McADAMS, William. Records | of | Ancient Races | in the | Mississippi
Valley; | Being an account of some of the Pictographs, sculptured |
hieroglyphics, symbolic devices, emblems, and tra- | ditions of the
prehistoric races of America, with | some suggestions as to their
origin. | With cuts and views illustrating over three hundred objects
| and symbolic devices. | By Wm. McAdams, | Author of * | * | * | *
| * | St. Louis: | C. R. Barns Publishing Co. | 1887.
4º, pp. i-xii, 1-120.
Mound vessels with painted symbols, sun symbols, cross symbols, cross
with bent arms (Swastika), etc., Chap. xv, pp. 62-68.
Cites Lord Kingsborough, “Antiquities of Mexico,” for certain forms of the
cross, of which the first is the Swastika and the third the Nandavartaya.
Chap. xvii, pp. 62-68.
MACRITCHIE, David. Ancient | and | Modern Britons: | A Retrospect. |
London: | Kegan Paul, Trench & Co., | 1 Paternoster Square. | 1884.
Two vols., 8º. (1), pp. i-viii, 1-401; (2), i-viii, 1-449.
Sculptured stones of Scotland (p. 115), the Newton stone, a compound
of Oriental and western languages (pp. 117-118). Ethnologic
resemblances between old and new world peoples considered. Vol. ii
(app.).
MALLERY, Garrick. Picture writing of the American Indians.
Tenth Ann. Rep. Bureau of Ethnology, 1888-89, pp. 1-807, pls. i-liv, figs.
1-1290.
Sun and star symbols, figs. 1118-1129, pp. 694-697. Human form
(cross) symbols, figs. 1164-1173, pp. 705-709. Cross symbols, figs.
1225-1234, pp. 724-730. Piaroa color stamps, fret pattern, fig. 982, p.
621.
MARCH, H. Colley. The Fylfot and the Futhorc Tir.
Cited in Transactions of the Lancashire and Cheshire Antiquarian Society,
1886.
MASSON, ——. [The Swastika found on large rock near Karachi.]
Balochistan, Vol. iv, p. 8, cited in Ogam Monuments, by Brash, p. 189.
MATÉRIAUX pour l’Histoire Primitive et Naturelle de l’Homme. Revue
mensuelle illustrée. (Fondée par M. G. De Mortillet, 1865 à 1868.)
Dirigée par M. Émile Cartailhac. * * *
Swastika, Vol. xvi, 1881.
Prehistoric Cemeteries in Caucasus, by E. Chantre, pp. 154-166.
Excavations at Cyprus, by General di Cesnola, p. 416.
Signification of the Swastika, by M. Girard de Reale, p. 548.
Swastika, Vol. xviii, 1884.
Étude sur quelques Nécropoles Halstattiennes de l’Autriche et de l’Italie.
By Ernest Chantre, Swastika on Archaic Vase, fig. 5, p. 8. Croix Gammée,
figs. 12 and 13, p. 14. Cross, p. 122. Swastika, pp. 137-139. Swastika
sculpté sur pierre, Briteros, Portugal, fig. 133, p. 294.
Necropolis of Halstatt, pp. 13, 14; p. 139, fig. 84; p. 280, Report of
spearhead with Swastika and runic inscription, found at Torcello, near
Venice, by Undset.
Swastika, Vol. xx, 1886.
Frontispiece of January number. Swastika from Museum, Mayence.
MATTHEWS, Washington. The Mountain Chant.
Fifth Ann. Rep. Bureau of Ethnology, 1883-84, pp. 379-467, pls. x-xviii,
figs. 50-59.