Advances in Intelligent Systems and Computing 943

Kohei Arai
Supriya Kapoor Editors

Advances in
Computer
Vision
Proceedings of the 2019 Computer
Vision Conference (CVC), Volume 1
Advances in Intelligent Systems and Computing

Volume 943

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro,
Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications
on theory, applications, and design methods of Intelligent Systems and Intelligent
Computing. Virtually all disciplines such as engineering, natural sciences, computer
and information science, ICT, economics, business, e-commerce, environment,
healthcare, life science are covered. The list of topics spans all the areas of modern
intelligent systems and computing such as: computational intelligence, soft comput-
ing including neural networks, fuzzy systems, evolutionary computing and the fusion
of these paradigms, social intelligence, ambient intelligence, computational neuro-
science, artificial life, virtual worlds and society, cognitive science and systems,
Perception and Vision, DNA and immune based systems, self-organizing and
adaptive systems, e-Learning and teaching, human-centered and human-centric
computing, recommender systems, intelligent control, robotics and mechatronics
including human-machine teaming, knowledge-based paradigms, learning para-
digms, machine ethics, intelligent data analysis, knowledge management, intelligent
agents, intelligent decision making and support, intelligent network security, trust
management, interactive entertainment, Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are
primarily proceedings of important conferences, symposia and congresses. They
cover significant recent developments in the field, both of a foundational and
applicable character. An important characteristic feature of the series is the short
publication time and world-wide distribution. This permits a rapid and broad
dissemination of research results.

** Indexing: The books of this series are submitted to ISI Proceedings,
EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **

More information about this series at http://www.springer.com/series/11156


Kohei Arai · Supriya Kapoor

Editors

Advances in
Computer Vision
Proceedings of the 2019 Computer Vision
Conference (CVC), Volume 1

Editors

Kohei Arai
Saga University
Saga, Saga, Japan

Supriya Kapoor
The Science and Information (SAI) Organization
Bradford, West Yorkshire, UK

ISSN 2194-5357 ISSN 2194-5365 (electronic)


Advances in Intelligent Systems and Computing
ISBN 978-3-030-17794-2 ISBN 978-3-030-17795-9 (eBook)
https://doi.org/10.1007/978-3-030-17795-9
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

It gives us great pleasure to welcome all the participants of the Computer Vision
Conference (CVC) 2019, organized by The Science and Information
(SAI) Organization, based in the UK. CVC 2019 offered participants a place to
present and discuss their innovative recent and ongoing research and its
applications. The prestigious conference was held on 25–26 April 2019 in
Las Vegas, Nevada, USA.
Computer vision is a field of computer science that enables computers to
identify, see and process information much as humans do and to provide
appropriate results. Nowadays, computer vision is developing at a
fast pace and has gained enormous attention.
The volume and quality of the technical material submitted to the conference
confirm the rapid expansion of computer vision and CVC’s status as its flagship
conference. We believe the research presented at CVC 2019 will contribute to
strengthening the great success of computer vision technologies in industrial,
entertainment, social and everyday applications. The participants of the conference
came from different regions of the world, with backgrounds in either academia or
industry.
The published proceedings have been divided into two volumes, covering a
wide range of topics in Machine Vision and Learning, Computer Vision
Applications, Image Processing, Data Science, Artificial Intelligence, Motion and
Tracking, 3D Computer Vision, Deep Learning for Vision, etc. After rigorous
peer review, 118 papers were selected for publication from the 371 submissions,
including 7 poster papers. These papers benefited from the guidance of many
experts, scholars and participants during the preparation of the proceedings, and
we would like to give our sincere thanks to all who contributed efforts and
support to its publication.
Many thanks go to the Keynote Speakers for sharing their knowledge and
expertise with us and to all the authors who have spent the time and effort to
contribute significantly to this conference. We are also indebted to the organizing
committee for their great efforts in ensuring the successful implementation of the


conference. In particular, we would like to thank the technical committee for their
constructive and enlightening reviews of the manuscripts within the limited timescale.
We hope that all the participants and interested readers benefit scientifically
from this book and find it stimulating. See you at the next SAI
Conference, with the same amplitude, focus and determination.

Regards,
Kohei Arai
Contents

Deep Learning for Detection of Railway Signs and Signals . . . . . . . . . . 1
Georgios Karagiannis, Søren Olsen, and Kim Pedersen
3D Conceptual Design Using Deep Learning . . . . . . . . . . . . . . . . . . . . . 16
Zhangsihao Yang, Haoliang Jiang, and Lan Zou
The Effect of Color Channel Representations on the Transferability
of Convolutional Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Javier Diaz-Cely, Carlos Arce-Lopera, Juan Cardona Mena,
and Lina Quintero
Weakly Supervised Deep Metric Learning for Template Matching . . . . 39
Davit Buniatyan, Sergiy Popovych, Dodam Ih, Thomas Macrina,
Jonathan Zung, and H. Sebastian Seung
Nature Inspired Meta-heuristic Algorithms for Deep Learning:
Recent Progress and Novel Perspective . . . . . . . . . . . . . . . . . . . . . . . . . 59
Haruna Chiroma, Abdulsalam Ya’u Gital, Nadim Rana,
Shafi’i M. Abdulhamid, Amina N. Muhammad, Aishatu Yahaya Umar,
and Adamu I. Abubakar
Transfer Probability Prediction for Traffic Flow with Bike
Sharing Data: A Deep Learning Approach . . . . . . . . . . . . . . . . . . . . . . 71
Wenwen Tu and Hengyi Liu
CanvasGAN: A Simple Baseline for Text to Image Generation
by Incrementally Patching a Canvas . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Amanpreet Singh and Sharan Agrawal
Unsupervised Dimension Reduction for Image Classification
Using Regularized Convolutional Auto-Encoder . . . . . . . . . . . . . . . . . . . 99
Chaoyang Xu, Ling Wu, and Shiping Wang


ISRGAN: Improved Super-Resolution Using Generative
Adversarial Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Vishal Chudasama and Kishor Upla
Deep Learning vs. Traditional Computer Vision . . . . . . . . . . . . . . . . . . 128
Niall O’Mahony, Sean Campbell, Anderson Carvalho,
Suman Harapanahalli, Gustavo Velasco Hernandez, Lenka Krpalkova,
Daniel Riordan, and Joseph Walsh
Self-localization from a 360-Degree Camera Based on the Deep
Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Shintaro Hashimoto and Kosuke Namihira
Deep Cross-Modal Age Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Ali Aminian and Guevara Noubir
Multi-stage Reinforcement Learning for Object Detection . . . . . . . . . . . 178
Jonas König, Simon Malberg, Martin Martens, Sebastian Niehaus,
Artus Krohn-Grimberghe, and Arunselvan Ramaswamy
Road Weather Condition Estimation Using Fixed and Mobile
Based Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Koray Ozcan, Anuj Sharma, Skylar Knickerbocker, Jennifer Merickel,
Neal Hawkins, and Matthew Rizzo
Robust Pedestrian Detection Based on Parallel Channel
Cascade Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Jiaojiao He, Yongping Zhang, and Tuozhong Yao
Novel Scheme for Image Encryption and Decryption Based
on a Hermite-Gaussian Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Mohammed Alsaedi
MAP Interpolation of an Ising Image Block . . . . . . . . . . . . . . . . . . . . . 237
Matthew G. Reyes, David L. Neuhoff, and Thrasyvoulos N. Pappas
Volumetric Data Exploration with Machine Learning-Aided
Visualization in Neutron Science . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Yawei Hui and Yaohua Liu
License Plate Character Recognition Using Binarization
and Convolutional Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Sandeep Angara and Melvin Robinson
3D-Holograms in Real Time for Representing Virtual Scenarios . . . . . . 284
Jesús Jaime Moreno Escobar, Oswaldo Morales Matamoros,
Ricardo Tejeida Padilla, and Juan Pablo Francisco Posadas Durán

A Probabilistic Superpixel-Based Method for Road Crack
Network Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
J. Josiah Steckenrider and Tomonari Furukawa
Using Aerial Drone Photography to Construct 3D Models of Real
World Objects in an Effort to Decrease Response Time
and Repair Costs Following Natural Disasters . . . . . . . . . . . . . . . . . . . . 317
Gil Eckert, Steven Cassidy, Nianqi Tian, and Mahmoud E. Shabana
Image Recognition Model over Augmented Reality Based on
Convolutional Neural Networks Through Color-Space Segmentation . . . . 326
Andrés Ovidio Restrepo-Rodríguez, Daniel Esteban Casas-Mateus,
Paulo Alonso Gaona-García, and Carlos Enrique Montenegro-Marín
License Plate Detection and Recognition: An Empirical Study . . . . . . . . 339
Md. J. Rahman, S. S. Beauchemin, and M. A. Bauer
Automatic Object Segmentation Based on GrabCut . . . . . . . . . . . . . . . . 350
Feng Jiang, Yan Pang, ThienNgo N. Lee, and Chao Liu
Vertebral Body Compression Fracture Detection . . . . . . . . . . . . . . . . . . 361
Ahmet İlhan, Şerife Kaba, and Enver Kneebone
PZnet: Efficient 3D ConvNet Inference on Manycore CPUs . . . . . . . . . . 369
Sergiy Popovych, Davit Buniatyan, Aleksandar Zlateski, Kai Li,
and H. Sebastian Seung
Evaluating Focal Stack with Compressive Sensing . . . . . . . . . . . . . . . . . 384
Mohammed Abuhussein and Aaron L. Robinson
SfM Techniques Applied in Bad Lighting and Reflection
Conditions: The Case of a Museum Artwork . . . . . . . . . . . . . . . . . . . . . 394
Laura Inzerillo
Fast Brain Volumetric Segmentation from T1 MRI Scans . . . . . . . . . . . 402
Ananya Anand and Namrata Anand
No-reference Image Denoising Quality Assessment . . . . . . . . . . . . . . . . . 416
Si Lu
Plant Leaf Disease Detection Using Adaptive
Neuro-Fuzzy Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Hiteshwari Sabrol and Satish Kumar
Fusion of CNN- and COSFIRE-Based Features with Application
to Gender Recognition from Face Images . . . . . . . . . . . . . . . . . . . . . . . 444
Frans Simanjuntak and George Azzopardi

Standardization of the Shape of Ground Control Point (GCP)
and the Methodology for Its Detection in Images for UAV-Based
Mapping Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Aman Jain, Milind Mahajan, and Radha Saraf
Non-linear-Optimization Using SQP for 3D Deformable Prostate
Model Pose Estimation in Minimally Invasive Surgery . . . . . . . . . . . . . 477
Daniele Amparore, Enrico Checcucci, Marco Gribaudo,
Pietro Piazzolla, Francesco Porpiglia, and Enrico Vezzetti
TLS-Point Clouding-3D Shape Deflection Monitoring . . . . . . . . . . . . . . 497
Gichun Cha, Byungjoon Yu, Sehwan Park, and Seunghee Park
From Videos to URLs: A Multi-Browser Guide to Extract
User’s Behavior with Optical Character Recognition . . . . . . . . . . . . . . . 503
Mojtaba Heidarysafa, James Reed, Kamran Kowsari,
April Celeste R. Leviton, Janet I. Warren, and Donald E. Brown
3D Reconstruction Under Weak Illumination Using
Visibility-Enhanced LDR Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
Nader H. Aldeeb and Olaf Hellwich
DynFace: A Multi-label, Dynamic-Margin-Softmax Face
Recognition Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Marius Cordea, Bogdan Ionescu, Cristian Gadea, and Dan Ionescu
Towards Resolving the Kidnapped Robot Problem: Topological
Localization from Crowdsourcing and Georeferenced Images . . . . . . . . 551
Sotirios Diamantas
Using the Z-bellSM Test to Remediate Spatial Deficiencies
in Non-Image-Forming Retinal Processing . . . . . . . . . . . . . . . . . . . . . . . 564
Clark Elliott, Cynthia Putnam, Deborah Zelinsky, Daniel Spinner,
Silpa Vipparti, and Abhinit Parelkar
Learning of Shape Models from Exemplars of Biological Objects
in Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
Petra Perner
A New Technique for Laser Spot Detection and Tracking
by Using Optical Flow and Kalman Filter . . . . . . . . . . . . . . . . . . . . . . . 600
Xiuli Wang, Ming Yang, Lalit Gupta, and Yang Bai
Historical Document Image Binarization Based on Edge
Contrast Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
Zhenjiang Li, Weilan Wang, and Zhengqi Cai
Development and Laboratory Testing of a Multipoint Displacement
Monitoring System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
Darragh Lydon, Su Taylor, Des Robinson, Necati Catbas, and Myra Lydon

Quantitative Comparison of White Matter Segmentation
for Brain MR Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
Xianping Li and Jorgue Martinez
Evaluating the Implementation of Deep Learning in LibreHealth
Radiology on Chest X-Rays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
Saptarshi Purkayastha, Surendra Babu Buddi, Siddhartha Nuthakki,
Bhawana Yadav, and Judy W. Gichoya
Illumination-Invariant Face Recognition by Fusing Thermal
and Visual Images via Gradient Transfer . . . . . . . . . . . . . . . . . . . . . . . 658
Sumit Agarwal, Harshit S. Sikchi, Suparna Rooj,
Shubhobrata Bhattacharya, and Aurobinda Routray
An Attention-Based CNN for ECG Classification . . . . . . . . . . . . . . . . . . 671
Alexander Kuvaev and Roman Khudorozhkov
Reverse Engineering of Generic Shapes Using Quadratic Spline
and Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
Misbah Irshad, Munazza Azam, Muhammad Sarfraz,
and Malik Zawwar Hussain
Bayesian Estimation for Fast Sequential Diffeomorphic
Image Variability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
Youshan Zhang
Copyright Protection and Content Authentication Based on
Linear Cellular Automata Watermarking for 2D Vector Maps . . . . . . . 700
Saleh AL-ardhi, Vijey Thayananthan, and Abdullah Basuhail
Adapting Treemaps to Student Academic Performance Visualization . . . 720
Samira Keivanpour
Systematic Mobile Device Usage Behavior and Successful
Implementation of TPACK Based on University Students Need . . . . . . . 729
Syed Far Abid Hossain, Yang Ying, and Swapan Kumar Saha
Data Analysis of Tourists’ Online Reviews on Restaurants
in a Chinese Website . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
Meng Jiajia and Gee-Woo Bock
Body of Knowledge Model and Linked Data Applied
in Development of Higher Education Curriculum . . . . . . . . . . . . . . . . . 758
Pablo Alejandro Quezada-Sarmiento, Liliana Enciso, Lorena Conde,
Monica Patricia Mayorga-Diaz, Martha Elizabeth Guaigua-Vizcaino,
Wilmar Hernandez, and Hironori Washizaki
Building Adaptive Industry Cartridges Using a Semi-supervised
Machine Learning Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
Lucia Larise Stavarache

Decision Making with Linguistic Information for the Development
of New Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
Zapata C. Santiago, Escobar R. Luis, and Ponce N. Alvaro
Researcher Profile Ontology for Academic Environment . . . . . . . . . . . . 799
Maricela Bravo, José A. Reyes-Ortiz, and Isabel Cruz
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
Deep Learning for Detection of Railway
Signs and Signals

Georgios Karagiannis1,2(B) , Søren Olsen1 , and Kim Pedersen1


1
Department of Computer Science, University of Copenhagen,
2200 Copenhagen, Denmark
{geka,ingvor,kimstp}@di.ku.dk
2
COWI A/S, Parallelvej 2, 2800 Lyngby, Denmark
[email protected]

Abstract. Major railway lines need advanced management systems
based on accurate maps of their infrastructure. Asset detection is an
important tool towards automation of processes and improved decision
support on such systems. Due to lack of available data, limited research
exists investigating railway asset detection, despite the rise of Artifi-
cial Neural Networks and the numerous investigations on autonomous
driving. Here, we present a novel dataset used in real world projects for
mapping railway assets. Also, we implement Faster R-CNN, a state of
the art deep learning object detection method, for detection of signs and
signals on this dataset. We achieved 79.36% on detection and a 70.9%
mAP. The results were compromised by the small size of the objects, the
low resolution of the images and the high similarity across classes.

Keywords: Railway · Object detection · Object recognition ·
Deep learning · Faster R-CNN

1 Introduction
The ever-increasing modernisation of signal systems and electrification of major
railway lines lead to increasingly complex railway environments. These
environments require advanced management systems which incorporate a detailed
representation of the network, its assets and surroundings. The aim of such systems
is to facilitate automation of processes and improved decision support,
minimising the requirement for expensive and inefficient on-site activities. Fundamental
requirements are detailed maps and databases of railway assets, such as poles,
signs, wires, switches, cabinets, signalling equipment, as well as the surrounding
environment including trees, buildings and adjacent infrastructure. The detailed
maps may also form the basis for a simulation of the railway as seen from the
train operator’s viewpoint. Such simulations/videos are used in the training of
train operators and support personnel. Ideally, the maps should be constantly
updated to ensure currency of the databases as well as to facilitate detailed
documentation and support of maintenance and construction processes in the
© Springer Nature Switzerland AG 2020
K. Arai and S. Kapoor (Eds.): CVC 2019, AISC 943, pp. 1–15, 2020.
https://doi.org/10.1007/978-3-030-17795-9_1
2 G. Karagiannis et al.

networks. However, with currently available methods, mapping of railway assets
is largely a manual and highly labour-intensive process, limiting the possible
levels of detail and revisit times. The response to this challenge is to automate
railway asset mapping based on different sensor modalities (2D images or 3D
point clouds) acquired from ground or air.
Despite the high demand for automatic asset detection along railways, there
is very little research in this field [1]. Here, we present an approach to detection
of signs and signals as a first step towards automatic generation and update of
maps of railway environments. We implement an object detection model based
on Faster R-CNN (Region-based Convolutional Neural Network) presented by
Ren et al. in [2] on a dataset used to map a railway of 1,700 km in 2015. The
mapping was carried out manually by a private company1 with extensive
experience in such projects. Currently, many such projects exist around the world
and, to the best of our knowledge, are still carried out manually (people go
through all images and mark objects of interest). Our approach aims to show the
performance of an advanced object detection algorithm, such as Faster R-CNN,
on a novel dataset used in a real-world project.

2 Literature Review

2.1 Previous Approaches

The research on automatic object detection along railways is sparse, compared
to the analogous, popular field of road furniture detection, mainly due to the
lack of available railway traffic data [1]. Most of the research is focused on
passenger detection [3,4] or track detection [5–9] for different purposes. The limited
research that exists focuses on detection of only a single type of object (sign
recognition [10], sign detection [11] or wire detection [12]).
Marmo et al. [10] presented a classical approach for railway signal detection.
It is focused on detecting a specific type of signal (single element) in video
frames and classifying it according to the colour of the light (green: pass, red: no
pass). The implementation is based on simple image processing techniques such
as histogram analysis, template matching and shape feature extraction. The
method resulted in 96% detection accuracy and 97% classification accuracy on a
total of 955 images, which is impressive for this type of approach. The advantage
of this method is efficiency; however, it is focused on a very specific type of
signal, and the examples presented are scenes of low complexity.
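The building blocks of such classical pipelines are simple. As a rough illustration of one of them (our own sketch, not Marmo et al.'s implementation; the image and template below are synthetic), template matching by exhaustive normalized cross-correlation can be written directly in numpy:

```python
import numpy as np

def ncc_template_match(image, template):
    """Exhaustive template matching: slide `template` over `image` and
    return the top-left corner of the best window under normalized
    cross-correlation, together with its score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template.astype(float) - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw].astype(float)
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Synthetic check: plant a bright square (a stand-in for a signal head)
# into a noisy image and recover its position.
rng = np.random.default_rng(0)
tmpl = np.zeros((7, 7))
tmpl[2:5, 2:5] = 100.0
img = rng.normal(10.0, 1.0, size=(40, 40))
img[12:19, 20:27] += tmpl
pos, score = ncc_template_match(img, tmpl)
print(pos)  # (12, 20)
```

In practice OpenCV's `cv2.matchTemplate` with `TM_CCOEFF_NORMED` computes the same mean-subtracted score far more efficiently; the loop form is shown only to make the arithmetic explicit.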
Arastounia [12] presented an approach for detection of railway infrastructure
using 3D LiDAR data. The approach is focused on detection of cables related to
the railway (i.e. catenary, contact or return current cables), track bed, rail tracks
and masts. The data covers about 550 m of Austrian railways. The approach is
based mainly on the topology of the objects and their spatial properties. Points
on the track bed are first detected from a spatially local statistical analysis. All
the other objects of interest are recognised depending on their spatial relation
1 Second Affiliation.

with the track bed. The overall average detection achieved by this method is
96.4%. The main drawback of this approach is that it depends on a sophisticated
type of data that requires special equipment to capture and is more complicated
to process.
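The idea of picking out the flat track bed from spatially local statistics can be caricatured in a few lines. The sketch below is our own toy construction, not Arastounia's method; the grid size and spread threshold are invented for the example:

```python
import numpy as np

def flat_cells(points, cell=1.0, max_spread=0.05):
    """Grid 3D points in XY and keep the cells whose height spread is
    below `max_spread` -- candidate track-bed regions. Taller structures
    such as masts produce a large spread and are rejected."""
    cells = {}
    for x, y, z in points:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        cells.setdefault(key, []).append(z)
    return {k for k, zs in cells.items() if max(zs) - min(zs) < max_spread}

# Synthetic scene: a gently undulating flat bed plus a 5 m vertical mast.
bed = [(0.1 * i, 0.1 * j, 0.01 * ((i + j) % 2))
       for i in range(40) for j in range(10)]
mast = [(2.5, 3.5, 0.1 * k) for k in range(50)]
kept = flat_cells(np.array(bed + mast))
print((0, 0) in kept, (2, 3) in kept)  # True False
```

Objects such as masts and cables would then be recognised relative to the recovered bed, mirroring the topology-driven reasoning described above.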
Agudo et al. in [1] presented a real-time railway speed limit and warning
signs recognition method on videos. After noise removal, Canny edge detection
is applied and an optimised Hough voting scheme detects the sign region of
interest on the edge image. Based on gradient directions and distances of points
on the edge of a shape, candidate central points of signs are obtained. Recognition
is then achieved by applying shape criteria, since the signs are either circular, square
or rectangular. The method scored 95.83% overall accuracy on classification.
However, even though the dataset had more than 300,000 video frames, only
382 ground truth signs existed, and the authors do not provide any score for
detection accuracy.
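The gradient-based voting at the core of this detector is easy to sketch. The code below is our own simplified illustration (known radius, ideal edge points), not the authors' optimised scheme: each edge point votes at the given radius along its unit gradient direction, on both sides of the edge, and the accumulator peak is taken as the candidate sign centre:

```python
import numpy as np

def vote_circle_centre(edge_pts, grads, radius, shape):
    """Hough voting for circle centres: every edge point casts a vote
    `radius` pixels along its unit gradient, on both sides of the edge;
    the accumulator maximum is returned as the centre estimate."""
    acc = np.zeros(shape, dtype=int)
    for (y, x), (gy, gx) in zip(edge_pts, grads):
        for s in (1.0, -1.0):  # centre may lie on either side of the edge
            cy = int(round(y + s * gy * radius))
            cx = int(round(x + s * gx * radius))
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy, cx] += 1
    return np.unravel_index(np.argmax(acc), shape)

# Synthetic circular sign: 60 edge points on a circle of radius 10 around
# (25, 30), with radial gradients (as on a bright disc over dark ground).
r = 10
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(25 + r * np.sin(a), 30 + r * np.cos(a)) for a in angles]
grads = [(np.sin(a), np.cos(a)) for a in angles]
cy, cx = vote_circle_centre(pts, grads, r, (50, 60))
print(int(cy), int(cx))  # 25 30
```

A production version would sweep a range of radii (the sign size is unknown) and use `cv2.HoughCircles` or a comparable optimised routine rather than an explicit loop.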

2.2 Convolutional Neural Networks


All of the above approaches use traditional image analysis methods to
solve object detection problems. To the best of our knowledge, there are no
published methods that attempt to solve object detection in railways based on
Convolutional Neural Networks (CNNs). CNNs represent the state of the art
in Computer Vision for classification, detection and semantic segmentation.
Regarding object detection, we can divide most CNN-based methods into
two categories: region-based and single shot methods. The most characteristic
representative of the first is Region-based CNN (R-CNN) and its descendants
Fast, Faster and the recent Mask R-CNNs [2,13–15]. From the second category,
most representative methods are You Only Look Once (YOLO) [16] and Single
Shot MultiBox Detector (SSD) [17]. In general, region-based methods are
considerably slower but more effective. Also, region-based methods show better
performance on smaller objects [2,16]. Given the performance shown in
competitive challenges and the fact that our dataset consists mainly of very small
objects, we consider Faster R-CNN [2] more suitable for our problem.
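Part of what makes region-based detectors effective on small objects is the region proposal network (RPN), which labels a dense set of anchor boxes by their intersection-over-union (IoU) with ground truth. A simplified sketch of that labelling rule follows (thresholds 0.7/0.3 as reported in [2]; the full RPN additionally marks the highest-overlap anchor for each ground-truth box as positive, which this sketch omits):

```python
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def label_anchors(anchors, gt_boxes, pos_thr=0.7, neg_thr=0.3):
    """Label each anchor +1 (object), 0 (background) or -1 (ignored
    during training), following the RPN's IoU thresholds."""
    labels = []
    for a in anchors:
        best = max(iou(a, g) for g in gt_boxes)
        labels.append(1 if best >= pos_thr else (0 if best < neg_thr else -1))
    return labels

gt = [(10, 10, 30, 30)]           # one small ground-truth sign
anchors = [(10, 10, 30, 30),      # exact hit             -> positive
           (12, 12, 32, 32),      # IoU ~ 0.68, ambiguous -> ignored
           (100, 100, 120, 120)]  # no overlap            -> negative
print(label_anchors(anchors, gt))  # [1, -1, 0]
```

Anchors labelled −1 are simply excluded from the RPN loss; for objects as small as the signs in this dataset, the anchor scales must be chosen small enough that some anchors can clear the positive threshold at all.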

3 Data Analysis
3.1 Dataset
In our case, the dataset consists of 47,912 images acquired in 2013, showing the
railway from Brisbane to Melbourne in Australia. The images were acquired on
three different days in the morning, with similar sunlight conditions. The camera
used is a Ladybug3 spherical camera system and the images are panoramic views
of size 5400 × 2700 pixels. The images were annotated manually by the
production team of the company2, resulting in 121,528 instances of
railway signs and signals.
2 Second Affiliation.
4 G. Karagiannis et al.

Fig. 1. Instances of signals and signs. First row from left to right (class name in paren-
theses when it is different from the description): speed sign (Sign S), unit sign (Sign
U), speed standard letter (Sign Other), position light main (PL 2W), position light
separately (PL 2W& 1R), signal with two elements (Signal2 F). Second row: diverging
left-right speed sign (Sign LR), diverging left speed sign (Sign L), signal number (Sign
Signal), signal with one element (Signal1 F), signal with three elements (Signal3 F),
signal with four elements (Signal4 F).

Fig. 2. Instances of speed signs at different lighting conditions, viewpoints and scales.

The samples are originally separated into twenty-five classes. Each class is in
fact a subclass of two parent classes, signs and signals. Fifteen classes correspond
to signals: three different types of position lights with their back and side view,
signals with one, two, three or four elements (lights), their side and back view
and other type of signals. Also, ten classes correspond to different types of signs:
speed signs, diverging left speed signs, diverging right speed signs, diverging
left-right speed signs, unit signs, signal number signs, other signs, back views of
circular signs, back views of diamond signs and back views of rectangular signs.
From the total amount of samples, 67,839 correspond to signals and 53,689 to
signs. Figure 1 shows some instances of signs and signals. Each of these instances
corresponds to a different class. We can see the high similarity among the classes,
both for signs and signals. Specifically, diverging left, diverging right, diverging
left-right and regular speed signs are very similar, especially when they are
smaller than the examples shown in Fig. 1. Similarly, even for humans it is often
hard to distinguish between signals with three or four elements when they are
small. All examples shown here are larger than their average size on the dataset
for clarity.
Figure 2 shows examples of regular speed signs with different viewpoint, size
and illumination. These examples illustrate the scale, viewpoint and illumination
variation of the objects but at the same time the high similarity among the
classes.
Railway Signs and Signals 5

Fig. 3. Instances of signals with one element (Signal1). All four examples belong to
the same class even though they have different characteristics. From left to right: some
have a long top cover (first and second), no top cover at all (third) or a short cover
(last). Also, some have no back cover (first), others have a circular (second and third)
or a semicircular one (last).
Figure 3 shows examples of the same class of signals (Signal1). It is important
to note that the class is one of the least represented in the dataset, with only a
few hundred samples available. Despite the low availability, we can see
that there is significant intra-class variability. The same is observed in all the
other classes of signals except the ones corresponding to position lights.
Figure 4 shows the amount of available samples for each class. These quan-
tities vary widely for the different classes. For instance, there are about 23,000
samples available for the front view of 4-lamp signals, but only a few hundred
for position lights with two lamps or for diverging left-right speed signs. Our
dataset reflects the real distribution of objects along a railway, which means
that in a railway there exist very few position lights with two lamps and diverg-
ing left-right speed signs. Therefore, this level of imbalance among the classes
is unavoidable in real applications. However, in deep learning, large amounts of
samples are necessary to train a robust model.
A common workaround to ensure adequate balance among classes is to apply
some data augmentation techniques (e.g. random crops, rotations, scaling,
illumination changes etc. on the existing samples). However, in our case such
techniques cannot solve the problem without causing bias to the dataset,
because the difference in available samples is too high. Past observations
[18] have found that non-iconic samples may be included during training only
if the overall amount of samples is large enough to capture such variability. In
any other case, these samples may act as noise and pollute the model. Thus,
it is necessary for some classes with similar characteristics to be merged. By
merging all signs except speed and signal number signs to a general class signs
other, we get a class with about 30,000 samples. Also, we merged all position
lights to a single class, resulting in about 10,000 samples for this class. Finally,
the front and side views of each signal class were merged into a single class. The
back views of all signals remained a separate class because there was no specific
information available on which type of signal each back view belonged to. After
these operations, we end up with ten classes of at least a few thousand samples
each (Fig. 5). This way, the underrepresentation problem is softened; however, we
introduce high intra-class variability. The least represented class is the single
element signals (Signal 1) with about 6,000 samples, which is still about four
times less than the most dominant class, but more manageable.
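As an illustration, the merging described above can be expressed as a simple label map. The class names below follow Fig. 1, but the exact keys (in particular the side-view labels) are our assumption rather than the annotation pipeline's actual identifiers:

```python
# Hypothetical label map sketching the class merging described above.
# Class names follow Fig. 1; the exact keys used in the production
# annotations are an assumption for illustration.
MERGE = {
    # unit signs (and other non-speed, non-signal-number signs)
    # into the general class "Sign Other"
    "Sign U": "Sign Other",
    # all position lights into a single class
    "PL 2W": "PL",
    "PL 2W& 1R": "PL",
    # front and side views of each signal type into one class
    "Signal1 F": "Signal1",
    "Signal1 S": "Signal1",
    "Signal3 F": "Signal3",
    "Signal3 S": "Signal3",
}

def merge_label(label):
    """Map an original fine-grained class name to its merged class."""
    return MERGE.get(label, label)
```

Classes that are not merged, such as the speed signs and the signal back views, simply map to themselves.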

Fig. 4. Amount of sample instances per class before merging some classes. The imbal-
ance among the classes is too high.

Fig. 5. Amount of sample instances per class. After merging some classes with similar
characteristics we end up with a more balanced dataset.

Another important aspect of the samples is their size. Figure 6 is a histogram
of the size of the samples in square pixels. About 65% of the samples have an area
of less than 1000 pixels (≈32²) and 89% less than 2500 pixels (50²). Given the
size of the panoramic images, a 50² sample corresponds to 0.018% of the whole
image. In COCO [18], one of the most challenging datasets in Computer Vision,
the smallest objects correspond to 4% of the entire image. The winners for the
2017 COCO Detection: Bounding Box challenge achieved less than 55% accuracy.
This is an indication of the difficulty of our problem, in terms of relative size of
objects.

Fig. 6. Amount of samples according to their size in pixels². Most samples (89%) are
smaller than 50² pixels.

A reason behind the high amount of very small objects in our dataset is that
the data was acquired by driving on a single track. However, in many sectors
along the railway there are multiple parallel tracks and the signs and signals
corresponding to these tracks appear in the dataset only in small sizes since
the camera never passed close to them. One way to limit the small object size
problem in our dataset is to split each panoramic image into smaller patches with
high overlap, small enough to achieve a less challenging relative size between
objects and image. Specifically, in our approach, each image is split into 74
patches of size 600² pixels with 200 pixels overlap on each side. Even at this
level of fragmentation, a 50² object corresponds to 0.69% of the patch size. A
consequence of splitting the images into smaller patches is that the same object
may now appear in more than one patch due to overlap. In fact, while on the
panoramic images there exist 121,528 object instances, on the patches that were
extracted, there exist 203,322 instances. The numbers shown in Fig. 6 correspond
to the objects existing on the patches.
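As a sketch, such an overlapping tiling can be generated as follows. The function and its edge handling are our own illustration; the paper's exact tiling, which yields 74 patches per 5400 × 2700 panorama, may differ, e.g. by discarding patches without relevant content:

```python
def patch_grid(width, height, patch=600, overlap=200):
    """Top-left corners of overlapping square patches covering an image.

    With patch=600 and overlap=200 the stride is 400 pixels; the last
    row/column is clamped to the image border so the image is fully
    covered. A 50 x 50 object covers 2500 / 600**2, about 0.69% of a patch.
    """
    stride = patch - overlap
    xs = sorted(set(range(0, width - patch, stride)) | {width - patch})
    ys = sorted(set(range(0, height - patch, stride)) | {height - patch})
    return [(x, y) for y in ys for x in xs]
```

Because neighbouring patches share a 200-pixel band, an object near a patch border is duplicated into the adjacent patch, which is consistent with the instance count growing from 121,528 on the panoramas to 203,322 on the patches.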

3.2 Railway vs Road Signs and Signals

Here, it is important to highlight the difference between the problem described in
this paper and the more popular topic of the detection of road signs and signals.
The most important difference is the size of the objects. The height of a road
sign varies from 60 to 150 cm depending on the type of road [19], while in railways
it is usually less than 40 cm [20]. Given also that, most of the time, the signs
are located only a few centimetres from the ground, supported by a very short
pole, they are much harder to detect. Also, in railways, signs are often very similar but
have different meanings, like the first two examples of the second row in Fig. 1.
At the same time, it is very common for objects of the same class along the same
railway to look different, as shown in Fig. 3. Thus, a detector of railway signs and

signals needs to be able to distinguish objects based on fine details. Finally, in
railways the signs and the signals are often combined, creating more complex
structures that pose an extra challenge to a detection algorithm (e.g. the detected
signals shown in Fig. 10). Given the above differences, we consider railway signs
a more challenging detection problem.

4 Methodology

4.1 Faster R-CNN

For the detection of signs and signals, we applied the Faster R-CNN presented
by Ren et al. in [2] using ResNet-101 [21] as feature extractor. We decided to
implement this approach mainly motivated by its high performance on competi-
tive datasets such as Pascal VOC 2007 (85.6% mAP), Pascal VOC 2012 (83.8%
mAP) and COCO (59% mAP). The main drawback of this approach compared
to other acknowledged object detection methods such as YOLO [16] or SSD [17] is
its high processing time (three to nine times slower depending on the implementation [17]). However, the sacrifice in time pays off in accuracy, especially on this
dataset, since this method performs better on small objects [2].
Here we will present some key points of Faster R-CNN. First, Faster R-
CNN is the descendant of Fast R-CNN [14], which in turn is the descendant of
R-CNN [13]. As their names imply, Fast and Faster R-CNNs are more efficient
implementations of the original concept in [13], R-CNN. The main elements of
Faster R-CNN are: (1) the base network, (2) the anchors, (3) the Region Proposal
Network (RPN) and (4) the Region based Convolutional Neural Network (R-
CNN). The last element is actually Fast R-CNN, so with a slight simplification
we can state that Faster R-CNN = RPN + Fast R-CNN.
The base network is a, usually deep, CNN. This network consists of multi-
ple convolutional layers that perform feature extraction by applying filters at
different levels. A common practice [2,14,16] is to initialize training using a
pre-trained network as a base network. This helps the network to have a more
realistic starting point compared to random initialization. Here we use ResNet
[21]. The second key point of this method is the anchors, a set of predefined
possible bounding boxes at different sizes and aspect ratios. The goal of using
the anchors is to catch the variability of scales and sizes of objects in the images.
Here we used nine anchors consisting of three different sizes (15², 30² and 60²
pixels) and three different aspect ratios: 1:1, 1:2 and 2:1.
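As an illustration, assuming the common Faster R-CNN convention (each size defines a square area which is reshaped to every aspect ratio at constant area; this convention is our assumption, not stated in the text), the nine anchor shapes can be generated as:

```python
import itertools
import math

def anchor_shapes(sizes=(15, 30, 60), ratios=((1, 1), (1, 2), (2, 1))):
    """(width, height) of the nine anchors: each size defines a square
    area size**2, which is reshaped to every aspect ratio while keeping
    the area constant."""
    shapes = []
    for s, (rw, rh) in itertools.product(sizes, ratios):
        w = math.sqrt(s * s * rw / rh)  # width satisfying w : h = rw : rh
        shapes.append((w, s * s / w))
    return shapes
```

For example, the 15² anchor at ratio 1:2 becomes roughly 10.6 × 21.2 pixels, preserving the 225-pixel area.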
Next, we use RPN, a network trained to separate the anchors into fore-
ground and background given the Intersection over Union (IoU) ratio between
the anchors and a ground-truth bounding box (foreground if IoU > 0.7 and
background if IoU < 0.1). Thus, only the most relevant anchors for our dataset
are used. It accepts as input the feature map output of the base model and
creates two outputs: a 2 × 9 box-classification layer containing the foreground
and background probability for each of the nine different anchors and a 4 × 9
box-regression layer containing the offset values on the x and y axes of the anchor
bounding box compared to the ground-truth bounding boxes.

Fig. 7. Right: the architecture of Faster R-CNN. Left: the region proposal network
(RPN). Source: Figs. 2 and 3 of [2].

To reduce redundancy, due to overlapping bounding boxes, non-maximum suppression is used
on the proposed bounding boxes based on their score on the box-classification
output. A threshold of 0.7 on the IoU is used, resulting in about 1,800 proposal
regions per image in our case (about 2,000 in the original paper).
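A minimal sketch of this greedy suppression step in plain Python (our own illustration over boxes in (x1, y1, x2, y2) corner format, not the paper's batched implementation):

```python
def iou(a, b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.7):
    """Greedy non-maximum suppression: keep the highest-scoring boxes,
    drop any box whose IoU with an already kept box exceeds iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Applied to the RPN output, only the surviving indices are forwarded as region proposals.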
Afterwards, for every region proposal we apply max pooling on the features
extracted from the last layer of the base network. Finally, Fast R-CNN is
implemented, mainly by two fully-connected layers, as described originally in
[14]. This network outputs a 1 × (N + 1) vector (a probability for each of the N
classes plus one for the background class) and a 4 × N matrix (where
4 corresponds to the bounding box offsets across the x and y axes and N to the
number of classes). Figure 7 shows the structure of Faster R-CNN and the RPN.
The RPN and R-CNN are trained according to the 4-step alternate training
scheme to learn shared features. At first, the RPN is initialized with ResNet and
fine-tuned on our data. Then, the region proposals are used to train the R-CNN
separately, again initialized by the pre-trained ResNet. Afterwards, the RPN is
initialized by the trained R-CNN with the shared convolutional layers fixed, and
the non-shared layers of the RPN are fine-tuned. Finally, with the shared layers
fixed, the non-shared layers of the R-CNN are fine-tuned. Thus, the two networks
are unified.

4.2 Evaluation Method

For the evaluation of detection, an overlap criterion between the ground truth
and predicted bounding box is defined. If the Intersection over Union (IoU) of
these two boxes is greater than 0.5, the prediction is considered as True Positive
(TP). Multiple detections of the same ground truth object are not considered
true positives; each predicted box is either a True Positive or a False Positive (FP).
Ground truth objects that are not matched by any prediction with IoU above 0.5 count as False Negatives
(FN). Precision is defined as the fraction of correct detections over the total
detections, TP/(TP + FP). Recall is the fraction of correct detections over the total
amount of ground truth objects, TP/(TP + FN) [22]. For the evaluation of classification and overall accuracy of our approach, we adopted mean Average Precision
(mAP) [23] as a widely accepted metric [22]. For each class, the predictions
satisfying the overlap criterion are assigned to ground truth objects in descend-
ing order by the confidence output. The precision/recall curve is computed and
the average precision (AP) is the mean value of interpolated precision at eleven
equally spaced levels of recall [22]:
AP = (1/11) Σ_{r ∈ {0, 0.1, ..., 1}} p_interp(r)        (1)

where
p_interp(r) = max_{r̃ : r̃ ≥ r} p(r̃)        (2)
Then, the mean of all APs across the classes is the mAP metric. This metric
was used as the evaluation method for the Pascal VOC 2007 detection challenge
and has since been the most common evaluation metric in object detection
challenges [18].
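Equations (1) and (2) translate directly into code. The following sketch (ours, not the authors' implementation) computes the 11-point interpolated AP from a precision/recall curve given as paired lists:

```python
def eleven_point_ap(precisions, recalls):
    """11-point interpolated AP, Eqs. (1)-(2): for each recall level r in
    {0, 0.1, ..., 1}, take the maximum precision achieved at any recall
    >= r, then average the eleven interpolated values."""
    ap = 0.0
    for i in range(11):
        r = i / 10
        p_interp = max(
            (p for p, rr in zip(precisions, recalls) if rr >= r),
            default=0.0,
        )
        ap += p_interp / 11
    return ap
```

The mAP is then simply the mean of this quantity over all classes.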

5 Results
The network was trained on a single Titan-X GPU for approximately two days.
We trained it for 200k iterations with a learning rate of 0.003 and for 60k
iterations with a learning rate of 0.0003. The model achieved an overall
precision of 79.36% on the detection task and a 70.9% mAP.

Fig. 8. Percentage of objects detected according to their size in pixels². The algorithm
performed very well on detecting objects larger than 400 pixels (more than 79% in
all size groups). On the other hand, it performed poorly on very small objects (less
than 200 pixels), detecting only 24% of them. The performance was two times better
on objects of size 200–400 pixels (57%), which is the most dominant size group with
about 45,000 samples (see Fig. 6).
The precision level of detection is considered high given the challenges
imposed by the dataset as they are presented in Sect. 3. Figure 8 shows the
detection performance of the algorithm with respect to the size of the samples.
We can see that the algorithm fails to detect very small objects. About 76%
of objects smaller than 200 pixels are not detected. A significantly better, but
still low, performance is observed for objects of size 200–400 pixels (57%
detection rate). On the other hand, the algorithm performs uniformly well for objects
larger than 400 pixels (79% for sizes 400–600 and more than 83% for objects larger than
600 pixels). These results show that, in terms of object size, there is a threshold
above which the performance of Faster R-CNN is stable and unaffected by
the size of the objects. Therefore, if there are enough instances and the objects
are large enough, object size is not a crucial factor. In our case, this threshold
is about 500 pixels.

Fig. 9. Example of detection. Two successful detections and one false positive. We
can see the illumination variance even in a single image, with areas in sunlight and in
shadow. The sign is detected successfully despite its small size and the partial occlusion
by the bush. The entrance of the tunnel was falsely detected as a signal with three
elements.
An interesting metric for our problem would be the number of unique physical
objects detected. The goal of object detection in computer vision is usually
to detect all the objects that appear in an image. In mapping, most
of the time, it is important to detect the locations of the physical objects. If we
have multiple images showing the same physical object from different points of
view and at different scales, in mapping, it would be sufficient to detect it at

Fig. 10. Example of detection. A successful detection of more complex object struc-
tures. Different signals mounted on top of each other did not confuse the algorithm.
Random documents with unrelated
content Scribd suggests to you:
The Project Gutenberg eBook of Elements of
agricultural chemistry and geology
This ebook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this ebook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.

Title: Elements of agricultural chemistry and geology

Author: Jas. F. W. Johnston

Release date: April 19, 2024 [eBook #73427]

Language: English

Original publication: New York: Wiley and Putnam, 1842

Credits: The Online Distributed Proofreading Team at


https://www.pgdp.net (This file was produced from
images generously made available by The Internet
Archive)

*** START OF THE PROJECT GUTENBERG EBOOK ELEMENTS OF


AGRICULTURAL CHEMISTRY AND GEOLOGY ***
Wiley & Putnam’s New Publications.

LECTURES
ON
AGRICULTURAL
CHEMISTRY AND GEOLOGY;
TO WHICH ARE ADDED,

SUGGESTIONS FOR EXPERIMENTS


IN PRACTICAL AGRICULTURE.
BY
JAS. F. W. JOHNSTON, M.A., F.R.SS. L. & E.
Fellow of the Geological Society, Honorary Member of the Royal
Agricultural Society, &c. &c.; Reader in Chemistry and
Mineralogy in the University of Durham, &c.

These Lectures will be divided into four Parts, of


which the First is now ready; the others are in course
of publication, and the whole will be completed in two
volumes.

Outline of Part I.—“On the Organic Constituents of Plants.”—


Lecture I. Elementary substances of which plants subsist. II. and III.
Compound substances which minister to the growth of plants. IV.
Sources from which plants immediately derive their elementary
constituents. V. How the food enters into the circulation of plants—
general structure of plants. VI. Into what substances the food is
changed in the interior of plants—substances of which plants chiefly
consist. VII. Chemical changes by which the substances of which
plants chiefly consist are formed from those on which they live. VIII.
How the supply of food for plants is kept up in the general
vegetation of the globe.
Outline of Part II.—“On the Inorganic Constituents of Plants—the
Origin, Classification, and Chemical Constitution of Soils—General
and Special Relations of Geology to Agriculture—Origin, Constitution,
Analyses, and Methods of Improving Soils in different Districts and
under unlike conditions.—Lecture IX. Kind and proportion of
inorganic matter contained in plants. X. Properties of the inorganic
compounds which exist in vegetable substances, or which promote
their growth. XI. Of the nature, origin, and classification of soils—
Structure of the earth’s crust—Classification and general characters
of the stratified rocks—Agricultural capabilities of the soils derived
from them. XII. Granite and trap rocks, and the soils derived from
them—Superficial accumulations. XIII. On the exact chemical
constitution, the analysis, and the physical properties of soils.
Part III.—Methods of improving the soil by mechanical and by
chemical means—Manures, their nature, composition, and mode of
action—theory of their application in different localities.
Part IV.—The results of vegetation—the nature, constitution, and
nutritive properties of different kinds of produce, and by different
modes of cultivation—the feeding of cattle, the making of cheese,
&c. &c. The constitution and differences of various kinds of wood,
and the circumstances which favour their growth.

CRITICAL NOTICES.
“A valuable and interesting course of lectures.”—
Quarterly Review.
“But it is unnecessary to make large extracts from a
book which we hope and trust will soon be in the
hands of nearly all our readers. Considering it as
unquestionably the most important contribution that
has recently been made to popular science, and as
destined to exert an extensively beneficial influence in
this country, we shall not fail to notice the forthcoming
portions as soon as they appear from the press.”—
Silliman’s American Journal of Science. Notice of Part I
of the American reprint.
“We think it no compliment to Professor Johnston
to say, that among our own writers of the present day
who have recently been endeavouring to improve our
agriculture by the aid of science, there is probably no
other who has been more eminently successful, or
whose efforts have been more highly appreciated.”—
County Herald.
“Prof. Johnston is one who has himself done so
much already for English agriculture, that to behold
him still in hot pursuit of the inquiry into what can be
done, supplies of itself a stimulus to further exertion
on the part of others.”—Berwick Warder.
ELEMENTS
OF
AGRICULTURAL
CHEMISTRY AND GEOLOGY.
BY
JAS. F. W. JOHNSTON, M.A., F.R.S.,
HONORARY MEMBER OF THE ROYAL ENGLISH AGRICULTURAL
SOCIETY, AND AUTHOR OF “LECTURES ON AGRICULTURAL
CHEMISTRY AND GEOLOGY.”
NEW-YORK:
WILEY AND PUTNAM.
MDCCCXLII.
J. P. Wright, Printer,
18 New Street, N. Y.
INTRODUCTION.
The scientific principles upon which the art of culture depends,
have not hitherto been sufficiently understood or appreciated by
practical men. Into the causes of this I shall not here inquire. I may
remark, however, that if Agriculture is ever to be brought to that
comparative state of perfection to which many other arts have
already attained, it will only be by availing itself, as they have done,
of the many aids which Science offers to it; and that, if the practical
man is ever to realize upon his farm all the advantages which
Science is capable of placing within his reach, it will only be when he
has become so far acquainted with the connection that exists
between the art by which he lives and the sciences, especially of
Chemistry and Geology, as to be prepared to listen with candour to
the suggestions they are ready to make to him, and to attach their
proper value to the explanations of his various processes which they
are capable of affording.
The following little Treatise is intended to present a familiar
outline of the subjects of Agricultural Chemistry and Geology, as
treated of more at large in my Lectures, of which the first Part is now
before the public. What in this work has necessarily been taken for
granted, or briefly noticed, is in the Lectures examined, discussed, or
more fully detailed.
Durham, 8th April, 1842.
CONTENTS.
CHAPTER I. page
Distinction between Organic and Inorganic Substances
—The Ash of Plants—Constitution of the Organic
Parts of Plants—Preparation and Properties of
Carbon, Oxygen, Hydrogen, and Nitrogen—
Meaning of Chemical Combination. 13

CHAPTER II.
Form in which these different Substances enter into
Plants—Properties of the Carbonic, Humic, and
Ulmic Acids; of Water, of Ammonia, and of Nitric
Acid—Constitution of the Atmosphere. 25

CHAPTER III.
Structure of Plants—Mode in which their Nourishment
is obtained—Growth and Substance of Plants—
Production of their Substance from the Food they
imbibe—Mutual Transformations of Starch, Sugar,
and Woody Fibre. 38

CHAPTER IV.
Of the Inorganic Constituents of Plants—Their
immediate Source—Their Nature—Quantity of
each in certain common Crops. 49

CHAPTER V.
Of Soils—Their Organic and Inorganic Portions—Saline
Matter in Soils—Examination and Classification of
Soils—Diversities of Soils and Subsoils. 67
CHAPTER VI.
Direct Relations of Geology to Agriculture—Origin
of Soils—Causes of their Diversity—Relation to
the Rocks on which they rest—Constancy in the
Relative Position and Character of the Stratified
Rocks—Relation of this Fact to Practical
Agriculture—General Characters of the Soils
upon these Rocks. 78

CHAPTER VII.
Soils of the Granitic and Trap Rocks—Accumulations
of Transported Sands, Gravels, and Clays—Use
of Geological Maps in reference to Agriculture
—Physical Characters and Chemical Constitution
of Soils—Relation between the Nature of the
Soil and the Kind of Plants that naturally grow
upon it. 103

CHAPTER VIII.
Of the Improvement of the Soil—Mechanical and Chemical
Methods—Draining—Subsoiling—Ploughing, and
Mixing of Soils—Use of Lime, Marl, and Shell-sand—
Manures—Vegetable, Animal, and Mineral Manures. 133

CHAPTER IX.
Animal Manures—Their Relative Value and Mode of
Action—Difference between Animal and Vegetable
Manures—Cause of this Difference—Mineral Manures—
Nitrates of Potash and Soda—Sulphate of Soda,
Gypsum, Chalk, and Quicklime—Chemical Action of
these Manures—Artificial Manures—Burning and
Irrigation of the Soil—Planting and Laying Down
to Grass. 165
CHAPTER X.
The Products of Vegetation—Importance of Chemical
quality as well as quantity of Produce—Influence
of different Manures on the quantity and quality
of the Crop—Influence of the Time of Cutting—
Absolute quantity of Food yielded by different Crops
—Principles on which the Feeding of Animals depends
—Theoretical and Experimental Value of different kinds
of Food for Feeding Stock—Concluding Observations. 216
ELEMENTS
OF
AGRICULTURAL CHEMISTRY, &c.
CHAPTER I.
Distinction between Organic and Inorganic Substances.—
The Ash of Plants.—Constitution of the Organic Parts of
Plants.—Preparation and Properties of Carbon,
Hydrogen, and Nitrogen.—Meaning of Chemical
Combination.

The object of the practical farmer is to raise from a given extent


of land the largest quantity of the most valuable produce at the least
cost, and with the least permanent injury to the soil. The sciences
either of chemistry or geology throw light on every step he takes or
ought to take, in order to effect this main object.

SECTION I.—OF THE VEGETABLE AND EARTHY


OR THE ORGANIC AND INORGANIC
PARTS OF PLANTS.
In the prosecution of his art, two distinct classes of substances
engage his attention—the living crops he raises, and the dead earth
from which they are gathered. If he examine any fragment of an
animal or vegetable, either living or dead, he will observe that it
exhibits pores of various kinds arranged in a certain order—that it
has a species of internal structure—that it has various parts or
organs—in short, that it is what physiologists term organized. If he
examine, in like manner, a lump of earth or rock, he will perceive no
such structure. To mark this distinction, the parts of animals and
vegetables, either living or dead—whether entire or in a state of
decay, are called organic bodies, while earthy and stony substances
are called inorganic bodies.
Organic substances are also more or less readily burned and
dissipated by heat in the open air; inorganic substances are
generally fixed and permanent in the fire.
But the crops which grow upon it, and the soil in which they are
rooted, contain a portion of both of these classes of substances. In
all fertile soils, there exists from 3 to 10 per cent. of vegetable or
other matter of organic origin; while, on the other hand, all
vegetables, as they are collected for food, leave, when burned, from
one-half to twenty per cent. of inorganic ash.
If we heat a portion of soil to redness in the open air, the organic
matter will burn away, and, in general, the soil, if previously dry, will
not be materially diminished in bulk. But if a handful of wheat, or of
wheat straw, or of hay, be burned in the same manner, the
proportion that disappears is so great, that in most cases a
comparatively minute quantity only remains behind. Every one is
familiar with this fact who has seen the small bulk of ash that is left
when weeds, or thorns, or trees, are burned in the field, or when a
hay or corn-stack is accidentally consumed. Yet the ash thus left is a
very appreciable quantity, and the study of its true nature throws
much light, as we shall hereafter see, on the practical management
of the land on which any given crop is to be made to grow.
Thus the quantity of ash left by a ton of wheat straw is
sometimes as much as 360 lbs.; by a ton of oat straw as much as
200 lbs.; while a ton of the grain of wheat leaves only about 40 lbs.;
of the grain of oats about 90 lbs.; and of oak wood only 4 or 5 lbs.
The quantities of inorganic matter, therefore, though comparatively
small, yet, in some cases, amount to a considerable weight in an
entire crop. The nature, source and uses of this earthy matter will be
explained in a subsequent chapter.

SECTION II.—CONSTITUTION OF THE


ORGANIC
PART OF PLANTS AND ANIMALS.
The organic part of plants, when in a perfectly dry state,
constitutes therefore from 85 to 99 per cent. of their whole weight.
Of those parts of plants which are cultivated for food, it is only hay
and straw, and a very few others, that contain as much as 10 per
cent. of inorganic matter.
This organic part consists of four substances, known to chemists
by the names of carbon, hydrogen, oxygen, and nitrogen. The first
of these, carbon, is a solid substance, the other three are gases or
peculiar kinds of air.
1. Carbon. When wood is burned in a covered heap, as is done
by the charcoal burners, or is distilled in iron retorts, as in making
wood-vinegar, it is charred and converted into common wood
charcoal. This charcoal is the most usual and best known variety of
carbon. It is black, soils the fingers, and is more or less porous
according to the kind of wood from which it has been formed. Coke
obtained by charring or distilling coal is another variety. It is
generally denser or heavier than the former, though less pure. Black
lead is a third variety, still heavier and more impure. The diamond is
the only form in which carbon occurs in nature in a state of perfect
purity.
This latter fact, that the diamond is pure carbon—that it is
essentially the same substance with the finest and purest lamp-black
—is very remarkable; but it is only one of many striking
circumstances that every now and then present themselves before
the inquiring chemist.
Charcoal, the diamond, lamp-black, and all the other forms of
carbon, burn away more or less slowly when heated in the air, and
are converted into a kind of gas known by the name of carbonic
acid. The impure varieties leave behind them a greater or less
proportion of ash.
2. Hydrogen.—If oil of vitriol (sulphuric acid) be mixed with
twice its bulk of water, and then poured upon iron filings, the
mixture will speedily begin to boil up, and bubbles of gas will rise to
the surface of the liquid in great abundance. These are bubbles of
hydrogen gas.
If the experiment be performed in a bottle, the hydrogen which
is produced will gradually drive out the atmospheric air it contained,
and will itself take its place. If a bit of wax taper be tied to the end
of a wire, and when lighted be introduced into the bottle, it will be
instantly extinguished; while the hydrogen will take fire, and burn at
the mouth of the bottle with a pale yellow flame. If the taper be
inserted before the common air is all expelled, the mixture of
hydrogen and common air will burn with an explosion more or less
violent, and may even shatter the bottle and produce serious
accidents. This experiment, therefore, ought to be made with care.
It may be safely made in an open tumbler, covered by a plate or a
piece of paper, till a sufficient quantity of hydrogen is collected,
when, on the introduction of the taper, the light will be extinguished,
and the hydrogen will burn with a less violent explosion.
This gas is also an exceedingly light substance, rising through
common air as wood does through water. Hence, when confined in a
bag made of silk, or other light tissue, it is capable of sustaining
heavy substances in the air, and even of transporting them to great
heights. For this reason it is employed for filling and elevating
balloons.
Hydrogen gas is not known to occur anywhere in nature in any
sensible quantity. It is very abundant, as we shall hereafter see, in
what by chemists is called a state of combination.

3. Oxygen.—When strong oil of vitriol is poured upon black
oxide of manganese, and heated in a glass retort: or when red oxide
of mercury, or chlorate of potash, is so heated alone; or when
saltpetre, or the same oxide of manganese, is heated alone in an
iron bottle;—in all these cases a kind of air is given off, which, when
collected and examined by plunging a taper into it, is found to be
neither common air nor hydrogen gas. The taper, when introduced,
burns with great rapidity, and with exceeding brilliancy, and
continues to burn till either the whole of the gas disappears, or the
taper is entirely consumed. If a living animal is introduced, its
circulation and its breathing become quicker—it is speedily thrown
into a fever—it lives as fast as the taper burned—and, after a few
hours, dies from excitement and exhaustion. This gas is not light like
hydrogen, but is about one-ninth part heavier than common air.
In the atmosphere, oxygen exists in the state of gas. It forms
about one-fifth of the bulk of the air we breathe, and is the
substance which, in the air, supports all animal life and the
combustion of all burning bodies. Were it by any cause suddenly
removed from the atmosphere of our globe, every living thing would
perish, and all combustion would become impossible.
4. Nitrogen.—If a saucer be half filled with milk of lime, formed
by mixing slaked quicklime with water, a very small tea-cup
containing a little burning sulphur then placed in the middle, and a
common large tumbler inverted over the whole, the sulphur will burn
for a while, and will then gradually die out. On allowing the whole to
remain for some time, the fumes of the sulphur will be absorbed by
the milk of lime, which will rise a certain way into the tumbler. When
the absorption has ceased, a quantity of air will remain in the upper
part of the tumbler. This air is nitrogen gas.
If the whole be now introduced into a large basin of water, the
tumbler being held in the left hand, the cup and saucer may be
removed from beneath. The saucer may then be inverted and
introduced with its under side into the mouth of the tumbler, which
may thus be lifted out of the water and restored to its upright
position, the saucer serving the purpose of a cover. By carefully
removing this cover with the one hand, a lighted taper may be
introduced by the other. It will then be seen that the taper is
extinguished by this air, and that no other effect follows. Or if a
living animal be introduced into it, breathing will instantly cease, and
it will drop without signs of life.
This gas possesses no other remarkable property. It is a very
little lighter than common air, and is known to exist in large quantity
in the atmosphere only. Of the air we breathe it forms nearly four-
fifths of the entire bulk.
These three gases are incapable of being distinguished from
common air, or from each other, by the ordinary senses; but by the
aid of the taper they are readily recognised. Hydrogen extinguishes
the taper, but itself takes fire; nitrogen simply extinguishes it; while
in oxygen the taper burns with extraordinary brilliancy and rapidity.

Of this one solid substance, carbon, and these three gases,
hydrogen, oxygen, and nitrogen, all the organic part of vegetable
and animal substances is made up.
Into these substances, however, they enter in very different
proportions. Nearly one-half the weight of all vegetable productions
which are gathered as food for man or beast—in their dry state—
consists of carbon; the oxygen amounts to rather more than one-
third, the hydrogen to little more than five per cent., while the
nitrogen rarely exceeds two and a half or three per cent. of their
weight.
This will appear from the following table, which exhibits the
actual constitution by analysis of some varieties of the more
common crops when perfectly dry.
                Carbon.  Hydrogen.  Oxygen.  Nitrogen.  Ash.
Hay,              458       50       387       15        90
Potatoes,         441       58       439       12        50
Wheat Straw,      485       52       389½       3½       70
Oats,             507       64       367       22        40
These numbers represent the weights of each element in
pounds, contained in 1000 lbs. of the dry hay, potatoes, &c.; but in
drying by a gentle heat, 1000 lbs. of hay from the stack, lost 158
lbs. of water, of potatoes wiped dry externally 722 lbs.,[1] wheat
straw 260 lbs., and oats 151 lbs.
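The arithmetic of the table may be checked with a short calculation (a modern sketch added for illustration; every figure in it is taken from the table and the drying losses stated above):

```python
# Figures from the table: lbs of each element in 1000 lbs of the dry crop,
# in the order carbon, hydrogen, oxygen, nitrogen, ash.
dry_composition = {
    "Hay":         (458, 50, 387,   15,  90),
    "Potatoes":    (441, 58, 439,   12,  50),
    "Wheat Straw": (485, 52, 389.5, 3.5, 70),
    "Oats":        (507, 64, 367,   22,  40),
}
# Water (lbs) lost in drying 1000 lbs of each crop as gathered.
water_lost = {"Hay": 158, "Potatoes": 722, "Wheat Straw": 260, "Oats": 151}

def rows_account_for_whole():
    # Each row should sum to the full 1000 lbs of dry matter.
    return all(sum(parts) == 1000 for parts in dry_composition.values())

def carbon_in_fresh(crop, fresh_lbs=1000):
    # Dry matter remaining after the water is driven off,
    # then that dry matter's share of carbon.
    dry_lbs = fresh_lbs * (1000 - water_lost[crop]) / 1000
    return dry_lbs * dry_composition[crop][0] / 1000
```

Thus 1000 lbs. of fresh potatoes retain only 278 lbs. of dry matter, and so carry far less carbon than the dry figures alone would suggest.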

SECTION III.—OF THE MEANING OF CHEMICAL COMBINATION.
If the three kinds of air above spoken of be mixed together in a
bottle, no change will take place, and if charcoal in fine powder be
added to them, still no new substance will be produced. If we take
the ash left by a known weight of hay or wheat straw, and mix it
with the proper quantities of the four elementary substances,
carbon, hydrogen, &c., as shewn in the above table, we shall be
unable by this means to form either hay or wheat straw. The
elements of which vegetable substances consist, therefore, are not
merely mixed together—they are united in some closer and more
intimate manner. To this more intimate state of union, the term
chemical combination is applied—the elements are said to be
chemically combined.
Thus, when charcoal is burned in the air, it slowly disappears,
and forms, as already stated, a kind of air known by the name of
carbonic acid gas, which rises into the atmosphere and disappears.
Now, this carbonic acid is formed by the union of the carbon
(charcoal), while burning, with the oxygen of the atmosphere, and in
this new air the two elements, carbon and oxygen, are chemically
combined.
Again, if a piece of wood or a bit of straw, in which the elements
are already chemically combined, be burned in the air, these
elements are separated and made to assume new states of
combination, in which new states they escape into the air and
become invisible. When a substance is thus changed by the action of
heat, it is said to be decomposed, or if it gradually decay and perish
by exposure to the air and moisture, it undergoes slow
decomposition.
When, therefore, two or more substances unite together, so as to
form a third possessing properties different from both, they enter
into chemical union—they form a chemical combination or chemical
compound. When, on the other hand, one compound body is so
changed as to be converted into two or more substances different
from itself, it is decomposed. Carbon, hydrogen, &c., are chemically
combined in the interior of the plant during the formation of wood:
wood, again, is decomposed when the vinegar-maker converts it,
among other substances, into charcoal and wood-vinegar; and the
flour of grain is decomposed when the brewer or distiller converts it
into ardent spirits.
CHAPTER II.
Form in which these different substances enter into Plants.
Properties of the Carbonic, Humic, and Ulmic Acids—of
Water, of Ammonia, and of Nitric Acid. Constitution of
the Atmosphere.

SECTION I.—FORM IN WHICH THE CARBON, ETC. ENTER INTO PLANTS.
It is from their food that plants derive the carbon, hydrogen,
oxygen, and nitrogen, of which their organic part consists. This food
enters partly by the minute pores of their roots, and partly by those
which exist in the green part of the leaf and of the young twig. The
roots bring up food from the soil, the leaves take it in directly from
the air.
Now, as the pores in the roots and leaves are very minute,
carbon (charcoal) cannot enter into either in a solid state; and as it
does not dissolve in water, it cannot, in the state of simple carbon,
be any part of the food of plants. Again, hydrogen gas neither exists
in the air nor usually in the soil—so that, although hydrogen is
always found in the substance of plants, it does not enter them in
the state of the gas above described. Oxygen exists in the air, and is
directly absorbed both by the leaves and by the roots of plants;
while nitrogen, though it forms a large part of the atmosphere, is
not supposed to enter directly into plants in any considerable
quantity.
The whole of the carbon and hydrogen, and the greater part of
the oxygen and nitrogen also, enter into plants in a state of chemical
combination with other substances; the carbon chiefly in the state of
carbonic acid, and of certain other soluble compounds which exist in
the soil; the hydrogen and oxygen in the form of water: and the
nitrogen in those of ammonia or nitric acid. It will be necessary
therefore briefly to describe these several compounds.

SECTION II.—OF THE CARBONIC, HUMIC, AND ULMIC ACIDS.

1. Carbonic Acid.—If a few pieces of chalk or limestone be put
into the bottom of a tumbler, and a little spirit of salt (muriatic acid)
be poured upon them, a boiling up or effervescence will take place,
and a gas will be given off, which will gradually collect and fill the
tumbler; and when produced very rapidly, may even be seen to run
over its edges. This gas is carbonic acid. It cannot be distinguished
from common air by the eye; but if a taper be plunged into it, the
flame will immediately be extinguished, while the gas remains
unchanged. This kind of air is so heavy, that it may be poured from
one vessel into another, and its presence recognised by the taper. It
has also a peculiar odour, and is exceedingly suffocating, so that if a
living animal be introduced into it, life immediately ceases. It is
absorbed by water, a pint of water absorbing or dissolving a pint of
the gas.
Carbonic acid exists in the atmosphere; it is given off from the
lungs of all living animals while they breathe; it is also produced
largely during the burning of wood, coal, and all other combustible
bodies, so that an unceasing supply of this gas is poured into the air.
Decaying animal and vegetable substances also give off this gas,
and hence it is always present in greater or less abundance in the
soil, and especially in such soils as are rich in vegetable matter.
During the fermentation of malt liquors, or of the expressed juices of
different fruits,—the apple, the pear, the grape, the gooseberry—it is
produced, and the briskness of such fermented liquors is due to the
escape of this gas. From the dung and compost heap it is also given
off; and when put into the ground in a fermenting state, farm-yard
manure affords a rich supply of carbonic acid to the young plant.
Carbonic acid consists of carbon and oxygen only, combined
together in the proportion of 28 of the former to 72 of the latter, or
100 lbs. of carbonic acid contain 28 lbs. of carbon and 72 lbs. of
oxygen.
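The stated proportion of 28 parts carbon to 72 parts oxygen may be put into a brief sketch (added for illustration; it merely restates the text's figures):

```python
# Carbonic acid: 28 parts carbon to 72 parts oxygen by weight, as stated.
def elements_of_carbonic_acid(lbs_acid):
    carbon = lbs_acid * 28 / 100
    oxygen = lbs_acid * 72 / 100
    return carbon, oxygen
```

So every 100 lbs. of the gas carry 28 lbs. of carbon, the whole of which becomes available to the plant when the gas is taken in and decomposed.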

2. Humic and Ulmic Acids.—The soil always contains a portion
of vegetable matter (called humus by some writers), and such
matter is always added to it when it is manured from the farm-yard
or the compost heap. During the decay of this vegetable matter,
carbonic acid, as above stated, is given off in large quantity, but
other substances are also formed at the same time. Among these
are the two to which the names of humic and ulmic acids are
respectively given. They both contain much carbon, are both capable
of entering the roots of plants, and both, no doubt, in favourable
circumstances, help to feed the plant.
If the common soda of the shops be dissolved in water, and a
portion of a rich vegetable soil, or a bit of peat, be put into this
solution, and the whole boiled, a brown liquid is obtained. If to this
brown liquid, spirit of salt (muriatic acid) be added till it is sour to
the taste, a brown flocky powder falls to the bottom. This brown
substance is humic acid. But if in this process we use spirit of
hartshorn (liquid ammonia), instead of the soda, ulmic acid is
obtained.
These acids exist along with other substances in the rich brown
liquor of the farm-yard, which is so often allowed to run to waste;
they are also produced in greater or less quantity during the decay
of the manure after it is mixed with the soil, and no doubt yield to
the plant a portion of that supply of food which it must necessarily
receive from the soil.

SECTION III.—OF WATER, AMMONIA, AND NITRIC ACID.
1. Water.—If hydrogen be prepared in a bottle, in the way
already described, and a gas-burner be fixed into its mouth, the
hydrogen may be lighted, and will burn as it escapes into the air.
Held over this flame a cold tumbler will become covered with dew,
or with little drops of water. This water is produced during the
burning of the hydrogen; and as it takes place in pure oxygen gas as
well as in the open air, this water must contain the hydrogen and
oxygen which disappear, or must consist of hydrogen and oxygen
only.
This is a very interesting fact; and were it not that chemists are
now familiar with many such, it could not fail to appear truly
wonderful that the two gases, oxygen and hydrogen, by their union,
should form so very different a substance as water is from either. It
consists of 1 of hydrogen to 8 of oxygen, or every 9 lbs. of water
contain 8 lbs. of oxygen and 1 lb. of hydrogen.
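The proportion of 1 part hydrogen to 8 parts oxygen admits of the same simple reckoning (a sketch added for illustration, using only the figures just given):

```python
# Water: 1 part hydrogen to 8 parts oxygen by weight,
# so 9 lbs of water hold 1 lb of hydrogen and 8 lbs of oxygen.
def elements_of_water(lbs_water):
    hydrogen = lbs_water * 1 / 9
    oxygen = lbs_water * 8 / 9
    return hydrogen, oxygen
```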
Water is so familiar a substance, that it is unnecessary to dwell
upon its properties. When pure, it has neither colour, taste, nor
smell. At 32° of Fahrenheit’s[2] scale (the freezing point), it solidifies
into ice, and at 212° it boils, and is converted into steam. There are
two others of its properties which are especially interesting in
connection with the growth of plants.
1st, If sugar or salt be put into water, they disappear or are
dissolved. Water has the power of thus dissolving numerous other
substances in greater or less quantity. Hence, when the rain falls and
sinks into the soil, it dissolves some of the soluble substances it
meets in its way, and rarely reaches the roots of plants in a pure
state. So waters that rise up in springs are rarely pure. They always
contain earthy and saline substances in solution, and these they
carry with them, when they are sucked in by the roots of plants.
It has been above stated, that water absorbs (dissolves) its own
bulk of carbonic acid; it dissolves also smaller quantities of the
oxygen and nitrogen of the atmosphere; and hence, when it meets
any of these gases in the soil, it becomes impregnated with them,
and conveys them into the plant, there to serve as a portion of its
food.
2d, Water is composed of oxygen and hydrogen; by certain
chemical processes it can readily be resolved or decomposed
artificially into these two gases. The same thing takes place naturally
in the interior of the living plant. The roots absorb the water, but if in
any part of the plant hydrogen be required, to make up the
substance which it is the function of that part to produce, a portion
of the water is decomposed and worked up, while the oxygen is set
free, or converted to some other use. So, also, in any case where
oxygen is required water is decomposed, the oxygen made use of,
and the hydrogen liberated. Water, therefore, which abounds in the
vessels of all growing plants, if not directly converted into the
substance of the plant, is yet a ready and ample source from which
a supply of either of the elements of which it consists may at any
time be obtained.
It is a beautiful adaptation of the properties of this all-pervading
compound (water), that its elements should be so fixedly bound
together as rarely to separate in external nature, and yet to be at
the command and easy disposal of the vital powers of the humblest
order of living plants.

2. Ammonia.—If the sal ammoniac of the shops be mixed with
quicklime, a powerful odour is immediately perceived, and an
invisible gas is given off which strongly affects the eyes. This gas is
ammonia. Water dissolves or absorbs it in very large quantity, and
this solution forms the common hartshorn of the shops. The white
solid smelling-salts of the shops are a compound of ammonia with
carbonic acid,—a solid formed by the union of two gases.
The gaseous ammonia consists of nitrogen and hydrogen only, in
the proportion of 14 of the former to 3 of the latter, or 17 lbs. of
ammonia contain 14 lbs. of nitrogen and 3 lbs. of hydrogen.
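The proportion of 14 of nitrogen to 3 of hydrogen may likewise be reckoned for any weight of the gas (a sketch added for illustration, restating the text's figures):

```python
# Ammonia: 14 parts nitrogen to 3 parts hydrogen by weight,
# so 17 lbs of ammonia hold 14 lbs of nitrogen and 3 lbs of hydrogen.
def elements_of_ammonia(lbs_ammonia):
    nitrogen = lbs_ammonia * 14 / 17
    hydrogen = lbs_ammonia * 3 / 17
    return nitrogen, hydrogen
```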
The chief natural source of this compound is the decay of
animal substances. During the putrefaction of dead animal bodies
ammonia is invariably given off. From the animal substances of the
farm-yard it is evolved, and from all solid and liquid manures of
animal origin. It is also formed in lesser quantity during the decay of
vegetable substances in the soil; and in volcanic countries, it escapes
from many of the hot lavas, and from the crevices in the heated
rocks.
It is produced artificially by the distillation of animal substances
(hoofs, horns, &c.), or of coal. Thousands of tons of the ammonia
present in the ammoniacal liquors of the gas-works, which might be
beneficially applied as a manure, are annually carried down by the
rivers, and lost in the sea.
The ammonia which is given off during the putrefaction of animal
substances rises partially into the air, and floats in the atmosphere,
till it is either decomposed by natural causes, or is washed down by
the rains. In our climate, cultivated plants derive a considerable
portion of their nitrogen from ammonia. It is supposed to be one of
the most valuable fertilizing substances contained in farm-yard
manure; and as it is present in greater proportion by far in the liquid
than in the solid contents of the farm-yard, there can be no doubt
that much real wealth is lost, and the means of raising increased
crops thrown away in the quantities of liquid manure which are
almost everywhere permitted to run to waste.

3. Nitric Acid—is a powerfully corrosive liquid known in the
shops by the familiar name of aquafortis. It is prepared by pouring
oil of vitriol (sulphuric acid) upon saltpetre, and distilling the mixture.
The aquafortis of the shops is a mixture of the pure acid with water.
Pure nitric acid consists of nitrogen and oxygen only; the union
of these two gases, so harmless in the air, producing the burning
and corrosive compound which this is known to be.
It never reaches the roots of plants in this free and corrosive
state. It exists in many soils, and is naturally formed in compost
heaps, and in most situations where vegetable matter is undergoing
decay in contact with the air; but it is always in a state of chemical
combination in these cases. With potash, it forms nitrate of potash
(saltpetre); with soda, nitrate of soda; and with lime, nitrate of lime;
and it is generally in one or other of these states of combination that
it reaches the roots of plants.
Nitric acid is also naturally formed, and in some countries
probably in large quantities, by the passage of electricity through the
atmosphere. The air, as has been already stated, contains much
oxygen and nitrogen mixed together, but when an electric spark is
passed through a quantity of air, a certain quantity of the two unite
together chemically, so that every spark that passes forms a small
portion of nitric acid. A flash of lightning is only a large electric
spark; and hence every flash that crosses the air produces along its
path a quantity of this acid. Where thunder-storms are frequent,
much nitric acid must be produced in this way in the air. It is washed
down by the rains, in which it has frequently been detected, and
thus reaches the soil, where it produces one or other of the nitrates
above mentioned.
It has been long observed that those parts of India are the most
fertile in which saltpetre exists in the soil in the greatest abundance.
Nitrate of soda, also, in this country, has been found wonderfully to
promote vegetation in many localities; and it is a matter of frequent
remark, that vegetation seems to be refreshed and invigorated by
the fall of a thunder-shower. There is, therefore, no reason to doubt
that nitric acid is really beneficial to the general vegetation of the
globe. And since vegetation is most luxuriant in those parts of the
globe where thunder or lightning are most abundant, it would
appear as if the natural production of this compound body in the air,
to be afterwards brought to the earth by the rains, were a wise and
beneficent contrivance by which the health and vigour of universal
vegetation is intended to be promoted.
It is from this nitric acid, thus universally produced and existing,
that plants appear to derive a large—probably, taking vegetation in
general, the largest—portion of their nitrogen. In all climates they
also derive a portion of this element from ammonia; but less from
this source in tropical than in temperate climates.[3]

SECTION IV.—OF THE CONSTITUTION OF THE ATMOSPHERE.
The air we breathe, and from which plants also derive a portion
of their nourishment, consists of a mixture of oxygen and nitrogen
gases, with a minute quantity of carbonic acid, and a variable
proportion of watery vapour. Every hundred gallons of dry air contain
about 21 gallons of oxygen and 79 of nitrogen. The carbonic acid
amounts only to one gallon in 2500, while the watery vapour in the
atmosphere varies from 1 to 2½ gallons (of steam) in 100 gallons of
common air.
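These round numbers may be gathered into one brief reckoning (a sketch added for illustration; it follows the text's figures, treating the carbonic acid as 1 gallon in every 2500 of air):

```python
# Round figures from the text: 100 gallons of dry air hold about
# 21 of oxygen and 79 of nitrogen; carbonic acid is 1 gallon in 2500.
def air_composition(gallons_of_air):
    oxygen = gallons_of_air * 21 / 100
    nitrogen = gallons_of_air * 79 / 100
    carbonic_acid = gallons_of_air / 2500
    return oxygen, nitrogen, carbonic_acid
```

Thus in 2500 gallons of air there are about 525 of oxygen, 1975 of nitrogen, and but a single gallon of carbonic acid.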
The oxygen in the air is necessary to the respiration of animals,
and to the support of combustion (burning of bodies). The nitrogen
serves principally to dilute the strength, so to speak, of the pure
oxygen, in which gas, if unmixed, animals would live and
combustibles burn with too great rapidity. The small quantity of
carbonic acid affords an important part of their food to plants, and
the watery vapour in the air aids in keeping the surfaces of animals
and plants in a moist and pliant state; while, in due season, it
descends also in refreshing showers, or studs the evening leaf with
sparkling dew.
There is a beautiful adjustment in the constitution of the
atmosphere to the nature and necessities of living beings. The
energy of the pure oxygen is tempered, yet not too much weakened,
by the admixture of nitrogen. The carbonic acid, which alone is
noxious to life, is mixed in so minute a proportion as to be harmless
to animals, while it is still beneficial to plants; and when the air is
overloaded with watery vapour, it is provided that it shall descend in
rain. These rains at the same time serve another purpose. From the
surface of the earth there are continually ascending vapours and
exhalations of a more or less noxious kind; these the rains wash out
