Instant Access To Advances in Computer Vision: Proceedings of The 2019 Computer Vision Conference (CVC), Volume 1 Kohei Arai Ebook Full Chapters
Advances in Intelligent Systems and Computing 943
Kohei Arai
Supriya Kapoor Editors
Advances in
Computer
Vision
Proceedings of the 2019 Computer
Vision Conference (CVC), Volume 1
Advances in Intelligent Systems and Computing
Volume 943
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland
Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro,
Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications
on theory, applications, and design methods of Intelligent Systems and Intelligent
Computing. Virtually all disciplines such as engineering, natural sciences, computer
and information science, ICT, economics, business, e-commerce, environment,
healthcare, life science are covered. The list of topics spans all the areas of modern
intelligent systems and computing such as: computational intelligence, soft comput-
ing including neural networks, fuzzy systems, evolutionary computing and the fusion
of these paradigms, social intelligence, ambient intelligence, computational neuro-
science, artificial life, virtual worlds and society, cognitive science and systems,
Perception and Vision, DNA and immune based systems, self-organizing and
adaptive systems, e-Learning and teaching, human-centered and human-centric
computing, recommender systems, intelligent control, robotics and mechatronics
including human-machine teaming, knowledge-based paradigms, learning para-
digms, machine ethics, intelligent data analysis, knowledge management, intelligent
agents, intelligent decision making and support, intelligent network security, trust
management, interactive entertainment, Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are
primarily proceedings of important conferences, symposia and congresses. They
cover significant recent developments in the field, both of a foundational and
applicable character. An important characteristic feature of the series is the short
publication time and world-wide distribution. This permits a rapid and broad
dissemination of research results.
Editors
Advances in
Computer Vision
Proceedings of the 2019 Computer Vision
Conference (CVC), Volume 1
Editors
Kohei Arai
Saga University
Saga, Saga, Japan

Supriya Kapoor
The Science and Information (SAI) Organization
Bradford, West Yorkshire, UK
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
It gives us great pleasure to welcome all the participants of the Computer Vision
Conference (CVC) 2019, organized by The Science and Information
(SAI) Organization, based in the UK. CVC 2019 offered a place for participants to
present and discuss their innovative recent and ongoing research and its
applications. The prestigious conference was held on 25–26 April 2019 in
Las Vegas, Nevada, USA.
Computer vision is a field of computer science that enables computers to
identify, see and process information much as humans do, and to provide
appropriate results. Nowadays, computer vision is developing at a fast pace
and has gained enormous attention.
The volume and quality of the technical material submitted to the conference
confirm the rapid expansion of computer vision and CVC’s status as its flagship
conference. We believe the research presented at CVC 2019 will contribute to
strengthening the success of computer vision technologies in industrial,
entertainment, social and everyday applications. The conference participants
came from different regions of the world, with backgrounds in either academia
or industry.
The published proceedings have been divided into two volumes, covering a
wide range of topics in Machine Vision and Learning, Computer Vision
Applications, Image Processing, Data Science, Artificial Intelligence, Motion and
Tracking, 3D Computer Vision, Deep Learning for Vision, etc. These papers were
selected from 371 submissions and benefited from the guidance and help of
many experts, scholars and participants during the preparation of the
proceedings. Here, we would like to give our sincere thanks to those who made
great efforts and offered support during the publication of the proceedings.
After rigorous peer review, 118 papers were published, including 7 poster papers.
Many thanks go to the Keynote Speakers for sharing their knowledge and
expertise with us and to all the authors who have spent the time and effort to
contribute significantly to this conference. We are also indebted to the organizing
committee for their great efforts in ensuring the successful implementation of the
conference. In particular, we would like to thank the technical committee for their
constructive and enlightening reviews on the manuscripts in the limited timescale.
We hope that all the participants and interested readers benefit scientifically
from this book and find it stimulating. See you at the next SAI Conference,
with the same amplitude, focus and determination.
Regards,
Kohei Arai
1 Introduction
The ever-increasing modernisation of signal systems and electrification of major
railway lines lead to increasingly complex railway environments. These
environments require advanced management systems which incorporate a detailed
representation of the network, its assets and surroundings. The aim of such systems
is to facilitate automation of processes and improved decision support, minimising
the requirement for expensive and inefficient on-site activities. Fundamental
requirements are detailed maps and databases of railway assets, such as poles,
signs, wires, switches, cabinets and signalling equipment, as well as the surrounding
environment, including trees, buildings and adjacent infrastructure. The detailed
maps may also form the basis for a simulation of the railway as seen from the
train operator's viewpoint. Such simulations/videos are used in the training of
train operators and support personnel. Ideally, the maps should be constantly
updated to ensure currency of the databases as well as to facilitate detailed
documentation and support of maintenance and construction processes in the
© Springer Nature Switzerland AG 2020
K. Arai and S. Kapoor (Eds.): CVC 2019, AISC 943, pp. 1–15, 2020.
https://doi.org/10.1007/978-3-030-17795-9_1
2 G. Karagiannis et al.
2 Literature Review
with the track bed. The overall average detection rate achieved by this method is
96.4%. The main drawback of this approach is that it depends on a sophisticated
type of data that requires special equipment to capture and is more complicated
to process.
Agudo et al. [1] presented a real-time method for recognising railway
speed-limit and warning signs in videos. After noise removal, Canny edge detection
is applied, and an optimised Hough voting scheme detects the sign region of
interest on the edge image. Candidate central points of signs are obtained from
the gradient directions and distances of points on the edge of a shape. Recognition
is then achieved by applying shape criteria, since the signs are either circular,
square or rectangular. The method scored 95.83% overall classification accuracy.
However, even though the dataset had more than 300,000 video frames, it
contained only 382 ground-truth signs, and the authors do not report detection
accuracy.
3 Data Analysis
3.1 Dataset
In our case, the dataset consists of 47,912 images acquired in 2013, showing the
railway from Brisbane to Melbourne in Australia. The images were acquired on
three different days, in the morning, under similar sunlight conditions. The camera
used is a Ladybug3 spherical camera system, and the images are panoramic views
of size 5400 × 2700 pixels. The images were annotated manually by the production
team of the company², resulting in 121,528 instances of railway signs and signals.
² Second Affiliation.
Fig. 1. Instances of signals and signs. First row, from left to right (class name in
parentheses when it differs from the description): speed sign (Sign S), unit sign (Sign
U), speed standard letter (Sign Other), position light main (PL 2W), position light
separately (PL 2W & 1R), signal with two elements (Signal2 F). Second row: diverging
left-right speed sign (Sign LR), diverging left speed sign (Sign L), signal number (Sign
Signal), signal with one element (Signal1 F), signal with three elements (Signal3 F),
signal with four elements (Signal4 F).
Fig. 2. Instances of speed signs at different lighting conditions, viewpoints and scales.
The samples are originally separated into twenty-five classes. Each class is in
fact a subclass of one of two parent classes, signs and signals. Fifteen classes
correspond to signals: three different types of position lights with their back and
side views; signals with one, two, three or four elements (lights), with their side
and back views; and other types of signals. Ten classes correspond to different
types of signs: speed signs, diverging left speed signs, diverging right speed signs,
diverging left-right speed signs, unit signs, signal number signs, other signs, back
views of circular signs, back views of diamond signs and back views of rectangular
signs. Of the total samples, 67,839 correspond to signals and 53,689 to signs.
Figure 1 shows some instances of signs and signals; each instance corresponds to
a different class. We can see the high similarity among the classes, both for signs
and signals. Specifically, diverging left, diverging right, diverging left-right and
regular speed signs are very similar, especially when they are smaller than the
examples shown in Fig. 1. Similarly, even for humans it is often hard to distinguish
between signals with three or four elements when they are small. All examples
shown here are larger than their average size in the dataset, for clarity.
Figure 2 shows examples of regular speed signs with different viewpoint, size
and illumination. These examples illustrate the scale, viewpoint and illumination
variation of the objects but at the same time the high similarity among the
classes.
Railway Signs and Signals 5
Fig. 3. Instances of signals with one element (Signal1). All four examples belong to
the same class even though they have different characteristics. From left to right:
some have a long top cover (first and second), no top cover at all (third) or a short
cover (last). Also, some have no back cover (first), while others have a circular
(second and third) or semicircular (last) one.
Fig. 4. Amount of sample instances per class before merging some classes. The
imbalance among the classes is very high.
Fig. 5. Amount of sample instances per class. After merging some classes with similar
characteristics we end up with a more balanced dataset.
Fig. 6. Amount of samples according to their size in pixels². Most samples (89%) are
smaller than 50² pixels.
A reason behind the high amount of very small objects in our dataset is that
the data was acquired by driving on a single track. However, in many sectors
along the railway there are multiple parallel tracks and the signs and signals
corresponding to these tracks appear in the dataset only in small sizes since
the camera never passed close to them. One way to limit the small-object-size
problem in our dataset is to split each panoramic image into smaller patches with
high overlap, small enough to achieve a less challenging relative size between
objects and image. Specifically, in our approach, each image is split into 74
patches of size 600² pixels with a 200-pixel overlap on each side. Even at this
level of fragmentation, a 50² object corresponds to 0.69% of a patch. A
consequence of splitting the images into smaller patches is that the same object
may now appear in more than one patch due to the overlap. In fact, while the
panoramic images contain 121,528 object instances, the extracted patches contain
203,322 instances. The numbers shown in Fig. 6 correspond to the objects existing
on the patches.
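The patch-splitting step above can be sketched as follows. The panorama and patch sizes (5400 × 2700, 600², 200-pixel overlap) come from the text; the exact tiling that yields the paper's 74 patches per image is not specified, so the stride and edge-covering policy here are assumptions.

```python
def split_into_patches(width=5400, height=2700, patch=600, overlap=200):
    """Return top-left (x, y) corners of overlapping square patches."""
    stride = patch - overlap  # 400 px between consecutive patch origins
    xs = list(range(0, width - patch + 1, stride))
    ys = list(range(0, height - patch + 1, stride))
    # Ensure the right and bottom image edges are covered by a final patch.
    if xs[-1] != width - patch:
        xs.append(width - patch)
    if ys[-1] != height - patch:
        ys.append(height - patch)
    return [(x, y) for y in ys for x in xs]

corners = split_into_patches()
```

Cropping each window from the image (e.g. `img[y:y+600, x:x+600]` for an array-backed image) then yields the patches; objects falling in the overlap bands appear in several of them, which is the duplication effect described above.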
4 Methodology
For the detection of signs and signals, we applied the Faster R-CNN presented
by Ren et al. in [2], using ResNet-101 [21] as the feature extractor. We decided to
implement this approach mainly motivated by its high performance on competitive
datasets such as Pascal VOC 2007 (85.6% mAP), Pascal VOC 2012 (83.8%
mAP) and COCO (59% mAP). The main drawback of this approach compared
to other acknowledged object detection methods, such as YOLO [16] or SSD, is
its higher processing time (three to nine times slower depending on the
implementation [17]). However, the sacrifice in time pays off in accuracy, especially
on this dataset, since the method performs better on small objects [2].
Here we will present some key points of Faster R-CNN. First, Faster R-
CNN is the descendant of Fast R-CNN [2] which in turn is the descendant of
R-CNN [13]. As their names imply, Fast and Faster R-CNNs are more efficient
implementations of the original concept in [13], R-CNN. The main elements of
Faster R-CNN are: (1) the base network, (2) the anchors, (3) the Region Proposal
Network (RPN) and (4) the Region based Convolutional Neural Network (R-
CNN). The last element is actually Fast R-CNN, so with a slight simplification
we can state that Faster R-CNN = RPN + Fast R-CNN.
The base network is a (usually deep) CNN. This network consists of multiple
convolutional layers that perform feature extraction by applying filters at
different levels. A common practice [2,14,16] is to initialize training using a
pre-trained network as the base network. This gives the network a more
realistic starting point compared to random initialization. Here we use ResNet
[21]. The second key element of this method is the anchors, a set of predefined
possible bounding boxes at different sizes and aspect ratios. The goal of using
anchors is to capture the variability of scales and sizes of objects in the images.
Here we used nine anchors, consisting of three different sizes (15², 30² and 60²
pixels) and three different aspect ratios (1:1, 1:2 and 2:1).
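The nine anchor shapes above (three areas crossed with three aspect ratios) can be generated as a minimal sketch. Interpreting the sizes as areas in pixels and the rounding policy are assumptions; real Faster R-CNN implementations additionally tile these shapes over every feature-map location.

```python
import math

def make_anchors(areas=(15**2, 30**2, 60**2), ratios=(1.0, 0.5, 2.0)):
    """Return the nine (width, height) anchor shapes: 3 areas x 3 ratios."""
    anchors = []
    for area in areas:
        for ratio in ratios:  # ratio = height / width
            w = math.sqrt(area / ratio)
            h = w * ratio
            anchors.append((round(w), round(h)))
    return anchors
```

For example, `make_anchors()` contains the square anchors (15, 15), (30, 30) and (60, 60) alongside the six elongated 1:2 and 2:1 variants of the same areas.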
Next, we use the RPN, a network trained to separate the anchors into foreground
and background based on the Intersection over Union (IoU) ratio between
the anchors and a ground-truth bounding box (foreground if IoU > 0.7 and
background if IoU < 0.1). Thus, only the most relevant anchors for our dataset
are used. It accepts as input the feature-map output of the base model and
creates two outputs: a 2 × 9 box-classification layer containing the foreground
and background probability for each of the nine anchors, and a 4 × 9
box-regression layer containing the offset values on the x and y axes of the anchor
Fig. 7. Right: the architecture of Faster R-CNN. Left: the region proposal network
(RPN). Source: Figs. 2 and 3 of [2].
For the evaluation of detection, an overlap criterion between the ground-truth
and predicted bounding boxes is defined. If the Intersection over Union (IoU) of
these two boxes is greater than 0.5, the prediction is considered a True Positive
(TP). Multiple detections of the same ground-truth object are not considered
true positives; each predicted box is either a True Positive or a False Positive (FP).
Predictions with IoU smaller than 0.5 count as False Positives, and ground-truth
objects with no matching prediction count as False Negatives (FN). Precision is
defined as the fraction of correct detections over the total detections,
TP/(TP + FP). Recall is the fraction of correct detections over the total number
of ground-truth objects, TP/(TP + FN) [22]. For the evaluation of classification
and overall accuracy of our approach, we adopted mean Average Precision
(mAP) [23], a widely accepted metric [22]. For each class, the predictions
satisfying the overlap criterion are assigned to ground-truth objects in descending
order of confidence. The precision/recall curve is computed, and the average
precision (AP) is the mean value of the interpolated precision at eleven
equally spaced levels of recall [22]:
AP = (1/11) Σ_{r ∈ {0, 0.1, ..., 1}} p_interp(r)    (1)

where

p_interp(r) = max_{r̃ : r̃ ≥ r} p(r̃)    (2)
Then, the mean of the APs across all classes gives the mAP metric. This metric
was used as the evaluation method for the Pascal VOC 2007 detection challenge
and has since been the most common evaluation metric in object detection
challenges [18].
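The evaluation criteria above can be sketched directly: the IoU overlap test and the eleven-point interpolated AP of Eqs. (1) and (2). The (x1, y1, x2, y2) box format is an assumption, and a full evaluator would also perform the per-class, confidence-ranked matching described in the text.

```python
def iou(a, b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ap_11_point(recalls, precisions):
    """Eleven-point interpolated AP: mean over r in {0, 0.1, ..., 1} of the
    maximum precision observed at any recall level >= r (Eqs. (1)-(2))."""
    total = 0.0
    for i in range(11):
        r = i / 10
        candidates = [p for p, rec in zip(precisions, recalls) if rec >= r]
        total += max(candidates, default=0.0)
    return total / 11
```

A prediction with `iou(pred, gt) > 0.5` would count as a true positive under the criterion above; feeding the resulting precision/recall pairs into `ap_11_point` gives the per-class AP, and averaging over classes gives mAP.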
5 Results
The network was trained on a single Titan-X GPU for approximately two days.
We trained it for 200k iterations with a learning rate of 0.003
Fig. 8. Percentage of objects detected according to their size in pixels². The algorithm
performed very well on detecting objects larger than 400 pixels (more than 79% in
all size groups). On the other hand, it performed poorly on very small objects (less
than 200 pixels), detecting only 24% of them. The performance was two times better
on objects with size 200–400 pixels (57%), which is the most dominant size group with
about 45,000 samples (see Fig. 6).
and for 60k iterations with a learning rate of 0.0003. The model achieved an
overall precision of 79.36% on the detection task and a 70.9% mAP.
The precision level of detection is considered high given the challenges
imposed by the dataset, as presented in Sect. 3. Figure 8 shows the
detection performance of the algorithm with respect to the size of the samples.
We can see that the algorithm fails to detect very small objects: about 76%
of objects smaller than 200 pixels are not detected. A significantly better, but
still low, performance is observed for objects of 200–400 pixels in size (57%
detection rate). On the other hand, the algorithm performs uniformly well for objects
Fig. 9. Example of detection. Two successful detections and one false positive. We
can see the illumination variance even in a single image with areas in sunlight and in
shadow. The sign is detected successfully despite its small size and the partial occlusion
from the bush. The entrance of the tunnel was falsely detected as a signal with three
elements.
larger than 400 pixels (79% for sizes 400–600 and more than 83% for sizes larger
than 600 pixels). These results show that, in terms of object size, there is a
threshold above which the performance of Faster R-CNN is stable and unaffected
by the size of the objects. Therefore, if there are enough instances and the objects
are large enough, object size is not a crucial factor. In our case, this threshold
is about 500 pixels.
An interesting metric for our problem would be the number of unique physical
objects detected. The goal of object detection in computer vision is usually to
detect all the objects that appear in an image. In mapping, most of the time, it is
important to detect the locations of the physical objects. If we have multiple
images showing the same physical object from different points of view and at
different scales, in mapping it would be sufficient to detect it at
Fig. 10. Example of detection. A successful detection of more complex object
structures. Different signals mounted on top of each other did not confuse the
algorithm.
The Project Gutenberg eBook of Elements of
agricultural chemistry and geology
This ebook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this ebook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.
Language: English
LECTURES
ON
AGRICULTURAL
CHEMISTRY AND GEOLOGY;
TO WHICH ARE ADDED,
CRITICAL NOTICES.
“A valuable and interesting course of lectures.”—
Quarterly Review.
“But it is unnecessary to make large extracts from a
book which we hope and trust will soon be in the
hands of nearly all our readers. Considering it as
unquestionably the most important contribution that
has recently been made to popular science, and as
destined to exert an extensively beneficial influence in
this country, we shall not fail to notice the forthcoming
portions as soon as they appear from the press.”—
Silliman’s American Journal of Science. Notice of Part I
of the American reprint.
“We think it no compliment to Professor Johnston
to say, that among our own writers of the present day
who have recently been endeavouring to improve our
agriculture by the aid of science, there is probably no
other who has been more eminently successful, or
whose efforts have been more highly appreciated.”—
County Herald.
“Prof. Johnston is one who has himself done so
much already for English agriculture, that to behold
him still in hot pursuit of the inquiry into what can be
done, supplies of itself a stimulus to further exertion
on the part of others.”—Berwick Warder.
ELEMENTS
OF
AGRICULTURAL
CHEMISTRY AND GEOLOGY.
BY
JAS. F. W. JOHNSTON, M.A., F.R.S.,
HONORARY MEMBER OF THE ROYAL ENGLISH AGRICULTURAL
SOCIETY, AND AUTHOR OF “LECTURES ON AGRICULTURAL
CHEMISTRY AND GEOLOGY.”
NEW-YORK:
WILEY AND PUTNAM.
MDCCCXLII.
J. P. Wright, Printer,
18 New Street, N. Y.
INTRODUCTION.
The scientific principles upon which the art of culture depends,
have not hitherto been sufficiently understood or appreciated by
practical men. Into the causes of this I shall not here inquire. I may
remark, however, that if Agriculture is ever to be brought to that
comparative state of perfection to which many other arts have
already attained, it will only be by availing itself, as they have done,
of the many aids which Science offers to it; and that, if the practical
man is ever to realize upon his farm all the advantages which
Science is capable of placing within his reach, it will only be when he
has become so far acquainted with the connection that exists
between the art by which he lives and the sciences, especially of
Chemistry and Geology, as to be prepared to listen with candour to
the suggestions they are ready to make to him, and to attach their
proper value to the explanations of his various processes which they
are capable of affording.
The following little Treatise is intended to present a familiar
outline of the subjects of Agricultural Chemistry and Geology, as
treated of more at large in my Lectures, of which the first Part is now
before the public. What in this work has necessarily been taken for
granted, or briefly noticed, is in the Lectures examined, discussed, or
more fully detailed.
Durham, 8th April, 1842.
CONTENTS.
CHAPTER I. page
Distinction between Organic and Inorganic Substances
—The Ash of Plants—Constitution of the Organic
Parts of Plants—Preparation and Properties of
Carbon, Oxygen, Hydrogen, and Nitrogen—
Meaning of Chemical Combination. 13
CHAPTER II.
Form in which these different Substances enter into
Plants—Properties of the Carbonic, Humic, and
Ulmic Acids; of Water, of Ammonia, and of Nitric
Acid—Constitution of the Atmosphere. 25
CHAPTER III.
Structure of Plants—Mode in which their Nourishment
is obtained—Growth and Substance of Plants—
Production of their Substance from the Food they
imbibe—Mutual Transformations of Starch, Sugar,
and Woody Fibre. 38
CHAPTER IV.
Of the Inorganic Constituents of Plants—Their
immediate Source—Their Nature—Quantity of
each in certain common Crops. 49
CHAPTER V.
Of Soils—Their Organic and Inorganic Portions—Saline
Matter in Soils—Examination and Classification of
Soils—Diversities of Soils and Subsoils. 67
CHAPTER VI.
Direct Relations of Geology to Agriculture—Origin
of Soils—Causes of their Diversity—Relation to
the Rocks on which they rest—Constancy in the
Relative Position and Character of the Stratified
Rocks—Relation of this Fact to Practical
Agriculture—General Characters of the Soils
upon these Rocks. 78
CHAPTER VII.
Soils of the Granitic and Trap Rocks—Accumulations
of Transported Sands, Gravels, and Clays—Use
of Geological Maps in reference to Agriculture
—Physical Characters and Chemical Constitution
of Soils—Relation between the Nature of the
Soil and the Kind of Plants that naturally grow
upon it. 103
CHAPTER VIII.
Of the Improvement of the Soil—Mechanical and Chemical
Methods—Draining—Subsoiling—Ploughing, and
Mixing of Soils—Use of Lime, Marl, and Shell-sand—
Manures—Vegetable, Animal, and Mineral Manures. 133
CHAPTER IX.
Animal Manures—Their Relative Value and Mode of
Action—Difference between Animal and Vegetable
Manures—Cause of this Difference—Mineral Manures—
Nitrates of Potash and Soda—Sulphate of Soda,
Gypsum, Chalk, and Quicklime—Chemical Action of
these Manures—Artificial Manures—Burning and
Irrigation of the Soil—Planting and Laying Down
to Grass. 165
CHAPTER X.
The Products of Vegetation—Importance of Chemical
quality as well as quantity of Produce—Influence
of different Manures on the quantity and quality
of the Crop—Influence of the Time of Cutting—
Absolute quantity of Food yielded by different Crops
—Principles on which the Feeding of Animals depends
—Theoretical and Experimental Value of different kinds
of Food for Feeding Stock—Concluding Observations. 216
ELEMENTS
OF
AGRICULTURAL CHEMISTRY, &c.
CHAPTER I.
Distinction between Organic and Inorganic Substances.—
The Ash of Plants.—Constitution of the Organic Parts of
Plants.—Preparation and Properties of Carbon,
Hydrogen, and Nitrogen.—Meaning of Chemical
Combination.