
Lecture Notes in Electrical Engineering 858

Deepak Gupta
Koj Sambyo
Mukesh Prasad
Sonali Agarwal
Editors

Advanced Machine Intelligence and Signal Processing
Lecture Notes in Electrical Engineering

Volume 858

Series Editors

Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli
Federico II, Naples, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán,
Mexico
Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, Munich, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore,
Singapore, Singapore
Rüdiger Dillmann, Humanoids and Intelligent Systems Laboratory, Karlsruhe Institute for Technology,
Karlsruhe, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Università di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid,
Madrid, Spain
Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität
München, Munich, Germany
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA,
USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Stanford University, Stanford, CA, USA
Yong Li, Hunan University, Changsha, Hunan, China
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martín, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra,
Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Sebastian Möller, Quality and Usability Laboratory, TU Berlin, Berlin, Germany
Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University,
Palmerston North, Manawatu-Wanganui, New Zealand
Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Japan
Luca Oneto, Dept. of Informatics, Bioengg., Robotics, University of Genova, Genova, Genova, Italy
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University,
Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Germany
Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
Walter Zamboni, DIEM - Università degli studi di Salerno, Fisciano, Salerno, Italy
Junjie James Zhang, Charlotte, NC, USA
The book series Lecture Notes in Electrical Engineering (LNEE) publishes the
latest developments in Electrical Engineering - quickly, informally and in high
quality. While original research reported in proceedings and monographs has
traditionally formed the core of LNEE, we also encourage authors to submit books
devoted to supporting student education and professional training in the various
fields and application areas of electrical engineering. The series covers classical and
emerging topics concerning:
• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS

For general information about this book series, comments or suggestions, please
contact [email protected].
To submit a proposal or request further information, please contact the Publishing
Editor in your country:
China
Jasmine Dou, Editor ([email protected])
India, Japan, Rest of Asia
Swati Meherishi, Editorial Director ([email protected])
Southeast Asia, Australia, New Zealand
Ramesh Nath Premnath, Editor ([email protected])
USA, Canada:
Michael Luby, Senior Editor ([email protected])
All other Countries:
Leontina Di Cecco, Senior Editor ([email protected])
** This series is indexed by EI Compendex and Scopus databases. **

More information about this series at https://link.springer.com/bookseries/7818


Deepak Gupta · Koj Sambyo · Mukesh Prasad · Sonali Agarwal
Editors

Advanced Machine Intelligence and Signal Processing

Editors

Deepak Gupta
Department of Computer Science and Engineering
National Institute of Technology Arunachal Pradesh (NITAP)
Itanagar, Arunachal Pradesh, India

Koj Sambyo
Department of Computer Science and Engineering
National Institute of Technology Arunachal Pradesh (NITAP)
Itanagar, Arunachal Pradesh, India

Mukesh Prasad
School of Computer Science, Faculty of Engineering and IT
University of Technology Sydney
Sydney, NSW, Australia

Sonali Agarwal
Department of Information Technology
Indian Institute of Information Technology (IIIT)
Allahabad, Uttar Pradesh, India

ISSN 1876-1100    ISSN 1876-1119 (electronic)
Lecture Notes in Electrical Engineering
ISBN 978-981-19-0839-2    ISBN 978-981-19-0840-8 (eBook)
https://doi.org/10.1007/978-981-19-0840-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Singapore Pte Ltd. 2022
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface

This book presents a selection of refereed papers from the 3rd International Conference
on Machine Intelligence and Signal Processing (MISP-2021), held at the National Institute
of Technology Arunachal Pradesh, Jote, India, from 23 to 25 September 2021. Its coverage
explores and discusses different aspects of data mining, artificial intelligence, optimization,
machine learning methods/algorithms, signal processing theory and methodologies, and their
applications. The significance, uniqueness, and technical excellence of all contributions were
considered, and submissions went through a double-blind review process to ensure that author
names and affiliations were unknown to the technical programme committee.
We appreciate the assistance of the advisory committee members, and we thank
all of the keynote speakers for sharing their knowledge and skills with us. Professor
Pinakeswar Mahanta, Director, NIT Arunachal Pradesh, deserves special gratitude
for his insightful advice and support. We would like to express our gratitude to the
organising committee as well as the many additional volunteers who helped make
this conference a success. We appreciate EasyChair’s assistance with the submission
and evaluation process. Finally, we would like to thank Springer for their enthusiastic
participation and prompt publication of the papers.

Itanagar, India Dr. Deepak Gupta


Itanagar, India Dr. Koj Sambyo
Sydney, Australia Dr. Mukesh Prasad
Allahabad, India Dr. Sonali Agarwal

Contents

Leukocyte Subtyping Using Convolutional Neural Networks
for Enhanced Disease Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Mulagala Sandhya, Tanmay Dhopavkar, Dilip Kumar Vallabhadas,
Jayaprakash Palla, Mulagala Dileep, and Sriramulu Bojjagani
Analysis of Fifteen Approaches to Automated COVID-19 Detection
Using Radiography Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Kartik Soni, Abhaya Kirtivasan, Rishwari Ranjan, and Somya Goyal
Human Emotion Classification Based on Speech Enhancement
Using Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Ch V. V. S. Srinivas, Srilakshmi Gubbala, and N. Durga Naga Lakshmi
OXGBoost: An Optimized eXtreme Gradient Boosting Algorithm
for Classification of Breast Cancer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Pullela SVVSR Kumar, Praveen Neti, Dirisala J. Nagendra Kumar,
G. S. N. Murthy, R. V. S. Lalitha, and Mylavarapu Kalyan Ram
Texture Unit Pattern Approach for Fabric Classification . . . . . . . . . . . . . . 61
Garapati S. N. Murthy, Pullela SVVSR Kumar, Tadi Satya Kumari,
T. Veerraju, Dirisala J. Nagendra Kumar, and Chakka SVVSN Murthy
Skeleton-Based Human Action Recognition Using Motion
and Orientation of Joints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Sampat Kumar Ghosh, M. Rashmi, Biju R. Mohan,
and Ram Mohana Reddy Guddeti
An Empirical Study on Graph-Based Clustering Algorithms Using
Schizophrenia Genes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Rajdeep Baruri, Tanmoy Kanti Halder, and Anindya Das
Hybrid Model of Multifactor Analysis with RNN-LSTM to Predict
Stock Price . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Neema Singh, Biju R. Mohan, and Nagaraj Naik


Deep Learning Approach to Deal with E-Waste . . . . . . . . . . . . . . . . . . . . . . 123
Mehwish Naushin, Anant Saraswat, and Kumar Abhishek
Comparative Study on Different Classifiers for Gait-Based Human
Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Margaret Kathing, Rishang Kumar Brahma, and Sarat Saharia
Internet of Things: A Survey on Fused Machine Learning-Based
Intrusion Detection Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Priyanka Gupta, Lokesh Yadav, and Deepak Singh Tomar
Image Fusion-Based Watermarking in IWT-SVD Domain . . . . . . . . . . . . . 163
Om Prakash Singh and Amit Kumar Singh
On Twin Bounded Support Vector Machine with Pinball Loss . . . . . . . . . 177
P. Anagha and S. Balasundaram
Traffic Rule Violation Detection System: Deep Learning Approach . . . . . 191
Mandar Kathane, Shubham Abhang, Abhishek Jadhavar,
Amit D. Joshi, and Suraj T. Sawant
Template-Based Thinning Method for Handwritten Gujarati
Character’s Strokes and its Classification for Writer-Dependent
Gujarati Font Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Preeti P. Bhatt, Jitendra V. Nasriwala, and Rakesh R. Savant
A Model on Intrusion Detection Using Firefly Algorithm
and Classical Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Prabhu Ranjan and Sunil Kumar Singh
Deep Learning Framework Based on Audio–Visual Features
for Video Summarization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
M. Rhevanth, Rashad Ahmed, Vithik Shah, and Biju R. Mohan
A Comparative Study of Deep Learning Models for Word-Sense
Disambiguation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Arpit Jadiya, Thejaswini Dondemadahalli Manjunath,
and Biju R. Mohan
MultiYOLO: Learning New YOLO Categories Without Full
Retraining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Theofanis Gkaragkanis and Aristidis Likas
Vehicle Routing Problem Using Reinforcement Learning: Recent
Advancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Syed Mohib Raza, Mohammad Sajid, and Jagendra Singh
Disease Detection of Plant Leaves with the Aid of Region Growing
and Neural Network: A Comparative Analysis . . . . . . . . . . . . . . . . . . . . . . . 281
Kalicharan and Sonajharia Minz

A Web Application for Early Prediction of Diabetes Using
Artificial Neural Network: A Comparative Study . . . . . . . . . . . . . . . . . . . . . 299
Deep Dodhiwala, Abhishek Swaminathan, Shagun Choudhari,
and Sheetal Chaudhari
Machine Learning Equipped Web-Based Disease Prediction
and Recommender System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Harish Rajora, Narinder Singh Punn, Sanjay Kumar Sonbhadra,
and Sonali Agarwal
Comparing the Predictive Accuracy of Machine Learning
Algorithms for Neonatal Mortality Risk Classification . . . . . . . . . . . . . . . . 325
A. Sivarajan, A. Bala Aditya, and E. Sivasankar
Extraction of Waterbody Using Object-Based Image Analysis
and XGBoost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Aditya P. Chatufale, Priti P. Rege, and Abhishek Bhatt
Deep Learning-Based Image Retrieval in the JPEG Compressed
Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Shrikant Temburwar, Bulla Rajesh, and Mohammed Javed
Fake News Detection Using Genetic Algorithm-Based Feature
Selection and Ensemble Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
K. M. Nikitha, Ryan Rozario, Chinmayan Pradeep,
and V. S. Ananthanarayana
Application of SPSS for Forecasting of Renewable Energy
as Future Energy in India . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Tapas Kumar Benia, Subhadip Goswami, and Abhik Banerjee
Robust Multi-task Least Squares Twin Support Vector Machines
for Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Reshma Rastogi and Mustaffa Hussain
Detection of Plant Leaf Disease Directly in the JPEG Compressed
Domain Using Transfer Learning Technique . . . . . . . . . . . . . . . . . . . . . . . . . 407
Atul Sharma, Bulla Rajesh, and Mohammed Javed
Angle-Based Feature Learning in GNN for 3D Object Detection
Using Point Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
Md. Afzal Ansari, Md. Meraz, Pavan Chakraborty,
and Mohammed Javed
Increasing the Versatility of Leaky ReLU Using a Nonlinear
Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
Parminder Pal Singh Bedi, Hirdan Aggarwal, Siddharth Narula,
and Sejal Jain

Automatic Recognition of Road Cracks Using Gray-Level
Co-occurrence Matrix and Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . 443
Deeksha Arya, Sanjay Kumar Ghosh, and Durga Toshniwal
Feature Extraction and Fusion of Multiple Convolutional Neural
Networks for Firearm Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
Anamika Dhillon and Gyanendra K. Verma
Real-Time Speech Recognition Using Convolutional Neural
Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Mandar Nitin Kakade and Dharmraj B. Salunke
Hybrid Combination of Machine Learning Techniques
for Diagnosis of Liver Impairment Disease in Clinical Decision
Support System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
Likha Ganu and Biri Arun
Legal Text Analysis Using Pre-trained Transformers . . . . . . . . . . . . . . . . . . 493
M. P. Prajwal and M. Anand Kumar
Price Prediction of Agricultural Products Using Deep Learning . . . . . . . . 505
Mahesh Kankar and M. Anand Kumar
Classification of Brain Hemorrhage Using Fine-Tuned Transfer
Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Arpita Ghosh, Badal Soni, Ujwala Baruah, and R. Murugan
Deep Learning-Based Emotion Classification of Hindi Text
from Social Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Sruthi S. Kumar, S. Sachin Kumar, and K. P. Soman
Multilingual Speech Recognition for Indian Languages . . . . . . . . . . . . . . . 545
R. Priyamvada, S. Sachin Kumar, H. B. Barathi Ganesh, and K. P. Soman
Study of Machine Learning Techniques to Mitigate Fraudulent
Transaction in Credit Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Sayan Sikder, Shubhasree Sarkar, Eric G. Varghese,
and Pritam Bhattacharjee
Exploring Unet Architecture for Semantic Segmentation
of the Brain MRI Scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
Sakshi Goyal and Deepali M. Kotambkar
Analysis of Machine Learning Model-Based Cardiovascular
Disease Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
Smita and Ela Kumar
Cluster-Based Probabilistic Neural Networks for Outlier Detection
Via Autoencoder Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
Bhanu Chander and Kumaravelan

Long-Term Average Spectral Feature-Based Parkinson’s Disease
Detection from Speech . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Anshul Lahoti, Gurugubelli Krishna, Juan Rafel Orozco Arroyave,
and Anil Kumar Vuppala
Automatic Detection of Lung Cancer from Lung CT Images Using
3D Convolution Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
Lakshipriya Gogoi and Md. Anwar Hussain
An Experiment on Speech-to-Speech Translation of Hindi
to English: A Deep Learning Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
Sakshi Singh, Thoudam Doren Singh, and Sivaji Bandyopadhyay
Identification of Biomarker Genes for Human Immunodeficiency
Virus Using Ensemble Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
Bikash Baruah, Ishan Ayus, and Manash P. Dutta
Machine Learning Approaches for Rumor Detection on Social
Media Platforms: A Comprehensive Survey . . . . . . . . . . . . . . . . . . . . . . . . . . 649
Vaishali U. Gongane, Mousami V. Munot, and Alwin Anuse
Characterization of Common Thoracic Diseases from Chest X-ray
Images Using CNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
Shardul Fating and Deepali M. Kotambkar
Deep Learning Models for Tomato Plant Disease Detection . . . . . . . . . . . . 679
Vishakha Kathole and Mousami Munot
A Deep Learning Framework for Anaphora Resolution from Social
Media Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
Baidya Nath Saha, Apurbalal Senapati, and Utpal Garain
Autoencoder-Based Speech Features for Manipuri Dialect
Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
Thangjam Clarinda Devi and Kabita Thaoroijam
A Survey on Image Processing and Machine Learning Techniques
for Detection of Pulmonary Diseases Based on CT Images . . . . . . . . . . . . . 707
Priya Sawant and R. Sreemathy
Emotion Classification Using Xception and Support Vector
Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
Arpan Phukan and Deepak Gupta
Energy-Based Least Squares Projection Twin SVM . . . . . . . . . . . . . . . . . . . 735
M. A. Ganaie and M. Tanveer
Task Scheduling Using Deep Q-Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
Gaurang Velingkar, Jason Krithik Kumar, Rakshita Varadarajan,
Sidharth Lanka, and M. Anand Kumar

Land Use Land Cover Classification Using Different ML
Algorithms on Sentinel-2 Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
Shiwani Bayas, Suraj Sawant, Ishwari Dhondge, Priyanka Kankal,
and Amit Joshi
Convolution Filter-Based Deep Neural Networks for Timely
Diagnosis of COVID-19 Disease with Chest Radiographs . . . . . . . . . . . . . . 779
Avnish Panwar, Devyani Rawat, Palak Aggarwal, and Siddharth Gupta
DNN Based Short Term Load Forecasting of Individual Household
with Real and Synthetic Data-Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
Praveen Tiwari, Pinakeswar Mahanta, and Gaurav Trivedi
Indian Currency Classification and Counterfeit Detection Using
Deep Learning and Image Processing Approach . . . . . . . . . . . . . . . . . . . . . . 801
Ritvik Muttreja, Himanshu Patel, Mayank Goyal, Santosh Kumar,
and Anurag Singh
Deep Residual Network-Based Sentiment Analysis of Amazon Cell
Phone Reviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
Nehal Ahmad and Kuan-Ting Lai
Deep Learning-Based Topic Categorization of Tamil Social Media
Text Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
M. Geerthana Anusha, S. Sachin Kumar, and K. P. Soman
Geometrical Feature Extraction of CAD Models with Fully
Convolutional Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
Gokul S. Jain, H. B. Barathi Ganesh, N. S. Kamal, V. V. Sajith Variyar,
V. Sowmya, and K. P. Soman
Image Forgery Detection Using Multi-Layer Convolutional Neural
Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
Simranjot Kaur and Rajneesh Rani
Feature Extraction from Plant Leaves and Classification of Plant
Health Using Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
A. Abisha and N. Bharathi
About the Editors

Dr. Deepak Gupta is Assistant Professor at the Department of Computer Science
and Engineering of National Institute of Technology, Arunachal Pradesh. He received
the Ph.D. degree in Computer Science and Engineering from the Jawaharlal Nehru
University, New Delhi, India. His research interests include support vector machines,
ELM, RVFL, KRR, and other machine learning techniques. He has published over 40
refereed journal and conference papers of international repute. His publications have
around 495 citations with an h-index of 12 and i10-index of 16 (Google Scholar,
24/07/2021). He is Recipient of the 2017 SERB-Early Career Research Award in
Engineering Sciences which is the prestigious award of India at early career level.
He is Senior Member of IEEE and currently Active Member of many scientific
societies like IEEE, SMC, CIS, CSI, and many more. He is currently Member of
the editorial review board of Applied Intelligence. He has also served as Reviewer of
many scientific journals and various national and international conferences. He is
currently Principal Investigator (PI) or Co-PI of 02 major research projects funded
by the Science & Engineering Research Board (SERB), Government of India.

Dr. Koj Sambyo received his Ph.D. in Computer Science and Engineering from
National Institute of Technology, Arunachal Pradesh, in 2017 and M.Tech. degree
in Computer Science and Engineering from Rajiv Gandhi University, Arunachal
Pradesh, India, in 2011. Currently, he is working as Assistant Professor in the Department
of Computer Science and Engineering at National Institute of Technology, Arunachal
Pradesh. His research activities mainly focus on cloud computing and natural language
processing. He has authored numerous papers in refereed international journals and
conferences.

Dr. Mukesh Prasad is Senior Lecturer at the School of Computer Science in the
Faculty of Engineering and IT at UTS, where he has made substantial contributions to
the fields of machine learning, artificial intelligence, and the Internet of Things.
Mukesh's research interests also include big data, computer vision, brain-computer
interfaces, and evolutionary computation. He is also working in the evolving and
increasingly important fields of image processing, data analytics, and edge computing,


which promise to pave the way for the evolution of new applications and services in
the areas of health care, biomedical, agriculture, smart cities, education, marketing,
and finance. His research has appeared in numerous prestigious journals, including
IEEE/ACM Transactions, and at conferences, and he has written more than 100
research papers. Mukesh started his academic career as Lecturer with UTS in 2017
and became Core Member of the University’s world-leading Australian Artificial
Intelligence Institute (AAII), which has a vision to develop theoretical foundations
and advanced technologies for AI and to drive progress in related areas. His research is
backed by industry experience, specifically in Taiwan, where he was Principal Engi-
neer (2016–2017) at the Taiwan Semiconductor Manufacturing Company (TSMC).
There, he developed new algorithms for image processing and pattern recognition
using machine learning techniques. He was also Postdoctoral Researcher leading
a big data and computer vision team at National Chiao Tung University, Taiwan
(2015). Mukesh received an M.S. degree from the School of Computer and Systems
Sciences at the Jawaharlal Nehru University in New Delhi, India (2009), and a Ph.D.
from the Department of Computer Science at the National Chiao Tung University in
Taiwan (2015).

Dr. Sonali Agarwal is working as Associate Professor in the Information Technology
Department of Indian Institute of Information Technology (IIIT), Allahabad,
India. She received her Ph.D. degree at IIIT, Allahabad, and joined as faculty at IIIT,
Allahabad, where she is teaching since October 2009. She holds Bachelor of Engi-
neering (B.E.) degree in Electrical Engineering from Bhilai Institute of Technology,
Bhilai, (C.G.) India, and Masters of Engineering (M.E.) degree in Computer Science
from Motilal Nehru National Institute of Technology (MNNIT), Allahabad, India.
Her main research interests are in the areas of big data, big data mining, complex
event processing system, support vector machines, stream analytics, and software
engineering. She has hands-on experience with stream computing and complex event
processing platforms such as Apache Spark, Apache Flink, and ESPER. She has
focused in the last few years on the research issues in data mining application espe-
cially in big data, stream computing, and smart cities. She has attended many national
and international conferences/workshops, and she has more than 70 research papers
in national/international journals and conferences. She has completed her Master’s
Thesis work at Liverpool John Moores University (LJMU), Liverpool, UK, during
November 1999 to February 2000 under Indo-UK REC Project, a collaboration in
between School of Computing and Mathematical Science, LJMU Liverpool, UK,
and Motilal Nehru National Institute of Technology, Allahabad. She has also taken
part in Indo Swiss Joint Research Program (ISJRP), and full financial support was
awarded to carry out joint research work and to gain knowledge regarding the recent
research and experimental facility/work at EPFL, Switzerland, from December 2011
to January 2012. She has also visited Thailand and Sri Lanka for attending/organizing
international-level conferences/workshops. She is also a Member of IEEE, ACM, and
CSI, and she is supervising three Ph.D. scholars and several graduate and undergraduate
students in the big data mining and stream analytics domain.
Leukocyte Subtyping Using Convolutional Neural Networks
for Enhanced Disease Prediction

Mulagala Sandhya, Tanmay Dhopavkar, Dilip Kumar Vallabhadas,
Jayaprakash Palla, Mulagala Dileep, and Sriramulu Bojjagani

Abstract Deep learning has shown its potential in a variety of medical applications and
has come to be relied on as a step ahead of traditional machine learning models.
Moreover, implementations of these models such as convolutional neural networks (CNNs)
find extensive application in the field of medicine, which usually involves processing and
analysis of large datasets. This paper aims to create a CNN model which can solve the
problem of white blood cell subtyping, a daunting task in the clinical processing of blood.
The manual classification of white blood cells in laboratory is a time-consuming
process which gives rise to the need for an automated process to perform the task.
A CNN-based machine learning model is developed to classify the leukocytes into
their proper subtypes by performing tests on a dataset of around twelve thousand
images of leukocytes and their types, and a wide range of parameters is evaluated.
This model can automatically classify white blood cells to save manual labor and
time and to improve efficiency. Further, pretrained models like Inception-v3, VGGNet
and AlexNet are used for the classification, and their performance is compared and
analyzed.

Keywords CNN · Blood diagnosis · Leukocytes · Inception · VGGNet · AlexNet

M. Sandhya (B) · T. Dhopavkar · D. K. Vallabhadas · J. Palla
Department of CSE, National Institute of Technology, Warangal, India
e-mail: [email protected]

M. Dileep
Department of CSE, Vishnu Institute of Technology, Bhimavaram, India

S. Bojjagani
Department of CSE, SRM University, Amaravati, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
D. Gupta et al. (eds.), Advanced Machine Intelligence and Signal Processing, Lecture
Notes in Electrical Engineering 858, https://doi.org/10.1007/978-981-19-0840-8_1

1 Introduction

The primary step in the diagnosis of various illnesses is the detection and subtyping
of a patient's blood. Procedures that automatically perform this task have ground-breaking
utility in the medical field. Leukocytes, usually referred to as white blood
cells (WBCs), are among the most important components of our internal functioning [1].
They are responsible for fighting infection. WBCs are categorized into five
subtypes: neutrophils (60–70%), eosinophils (1–4%), basophils (0.5–1%), lympho-
cytes (20–40%) and monocytes (2–8%) [2]. The ability to record the count and
category of leukocytes and any changes in them acts as an indicator for different ill-
nesses. Increased levels of eosinophils and monocytes can be suggestive of bacterial
infestation. High levels of lymphocytes can point toward the presence of diseases such as
leukemia (a type of blood cancer). On the other hand, fewer neutrophils could indicate
other ailments. Hence, creating a procedure for exact counting and classification of
leukocytes into their subtypes is considered as a prominent problem. The detection
and distinguishing of diverse WBCs is important due to its vast influence in clinical
functions [3].
The rest of the paper is organized as follows: Sect. 2 reviews the relevant past work
and describes the motivation for the present work. Section 3 provides background on
CNNs, their layers and certain pretrained models. Section 4 presents the proposed
methodology, Sect. 5 presents the experimental analysis and results, and Sect. 6 gives
the conclusion and future directions in this area of research.

2 Related Work and Motivation

Robust image processing algorithms have been applied for detecting the nuclei and
classifying WBCs in blood smear images based on features of the nuclei. New image-enhancing
techniques are used to manage variations in illumination. Color variations
in nuclei have been managed by using the TissueQuant method [4]. The devices that conduct
blood tests detect WBCs based on traditional techniques like pre-processing,
segmentation, feature extraction, feature selection and classification. Moreover, a
computer-aided automated system has been proposed for easy identification and
location of WBC types in blood images. Additionally, region-based CNNs have
been used for classifying the blood cells [5]. An artificial neural network is used to enhance the
nucleus by a method called intensity maxima. Classification is done on the basis of
features extracted from segmented images [6]. In addition, the naive Bayes classifier
with Laplacian correction has been used for giving a robust and efficient method to
the problems involving multi-category classification of peripheral Leishman blood
stain images [7]. Principal component analysis and neural network are used for auto-
matic counting, segmentation and classification of WBCs [8]. A classification scheme
using color information and morphology has been proposed for isolating and classifying
WBCs in manually prepared, Wright-stained, peripheral blood smears from
whole-slide images [9]. Moreover, SVM classifier and neural network have been
implemented for classification of white blood cells. The segmentation and feature
extraction are also done for classification purpose. WBC segmentation is a two-step
process carried out on the HSV equivalent of the image, using k-means clustering

followed by the EM algorithm [10]. Later, pretrained architectures such as AlexNet,
VGG-16, GoogleNet and ResNet were used for feature extraction. Minimum redun-
dancy maximum relevance (MRMR) methods have been used for selecting features
from extracted features. Extreme learning machine (ELM) has been proposed for
classification of WBCs [11]. Later, convolutional neural network architectures have
been proposed for subclass separation of WBC images. AlexNet architecture gives
dominant recognition rate compared to other CNN architectures used [12]. Moreover,
semi-automated approaches have been proposed for manual extraction and selection
of features and an automatic classification based on microscopic images of blood
smear. Deep learning method is used for automation of the complete process using
CNN for binary and multi-class classification [13].
The source of our motivation to carry out this work was derived from the extent
to which the leukocytes affect our health and well-being. Neutrophils are pow-
erful WBCs that destroy bacteria and fungi. Increased amount of neutrophils is
called neutrophilic leukocytosis. Neutrophilic leukocytosis is a way by which the
immune system counters the infection, injury, inflammation and various types of
leukemia. Eosinophils are accountable for the destruction of cancer cells. High lev-
els of eosinophils indicate a parasitic infection and asthma. Basophils have the task
of alerting the body to the persistent infections by releasing certain chemicals into
the blood, mostly to fight with allergies. High levels of basophils can be observed
in people with underactive thyroid disease. Lymphocytes are responsible to produce
antibodies. Lymphocytes protect the body from viruses, bacteria and various other
bodily threats. Increased level of lymphocytes is called lymphocytic leukocytosis.
Monocytes are primarily accountable for fighting and disintegrating germs or bacteria
present in the body. The proportion of these cells in human blood increases or decreases
depending on which disease affects the body. A high leukocyte count indicates that the
body's internal mechanism is working to remove an external entity. A high level of monocytes
indicates a chronic infection, cancer or a blood disorder. A low WBC count indicates
viral infections and autoimmune disorders. The classification of white blood cells
is important to diagnose various diseases. Owing to all the points mentioned above,
it becomes important to use the prowess of CNNs in medical image processing to
improve the efficiency of the leukocyte classification process and make it error-free.

3 Background

A brief discussion of CNN and its layers is given in this section. The architectures
of Inception-v3, VGGNet, AlexNet are briefly discussed.

3.1 Convolutional Neural Network

The predominant techniques of learning in deep learning applications are supervised
and unsupervised learning [14]. In the supervised method, the network learns
through labeled inputs. For every training example, we have a set of inputs and
single or multiple designated outputs. The primary goal is to reduce the classification
error by computing the proper output for the input data during the training
process. In the unsupervised method, the labels are excluded from the training data.
Success is usually measured by detecting if the network is capable of increasing or
reducing a cost function that is in relation to it. Many applications working in the
area of pattern recognition are dependent on supervised way of learning. A CNN is
constituted by neurons that are optimized by the method of learning. On reception of
some input, the neurons carry out an task such as scalar product which is accompanied
by some function which is not of linear nature. The complete neuron cluster now
provides just one score function known as weight of the image vectors that is given
as input to the previous output. The final layer contains loss functions correspondent
to each and every class. CNN is used to recognize patterns in images. It is helpful
for encoding image features into the inherent structure, thereby making the model
more appropriate for any tasks relevant to images and further reducing the parameters
required for designing the model.

3.2 Layers of CNN

Convolution is basically a multiplication operation between all the pixels and the
values of the filter, which is a kind of matrix. The operation of convolution is
used for creating multiple images from the native one, resulting in feature enhancement
of the fed-in image and lending more strength to the process of classification.

Fig. 1 CNN model diagram



This network consists of a sequence of layers, where every layer has a separate task.
Figure 1 shows the model diagram of CNN consisting of various layers.
Convolutional Layer This layer retrieves the features present in the input image.
A neural network is not aware of where the features will match exactly in the
image. Hence, it searches for them in the image with the help of filters. A filter
corresponds to a specific feature. A CNN implements convolution operations using a
filter which slides over the image and multiplies the filter value with the corresponding
pixel of the image. This continues for the leftover filters giving us a collection of
filtered images which is the final output. An auxiliary function known as ReLU
is carried out right after each convolution. This non-linear function is applied to
each and every pixel to introduce non-linearity into the network; it is an element-wise
operation that changes the negative pixel values to zero in the feature map.
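As a concrete illustration of the sliding-filter operation and the ReLU step described above, the following short Python sketch convolves a toy single-channel image with a 3 × 3 filter; the image values and the edge-detecting kernel are arbitrary choices for illustration, not taken from this paper.

# Minimal numpy sketch of convolution plus ReLU: a 3x3 filter slides over a
# single-channel image, each window is multiplied element-wise by the filter
# and summed, and negative responses are set to zero.
import numpy as np

def conv2d_relu(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0)            # ReLU: negative values become zero

image = np.random.rand(60, 80)           # toy single-channel image
kernel = np.array([[1, 0, -1],           # a simple edge-detecting filter
                   [1, 0, -1],
                   [1, 0, -1]])
print(conv2d_relu(image, kernel).shape)  # (58, 78)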
Pooling Layer This layer can be called downsampling whose job is to decrease the
size of all filtered images and retain the knowledge that is considered valuable. It
can be applied in different forms like average or max-pooling, etc. The outcome of
this layer has the exact same quantity of images, although they all consist of lesser
pixels than original. It is helpful for dealing with pre-processing step efficiently.
Fully Connected Layer This operation is performed following a series of convo-
lutional and pooling layers. The shape of convolutional features is transformed into
vector format that is then provided to a fully connected layer. Fully connected layers
are considered the foundation of traditional neural networks. They treat the
input as a single vector rather than a two-dimensional array, and they establish a
relationship between every neuron of the layer before them and every neuron of the
layer after them. The outcome as
gathered from the layers of convolutional and pooling contains various sophisticated
features that can be used by the fully connected layers for classifying the image to
the appropriate label based upon the learning data.

3.3 Inception-v3

The issue of overfitting increasingly affects the functioning of neural networks. In
order to mitigate this problem, this model uses multi-sized kernels operating at the
same level, such that the cluster of neurons becomes horizontally large instead
of having vertical growth. The first Inception model was GoogleNet, but later
versions of the model are named Inception-v“n”, where “n” denotes the version.
The primary goal of this model is to behave as a feature extractor at a multitude
of scales by using (1 × 1, 3 × 3, 5 × 5) convolutions in the same module; the
outcomes of these kernels are then grouped together prior to
sending them to the following layer of the network [15] (Fig. 2).
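A minimal Keras sketch of this parallel-kernel idea is given below; it builds one Inception-style block with 1 × 1, 3 × 3 and 5 × 5 branches whose outputs are concatenated. The filter counts and the added pooling branch are illustrative assumptions, not the exact Inception-v3 configuration.

# Illustrative Inception-style block: parallel 1x1, 3x3 and 5x5 convolutions
# (plus a pooling branch) whose outputs are concatenated along the channel
# axis.  Filter counts here are arbitrary choices for the sketch.
from tensorflow.keras import layers, Input, Model

def inception_block(x, f1=16, f3=32, f5=8):
    b1 = layers.Conv2D(f1, (1, 1), padding='same', activation='relu')(x)
    b3 = layers.Conv2D(f3, (3, 3), padding='same', activation='relu')(x)
    b5 = layers.Conv2D(f5, (5, 5), padding='same', activation='relu')(x)
    pool = layers.MaxPooling2D((3, 3), strides=(1, 1), padding='same')(x)
    return layers.concatenate([b1, b3, b5, pool], axis=-1)

inp = Input(shape=(60, 80, 3))
out = inception_block(inp)
Model(inp, out).summary()

Stacking several such blocks widens the network horizontally, which is exactly the growth pattern described above.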

Fig. 2 Inception-v3 model architecture

Fig. 3 VGGNet model architecture

3.4 VGGNet

This model is distinctly characterized by its simple structure, which stacks just 3 × 3
convolutions on top of one another to make the model deeper, while max-pooling
reduces the spatial size. The network contains two fully connected layers consisting
of 4096 nodes each. They are followed by a softmax classifier [16] (Fig. 3).

3.5 AlexNet

AlexNet is the CNN used particularly in the deep learning applications to computer
vision. It is famous for winning the ImageNet LSVRC competition held in 2012 by a
big margin. AlexNet is deeper with more filters per layer and more than one convo-

Fig. 4 AlexNet model architecture

lutional layers. It consists of (11 × 11), (5 × 5), (3 × 3) convolutions, max-pooling,
data augmentation, dropout, ReLU activations, etc. It has ReLU activations after
each convolutional and fully connected layer. It contains eight weighted layers, the
first five are convolutional layers, and rest are fully connected ones. The output of
the last fully connected layer is given to a softmax classifier that produces a distribution
over the class labels. The network maximizes the average, across training cases, of the
log-probability of the correct label under the prediction distribution. In short, AlexNet contains five convolu-
tional layers, three fully connected layers, and ReLU is applied after each of these
layers. Dropout is applied before the first and the second fully connected layer [17]
(Fig. 4).

3.6 Transfer Learning

Transfer learning is the process of applying the information that is acquired while
working out a problem to a new problem which is co-related to the previous one.
Figure 5 shows the process of transfer learning. Deep CNNs can offer innovative
support for overcoming several challenges faced in classification. The lack of proper
training data is an extremely common issue in using deep CNN models that usually
require a big amount of data to perform well. Also, the collection of a big dataset
is a tedious process and more so now. Hence, the process of training is very costly.
The complicated network models usually take many days for training with the help
of multiple machines that tend to be exorbitant. Very few people are able to train
a CNN afresh because they are rarely able to gather a sufficient dataset. Hence,
the technique of transfer learning is presently used for overcoming the problem of a
small dataset [18]. This technique is very effective in solving the issue of a
lack of training data. With the help of the weights of a pretrained model for continuous retrieval of

Fig. 5 Transfer learning

distinctive features from the image, most of the issues are taken care of. This method
can be applied with the help of training a classifier and fine-tuning.
Training Classifier A base network is trained on a database which is then re-purposed
to either learn features or shift them to a target network. This target network has to be
trained on a target database. This process usually works well if the features extracted
are suitable to both base and target jobs, not specific to a single job. The usual method
is truncating the final layer, i.e., softmax layer [18] of the model and replacing it with
a proprietary softmax layer which is pertinent to the task at hand. For example, the
ImageNet model is characterized by a softmax layer having 1000 classes. If the job is
to classify across ten classes, the updated softmax classifier of the pretrained model
will have ten classes rather than having 1000. Backpropagation is applied to the
neural cluster for regulation and tuning of pretrained model weights. The process
of cross-validation is carried out so as to improve the generalization ability of the
model.
Fine-Tuning In the process of fine-tuning, the plan is not just to replace and retrain
the classifier used. It also focuses on regulating the pretrained model weights with
the help of continual backpropagation. Every layer of the convolutional network can
be regulated, or some of the initial layers can be constant, and just the upper layers
can be regulated in order to avoid overfitting. This method is born out of the fact
that the preliminary features of a convolutional network are non-specific ones that
are useful to most of the tasks, but the later layers in the model are more relevant to
the image details for all the labels of the native data.
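The two steps just described (replacing the softmax head and then gently fine-tuning) can be sketched in Keras as follows. The base network, input size, number of frozen layers and learning rate below are illustrative assumptions rather than the settings used in this work.

# Sketch of transfer learning: reuse ImageNet weights, replace the 1000-class
# softmax head with a 4-class head, freeze the convolutional base, then
# unfreeze the top layers for fine-tuning with a small learning rate.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models, optimizers

base = InceptionV3(weights='imagenet', include_top=False,
                   input_shape=(160, 160, 3), pooling='avg')
base.trainable = False                      # feature-extraction phase

model = models.Sequential([
    base,
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(4, activation='softmax'),  # four leukocyte classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Fine-tuning phase: unfreeze only the last few layers so the pretrained
# weights are adjusted gently (the cut-off is an arbitrary choice here).
base.trainable = True
for layer in base.layers[:-30]:
    layer.trainable = False
model.compile(optimizer=optimizers.Adam(1e-5),
              loss='categorical_crossentropy', metrics=['accuracy'])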
Softmax Classifier This uses the cross-entropy loss function. It allows computing
probabilities for all classes. The scores shown in Fig. 6 are treated as unnormalized
log scores; the softmax function converts them into normalized probabilities, which
should be high for the correct class and low for the others. For the example in Fig. 6,
the final loss is 1.04 using the natural logarithm.
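The numbers quoted above can be checked with a few lines of Python. The unnormalized scores below are an assumption chosen so that the example reproduces the reported loss of 1.04, with the third class taken as the correct one.

# Numerical check of the softmax cross-entropy example quoted above.  The
# unnormalized scores are assumed values (chosen so the loss matches the
# reported 1.04); the correct class is taken to be the third one.
import numpy as np

scores = np.array([-2.85, 0.86, 0.28])   # assumed unnormalized class scores
correct_class = 2

probs = np.exp(scores) / np.sum(np.exp(scores))   # softmax normalization
loss = -np.log(probs[correct_class])              # cross-entropy loss

print(np.round(probs, 3))   # approx. [0.016 0.631 0.353]
print(round(loss, 2))       # approx. 1.04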

Fig. 6 Softmax classifier

Dropout Layer This is a regularization technique for neural network models in which
neurons are randomly selected and ignored during training. The aim of this layer is to
prevent overfitting. The contribution of the dropped neurons to the activation of
downstream neurons is temporarily removed on the forward pass, and no weight
updates are applied to them on the backward pass.

4 Proposed Work

4.1 Dataset

The dataset used in this paper is the BCCD dataset [19, 20], which is publicly available.
The dataset contains 12,500 augmented images of blood cells. Approximately
2500 training images and 600 test images for each of four different cell types are
grouped into four folders according to blood cell type. The images are accompanied
by their corresponding cell labels (csv). The blood cell types are eosinophil,
lymphocyte, monocyte and neutrophil [21]. The training set contains 2497 eosinophil
images, 2483 lymphocyte images, 2478 monocyte images and 2499 neutrophil images.
Similarly, the test set contains 623 eosinophil images, 620 lymphocyte images,
620 monocyte images and 624 neutrophil images. Figure 7 shows a sample of the
blood subtypes in this dataset.
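A typical way to feed such a folder-per-class dataset into the models described next is sketched below with Keras generators; the directory names are assumptions based on the usual layout of the public blood-cells dataset and may need to be adapted.

# Sketch of loading the blood-cell images with Keras generators.  The folder
# names ('dataset2-master/images/TRAIN' and '.../TEST', one subfolder per cell
# type) are an assumed layout, not taken from this paper.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    'dataset2-master/images/TRAIN',
    target_size=(60, 80),        # image size used by the simple CNN model
    batch_size=32,
    class_mode='categorical')    # eosinophil, lymphocyte, monocyte, neutrophil

test_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    'dataset2-master/images/TEST',
    target_size=(60, 80),
    batch_size=32,
    class_mode='categorical',
    shuffle=False)               # keep order so predictions align with labels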

4.2 Simple CNN Model and Its Parameters

The layout of our simple CNN model is shown in Fig. 8. All the model details are
listed below, and a code sketch of this architecture follows the list:
Two Convolutional Layers
• First layer: Kernel size: 3 × 3, number of output filters: 32
• Second layer: Kernel size: 3 × 3, number of output filters: 64
• Activation function: ReLU

Fig. 7 Blood subtypes in dataset

Fig. 8 Our simple CNN architecture

• Stride: 1
• Input image size: 60 × 80 (3 channels).
Pooling Layer
• Pooling type: Maximize
• Pool size: 2 × 2
• Dropout: 0.25.
Hidden Layer
• Number of nodes: 128

• Activation function: ReLU.


Output Layer
• Number of nodes: Number of classes
• Activation function: Softmax.
Loss Function
• Cross-entropy.
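The following Keras sketch is an approximate reconstruction of the architecture listed above (two 3 × 3 convolutions with 32 and 64 filters, 2 × 2 max-pooling with dropout 0.25, a 128-node hidden layer and a softmax output trained with cross-entropy on 60 × 80 RGB inputs); the optimizer choice is an assumption, since it is not specified above.

# Approximate reconstruction of the simple CNN described in this section.
from tensorflow.keras import layers, models

num_classes = 4
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(60, 80, 3)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',                    # optimizer is an assumption
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
# Training could then use the generators from the dataset sketch in Sect. 4.1,
# e.g. model.fit(train_gen, validation_data=test_gen, epochs=10)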

4.3 Inception-v3 CNN and Its Parameters

The Inception-v3 pretrained model is combined with a fully connected structure
similar to the one in the simple CNN model (Fig. 8) and trained with an input image size
of (60 * 80 * 3).

4.4 VGGNet CNN and Its Parameters

We need to stack the convolutional layers in order of increasing number of filters;
i.e., if layer one consists of 16 filters, then layer two must contain 16 or more
filters. Another important consideration is that in every VGG network, all the filters
are of size (3 × 3). The idea behind this is that two (3 × 3) filters can completely
cover the area of what a (5 × 5) filter can cover and also that two (3 × 3) filters are
much less costly than a single (5 × 5) filter in the sense of multiplications involved.
The model should be able to take the input image of size (60 * 60 * 3) and should
be able to detect the image class which would be one of the four classes. By taking
up the basic idea of VGGNet, we have built our own custom VGGNet classifier to
classify the blood cells.
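Since the exact depth and filter counts of the custom VGGNet are not listed here, the sketch below only illustrates the stated rules: 3 × 3 filters throughout, filter counts that never decrease from one block to the next, a (60 * 60 * 3) input and a four-class output. The specific layer sizes are assumptions for illustration.

# Illustrative VGG-style classifier following the rules stated above.
from tensorflow.keras import layers, models

def vgg_block(model, filters, convs=2):
    # a block of stacked 3x3 convolutions followed by 2x2 max-pooling
    for _ in range(convs):
        model.add(layers.Conv2D(filters, (3, 3), padding='same',
                                activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))

model = models.Sequential()
model.add(layers.InputLayer(input_shape=(60, 60, 3)))
for filters in (16, 32, 64):          # filter counts never decrease
    vgg_block(model, filters)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(4, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])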

4.5 AlexNet CNN and Its Parameters

The original AlexNet architecture takes input image size as (227 * 227 * 3), and it is
used for classification of images which will be among 1000 labels, but in our case,
we are taking input image size as (60 * 60 * 3), and we need to classify the images
among only four classes. Hence, the original architecture is not apt for direct use,
and we need to make some changes to it. By following its basic idea, a custom
AlexNet architecture is implemented.

5 Results and Discussion

5.1 Evaluation Parameters

Accuracy, precision, recall and F1-score are the four main evaluation parameters
used to measure the performance of CNNs. In the light of our classification problem, the
true and false positives and negatives have been defined as follows (a sketch of computing
these metrics from the model's predictions is given after the list):
• True Positive: A WBC accurately classified into either class—eosinophil, neu-
trophil, lymphocyte or monocyte—by the medical professional as well as the
model.
• False Positive: A WBC accurately classified by the model but not the medical
professional.
• False Negative: A WBC accurately classified by the medical professional but not
the model.
• True Negative: A leukocyte cell neither classified by medical professional nor
classified by the model.
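Given a trained model and the test generator from the earlier sketches, these four metrics (and the per-class breakdown reported in the tables below) can be computed with scikit-learn as sketched here; the variable names model and test_gen refer to those earlier assumed sketches.

# Sketch of computing accuracy, precision, recall and F1-score per class.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

y_true = test_gen.classes                            # ground-truth labels
y_pred = np.argmax(model.predict(test_gen), axis=1)  # predicted class indices

print(classification_report(y_true, y_pred,
                            target_names=list(test_gen.class_indices)))
print(confusion_matrix(y_true, y_pred))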

5.2 Simple CNN Results

The results of our simple CNN model are given below:


Classification over four classes The model classified the images into four classes,
namely neutrophil, eosinophil, monocyte and lymphocyte with an accuracy of 85.4%.
Table 1 represents the evaluation scores over four classes. The accuracy curve, loss
curve and confusion matrix are given in Fig. 9.
Classification over two classes The model classified the images into two classes,
namely mononuclear and polynuclear with an accuracy of 96.2%. The evaluation
scores over two classes are given in Table 2. The accuracy curve, loss curve and
confusion matrix are given in Fig. 10.

Table 1 Evaluation over four classes using simple CNN model


Precision Recall F1-score Support
Neutrophil 0.67 0.84 0.74 624
Eosinophil 0.85 0.81 0.83 623
Monocyte 0.98 0.77 0.86 620
Lymphocyte 0.99 1.00 1.00 620
Accuracy 0.85 2487
Macro avg 0.87 0.85 0.86 2487
Weighted avg 0.87 0.85 0.86 2487

Fig. 9 Accuracy curve, loss curve and confusion matrix over four classes using simple CNN model

Table 2 Evaluation over two classes using simple CNN model


Precision Recall F1-score Support
Mononuclear 0.99 0.93 0.96 1240
Polynuclear 0.94 0.99 0.96 1247
Accuracy 0.96 2487
Macro avg 0.96 0.96 0.96 2487
Weighted avg 0.96 0.96 0.96 2487

Fig. 10 Accuracy curve, loss curve and confusion matrix over two classes using simple CNN model

5.3 Inception-v3 Results

The Inception-v3 model classified the leukocytes with an accuracy of 89.51%. The
evaluation scores of Inception-v3 are given in Table 3. The accuracy curve, loss curve
and confusion matrix are given in Fig. 11.

5.4 VGGNet Results

The VGGNet model classified the leukocytes with an accuracy of 80.26%. The other
recorded metrics were as follows:

Table 3 Inception-v3 evaluation parameters


Precision Recall F1-score Support
Neutrophil 0.78 0.82 0.80 624
Eosinophil 0.86 0.93 0.89 623
Monocyte 0.97 0.84 0.90 620
Lymphocyte 0.99 1.00 0.99 620
Accuracy 0.90 2487
Macro avg 0.90 0.90 0.90 2487
Weighted avg 0.90 0.90 0.90 2487

Fig. 11 Accuracy curve, loss curve and confusion matrix using Inception-v3

Fig. 12 Accuracy curve, loss curve and heatmap of VGGNet

• Precision: 0.8170
• Recall: 0.7917
• F1-score: 0.7949.
Figure 12 shows the accuracy curve, loss curve and heatmap obtained using the
VGGNet model to classify the WBCs.

5.5 AlexNet Results

The AlexNet model classified the leukocytes with a low accuracy of 64.7%. The
other metrics are as given below:

Fig. 13 Accuracy curve, loss curve and heatmap of AlexNet

• Precision: 0.7019
• Recall: 0.6470
• F1-score: 0.6412.
Figure 13 shows the accuracy curve, loss curve and heatmap obtained using the
AlexNet model to classify the WBCs.
The reason why AlexNet shows lower accuracy is the filter size in its convolutional
layers. In our custom model, the first convolutional layer uses (5 × 5) filters, whereas
a 3 × 3 filter captures most of the information in the image across all the channels
while keeping the size of the convolutional layers consistent with the size of the
image.

5.6 Comparative Analysis

The four different models used to classify the leukocytes gave varying results. It
was observed that Inception-v3 was the best performing model out of the four in
terms of accuracy, precision, recall and F1-score. It was followed by our simple
CNN model and VGGNet model. The AlexNet model fared poorly compared to the
other models. Table 4 shows the comparison of the models based on the various
performance metrics.

Table 4 Different models comparison


Model Accuracy Precision Recall F1-score
Simple CNN 0.854 0.87 0.85 0.86
Inception-v3 0.895 0.90 0.90 0.90
VGGNet 0.803 0.82 0.79 0.79
AlexNet 0.647 0.70 0.65 0.64

6 Conclusion

In this paper, features are extracted from blood cell images, and the resulting models
were trained, tested and validated. The use of convolutional neural networks (CNNs)
in medical image processing is immense, as this study demonstrates. The leukocytes are classified
into four subtypes, namely neutrophils, eosinophils, monocytes and lymphocytes
by building CNN models. Initially, a simple CNN model is used for classification.
The concept of transfer learning is employed in the later models. This is achieved
by using pretrained models such as Inception-v3, VGGNet and AlexNet. The results
of all the models are recorded by measuring the various performance metrics. These
results are then compared with each other and analyzed to rank the performance.
This gives an insight into how different models perform on the same dataset and also
suggests what models can perform better in a real-world scenario for classification
of leukocytes. The highest achieved accuracy of 89.51% proves that this method can
fare very well in real-time classification of white blood cells. Nature-inspired
optimization methods can further be used to find optimal hyperparameter values,
such as the appropriate number of layers, to improve the accuracy of the CNN models
in leukocyte subtyping.

References

1. Doan, C.A.: The white blood cells in health and disease. Bull. N. Y. Acad. Med. 30(6), 415
2. Rezatofighi, S.H., Soltanian-Zadeh, H.: Automatic recognition of five types of white blood
cells in peripheral blood. Comput. Med. Imaging Graphics 35(4), 333–343 (2011)
3. Su, M.-C., Cheng, C.-Y., Wang, P.-C.: A neural-network-based approach to white blood cell
classification. Sci. World J. (2014)
4. Hegde, R.B., Prasad, K., Hebbar, H., Singh, B.M.K.: Development of a robust algorithm for
detection of nuclei and classification of white blood cells in peripheral blood smear images. J.
Med. Syst. 42(6), 110 (2018)
5. Kutlu, H., Avci, E., Özyurt, F.: White blood cells detection and classification based on regional
convolutional neural networks. Med. Hypotheses 135, 109472 (2020)
6. Manik, S., Saini, L.M., Vadera, N.: Counting and classification of white blood cell using
artificial neural network. In: 2016 IEEE 1st International Conference on Power Electronics,
Intelligent Control and Energy Systems, pp. 1–5. IEEE (2016)
7. Mathur, A., Tripathi, A.S., Kuse, M.: Scalable system for classification of white blood cells
from Leishman stained blood stain images. J. Pathol. Inform. (2013)
8. Nazlibilek, S., Karacor, D., Ercan, T., Sazli, M.H., Kalender, O., Ege, Y.: Automatic segmen-
tation, counting, size determination and classification of white blood cells. Measurement 55,
58–65 (2013)
9. Ramesh, N., Dangott, B., Salama, M.E., Tasdizen, T.: Isolation and two-step classification of
normal white blood cells in peripheral blood smears. J. Pathol. Inform. 3 (2012)
10. Sinha, N., Ramakrishnan, A.G.: Automation of differential blood count. In: Conference on
Convergent Technologies for Asia-Pacific Region, vol. 2, pp. 547–551. IEEE (2003)
11. Özyurt, F.: A fused CNN model for WBC detection with MRMR feature selection and extreme
learning machine. Soft Comput. 1–10 (2019)
12. Togacar, M., Ergen, B., Sertkaya, M.E.: Subclass separation of white blood cell images using
convolutional neural network models. Elektron. Elektrotech. 25(5), 63–68 (2019)

13. Sharma, M., Bhave, A., Janghel, R.R.: White blood cell classification using convolutional
neural network. In: Soft Computing and Signal Processing, pp. 135–143. Springer, Singapore
(2019)
14. Aggarwal, C.C.: Neural Networks and Deep Learning. Springer, Cham
15. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architec-
ture for computer vision. In: Proceedings of International Conference on Computer Vision and
Pattern Recognition (2016)
16. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A.,
Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet large scale visual recognition
challenge. IJCV (2015)
17. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional
neural networks. Advances in Neural Information Processing Systems, vol. 25 (NIPS 2012)
18. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge, MA (2016)
19. https://github.com/Shenggan/BCCD_Dataset
20. https://www.kaggle.com/paultimothymooney/blood-cells
21. Özyurt, F.: A fused CNN model for WBC detection with MRMR feature selection and extreme
learning machine. Soft Comput. (2020)
Analysis of Fifteen Approaches
to Automated COVID-19 Detection Using
Radiography Images

Kartik Soni, Abhaya Kirtivasan, Rishwari Ranjan, and Somya Goyal

Abstract The COVID-19 pandemic has caused economic, physiological, and
psychological harm to the world. A crucial step in the fight against COVID-19 is
therefore the highly efficient screening of patient cases. Conventional RT-PCR testing,
even though more reliable, cannot be done on every patient, as the virus has spread
far faster than the world's resources can keep up with. One very important screening
approach that is being used across the globe is chest X-ray imaging. Since X-ray
facilities are readily obtainable in the healthcare systems of most countries across the
globe, and with more and more X-ray systems being digitized, the cost and time of
transportation are cut as well. Hence, if the detection of the virus in a CXR image can
be automated using AI techniques, it will save radiologists much of the time and effort
of going through hundreds of such images, and in some cases will also spare the need
for RT-PCR testing; since saving resources at this time is vital, automated detection
can be very effective. In this work, we explore, analytically discuss, and carry out a
comparative study of many ML and deep learning techniques that have been applied
to automated COVID-19 detection through chest X-rays (CXR). We carefully analyze
the papers and derive a set of key factors for discriminating among the methodologies,
classification techniques, approaches, and the results they yielded.

Keywords Chest X-rays · COVID-19 detection · Comparative study · Machine


learning · Deep learning

1 Introduction

The remarkable increase in the spread of the novel coronavirus has put shocking pres-
sure on healthcare systems across the world. COVID-19 began initially as reporting
of pneumonia with unknown causes in Wuhan, Hubei province of China. This virus
spread uncontrollably throughout the world due to a lack of therapeutic medication,
vaccines, and prior medical knowledge. Almost every healthcare system in the
world was caught off-guard because of the rate of spread of the virus coupled with
the limited resources in hospitals such as beds, ventilators, and PPE kits.

K. Soni (B) · A. Kirtivasan · R. Ranjan · S. Goyal
Manipal University Jaipur, Jaipur, Rajasthan, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
D. Gupta et al. (eds.), Advanced Machine Intelligence and Signal Processing, Lecture
Notes in Electrical Engineering 858, https://doi.org/10.1007/978-981-19-0840-8_2
The key to containing the spread of the virus by infected individuals and saving lives
in this pandemic lies in the preemptive detection of COVID-19. The most widely used
test right now to detect COVID-19 is RT-PCR. Because of the low sensitivity of RT-PCR
tests, especially in mild cases, and most importantly, the lack of resources and time
to conduct the test on so many people, chest scans remain a vital way of detecting
early signs of COVID-19 in the patient, especially as X-ray facilities are readily
obtainable in the healthcare systems of most countries across the globe. Studies have
shown that visual abnormalities characteristic of a COVID-19 infection are present
in chest X-ray images.
Nonetheless, there are certain limitations to chest scans: time for image acquisi-
tion, cost for CT scans, unavailability in financially challenged areas.
At the same time, X-rays provide rapid triaging as they are cheaper, portable,
highly available for faster diagnosis, and pose less threat to the patient in the form
of radiation in comparison to CT scans. Due to the portable nature of X-rays, they
significantly reduce the risk of the spread of the virus via the transportation route of the
patient. The key factor in these radiological images to diagnose covid is the presence
of opacity-related findings. Ground glass (57%) and mixed attenuation (29%) are
the opacities most frequently reported [16]. During the early stages, ground glass is
minutely observed and may be difficult to see visually. Patchy or diffuse airspace
opacities are other subtle abnormalities that are difficult to observe and are usually
interpreted by a trained radiologist. It is said that CT and X-rays are successful methods to detect
COVID-19 coupled with RT-PCR. Radiology images are being widely used to detect
COVID-19 in patients in countries like Turkey, which faced a paucity of testing kits
at the onset of the pandemic. In some cases, CT/X-ray images have shown
changes in the images of lungs even prior to the onset of COVID-19 symptoms. The
biggest bottleneck faced here is the requirement for healthcare professionals to spend
their time interpreting the radiography images. As such, automated AI techniques will
help radiologists correctly interpret CXR images to detect COVID-19 cases more
rapidly and are vital, especially when the world finds itself without enough healthcare
personnel to fight this incredibly widespread pandemic and frontline workers find
themselves massively overworked. Furthermore, a lack of enough RT-PCR test kits
in comparison to the growing number of cases means further reliance on radiography
imaging, especially given the cost of the tests and the time needed to process the
results (Fig. 1).
As such, reliable and automated techniques that can interpret hundreds of images
with maximum accuracy in a short period are highly desired. Subtle abnormalities
which are present in CXR are difficult to spot even by trained radiologists. Consid-
ering the sheer number of reports to be reviewed and the limited number of trained
radiologists, human error due to exhaustion can play a factor in an incorrect diagnosis.
Automatic methods for identifying subtle abnormalities will significantly reduce the
mounting pressure on our healthcare system. Deep learning models can help us in
medical image analysis with significantly less time and human intervention to provide
a more precise diagnosis.

Fig. 1 Chest X-ray images of (a) person with normal lung with no infection (b) person with
bacterial pneumonia (c) person with viral pneumonia (d) person with COVID-19

Paper organization
Section 2 of the paper elucidates related works. Section 3 gives an overview of the
fifteen approaches individually before they are scrutinized and contrasted in
Sect. 4, the discussion and comparative analysis section. Section 5 concludes the
paper and discusses its future scope.

1.1 Motivation

The remarkable increase in the spread of the novel coronavirus has put shocking pres-
sure on healthcare systems across the world. COVID-19 began initially as reporting
of pneumonia with unknown causes in Wuhan, Hubei province of China. This virus
spread uncontrollably throughout the world due to a lack of therapeutic medication,
vaccines, and prior medical knowledge. Almost every healthcare system in the
world was caught off-guard because of the rate of spread of the virus coupled with
the limited resources in hospitals such as beds, ventilators, and PPE kits.

2 Related Work

In [1], by Wang et al., a human–machine-concerted design blueprint was used, and


using DarwinAI, the Covid-Net CNN architecture was built in less than 7 days based
on generator inquisitor pair (GSInquire 37) and was later audited, giving excel-
lent positive predictive value (PPV) and sensitivity, and the COVIDx dataset was open
sourced. In the work by Jain et al. in [2], 2 CNNs, ResNet50, and ResNet101 were
used, the former to classify among viral, bacterial, or normal, and the latter to clas-
sify a ‘viral’ image into COVID-19 or other viruses (non-covid). Transfer learning
was used on both CNNs, using a pre-trained CNN on the ImageNet database. In
[3], by Abraham et al., multiple pre-trained networks were used, and the CFS algo-
rithm for feature selection along with a forward selection-based search method,
SSFS was utilized to get the best subset of features. In combination with them, the
Bayesnet classifier gave the best accuracy. In [4], by Chandra et al., machine learning
approaches were preferred over deep learning ones since they can work with less
data as well as in constrained environments, in contrast with 'data-hungry' deep
learning algorithms. Outputs from five ML approaches, SVM, DT, KNN, NB, and
ANN, were considered, and classification happened using majority voting. In [5],
by Arpan et al., a pre-trained 121-layered DenseNet was used, and its final layers
were trained to adapt to COVID-19 detection. In [6], by Sara Hosseinzadeh
et al., they compared popular deep learning techniques for extracting features from
the CT and X-ray images for automatic COVID-19 classification. In [7], Sharma used
ML techniques to assess whether a CT scan should be conducted as a preliminary or
alternative test to RT-PCR. In [8], by Kadry et al., they
used a system of machine learning algorithms for the detection of COVID-19 using
CT scan images. In [9], Jain et al. analyzed the posteroanterior view of CXR images
for COVID-19 positive and negative patients and obtained very good results. In [10],
Ahishali et al. evaluated the ability of ML techniques for automated COVID-19
detection from CXR images as early as possible. In [11], by Ozturk et al., the model provides binary as well as multi-
category classification (COVID/not COVID and pneumonia, COVID or normal). The
model can be employed right off the bat to monitor patients. They chose DarkNet-19
as the starting point. In [12], by Mohd et al., they focused on transfer learning which
allows using already trained models to apply in domain-centric applications. They
used ResNet CNN to detect abnormalities in the X-rays.
In [13], by Khan et al., they used a CNN based on Xception CNN architec-
ture. This is trained to categorize the CXR images into four classes which are
pneumonia-viral, pneumonia-bacterial, COVID-19, and normal. In [14], by Asmaa
et al., decompose, transfer, and compose CNN was used which also takes into consid-
eration the irregularities in the image dataset. In the transfer learning stage, different
CNNs already pretrained on ImageNet were used, such as GoogleNet, ResNet,
SqueezeNet, and VGG19. In [15], Minaee et al. pointed out that due to the lack of a
large publicly available X-ray image dataset, CNN models cannot be trained from
scratch; instead, the final layers of models pretrained on the ImageNet database were
fine-tuned. Unlike traditional analysis, they used
end-to-end deep learning frameworks that predict COVID-19 directly from unpro-
cessed images without extracting features. Satpathy et al. in [17] used artificial intel-
ligence methods for mortality rate prediction in COVID-19 cases. Poonam et al. in
[18] analyzed factors that influence the spread of COVID-19. Sharma et al. in [19]
analyzed statistically the data of patient cases in Karnataka, India.

3 Overview

This section gives a brief introduction and overview of all fifteen papers individually.
Sara Hosseinzadeh et al. [6] collected publicly available CXR and CT images, and
in the following step preprocessed the dataset utilizing the usual normalization
strategies to improve the quality of the input data. Once the images were prepared,
CNN descriptors were used in the feature extraction step to extract the deep features
of every input image. During training, these features were then fed into ML classifiers
like random forest, AdaBoost, XGBoost, decision tree, bagging classifier, and
LightGBM to determine whether each image was a COVID-19 case or a control.
Finally, the performance was evaluated on test images. The best accuracy of 99%
was achieved when the bagging tree classifier was used along with the DenseNet-121
feature extractor. The next best accuracy, 98%, was obtained with the ResNet50
feature extractor used with LightGBM. The authors used a dataset of publicly
available X-rays from Dr. Joseph Cohen's GitHub repository, comprising 117 chest
X-rays and 20 CT images with a COVID-19 positive result. They also used 117 X-ray
images of healthy patients from the Kaggle chest X-ray images (Pneumonia) dataset
and 20 CT images of healthy patients from the Kaggle dataset. The RSNA Pneumonia
detection dataset includes a record of positive and normal cases. Sachin Sharma [7]
took CT scans of people with COVID-19 infection, viral pneumonia, and normal
healthy people and recorded them on a computer.
He then preprocessed the images, i.e., resized and cropped them, to extract the
effective lung regions before taking the dataset into the analysis. The following
properties were selected: ground glass opacities and pleural effusion, as they are
distinguishable in CTS. So, he created separate folders for each of the three discrete
categories. To train the machine, specialized computer vision software analogous to
the Microsoft Azure residual neural network (ResNet) architecture was used. After
receiving the results, they were compared with the actual status (COVID-19, other
viral pneumonia, or a normal healthy case) of the patient to verify the accuracy of
the model. He achieved an accuracy of 91% for COVID-19 classification using this
technique. All images were collected from official databases of various hospitals
in China, Italy, Moscow, and India. Approximately 2200 images were collected,
including 800 CT scans of COVID-19 infected patients, 600 CT images of patients
with viral pneumonia, and 800 CTs of healthy people.
Kadry et al. [8] performed a CT scan of the patient and acquired a three-
dimensional (3D) image of the lungs. As the 3D image is difficult to analyze, it
was converted into 2D sections for testing. Normal and COVID-19 class CT slices
are considered to test the effectiveness of the proposed MLS. Chaotic bat algorithm
and Kapur’s entropy (CBA+KE) were used to improve the image quality of the
infected part as they used a three-level threshold. Next, a two-stage threshold filter
was used to divide the images into regions of interest (ROI) and artifacts; then, a
feature extraction process was taken up to extract features from the initial image.
The dominant characteristics of each image type were selected using a statistical
test, and then, the selected characteristics are used for training, testing, and vali-
dating the classification system. In addition, a feature blending method is also being
considered to increase the accuracy of the classification. This paper gave an accu-
racy of 89.80%. Normal CT scans were taken from LIDC-IDRI and RIDER-TCIA.
COVID-19 class images were acquired, warning from the Radiopaedia database and
reference images. Rachna Jain et al. learning algorithms PA image of CXR scans
for patients who contracted COVID-19 and vice-versa. After clearing the images
and applying data processing, CNN models based upon deep learning were used,
and their performance was analyzed and compared. Xception, ResNeXt, and Incep-
tion V3 models were compared, and their accuracy was examined. When analyzing
results, the Xception model provided the maximum accuracy of 97.97% for detecting
COVID-19 for CXRs. Mete et al. [10] examined the ability of 4004 ML techniques
to detect COVID-19 from CXRs as early as possible. This study considered compact
classifiers and deep learning approaches. In addition, a newer compact classifier, the
convolutional support estimator network (CSEN), was used for the same because it
is suitable for misclassifying data. The CSEN variant of models gives the highest
sensitivity level around >97%, whereas DenseNet-121 gives a decreased sensitivity
with higher specificity. Wang et al. [1] presented a human–machine-concerted design
strategy, and using DarwinAI, the Covid-Net CNN architecture was built in less than
7 days based on generator inquisitor pair (GSInquire 37) and was later thoroughly
audited (checking if the right classification is being made for the right reasons, etc.),
giving excellent positive predict value (PPV) and sensitivity, and the COVIDx dataset
was open sourced. Generative synthesis tool was used to provide granular insights
into neural network performance. The machine-carried design exploration was done
based on human-specific design requirements (sensitivity and PPV >80%), ensuring
high sensitivity as well as limiting false positives. Unique characteristics of the
network architecture, such as PEPX models of design, selective far communication,
and great diversity in architecture, are allowed for a computationally efficient yet
effective Covid-Net. A high accuracy of 93.3% was achieved. The dataset used was
COVIDx, consisting of CXRs from people. Jain et al. [2] carried out deep networks
in two phases involving two ResNet CNNs–ResNet50 and ResNet101. The former,
with 50 layers, to classify among viral, bacterial pneumonia, or normal, and if the
image is classified as ‘viral’, it is then fed into the latter CNN to classify into COVID-
19 or other viruses (non-covid). Transfer learning was used on both CNNs, using
pre-trained CNNs on the ImageNet database, due to which the initial layers could
be frozen and only the final layers needed to be trained. Data augmenta-
tion techniques were employed to counter the class imbalance. The finding of the
optimal learning rate, as well as the overall implementation, was done in Python with
the assistance of the FastAi library. The dataset used consisted of chest X-rays of
normal healthy people, pictures of patients with bacterial pneumonia, and pictures
of viral pneumonia, from Cohen and Kaggle. Stage 1 model gave 93% accuracy,
and the Stage 2 model gave 97.77% accuracy. Abraham et al. [3] used multiple
pretrained networks, with the CFS algorithm for feature selection in combination
with a forward selection-based search method, SSFS, exploited to get the best
subset of features. In combination with them, the Bayesnet classifier proved to give
the best results, especially with multi-CNNs. Dataset: Cohen et al. dataset of 560 CXR
images—453 COVID-19 and 107 non-COVID images. Chandra et al. [4] preferred
machine learning approaches over deep learning ones since, according to the authors,
they can work with less data as well as in a constrained environment, in contrast
with 'data-hungry' deep learning algorithms. The given study uses eight first-order
statistical features (FOSF), which describe the complete image using several
parameters such as the mean, variance, and roughness, 88 GLCM features and
8100 HOG features. The FOSF do not capture local information. As such,
the HOG and GLCM feature descriptors are utilized to do a thorough analysis of
texture. The GLCM describes the spatial correlation between intensities of pixels in
radiographic texture patterns in four directions, and the local information is encoded
by HOG. These statistical features can encode natural textures efficiently. For feature
selection, binary gray wolf optimization was used. Outputs from five ML approaches,
SVM, DT, KNN, NB, and ANN, were considered, and the final prediction is made by
majority voting of the classifiers. The grid search algorithm selects the optimal hyperpa-
rameters while curtailing the losses of cross-validation. The dataset used was 2088
CXR images (696 each of normal, pneumonia, and COVID-19) for phase 1 and 258
images (86 images of each class) for phase 2. The accuracy achieved was 93.411%.
Arpan et al. [5] used a pretrained 121-layered DenseNet, CheXNet; since the input
samples are visually similar, the authors found this to be the most accurate pretrained
backbone for developing a model for identifying COVID-19. The final layers were
trained to adapt to COVID-19 detection by replacing the final 14-class classifier of
CheXNet with a four-class classification layer. The number of prediction classes was
later merged from 4 to 3 as that increased the model's overall precision. The dataset
used was 5323 training CXR images, 654 test images,
and 37 validation images. An accuracy of 90.5% was achieved. Abbas et al. [14]
use DeTraC; by exploring the boundaries of its classes using a class decomposition
mechanism, irregularities present in the image dataset were eliminated. They adapted
their previously proposed model to boost its efficacy. The optimization was done
during the adaptation and training of the previously trained ImageNet model. The
dataset used here was the CXR images in the Japanese Society of Radiological Tech-
nology (JSRT) and the Cohen et al. (2020) COVID-19 CXR images. In Minaee
et al. [15], a dataset of 5000 images, the COVID-Xray-5k dataset, was prepared by
combining augmented images, which increased the number of available images by a
factor of 5. They used the transfer learning approach. They worked on these four
convolutional networks: DenseNet-121, ResNet50, SqueezeNet, and ResNet18. The
confusion matrix, precision-recall curve, receiver operating characteristic (ROC)
curve, and average prediction of each model were also presented. Following the
technique of Zeiler and Fergus, they also predict the infected region of the chest. All
results were obtained by fine-tuning the four pre-trained CNN models above. The
dataset used comprised the Cohen et al. (2020) COVID-19 CXR images and a
compilation of CXR images from the Japanese Society of Radiological Technology (JSRT).
Ozturk et al. [11] worked on the Darknet-19 classifier model which forms the
basis of YOLO. Being inspired by DarkNet-19, they created the DarkCovidNet.
Unlike ResNets or ResNeXt, a less deep CNN which can identify subtle details is
needed. Thus, they came up with DarkCovidNet, which has 17 layers. Each
convolutional layer is followed by BatchNorm and LeakyReLU operations, where
BatchNorm standardizes the inputs, which decreases training time and increases
stability. They categorized the images into three classes—pneumonia, COVID-19,
no findings. They also trained DarkCovidNet to classify the images into COVID-19
and No-Findings. Dataset used: chest X-ray14, COVID-19 chest X-ray dataset, COVID-19 X-ray dataset,
compiled by researchers of University of Montreal. Khan et al. [13] worked on a
deep CNN model named CoroNet to identify COVID-19 by analyzing CXRs. This
model is based on the Xception architecture, pretrained on the ImageNet dataset and
then trained end-to-end on a dataset put together by accumulating COVID-19 and
pneumonia CXR images from publicly available databases. Three scenarios of the
discussed model were implemented to detect COVID-19 in the CXRs. The key
multi-class model is the first one (4-class CoroNet), which was trained to classify
CXR films into four categories, namely pneumonia-viral, pneumonia-bacterial,
COVID-19, and normal. They have discussed three-class CoroNet (pneumonia,
normal, and COVID-19) and binary CoroNet as alterations of the key multi-class
model. Implementation of CoroNet is done using Keras on top of TensorFlow v2.0.
CoroNet acquired an average accuracy of 96.6%, an overall accuracy of 89.6%, an
F-measure (F1-score) of 95.6%, recall of 98.25%, and precision of 93.17% for the
COVID-19 model.
In Mohd et al. [12], independent sets were used for the training, validation, and
testing phases. The depth of such deep learning methodologies is significant in visual
recognition applications. They adopted ResNet101, a 101-layer CNN, for this study
due to the advantage of its residual learning structure, which is known to be
computationally less expensive than its counterparts while not sacrificing depth and
thus maintaining precision. The binary classification model achieved a sensitivity of
77.3%, a specificity of 71.8%, and a precision of 71.9%. Dataset used: chest X-ray14,
COVID-19 chest X-ray dataset. A dataset compiled by researchers of the University
of Montreal was also used (Table 1).

Table 1 Individual pros and cons

[1]
Pros:
• Transparency: Authors understood critical factors in identifying COVID-19 cases in several design audits. The projections COVID-Net makes are transpicuous and therefore reliable for physicians, who can use them in the screening process to perform faster and more accurate assessments
• Discovering new insights: The critical factors at play in the proposed COVID network could potentially help healthcare professionals rethink key visual indicators related to COVID-19, which can then be used to improve the accuracy of screening
Cons:
• Not production ready
• Sensitivity and PPV can be improved

[2]
Pros:
• The model is reliable, accurate, and fast
• Less computational necessities
• Good overall accuracy, especially in stage 2
Cons:
• The accuracy of the stage 1 model can be improved, especially as the overall architecture is highly reliant on the output of stage 1
• Sensitivity and PPV can be improved

[3]
Pros:
• High accuracy of 91.15% was achieved by identifying the right combination of feature selection technique, search algorithm, and classifier (CFS, SSFS, and Bayesnet, respectively)
Cons:
• Classification only between COVID and non-COVID-19 images, not among normal images, pneumonia, and COVID-19
• Segmentation of infected regions was not performed
• Combinations of all multi-CNNs unexplored

[4]
Pros:
• As ML techniques have been used, the given method is computationally efficient and can work in constrained environments as well
• For the same reason, this method can be trained and can work with fewer data
• Very high-level expertise was not needed to define convoluted DL network architecture
• The selected algorithms can be trained efficiently using small datasets without affecting performance
• Outperformed some deep learning approaches
Cons:
• Several deep learning approaches outperformed this method

[5]
Pros:
• Infected regions were indicated
• Significant improvements over Covid-Net were made
Cons:
• Significant computational cost
• Accuracy needs significant improvement for the model to be production ready

[6]
Pros:
• Faster detection: With the method described above, reports were generated much faster, and a person with COVID-19 could be easily identified with greater accuracy
• No complex data features are required
• Overcame overfitting issue
• Increased generalization ability
• Deep CNN's bagging tree classification showed excellent COVID-19 data classification despite the lack of data
Cons:
• Extracting features from X-ray analysis is important when training models for ML because the performance of this model is directly linked to the quality of the extracted features. This means that if the scans are not clear, there can be an error in the detection
• Bagging was a slow learner but gave better accuracy and hence can delay the release of the reports
• Deep CNN models built from scratch are faster for computation and use fewer resources, which is much better than the proposed approach

[7]
Pros:
• The author turns to another technique other than the traditional RT-PCR, which requires kits that put a strain on healthcare professionals, requiring only easy-to-generate CT reports
Cons:
• This technique got confused in a few cases, and hence, accuracy was not all that high

[8]
Pros:
• Does not require operator assistance: The MLS method used in this document is automated
• The MLS technique gives better segmentation of CTS irrespective of orientation
Cons:
• Accuracy could be improved for production use

[9]
Pros:
• Gives the best performance and can be fruitfully utilized in the future
Cons:
• The high accuracy of the XceptionNet was concerning due to overfitting

[10]
Pros:
• X-ray has less exposure to radiation and reduces the risk of spreading the disease as compared to CT scans
• The CSEN uses deep learners, which improved specificity
• Faster technique than traditional RT-PCR
Cons:
• The author uses deep features with CSENs but with lesser sensitivity
• The decisions of the methods are difficult to interpret since they are black-box techniques

[11]
Pros:
• It performed well in binary class classification
• The model can be used to assist experts
Cons:
• Poor results with low-quality images with ARDS
• The success rate is low in multi-class classification

[12]
Pros:
• By an estimate, it is 258 times more proficient than a radiologist
• With a GPU, an even better performance can be achieved
Cons:
• Because images were less in number during testing, the results presented here are preliminary

[13]
Pros:
• Great recall (sensitivity) and precision (PPV) for COVID cases
• The accuracy achieved for the following cases: 4 classes—89.5%, 3 classes—94.59%, and binary class classification—99%
Cons:
• Better accuracy can be achieved

[14]
Pros:
• Overcomes lack of datasets: Using transfer learning and already gained insights from pretrained CNNs for a specific medical imaging task
• DeTraC outperforms ResNet, GoogleNet, AlexNet, SqueezeNet, and VGG in the class decomposition layer with a sensitivity of 100%
Cons:
• Usability scope
• Not deployable on handheld devices

[15]
Pros:
• They can predict the infected region of the chest and visualize it using a heat map
Cons:
• The dataset used has only 200 COVID images
• Not conducted on a large scale

4 Discussion and Comparative Analysis

Considering the overall parameters such as usability in constrained environments,


sensitivity, positive predictive value, accuracy, and transparency, we hereby discuss
the abovementioned works. This section analytically discusses the approaches taken
in each paper.
When there are resource and system constraints, it is often the better choice to
employ machine learning techniques over deep learning ones since they require
less computational power and resources. As such, the approaches [8] and [4] fit
this bracket. They used the technique of MLS which gives better segmentation of
CTS irrespective of its orientation. It did not require complex software and systems
to run the same. In particular, Chandra et al. [4] outperformed even some deep
learning approaches and hence did not compromise accuracy even with requiring
lesser computational power. Apart from this, user-friendliness and easy-to-use soft-
ware are the need of the hour, and this was taken care of by Sharma [7]. It incorporated
an easy-to-use GUI which could be understood by laymen, and it allowed the health-
care professionals and patients to easily upload the scans and get the results instantly
with 91% accuracy. Sara Hosseinzadeh et al. [6] gave fast results as it took CTS
and X-rays as the testing tool which was much faster than the traditional RT-PCR. It
worked on generic feature extraction using CNN which did not require many exten-
sive features. It overcame the overfitting issue as it took enormous data for training.
The approach used in the same was slow and used enormous resources. On the other
hand, a deep CNN model from the ground up would result in more efficient compu-
tation and lower consumption of resources if used. Jain et al. [9] gave exceptional
results when the Xception model methodology was employed. However, this high
accuracy obtained could have been a result of overfitting. Mete et al. [10] used
CSENs, which resulted in high specificity but compromised sensitivity.
Wang et al. [1] is another impressive work. The network architecture was built
in <7 days using a human–machine collaborative design strategy, and very special
care was taken to keep the sensitivity and PPV high, which is crucial. With no
environmental constraints, this model can be deployed. As a matter of fact, Arpan
et al. [5], as per its authors, gave improved results over Covid-Net, and hence, with
enough resources, the state-of-the-art Covid-Net can be employed. The two-stage
architecture in [2] is a bit computationally costly as it uses two CNNs, and even
though stage 2 of this architecture has an excellent accuracy of 97.77%, the overall
model accuracy is dependent upon stage 1’s accuracy as well, which is 93%. The
overall model is excellent, and if the accuracy of stage 1 is improved, this could be the
preferred model in situations when there are no constraints on available resources. In
[3], the best permutation of the CFS feature selection technique, the SSFS search
method, and the Bayesnet classifier along with multiple pre-trained networks bodes
well for the model. The accuracy of 91.15%, even though excellent, was surpassed by
some of the earlier mentioned models. For instance, Chandra et al. [4] gives better
accuracy despite using machine learning techniques and hence less computational
power than the model in question. To overcome the lack of image datasets particular to our research, many
studies have used data augmentation techniques to produce transformed versions of
images (such as minute distortions and rotations) and used transfer learning to better
train the CNNs. One such paper is [14], using decompose, transfer, and compose
(DeTraC); since DeTraC transfers knowledge acquired in larger-scale, more
generalized image recognition work to a domain-specific calculation, the
computational time required to train a CNN from scratch is saved. This model is not
handheld-device friendly as it needs more computational power. Multiple papers deal
with multi-class classification (pneumonia, normal, and COVID-19) [1, 11] and [13].
Accuracy for multi-class classification has varied across papers and models. In the
paper [11], images of lungs with pneumonia are also considered in the research. The
model ended up identifying some COVID-19 patients as belonging to the pneumonia
class. Since COVID-19 is a kind of lung infection among the classes in the model,
the diagnosis is right, but the interpretation looks incorrect. So, some patients will be
classified in the pneumonia class because the model's probability of success in the
binary classification problem is high compared to the several-class problem.

4.1 Discussion

The models proposed should help healthcare workers in drawing accurate and rela-
tively fast results. They require models which are not only accurate but portable too.
As we see in [1, 5, 14], these models require significant computational cost and are
not production ready. Except for Sharma [7], we do not see a platform or GUI which
makes operating the model easier for people without a technical background, making
it user-friendly. To help assuage the pressure on our healthcare systems,


we need a model which is portable and has an interactive GUI. The approaches in
papers [1, 4, 5, 9, 13, 15] require significant computational power, which is not easily
available in the everyday computers found in most hospitals. Optimized models which
require significantly less computational power and can run on the most widely used
PCs are therefore needed. Image quality is a notable issue when it comes to producing accurate results in
[3, 6, 11]. Extracting features from X-ray analysis is important when training models
for ML because the performance of this model is directly linked to the quality of the
extracted features. This means that if the scans are not clear, there can be an error
in the detection. COVID-19 being a recent outbreak, there is a lack of proper and
large publicly available datasets (more so in [12, 15]), and we need a model which can
produce good results with reduced-quality images. This issue can be addressed with
time as more structured and verified data are available.
Additionally, most of these approaches are only at the research stage not at the
deployment stage where they are being used by medical professionals. Steps could
be taken to make them more production ready.

5 Conclusion and Future Work

5.1 Conclusion

Recently, a lot of researchers have come forward with several good approaches to
tackle the pandemic via automated COVID-19 detection. In this work, we have tried
to compare, contrast, and present a coherent idea about fifteen such approaches. Some
approaches stand out when a model needs to work in a constrained environment,
whereas some stand out, despite requiring higher computational resources, for their
high sensitivity and positive predictive value (PPV), both of which are extremely
crucial in our context; a false positive can overburden our already strained healthcare
systems, while a false negative can result in spreading of the virus by the misdiagnosed
patient. We hope that this helps future researchers in coming up with innovative ideas
of their own.

5.2 Future Work

The limitations faced in most of the studies can be dealt with as more patient data
(both asymptomatic and symptomatic) become available in the future, allowing
deeper analysis. More disease-specific (cold, pneumonia, COVID-19) features can
be included to help the machine learn and
differentiate between these and give accurate, reliable results with high sensitivity.
To allow deployment on handheld devices, we need to increase efficiency by model
pruning and quantization. With the ongoing pandemic, we hope to increase our avail-
able dataset which will further improve the research and experimental work and will
achieve more validation for the method with larger datasets. Moreover, with growing
research in this area, there will be many future studies worth considering. They can
be incorporated with the current work as well.

References

1. Wang, L., Lin, Z.Q., Wong, A.: Covid-net: a tailored deep convolutional neural network design
for detection of COVID-19 cases from chest x-ray images. Sci. Rep. 10(1), 1–12 (2020)
2. Jain, G., Mittal, D., Thakur, D., Mittal, M.: A deep learning approach to detect COVID-19
coronavirus with X-Ray images. Biocybernetics Biomed. Eng. 40(4), 1391–1405 (2020)
3. Abraham, B., Nair, M.: Computer-aided detection of COVID-19 from X-ray images using
multi-CNN and Bayesnet classifier. Biocybernetics Biomed. Eng. 40(4), 1436–1445 (2020)
4. Chandra, T., Verma, K., Singh, B., Jain, D., Netam, S.: Coronavirus disease (COVID-19)
detection in chest X-Ray images using majority voting-based classifier ensemble. Expert Syst.
Appl. 165, 113909 (2021)
5. Arpan, M., Surya, K., Harish, R., Krithika, R., Vinay, N., Subhashis, B., Chetan A.: CovidAID:
COVID-19 detection using chest X-ray. arXiv preprint (2020)
6. Sara Hosseinzadeh, K., Mike, W., Peyman Hosseinzadeh, K., Kevin, A.: Automatic detection of
coronavirus disease (COVID-19) in X-ray and CT images: a machine learning-based approach.
(n.d.) (2020)
7. Sharma, S.: Drawing insights from COVID-19-infected patients using CT scan images and
machine learning techniques: a study on 200 patients. Environ. Sci. Pollut. Res. 27(29), 37155–
37163 (2020)
8. Kadry, S., Rajinikanth, V., Rho, S., Raja, N.S.M., Rao, V.S., & Thanaraj, K.P.: Development of
a machine-learning system to classify lung CT scan images into normal/covid-19 class. arXiv:
2004.13122 (2020)
9. Jain, R., Gupta, M., Taneja, S., Hemanth, D.: Deep learning-based detection and analysis of
COVID-19 on chest X-ray images. Appl. Intell. 51(3), 1690–1700 (2020)
10. Mete, A., Aysen, D., Mehmet, Y., Serkan, K., Muhammad, E., Chowdhury, H., Khalid, H.,
Tahir, H., Rashid, M., Moncef, G.: Advance warning methodologies for covid-19 using chest
x-ray images. IEEE Access 9, 41052–41065 (2021)
11. Alqudah, A.M., Qazan, S., Alqudah, A.: Automated systems for detection of COVID-19 using
chest X-ray images and lightweight convolutional neural networks (2020)
12. Che A., Mohd, Z.M.Z., Hassan, R., Mohd Tamirn, M.I., Md Ali, M.A.: COVID-19 deep
learning prediction model using publicly available radiologist-adjudicated chest X-ray images
as training data: preliminary findings. Int. J. Biomed. Imag. (2020)
13. Agarwal, C., Khobahi, S., Schonfeld, D., Soltanalian, M.: CoroNet: a deep network architecture
for enhanced identification of COVID-19 from chest X-ray images. Med. Imag. Comput.-Aided
Diagnosis (2021)
14. Abbas, A., Abdelsamea, M.M., Medhat, M.: Classification of COVID-19 in chest X-ray images
using DeTraC deep convolutional neural network (2020)
15. Minaee, S., Kafieh, R., Sonka, M., Yazdani, S., Jamalipour Soufi, G.: Deep-COVID: predicting
COVID-19 from chest X-ray images using deep transfer learning. Med. Image Analysis 65,
101794 (2020)
16. Kong, W., Agarwal, P.P.: Chest imaging appearance of COVID-19 infection. Radiol. Cardio-
thoracic Imag. 2(1), e200028
17. Satpathy, S., Mangla, M., Sharma, N., Deshmukh, H., Mohanty, S.: Predicting mortality rate
and associated risks in COVID-19 patients. Spatial Inf. Res. 1–10 (2021)
Discovering Diverse Content Through
Random Scribd Documents
FOWLER, Japan, China, and India, 10s. 6d.
FRA ANGELICO. See Great Artists.
FRA BARTOLOMMEO, ALBERTINELLI, and ANDREA
DEL SARTO. See Great Artists.
FRANC, Maud Jeanne, Beatrice Melton, 4s.
—— Emily's Choice, n. ed. 5s.
—— Golden Gifts, 4s.
—— Hall's Vineyard, 4s.
—— Into the Light, 4s.
—— John's Wife, 4s.
—— Little Mercy, for better, for worse, 4s.
—— Marian, a Tale, n. ed. 5s.
—— Master of Ralston, 4s.
—— Minnie's Mission, a Temperance Tale, 4s.
—— No longer a Child, 4s.
—— Silken Cords and Iron Fetters, a Tale, 4s.
—— Two Sides to Every Question, 4s.
—— Vermont Vale, 5s.
A plainer edition is published at 2s. 6d.
France. See Foreign Countries.
FRANCIS, F., War, Waves, and Wanderings, 2 vols.
24s.
—— See also Low's Standard Series.
Frank's Ranche; or, My Holiday in the Rockies, n.
ed. 5s.
FRANKEL, Julius, Starch Glucose, &c., 18s.
FRASER, Bishop, Lancashire Life, n. ed. 12s. 6d.;
popular ed. 3s. 6d.
FREEMAN, J., Melbourne Life, lights and shadows,
6s.
FRENCH, F., Home Fairies and Heart Flowers,
illust. 24s.
French and English Birthday Book, by Kate D.
Clark, 7s. 6d.
French Revolution, Letters from Paris, translated,
10s. 6d.
Fresh Woods and Pastures New, by the Author of
"An Angler's Days," 5s., 1s. 6d., 1s.
FRIEZE, Duprè, Florentine Sculptor, 7s. 6d.
FRISWELL, J. H. See Gentle Life Series.
Froissart for Boys, by Lanier, new ed. 7s. 6d.
FROUDE, J. A. See Prime Ministers.
Gainsborough and Constable. See Great Artists.
GASPARIN, Sunny Fields and Shady Woods, 6s.
GEFFCKEN, British Empire, 7s. 6d.
Generation of Judges, n. e. 7s. 6d.
Gentle Life Series, edited by J. Hain Friswell, sm.
8vo. 6s. per vol.; calf extra, 10s. 6d. ea.; 16mo,
2s. 6d., except when price is given.

Gentle Life.
About in the World.
Like unto Christ.
Familiar Words, 6s.; also 3s. 6d.
Montaigne's Essays.
Sidney's Arcadia, 6s.
Gentle Life, second series.
Varia; readings, 10s. 6d.
Silent hour; essays.
Half-length Portraits.
Essays on English Writers.
Other People's Windows, 6s. & 2s. 6d.
A Man's Thoughts.

George Eliot, by G. W. Cooke, 10s. 6d.


Germany. See Foreign Countries.
GESSI, Romolo Pasha, Seven Years in the Soudan,
18s.
GHIBERTI & DONATELLO. See Great Artists.
GILES, E., Australia Twice Traversed, 1872-76, 2
vols. 30s.
GILL, J. See Low's Readers.
GILLESPIE, W. M., Surveying, n. ed. 21s.
Giotto, by Harry Quilter, illust. 15s.
—— See also Great Artists.
GIRDLESTONE, C., Private Devotions, 2s.
GLADSTONE. See Prime Ministers.
GLENELG, P., Devil and the Doctor, 1s.
GLOVER, R., Light of the World, n. ed., 2s. 6d.
GLÜCK. See Great Musicians.
Goethe's Faustus, in orig. rhyme, by Huth, 5s.
—— Prosa, by C. A. Buchheim (Low's German
Series), 3s. 6d.
GOLDSMITH, O., She Stoops to Conquer, by
Austin Dobson, illust. by E. A. Abbey, 84s.
—— See also Choice Editions.
GOOCH, Fanny C., Mexicans, 16s.
GOODALL, Life and Landscape on the Norfolk
Broads, 126s. and 210s.
—— & EMERSON, Pictures of East Anglian Life, £5
5s. and £7 7s.
GOODMAN, E. J., The Best Tour in Norway, 6s.
—— N. & A., Fen Skating, 5s.
GOODYEAR, W. H., Grammar of the Lotus,
Ornament and Sun Worship, 63s. nett.
GORDON, J. E. H., Physical Treatise on Electricity
and Magnetism. 3rd ed. 2 vols. 42s.
—— Electric Lighting, 18s.
—— School Electricity, 5s.
—— Mrs. J. E. H., Decorative Electricity, illust. 12s.
GOWER, Lord Ronald, Handbook to the Art
Galleries of Belgium and Holland, 5s.
—— Northbrook Gallery, 63s. and 105s.
—— Portraits at Castle Howard, 2 vols. 126s.
—— See also Great Artists.
GRAESSI, Italian Dictionary, 3s. 6d.; roan, 5s.
GRAY, T. See Choice Eds.
Great Artists, Biographies, illustrated,
emblematical binding, 3s. 6d. per vol. except
where the price is given.

Barbizon School, 2 vols.


Claude le Lorrain.
Correggio, 2s. 6d.
Cox and De Wint.
George Cruikshank.
Della Robbia and Cellini, 2s. 6d.
Albrecht Dürer.
Figure Paintings of Holland.
Fra Angelico, Masaccio, &c.
Fra Bartolommeo, &c.
Gainsborough and Constable.
Ghiberti and Donatello, 2s. 6d.
Giotto, by H. Quilter, 15s.
Hogarth, by A. Dobson.
Hans Holbein.
Landscape Painters of Holland.
Landseer.
Leonardo da Vinci.
Little Masters of Germany, by Scott; éd. de
luxe, 10s. 6d.
Mantegna and Francia.
Meissonier, 2s. 6d.
Michelangelo.
Mulready.
Murillo, by Minor, 2s. 6d.
Overbeck.
Raphael.
Rembrandt.
Reynolds.
Romney and Lawrence, 2s. 6d.
Rubens, by Kett.
Tintoretto, by Osler.
Titian, by Heath.
Turner, by Monkhouse.
Vandyck and Hals.
Velasquez.
Vernet & Delaroche.
Watteau, by Mollett, 2s. 6d.
Wilkie, by Mollett.

Great Musicians, edited by F. Hueffer. A series


of biographies, 3s. each:—
Bach, by Poole.
Beethoven.
[7]Berlioz.
Cherubini.
English Church Composers.
[7]Glück.
Handel.
Haydn.
[7]Marcello.
Mendelssohn.
Mozart.
[7]Palestrina and the Roman School.
Purcell.
Rossini and Modern Italian School.
Schubert.
Schumann.
Richard Wagner.
Weber.

Greece. See Foreign Countries.


GRIEB, German Dictionary, n. ed. 2 vols. 21s.
GRIMM, H., Literature, 8s. 6d.
GROHMANN, Camps in the Rockies, 12s. 6d.
GROVES, J. Percy. See Low's Standard Books.
GUIZOT, History of England, illust. 3 vols. re-issue
at 10s. 6d. per vol.
—— History of France, illust. re-issue, 8 vols. 10s.
6d. each.
—— Abridged by G. Masson, 5s.
GUYON, Madame, Life, 6s.

HADLEY, J., Roman Law, 7s. 6d.


Half-length Portraits. See Gentle Life Series.
HALFORD, F. M., Dry Fly-fishing, n. ed. 25s.
—— Floating Flies, 15s. & 30s.
HALL, How to Live Long, 2s.
HALSEY, F. A., Slide Valve Gears, 8s. 6d.
HAMILTON. See English Philosophers.
—— E. Fly-fishing, 6s. and 10s. 6d.
—— Riverside Naturalist, 14s.
HAMILTON'S Mexican Handbook, 8s. 6d.
HANDEL. See Great Musicians.
HANDS, T., Numerical Exercises in Chemistry, 2s.
6d.; without ans. 2s.; ans. sep. 6d.
Handy Guide to Dry-fly Fishing, by Cotswold Isys,
1s.
Handy Guide Book to Japanese Islands, 6s. 6d.
HARDY, A. S., Passe-rose, 6s.
—— Thos. See Low's Standard Novels.
HARKUT, F., Conspirator, 6s.
HARLAND, MARION, Home Kitchen, 5s.
Harper's Young People, vols. I.-VII. 7s. 6d. each;
gilt 8s.
HARRIES, A. See Nursing Record Series.
HARRIS, W. B., Land of the African Sultan, 10s.
6d.; 1. p. 31s. 6d.
HARRISON, Mary, Modern Cookery, 6s.
—— Skilful Cook, n. ed. 5s.
—— Mrs. B. Old-fashioned Fairy Book, 6s.
—— W., London Houses, Illust. n. edit. 1s. 6d., 6s.
net; & 2s. 6d.
HARTLEY and MILL. See English Philosophers.
HATTON, Joseph, Journalistic London, 12s. 6d.
—— See also Low's Standard Novels.
HAWEIS, H. R., Broad Church, 6s.
—— Poets in the Pulpit, 10s. 6d. new edit. 6s.;
also 3s. 6d.
—— Mrs., Housekeeping, 2s. 6d.
—— Beautiful Houses, 4s., new edit. 1s.
HAYDN. See Great Musicians.
HAZLITT, W., Round Table, 2s. 6d.
HEAD, Percy R. See Illus. Text Books and Great
Artists.
HEARD, A. F., Russian Church, 16s.
HEARN, L., Youma, 5s.
HEATH, F. G., Fern World, 12s. 6d., new edit. 6s.
—— Gertrude, Tell us Why, 2s. 6d.
HELDMANN, B., Mutiny of the "Leander," 7s. 6d.
and 5s.
—— See also Low's Standard Books for Boys.
HENTY, G. A., Hidden Foe, 2 vols. 21s.
—— See also Low's Standard Books for Boys.
—— Richmond, Australiana, 5s.
HERBERT, T., Salads and Sandwiches, 6d.
HICKS, C. S., Our Boys, and what to do with
Them; Merchant Service, 5s.
—— Yachts, Boats, and Canoes, 10s. 6d.
HIGGINSON, T. W., Atlantic Essays, 6s.
—— History of the U.S., illust. 14s.
HILL, A. Staveley, From Home to Home in N.-W.
Canada, 21s., new edit. 7s. 6d.
—— G. B., Footsteps of Johnson, 63s.; édition de
luxe, 147s.
HINMAN, R., Eclectic Physical Geography, 5s.
Hints on proving Wills without Professional
Assistance, n. ed. 1s.
HOEY, Mrs. Cashel. See Low's Standard Novels.
HOFFER, Caoutchouc & Gutta Percha, 12s. 6d.
HOGARTH. See Gr. Artists.
HOLBEIN. See Great Artists.
HOLDER, Charles F., Ivory King, 8s. 6d.
—— Living Lights, 8s. 6d.
—— Marvels of Animal Life, 8s. 6d.
HOLM, Saxe, Draxy Miller, 2s. 6d. and 2s.
HOLMES, O. Wendell, Before the Curfew, 5s.
—— Over the Tea Cups, 6s.
—— Iron Gate, &c., Poems, 6s.
—— Last Leaf, 42s.
—— Mechanism in Thought and Morals, 1s. 6d.
—— Mortal Antipathy, 8s. 6d., 2s. and 1s.
—— Our Hundred Days in Europe, new edit. 6s.; l.
paper 15s.
—— Poetical Works, new edit., 2 vols. 10s. 6d.
—— Works, prose, 10 vols.; poetry, 4 vols.; 14
vols. 84s. Limited large paper edit., 14 vols. 294s.
nett.
—— See also Low's Standard Novels and Rose
Library.
HOLUB, E., South Africa, 2 vols. 42s.
HOPKINS, Manley, Treatise on the Cardinal
Numbers, 2s. 6d.
Horace in Latin, with Smart's literal translation,
2s. 6d.; translation only, 1s. 6d.
HORETZKY, C., Canada on the Pacific, 5s.
How and where to Fish in Ireland, by H. Regan,
3s. 6d.
HOWARD, Blanche W., Tony the Maid, 3s. 6d.
—— See also Low's Standard Novels.
HOWELLS, W. D., Suburban Sketches, 7s. 6d.
—— Undiscovered Country, 3s. 6d. and 1s.
HOWORTH, H. H., Glacial Nightmare, 18s.
—— Mammoth and the Flood, 18s.
HUDSON, W. H., Purple Land that England Lost;
Banda Oriental, 2 vols. 21s.; 1 vol. 6s.
HUEFFER, F. See Great Musicians.
HUGHES, Hugh Price. See Preachers.
HUME, F., Creature of the Night, 1s.
Humorous Art at the Naval Exhibition, 1s.
HUMPHREYS, Jennet, Some Little Britons in
Brittany, 2s. 6d.
Hundred Greatest Men, new edit. one vol. 21s.
HUNTINGDON, The Squire's Nieces, 2s. 6d.
(Playtime Library.)
HYDE, Hundred Years by Post, 1s.
Hymnal Companion to the Book of Common
Prayer, separate lists gratis.
Iceland. See Foreign Countries.
Illustrated Text-Books of Art-Education, edit. by E.
J. Poynter, R.A., illust. 5s. each.
Architecture, Classic and Early Christian.
Architecture, Gothic and Renaissance.
German, Flemish, and Dutch Painting.
Painting, Classic and Italian.
Painting, English and American.
Sculpture, modern.
Sculpture, by G. Redford.
Spanish and French artists.
INDERWICK, F. A., Interregnum, 10s. 6d.
—— Sidelights on the Stuarts, new edit. 7s. 6d.
INGELOW, Jean. See Low's Standard Novels.
INGLIS, Our New Zealand Cousins, 6s.
—— Sport and Work on the Nepaul Frontier, 21s.
—— Tent Life in Tiger Land, 18s.
IRVING, W., Little Britain, 10s. 6d. and 6s.
—— Works, "Geoffrey Crayon" edit. 27 vols. 16l.
16s.
JACKSON, J., Handwriting in Relation to Hygiene,
3d.
—— New Style Vertical Writing Copy-Books, Series
I. 1-8, 2d. and 1d. each.
—— New Code Copy-Books, 22 Nos. 2d. each.
—— Shorthand of Arithmetic, Companion to all
Arithmetics, 1s. 6d.
—— L., Ten Centuries of European Progress, with
maps, 12s. 6d.
JAMES, Croake, Law and Lawyers, new edit. 7s.
6d.
—— Henry. See Daudet, A.
JAMES and MOLÉ'S French Dictionary, 3s. 6d.
cloth; roan, 5s.
JAMES, German Dictionary, 3s. 6d. cloth; roan 5s.
JANVIER, Aztec Treasure House, 7s. 6d.; new edit.
5s.
Japan. See Foreign Countries.
JEFFERIES, Richard, Amaryllis at the Fair, 7s. 6d.
—— Bevis, new edit. 5s.
JEPHSON, A. J. M., Emin Pasha relief expedition,
21s.
JERDON. See Low's Standard Series.
JOHNSTON, H. H., The Congo, 21s.
JOHNSTON-LAVIS, H. J., South Italian Volcanoes,
15s.
JOHNSTONE, D. L., Land of the Mountain
Kingdom, new edit. 3s. 6d. and 2s. 6d.
JONES, Mrs. Herbert, Sandringham, Past and
Present, illust., new edit. 8s. 6d.
JULIEN, F., Conversational French Reader, 2s. 6d.
—— English Student's French Examiner, 2s.
—— First Lessons in Conversational French
Grammar, n. ed. 1s.
—— French at Home and at School, Book I.
accidence, 2s.; key, 3s.
—— Petites Leçons de Conversation et de
Grammaire, n. ed. 3s.
—— Petites Leçons, with phrases, 3s. 6d.
—— Phrases of Daily Use, separately, 6d.
KARR, H. W. Seton, Shores and Alps of Alaska,
16s.
KARSLAND, Veva, Women and their Work, 1s.
KAY. See Foreign Countries.
KENNEDY, E. B., Blacks and Bushrangers, new
edit. 5s., 3s. 6d. and 2s. 6d.
KERR, W. M., Far Interior, the Cape, Zambesi, &c.,
2 vols. 32s.
KERSHAW, S. W., Protestants from France in their
English Home, 6s.
KETT, C. W., Rubens, 3s. 6d.
Khedives and Pashas, 7s. 6d.
KILNER, E. A., Four Welsh Counties, 5s.
King and Commons. See Cavalier in Bayard Series.
KINGSLEY, R. G., Children of Westminster Abbey,
5s.
KINGSTON. See Low's Standard Books.
KIPLING, Rudyard, Soldiers Three, &c., stories, 1s.
—— Story of the Gadsbys, new edit. 1s.
—— In Black and White, &c., stories, 1s.
—— Wee Willie Winkie, &c., stories, 1s.
—— Under the Deodars, &c., stories, 1s.
—— Phantom Rickshaw, &c., stories, 1s.
*** The six collections of stories may also be had
in 2 vols. 3s. 6d. each.
—— Stories, Library Edition, 2 vols. 6s. each.
KIRKALDY, W. G., David Kirkaldy's Mechanical
Testing, 84s.
KNIGHT, A. L., In the Web of Destiny, 7s. 6d.
—— E. F., Cruise of the Falcon, new edit. 3s. 6d.
—— E. J., Albania and Montenegro, 12s. 6d.
—— V. C., Church Unity, 5s.
KNOX, T. W., Boy Travellers, new edit. 5s.
KNOX-LITTLE, W. J., Sermons, 3s. 6d.
KUNHARDT, C. P., Small Yachts, new edit. 50s.
—— Steam Yachts, 16s.
KWONG, English Phrases, 21s.
LABOULAYE, E., Abdallah, 2s. 6d.
LALANNE, Etching, 12s. 6d.
LAMB, Chas., Essays of Elia, with designs by C. O.
Murray, 6s.
LAMBERT, Angling Literature, 3s. 6d.
Landscape Painters of Holland. See Great Artists.
LANDSEER. See Great Artists.
LANGLEY, S. P., New Astronomy, 10s. 6d.
LANIER, S., Boy's Froissart, 7s. 6d.; King Arthur,
7s. 6d.; Mabinogion, 7s. 6d.; Percy, 7s. 6d.
LANSDELL, Henry, Through Siberia, 1 v. 15s. and
10s. 6d.
—— Russia in Central Asia, 2 vols. 42s.
—— Through Central Asia, 12s.
LARDEN, W., School Course on Heat, n. ed. 5s.
LAURIE, A., Secret of the Magian, the Mystery of
Ecbatana, illus. 6s. See also Low's Standard
Books.
LAWRENCE, Sergeant, Autobiography, 6s.
—— and ROMNEY. See Great Artists.
LAYARD, Mrs., West Indies, 2s. 6d.
LEA, H. C., Inquisition, 3 vols. 42s.
LEARED, A., Marocco, n. ed. 16s.
LEAVITT, New World Tragedies, 7s. 6d.
LEFFINGWELL, W. B., Shooting, 18s.
—— Wild Fowl Shooting, 10s. 6d.
LEFROY, W., Dean. See Preachers.
LELAND, C. G., Algonquin Legends, 8s.
LEMON, M., Small House over the Water, 6s.
Leo XIII. Life, 18s.
Leonardo da Vinci. See Great Artists.
—— Literary Works, by J. P. Richter, 2 vols. 252s.
LIEBER, Telegraphic Cipher, 42s. nett.
Like unto Christ. See Gentle Life Series.
LITTLE, Arch. J., Yang-tse Gorges, n. ed., 10s. 6d.
Little Masters of Germany. See Great Artists.
LONGFELLOW, Miles Standish, illus. 21s.
—— Maidenhood, with col. pl. 2s. 6d.; gilt edges,
3s. 6d.
—— Nuremberg, photogr. illu. 31s. 6d.
—— Song of Hiawatha, illust. 21s.
LOOMIS, E., Astronomy, n. ed. 8s. 6d.
LORNE, Marquis of, Canada and Scotland, 7s. 6d.
—— Palmerston. See Prime Ministers.
Louis, St. See Bayard Series.
Low's French Readers, edit. by C. F. Clifton, I. 3d.,
II. 3d., III. 6d.
—— German Series. See Goethe, Meissner,
Sandars, and Schiller.
—— London Charities, annually, 1s. 6d.; sewed,
1s.
—— Illustrated Germ. Primer, 1s.
—— Infant Primers, I. illus. 3d.; II. illus. 6d. and
7d.
—— Pocket Encyclopædia, with plates, 3s. 6d.;
roan, 4s. 6d.
—— Readers, I., 9d.; II., 10d.; III., 1s.; IV., 1s.
3d.; V., 1s. 4d.; VI., 1s. 6d.

Low's Select Parchment Series.
Aldrich (T. B.) Friar Jerome's Beautiful Book, 3s.
6d.
Lewis (Rev. Gerrard), Ballads of the Cid, 2s. 6d.
Whittier (J. G.) The King's Missive. 3s. 6d.
Low's Stand. Library of Travel (except where price
is stated), per volume, 7s. 6d.

1. Butler, Great Lone Land; also 3s. 6d.
2. —— Wild North Land.
3. Stanley (H. M.) Coomassie, 3s. 6d.
4. —— How I Found Livingstone; also 3s. 6d.
5. —— Through the Dark Continent, 1 vol.
illust., 12s. 6d.; also 3s. 6d.
8. MacGahan (J. A.) Oxus.
9. Spry, voyage, Challenger.
10. Burnaby's Asia Minor, 10s. 6d.
11. Schweinfurth's Heart of Africa, 2 vols.
15s.; also 3s. 6d. each.
12. Marshall (W.) Through America.
13. Lansdell (H.) Through Siberia, 10s. 6d.
14. Coote, South by East, 10s. 6d.
15. Knight, Cruise of the Falcon, also 3s. 6d.
16. Thomson (Joseph) Through Masai Land.
19. Ashe (R. P.) Two Kings of Uganda, 3s. 6d.

Low's Standard Novels (except where price is
stated), 6s.
Baker, John Westacott.
Black (W.) Craig Royston.
—— Daughter of Heth.
—— House Boat.
—— In Far Lochaber.
—— In Silk Attire.
—— Kilmeny.
—— Lady Silverdale's Sweetheart.
—— New Prince Fortunatus.
—— Penance of John Logan.
—— Stand Fast, Craig Royston!
—— Sunrise.
—— Three Feathers.
Blackmore (R. D.) Alice Lorraine.
—— Christowell.
—— Clara Vaughan.
—— Cradock Nowell.
—— Cripps the Carrier.
—— Erema, or My Father's Sins.
—— Kit and Kitty.
—— Lorna Doone.
—— Mary Anerley.
—— Sir Thomas Upmore.
—— Springhaven.
Brémont, Gentleman Digger.
Brown (Robert) Jack Abbott's Log.
Bynner, Agnes Surriage.
—— Begum's Daughter.
Cable (G. W.) Bonaventure, 5s.
Coleridge (C. R.) English Squire.
Craddock, Despot of Broomsedge.
Croker (Mrs. B. M.) Some One Else.
Cumberland (Stuart) Vasty Deep.
De Leon, Under the Stars and Crescent.
Edwards (Miss Betham) Half-way.
Eggleston, Juggernaut.
French Heiress in her own Château.
Gilliat (E.) Story of the Dragonnades.
Hardy (A. S.) Passe-rose.
—— (Thos.) Far from the Madding.
—— Hand of Ethelberta.
—— Laodicean.
—— Mayor of Casterbridge.
—— Pair of Blue Eyes.
—— Return of the Native.
—— Trumpet-Major.
—— Two on a Tower.
Harkut, Conspirator.
Hatton (J.) Old House at Sandwich.
—— Three Recruits.
Hoey (Mrs. Cashel) Golden Sorrow.
—— Out of Court.
—— Stern Chase.
Howard (Blanche W.) Open Door.
Ingelow (Jean) Don John.
—— John Jerome, 5s.
—— Sarah de Berenger.
Lathrop, Newport, 5s.
Mac Donald (Geo.) Adela Cathcart.
—— Guild Court.
—— Mary Marston.
—— Orts.
—— Stephen Archer, &c.
—— The Vicar's Daughter.
—— Weighed and Wanting.
Macmaster, Our Pleasant Vices.
Macquoid (Mrs.) Diane.
Musgrave (Mrs.) Miriam.
Osborn, Spell of Ashtaroth, 5s.
Prince Maskiloff.
Riddell (Mrs.) Alaric Spenceley.
—— Daisies and Buttercups.
—— Senior Partner.
—— Struggle for Fame.
Russell (W. Clark) Betwixt the Forelands.
—— Frozen Pirate.
—— Jack's Courtship.
—— John Holdsworth.
—— Little Loo.
—— My Watch Below.
—— Ocean Free Lance.
—— Sailor's Sweetheart.
—— Sea Queen.
—— Strange Voyage.
—— The Lady Maud.
—— Wreck of the Grosvenor.
Steuart, Kilgroom.
Stockton (F. R.) Ardis Claverden.
—— Bee-man of Orn, 5s.
—— Hundredth Man.
—— The late Mrs. Null.
Stoker, Snake's Pass.
Stowe (Mrs.) Old Town Folk.
—— Poganuc People.
Thomas, House on the Scar.
Thomson, Ulu, an African Romance.
Tourgee, Murvale Eastman.
Tytler (S.) Duchess Frances.
Vane, From the Dead.
Wallace (Lew.) Ben Hur.
Warner, Little Journey in the World.
Woolson (Constance Fenimore) Anne.
—— East Angels.
—— For the Major, 5s.
—— Jupiter Lights.

See also Sea Stories.

Low's Stand. Novels, new issue at short intervals,
2s. 6d. and 2s.
Blackmore, Alice Lorraine.
—— Christowell.
—— Clara Vaughan.
—— Cripps the Carrier.
—— Kit and Kitty.
—— Lorna Doone.
—— Mary Anerley.
—— Tommy Upmore.
Cable, Bonaventure.
Croker, Some One Else.
Cumberland, Vasty Deep.
De Leon, Under the Stars.
Edwards, Half-way.
Hardy, Laodicean.
—— Madding Crowd.
—— Mayor of Casterbridge.
—— Trumpet-Major.
—— Two on a Tower.
Hatton, Old House at Sandwich.
—— Three Recruits.
Hoey, Golden Sorrow.
—— Out of Court.
—— Stern Chase.
Holmes, Guardian Angel.
Ingelow, John Jerome.
—— Sarah de Berenger.
Mac Donald, Adela Cathcart.
—— Guild Court.
—— Stephen Archer.
—— Vicar's Daughter.
Oliphant, Innocent.
Riddell, Daisies and Buttercups.
—— Senior Partner.
Stockton, Bee-man of Orn, 5s.
—— Dusantes.
—— Mrs. Lecks and Mrs. Aleshine.
Stowe, Dred.
—— Old Town Folk.
—— Poganuc People.
Thomson, Ulu.
Walford, Her Great Idea, &c., Stories.
Low's German Series, a graduated course. See
"German."
Low's Readers. See English Reader and French
Reader.
Low's Standard Books for Boys, with numerous
illustrations, 2s. 6d. each; gilt edges, 3s. 6d.
Adventures in New Guinea: the Narrative of Louis
Tregance.
Biart (Lucien) Adventures of a Young Naturalist.
—— My Rambles in the New World.
Boussenard, Crusoes of Guiana.
—— Gold Seekers, a sequel to the above.
Butler (Col. Sir Wm., K.C.B.) Red Cloud, the
Solitary Sioux: a Tale of the Great Prairie.
Cahun (Leon) Adventures of Captain Mago.
—— Blue Banner.
Célière, Startling Exploits of the Doctor.
Chaillu (Paul du) Wild Life under the Equator.
Collingwood (Harry) Under the Meteor Flag.
—— Voyage of the Aurora.
Cozzens (S. W.) Marvellous Country.
Dodge (Mrs.) Hans Brinker; or, The Silver Skates.
Du Chaillu (Paul) Stories of the Gorilla Country.
