Lecture Notes in Networks and Systems 746

Paweł Strumiłło
Artur Klepaczko
Michał Strzelecki
Dorota Bociąga Editors

The Latest
Developments
and Challenges
in Biomedical
Engineering
Proceedings of the 23rd Polish
Conference on Biocybernetics and
Biomedical Engineering, Lodz, Poland,
September 27–29, 2023
Lecture Notes in Networks and Systems

Volume 746

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland

Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA,
School of Electrical and Computer Engineering—FEEC, University of
Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering,
Bogazici University, Istanbul, Türkiye
Derong Liu, Department of Electrical and Computer Engineering, University of
Illinois at Chicago, Chicago, USA
Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of
Alberta, Alberta, Canada
Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering,
KIOS Research Center for Intelligent Systems and Networks, University of Cyprus,
Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong,
Kowloon, Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest
developments in Networks and Systems—quickly, informally and with high quality.
Original research reported in proceedings and post-proceedings represents the core
of LNNS.
Volumes published in LNNS embrace all aspects and subfields of, as well as new
challenges in, Networks and Systems.
The series contains proceedings and edited volumes in systems and networks,
spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor
Networks, Control Systems, Energy Systems, Automotive Systems, Biological
Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems,
Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems,
Robotics, Social Systems, Economic Systems and others. Of particular value to both
the contributors and the readership are the short publication timeframe and
the world-wide distribution and exposure which enable both a wide and rapid
dissemination of research output.
The series covers the theory, applications, and perspectives on the state of the art
and future developments relevant to systems and networks, decision making, control,
complex processes and related areas, as embedded in the fields of interdisciplinary
and applied sciences, engineering, computer science, physics, economics, social, and
life sciences, as well as the paradigms and methodologies behind them.
Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago.
All books published in the series are submitted for consideration in Web of Science.
For proposals from Asia please contact Aninda Bose ([email protected]).
Paweł Strumiłło · Artur Klepaczko · Michał Strzelecki · Dorota Bociąga
Editors

The Latest Developments
and Challenges
in Biomedical Engineering
Proceedings of the 23rd Polish Conference on
Biocybernetics and Biomedical Engineering,
Lodz, Poland, September 27–29, 2023
Editors
Paweł Strumiłło, Institute of Electronics, Lodz University of Technology, Lodz, Poland
Artur Klepaczko, Institute of Electronics, Lodz University of Technology, Lodz, Poland
Michał Strzelecki, Institute of Electronics, Lodz University of Technology, Lodz, Poland
Dorota Bociąga, Institute of Materials Science and Engineering, Lodz University of Technology, Lodz, Poland

ISSN 2367-3370  ISSN 2367-3389 (electronic)
Lecture Notes in Networks and Systems
ISBN 978-3-031-38429-5  ISBN 978-3-031-38430-1 (eBook)
https://doi.org/10.1007/978-3-031-38430-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

We are honored to hand over to the readers the Proceedings of the 23rd Polish Confer-
ence on Biocybernetics and Biomedical Engineering, which will be held in Lodz
from September 27 to 29, 2023. The conference was organized by the Committee
of Biocybernetics and Biomedical Engineering of the Polish Academy of Sciences
and hosted by the Lodz University of Technology. Due to the complex and multi-
disciplinary area of issues covered by biomedical engineering, two TUL units were
involved in the organization of this conference, namely the Institute of Electronics and
the Institute of Materials Science and Engineering. The conference is a continuation
of the cyclical, biennial meetings of the biomedical engineering community, which
attract scientists and industry representatives from various fields of engineering, IT,
biomaterials, biotechnology, and medicine.
The ongoing and dynamic advancement of AI-based data processing and anal-
ysis methods is playing an increasingly vital role in medicine. These methods find
application in various areas, such as disease diagnosis, prediction, and monitoring,
particularly through the utilization of image data analysis algorithms. Other areas of
application include personalized medicine, where multimodal patient data is acquired
and analyzed, as well as robot-assisted surgery and clinical decision support.
These Proceedings contain 35 publications on the above issues as well as other
relevant hot topics regarding the most important challenges of modern biomedical
engineering. The papers are organized in the following five chapters:
• Biomedical Imaging & Analysis
• Modeling and Machine Learning
• Signal Processing
• Telemonitoring & Measurement
• Biomaterials and Implants.


The editors would like to express their gratitude to the authors for their submissions
and to all the reviewers for their meticulous evaluation of the papers and valuable
comments, which undoubtedly enhanced the scientific merit of the accepted papers.
We believe that through this collaborative effort, these Proceedings will serve as
a significant scientific resource for the biocybernetics and biomedical engineering
community.

Lodz, Poland
Paweł Strumiłło
Artur Klepaczko
Michał Strzelecki
Dorota Bociąga
Contents

Biomedical Imaging & Analysis


Modified CNN-Watershed for Corneal Endothelium Segmentation:
Image-to-Image Versus Sliding-Window Comparison . . . . . . . . . . . . . . . . . 3
Adrian Kucharski and Anna Fabijańska
Tissue Pattern Classification with CNN in Histological Images . . . . . . . . . 17
Krzysztof Siemion, Lukasz Roszkowiak, Jakub Zak, Antonina Pater,
and Anna Korzynska
Robust Multiresolution and Multistain Background Segmentation
in Whole Slide Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Artur Jurgas, Marek Wodzinski, Manfredo Atzori, and Henning Müller
Impact of Visual Image Quality on Lymphocyte Detection Using
YOLOv5 and RetinaNet Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
A. Polejowska, M. Sobotka, M. Kalinowski, M. Kordowski,
and T. Neumann
Using Local Normalization and Local Thresholding in the Detection
of Small Objects in MR Brain Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Patrycja Kwiek and Elżbieta Pociask
Using Histogram Skewness and Kurtosis Features for Detection
of White Matter Hyperintensities in MRI Images . . . . . . . . . . . . . . . . . . . . . 67
Anna Baran and Adam Piórkowski
Texture Analysis Versus Deep Learning in MRI-based
Classification of Renal Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Artur Klepaczko, Marcin Majos, Ludomir Stefańczyk,
Katarzyna Szychowska, and Ilona Kurnatowska
Mobile Application for Learning Polish Sign Language . . . . . . . . . . . . . . . 95
Anna Slian, Joanna Czajkowska, and Monika Bugdol


Colour Clustering and Deep Transfer Learning Techniques
for Breast Cancer Detection Using Mammography Images . . . . . . . . . . 105
Hosameldin O. A. Ahmed and Asoke K. Nandi
Constructing a Panoramic Radiograph Image Based on Magnetic
Resonance Imaging Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Piotr Cenda, Adam Cieślak, Elżbieta Pociask, Rafał Obuchowicz,
and Adam Piórkowski
Optimization of the BOLD Hemodynamic Response Function
for EEG-FMRI Studies in Epilepsy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Nikodem Hryniewicz, Rafał Rola, Kamil Lipiński,
Ewa Piątkowska-Janko, and Piotr Bogorodzki
Improving the Resolution and SNR of Diffusion Magnetic
Resonance Images From a Low-Field Scanner . . . . . . . . . . . . . . . . . . . . . . . . 147
Jakub Jurek, Kamil Ludwisiak, Andrzej Materka,
and Filip Szczepankiewicz

Modeling and Machine Learning


Improving the Predictive Ability of Radiomics-Based Regression
Survival Models Through Incorporating Multiple Regions
of Interest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Agata Małgorzata Wilk, Emilia Kozłowska, Damian Borys,
Andrea D’Amico, Izabela Gorczewska, Iwona Debosz-Suwińska,
Seweryn Gałecki, Krzysztof Fujarewicz, Rafał Suwiński,
and Andrzej Świerniak
Assessing the Prognosis of Patients with Metastatic or Recurrent
Non-small Cell Lung Cancer in the Era of Immunotherapy
and Targeted Therapy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Seweryn Gałecki, Marzena Kysiak, Emilia Kozłowska,
Agata Małgorzata Wilk, Rafał Suwiński, and Andrzej Świerniak
Predicting the Risk of Metastatic Dissemination in Non-small Cell
Lung Cancer Using Clinical and Genetic Data . . . . . . . . . . . . . . . . . . . . . . . . 187
Emilia Kozłowska, Agata Małgorzata Wilk, Dorota Butkiewicz,
Małgorzata Krześniak, Agnieszka Gdowicz-Kłosok, Monika Giglok,
Rafał Suwiński, and Andrzej Świerniak
Metastasis Modelling Approaches—Comparison of Ideas . . . . . . . . . . . . . 199
Artur Wyciślok and Jaroslaw Śmieja
Model of Lung Cancer Progression and Metastasis—Need
for a Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Krzysztof Psiuk-Maksymowicz

Classification of Recorded Electrooculographic Signals on Drive
Activity for Assessing Four Kind of Driver Inattention by Bagged
Trees Algorithm: A Pilot Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Rafał Doniec, Szymon Sieciński, Natalia Piaseczna, Konrad Duraj,
Joanna Chwał, Maciej Gawlikowski, and Ewaryst Tkacz
Monte-Carlo Modeling of Optical Sensors for Postoperative Free
Flap Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Paulina Stadnik, Ignacy Rogoń, and Mariusz Kaczmarek
3D-Breast System for Determining the Volume of Tissue Needed
for Breast Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Gabriela Małyszko, Julia Czałpińska, Andżelika Janicka,
Katarzyna Ostrowska, and Mariusz Kaczmarek
Preeclampsia Risk Prediction Using Machine Learning Methods
Trained on Synthetic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Magdalena Mazur-Milecka, Natalia Kowalczyk, Kinga Jaguszewska,
Dorota Zamkowska, Dariusz Wójcik, Krzysztof Preis, Henriette Skov,
Stefan Wagner, Puk Sandager, Milena Sobotka, and Jacek Rumiński
Computational Approach for Verification of Aortic Wall Tear
Size on CT Contrast Distribution in Patients with Type B Aortic
Dissection—The Preliminary Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Andrzej Polanczyk, Aleksandra Piechota-Polanczyk,
Ludomir Stefańczyk, Julia Balcer, and Michal Strzelecki

Signal Processing
Using Frequency Correction of Stethoscope Recordings to Improve
Classification of Respiratory Sounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Adam Biniakowski, Krzysztof Szarzyński, and Tomasz Grzywalski
Bioimpedance Spectroscopy—Niche Applications in Medicine:
Systematic Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Ilona Karpiel, Mirella Urzeniczok, and Ewelina Sobotnicka
Evaluation of Neurological Disorders in Isokinetic Dynamometry
and Surface Electromyography Activity of Biceps and Triceps
Muscles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Anna Roksela, Anna Poświata, Jarosław Śmieja, Dominika Kozak,
Katarzyna Bienias, Jakub Ślaga, and Michał Mikulski
EMG Mapping Technique for Pinch Meter Robot Extension . . . . . . . . . . . 339
Marcel Smolinski, Michal Mikulski, and Jaroslaw Śmieja
Data Glove for the Recognition of the Letters of the Polish Sign
Language Alphabet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Jakub Piskozub and Paweł Strumiłło

Telemonitoring & Measurement


Smart Pillcase System to Support the Elderly and the Disabled . . . . . . . . 365
Michał Śniady and Aleksandra Królak
Opportunities of Data Medicine: Telemonitoring of Multimodal
Medical Data in Outpatient Care . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Alexander Keil, Nick Brombach, Olaf Gaus, Rainer Brück, and Kai Hahn
Measurement of Blood Flow in the Carotid Artery
as one of the Elements of Assessing the Ability for Pilots
in the Gravitational Force Conditions–Review of Available
Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Ewelina Sobotnicka, Jan Mocha, Aleksander Sobotnicki,
Jerzy Gałecka, and Adam Gacek
Application of Unsuppressed Water Peaks for MRS Thermometry . . . . . 407
Marcin Sińczuk, Jacek Rogala, Ewa Piątkowska-Janko,
and Piotr Bogorodzki
Analyzing the Performance of Real-Coded Genetic Algorithm
with Control Locations for Multi-Robot Path Planning . . . . . . . . . . . . . . . . 421
Karolina Wójcik and Adam Ciszkiewicz
Detection of People Swimming in Water Reservoirs with the Use
of Multimodal Imaging and Machine Learning . . . . . . . . . . . . . . . . . . . . . . . 431
Jakub Konert, Adam Dradrach, and Jacek Rumiński
Haptic Display of Depth Images in an Electronic Travel Aid
for the Blind: Technical Indoor Trials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Piotr Skulimowski, Paweł Strumiłło, Szymon Trygar, and Wacław Trygar

Biomaterials and Implants


The Influence of Aging Conditions on the Properties of Polymer
Dental Composites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Dariusz M. Bieliński, Maria Rokicka, Tomasz Gozdek,
and Katarzyna Klajn
Biomedical Imaging & Analysis
Modified CNN-Watershed for Corneal
Endothelium Segmentation:
Image-to-Image Versus Sliding-Window
Comparison

Adrian Kucharski and Anna Fabijańska

Abstract This paper considers the problem of corneal endothelium image segmen-
tation using a method that combines a CNN model with a watershed transform.
Specifically, first CNN predicts cell bodies, edges, and centers. Next, cell centers are
used as markers to guide the watershed transform, which is performed on the cell
edge probability maps inferred by the CNN to outline cell edges. Different variants
of the method are considered. Specifically, a downscaled U-Net is compared with
the Attention U-Net in the image-to-image and sliding window setup. Results show
that using a marker-driven watershed transform to post-process cell edge probability
maps allows for replacing the sliding window setup with an image-to-image setup,
reducing prediction time while maintaining similar or better segmentation accuracy.
Also, when used as a backbone, Attention U-Net outperforms classical U-Net in
determining cell morphometric parameters with high accuracy.

Keywords Corneal endothelial cell · Image segmentation · U-Net · Attention U-Net · Convolutional neural network

1 Introduction

Over time, segmentation methods for corneal endothelial cells have progressed from
unsupervised to supervised machine learning systems. Recently, supervised machine
learning methods using deep learning solutions based on convolutional neural net-

A. Kucharski (B) · A. Fabijańska
Lodz University of Technology, Institute of Applied Computer Science, 18 Stefanowskiego Str.,
90-537 Lodz, Poland
e-mail: [email protected]
A. Fabijańska
e-mail: [email protected]


works (CNNs) have become state-of-the-art. The U-Net model [1] has emerged as
a popular choice for this task, with the downscaled version applied to overlapping
image tiles in a patch-based setup outperforming the full image-based approach [2,
3]. Using a CNN for overlapping image tiles cut out of an image increases the number
of training samples, improving the model’s ability to detect weak and blurred cell
edges due to a smaller image region being analyzed [4]. However, despite these ben-
efits, these methods still have limitations in determining weak and fuzzy boundaries.
They also result in additional computational overhead and increased inference time.
Recent studies have addressed these limitations by integrating convolutional
encoder-decoder models with conventional image processing methods, including
the watershed transform, to improve the post-processing of cell boundaries [5, 6].
Another alternative, explored in [7], is extending the U-Net model with built-in sub-
modules to better adapt to weak and fuzzy cell boundaries. This paper adopts this
method and proposes an image segmentation technique for corneal endothelium that
utilizes the Attention U-Net in conjunction with CNN-Watershed [5]. This approach
addresses the challenges posed by weak and discontinuous cell edges. Specifically,
we apply a marker-based watershed transform to the cell edge probability maps
inferred by the Attention U-Net to outline endothelial cells precisely. We also test
the approach in patch-based and full-image scenarios to determine whether addi-
tional computational overhead from the patch-based setup can be avoided using the
watershed transform in the post-processing step.

2 Dataset Details

The study utilized the Rotterdam dataset [8], which is publicly available and consists
of 52 confocal corneal microscopy images. The resolution of these images varies
from 324 × 385 pixels to 763 × 525 pixels. To handle varying sizes, we resized
each image to 384 × 384 pixels. Although the dataset does not include ground truth
results for cell edges, it does contain manually marked cell centers. We used these
centers to generate ground truth segmentation results by applying marker-guided
watershed segmentation, followed by manual correction as required. The resulting
segmentation masks contain three classes: cell bodies, edges, and centers. Sample
images from the Rotterdam dataset used in this study are displayed in Fig. 1.

3 Methods

3.1 General Idea

The CNN-watershed method for corneal endothelial cell segmentation is a hybrid approach that combines a CNN with the watershed transform to improve accuracy.


Fig. 1 A sample corneal endothelial image from the Rotterdam dataset. a a grayscale microscopy
image, b cell edges, c manually marked cell centers, d a mask that depicts cell edges, cell centers,
and cell bodies, with each class represented by a different color

Fig. 2 The general workflow of the CNN-watershed

The process is summarized in Fig. 2. Firstly, the CNN predicts the positions of
cell bodies, edges, and centers. Next, the predicted cell centers are used as markers
that guide the watershed segmentation. Unlike in the traditional setup, the watershed
transform is performed not on the image gradient but on the cell edge probability
maps inferred by the CNN. The resulting watershed dams created around
the markers are always continuous and correspond to the resulting cell edges.
The baseline CNN-watershed approach [5] employs the downscaled U-Net model
in a sliding window setup. In our study, we modify the baseline approach by using
the Attention U-Net in both the image-to-image (I2I) and the sliding window (SW)
setup.

3.2 Full Image Versus Sliding Window

In the I2I approach, a CNN was trained to output a segmentation mask directly for
a given input image. Input images of size 384 × 384 pixels were considered. This
approach considers global information about the image, which can be helpful in cases
where the cells are irregularly shaped or arranged in complex patterns.
For the SW approach, input images were divided into overlapping tiles of size
64 × 64 pixels. A CNN model was trained to perform image segmentation in each
window, and the predicted segmentation masks were merged to obtain a seamless cell
edge probability map. Incorporating multiple image tiles during the prediction stage
increases computational complexity. Furthermore, the SW approach may encounter
challenges when dealing with large cell clusters or variations in cell size and shape
within an image.
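A minimal sketch of this overlapping-tile prediction is given below, assuming a trained Keras model that returns per-pixel probabilities for the three classes; the tile size of 64 pixels and the stride of 4 pixels follow Sect. 3.6, while the helper name and the merging details are illustrative rather than the authors' code.

```python
import numpy as np

def predict_sliding_window(model, image, tile=64, stride=4):
    """Merge per-tile predictions into a seamless probability map by
    averaging the model outputs over all overlapping tiles."""
    h, w = image.shape
    probs = np.zeros((h, w, 3), dtype=np.float32)    # edges / centers / bodies
    counts = np.zeros((h, w, 1), dtype=np.float32)   # tiles covering each pixel
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patch = image[y:y + tile, x:x + tile][None, ..., None]
            probs[y:y + tile, x:x + tile] += model.predict(patch, verbose=0)[0]
            counts[y:y + tile, x:x + tile] += 1.0
    return probs / counts                            # average over overlaps
```

In practice the tiles would be batched before calling the model; predicting them one by one, as here, keeps the sketch short but illustrates why the SW setup carries the runtime cost discussed in Sect. 4.5.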

3.3 Convolutional Neural Networks

U-Net The configuration of the U-Net model varied depending on the setup. The
input to the SW U-Net model (see Fig. 3) was an image of size 64 × 64 × 1 pixels
and 384 × 384 × 1 pixels for the I2I model. The contracting path included (three
for the SW and four for the I2I architecture) downsampling levels with three 2D
convolutional layers (with a filter size 3 × 3) per level, each followed by a ReLU
activation function. The number of filters in each downsampling level was (32, 64,
128) for the SW and (64, 128, 256, 512) for the I2I. Max-pooling 2D with a pool
size of 2 × 2 was performed after each downsampling block (except the last one).
The expansive path had three for the SW and four for the I2I upsampling levels, with
three Conv2D layers per level. The number of filters in each upsampling level was
[128, 64, 32], and a ReLU activation function followed each convolutional layer.
After each upsampling layer, the corresponding feature maps from the contracting
and expansive paths were concatenated and passed through three 2D convolutional
layers, each followed by a ReLU activation function. The number of filters in each

Fig. 3 The architecture of the SW U-Net model



Fig. 4 The architecture of the SW attention U-Net model

upsampling level was (128, 64, 32) for the SW and (512, 256, 128, 64) for the I2I. The
final layer of the model was a 2D convolutional layer with three filters of size 1 × 1 and
softmax activation, which outputted the predicted segmentation mask. The number
of output labels was three, with the probabilities of the cell edges, cell centers, and
cell bodies.
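The description above maps onto a compact Keras definition. The sketch below is one possible reading of the SW variant (filter counts 32, 64, 128, three convolutions per block, softmax over three classes); the exact arrangement of the original layers, in particular the number of upsampling blocks, is our assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Three 3x3 convolutions, each followed by a ReLU activation
    for _ in range(3):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_sw_unet(input_shape=(64, 64, 1), filters=(32, 64, 128)):
    inputs = layers.Input(input_shape)
    x, skips = inputs, []
    for f in filters[:-1]:                          # contracting path
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, filters[-1])                  # deepest level, no pooling after
    for f, skip in zip(filters[-2::-1], skips[::-1]):   # expansive path
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    # Final 1x1 convolution with softmax over the three output classes
    outputs = layers.Conv2D(3, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```

The I2I variant would analogously use a 384 × 384 × 1 input with filter counts (64, 128, 256, 512).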

Attention U-Net The Attention U-Net [9] was derived from the baseline U-Nets for
both I2I and SW setups. The contracting path remained unaltered, while the expansive
path was modified to include an attention mechanism that helped the model focus
on critical features in the input. At each upsampling block, the attention mechanism
was incorporated by concatenating the relevant feature maps from the contracting
and expansive paths and subsequently utilizing a weighting mechanism to assess the
significance of the features in the concatenation. An additive attention mechanism was
used (see Fig. 4).

3.4 Data Augmentation

The Rotterdam dataset was augmented to increase its variability using a set of geo-
metric transformations with randomized parameters. The following transformations
were applied to each image in the dataset:
– A shear operation introducing distortion along both axes to mimic natural deformation in the corneal endothelium (random value between –0.3 and 0.3).
– A scale transformation to simulate cell size and spacing variations (random scaling
factor from 0.5 to 1.5).
– A rotation transformation to simulate random rotational changes in the micro-
scope’s imaging plane (random angle from –0.15 to 0.15).
– A vertical flipping operation with a 50% probability and a horizontal flipping
operation with a 50% probability to increase dataset variability.
After each training epoch, the transformation parameters were randomized again.
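A sketch of such a pipeline built with Keras' ImageDataGenerator is shown below. The library choice is ours, and because ImageDataGenerator expresses shear and rotation in degrees while the ranges above are given as unitless factors, the converted values are assumptions for illustration only.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

aug_params = dict(
    shear_range=np.degrees(np.arctan(0.3)),  # assumed shear factor 0.3 -> ~16.7 deg
    zoom_range=[0.5, 1.5],                   # scaling factor between 0.5 and 1.5
    rotation_range=np.degrees(0.15),         # assumed angle 0.15 rad -> ~8.6 deg
    horizontal_flip=True,                    # each flip applied with 50% probability
    vertical_flip=True,
)
image_gen = ImageDataGenerator(**aug_params)
mask_gen = ImageDataGenerator(**aug_params)
# Passing the same seed to both flows keeps images and masks aligned:
# image_gen.flow(images, batch_size=4, seed=1); mask_gen.flow(masks, batch_size=4, seed=1)
```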

3.5 Training

The I2I and SW models were trained using the Adam optimizer with a learning rate
of 1e-4 and utilized categorical cross-entropy loss as the loss function.
For the image-to-image Attention U-Net and U-Net models, the training was
conducted for 100 epochs with a batch size of 4 and 34 steps per epoch.
Meanwhile, the sliding-window models were trained for 100 epochs with a batch
size of 128 and 102 steps per epoch. The training dataset was augmented using the
data augmentation process described in Sect. 3.4, and patches of size 64 × 64 were
extracted from the augmented images. A total of 13,000 patches were extracted per
training epoch.
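Under these settings, the training loop reduces to a few Keras calls. The sketch below shows the I2I configuration; build_unet, train_data, and val_data are hypothetical placeholders.

```python
import tensorflow as tf

model = build_unet(input_shape=(384, 384, 1))   # hypothetical model builder
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
)
model.fit(
    train_data,                 # augmented images with 3-class masks
    validation_data=val_data,
    epochs=100,
    steps_per_epoch=34,         # batch size of 4 for the I2I models
)
```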

3.6 Prediction and Watershed Postprocessing

Trained models generated three probability maps for cell borders, centers, and bodies
(see Fig. 5). Cell bodies were not used for further processing.
While I2I Attention U-Net and U-Net models process a whole image simultane-
ously, SW Attention U-Net and U-Net models were applied to consecutive image
patches of size 64 × 64 in a sliding window setup. The patches were extracted with a
stride of 4 pixels in the horizontal and vertical directions to obtain seamless probability
maps. The model's predictions for overlapping patch regions were averaged.
Predicted cell centers were then used to generate markers for marker-controlled
watershed segmentation. To transform the cell center probability map into 1-pixel
seeds, a mean filter of size 3 × 3 was first applied. Then, the results were binarized via
the minimum cross-entropy approach proposed in [10]. Next, morphological erosion
with a square structuring element of size 3 × 3 pixels was applied to the binary image.
Finally, centroid positions were calculated for each connected region to generate
1-pixel markers, and marker-controlled watershed segmentation was performed on
the cell border probability maps output by the CNN models, using the prepared seeds.
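These marker extraction and watershed steps map directly onto standard scikit-image routines, as in the sketch below; we assume that threshold_li implements the minimum cross-entropy thresholding of [10], and the function names and structure are illustrative rather than the authors' code.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_li
from skimage.measure import label, regionprops
from skimage.morphology import binary_erosion, square
from skimage.segmentation import watershed

def extract_markers(center_prob):
    """Turn the cell-center probability map into labelled 1-pixel seeds."""
    smoothed = uniform_filter(center_prob, size=3)      # 3x3 mean filter
    binary = smoothed > threshold_li(smoothed)          # minimum cross-entropy [10]
    eroded = binary_erosion(binary, square(3))          # 3x3 square structuring element
    markers = np.zeros(center_prob.shape, dtype=np.int32)
    for i, region in enumerate(regionprops(label(eroded)), start=1):
        r, c = (int(round(v)) for v in region.centroid) # one seed per connected region
        markers[r, c] = i
    return markers

def segment_cells(edge_prob, center_prob):
    """Marker-controlled watershed on the CNN's cell-border probability map."""
    return watershed(edge_prob, extract_markers(center_prob))
```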


Fig. 5 Probability maps outputted by I2I U-Net models. a an original image, b cell bodies, c cell
borders, d cell centers

4 Results

4.1 General Assessment Procedure

The quality of endothelial cell segmentation was evaluated visually and quantita-
tively using image segmentation accuracy measures and measures derived from the
cells’ morphology. The assessment was performed using a three-fold cross-validation
approach. The available corneal endothelium image data for each setup was randomly
divided into three approximately equal subsets. Two subsets were used to train the
models, while the third subset was used to evaluate their performance. The assessment
was repeated three times with different training and testing fold configurations.

4.2 Visual Results

Visual results of the CNN-Watershed used in the I2I and SW setups, running on top
of the U-Net and the Attention U-Net models, are presented in Fig. 6. Specifically, the
top panel presents the resulting probability maps, with intensities of different colors
denoting probabilities of cell edges, cell centers, and cell bodies. The middle panel
presents the resulting cell edges overlaid on an original sample image. Finally, the
bottom panel compares the ground truth edges in green with the inferred edges in
red. Overlapping edges are shown in white.
Additionally, Fig. 7 visualizes the resulting attention maps of the Attention U-Net
used in the sliding window and image-to-image setup.

4.3 Image Segmentation Quality Assessment

The DICE coefficient (see Eq. 1) was computed between the resulting P and the
ground truth T edges. Prior to the DICE calculation, the edges, which were only one
pixel wide, were dilated using a square structural element of size 4 × 4.

$$\mathrm{DICE} = \frac{2\,|T \cap P|}{|T| + |P|} \qquad (1)$$
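A direct implementation of Eq. 1 with the 4 × 4 dilation described above might look as follows (a sketch; the function name and inputs are ours):

```python
import numpy as np
from skimage.morphology import binary_dilation, square

def edge_dice(t_edges, p_edges):
    """DICE between two 1-pixel-wide edge maps, dilated with a 4x4 square first."""
    t = binary_dilation(t_edges.astype(bool), square(4))
    p = binary_dilation(p_edges.astype(bool), square(4))
    intersection = np.logical_and(t, p).sum()
    return 2.0 * intersection / (t.sum() + p.sum())
```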

The longest distance between a pixel of the ground truth edges and the nearest pixel of
the predicted edges was quantified with the modified Hausdorff distance (MHD) [11].
In the ideal case, the MHD between two images is 0. High MHD values suggest the presence
of false or missing edges.
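The modified Hausdorff distance of [11] can be computed from the coordinates of the edge pixels, e.g. as in this sketch:

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(t_points, p_points):
    """MHD [11]: the larger of the two mean nearest-neighbour distances
    between the ground-truth (t) and predicted (p) edge-pixel sets."""
    d = cdist(t_points, p_points)      # pairwise Euclidean distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```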
The summary of cell segmentation accuracy measures obtained for each consid-
ered version of the CNN-watershed for each testing fold is shown in Table 1. The
best scores are shown in bold.


Fig. 6 Visual results of segmentation of corneal endothelial images. Results obtained with a I2I
Attention U-Net, b I2I U-Net, c SW attention U-Net, d SW U-Net. Top—results obtained by CNNs.
Middle—predicted edges overlaid on the original image. Bottom—comparison between the ground
truth and edges generated by the CNN-watershed algorithm. The edges identified as ground truth
are highlighted in red, the edges generated by the CNN-watershed algorithm are highlighted in
green, and overlapping edges are highlighted in white


Fig. 7 Visual results of attention maps output by the SW and I2I attention U-Nets. a Attention maps
obtained with the SW Attention U-Net (pixel values from 0.62 to 0.93), b attention maps obtained with
the I2I Attention U-Net (pixel values from 0.47 to 0.68), c an original image, d a target ground truth
(red: cell bodies, green: cell boundaries, blue: cell centers)

Table 1 Image segmentation accuracy measures for each testing fold (F1, F2, F3). DSC—the DICE coefficient, MHD—the modified Hausdorff distance

Model                 DSC F1   DSC F2   DSC F3   MHD F1   MHD F2   MHD F3
I2I Attention U-Net   0.875    0.869    0.848    0.352    0.379    0.480
SW Attention U-Net    0.877    0.871    0.848    0.390    0.374    0.585
I2I U-Net             0.885    0.883    0.860    0.329    0.345    0.477
SW U-Net              0.869    0.873    0.831    0.382    0.368    0.636

4.4 Cell Morphometry Assessment

The Pearson correlation coefficient (PCC) [12], the mean absolute error of the number
of cell neighbors (M AE N ), and the relative error of cell hexagonality (R E H ) were
used to assess the resulting cell morphology.
To compare the number of cell neighbors in the ground truth T and predicted P
images, the mean absolute error between the number of neighbors (see Eq. 2) was
calculated. The reference number of cells and their positions in both images were
based on the T image.

$$\mathrm{MAE}_N = \frac{1}{N_T}\sum_{i=1}^{N_T} |T_i - P_i| \qquad (2)$$

where the number of cells in the T image is denoted by N_T, whereas T_i and P_i refer to the numbers of neighbors of the i-th cell in the two images being compared.
The relative hexagonality error RE_H (Eq. 3) between the T and P images was
calculated to evaluate cell hexagonality. The hexagonality coefficients (T_H and P_H)
were calculated by dividing the number of hexagonal cells with six neighbors by the
total number of cells. A high hexagonality coefficient indicates good corneal health,
while a low value may indicate corneal damage or disease.
$$\mathrm{RE}_H = \left|\frac{T_H - P_H}{T_H}\right| \qquad (3)$$

To measure the degree of correlation between the sizes (in pixels) of corresponding
cells in the T and P images, the Pearson correlation coefficient (PCC) was utilized.
A strong correlation is indicated by PCC values within [−1; −0.5] ∪ [0.5; 1.0],
while a perfect match is attained when the PCC value is equal to either 1 or −1.
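Given per-cell neighbour counts extracted from the two segmentations, Eqs. 2 and 3 reduce to a few lines, as sketched below with hypothetical inputs; scipy.stats.pearsonr can supply the PCC of the corresponding cell sizes.

```python
import numpy as np

def mae_neighbors(t_neighbors, p_neighbors):
    """Eq. 2: mean absolute error between per-cell neighbour counts."""
    t, p = np.asarray(t_neighbors), np.asarray(p_neighbors)
    return np.mean(np.abs(t - p))

def hexagonality_error(t_neighbors, p_neighbors):
    """Eq. 3: relative error of the share of cells with exactly six neighbours."""
    t_h = np.mean(np.asarray(t_neighbors) == 6)  # hexagonality coefficient T_H
    p_h = np.mean(np.asarray(p_neighbors) == 6)  # hexagonality coefficient P_H
    return abs(t_h - p_h) / t_h
```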
The summary of the cell morphometry accuracy measures obtained for each con-
sidered version of the CNN-watershed for each testing fold is shown in Table 2. The
best scores are shown in bold.

Table 2 Morphometric parameter accuracy measures for each testing fold (F1, F2, F3). PCC—the Pearson correlation coefficient, MAE_N—the mean absolute error between the numbers of neighbors, RE_H—the relative hexagonality error

Model                 PCC F1   PCC F2   PCC F3   MAE_N F1   MAE_N F2   MAE_N F3   RE_H F1   RE_H F2   RE_H F3
I2I Attention U-Net   0.898    0.915    0.748    0.136      0.111      0.239      0.035     0.055     0.048
SW Attention U-Net    0.765    0.909    0.729    0.319      0.123      0.480      0.085     0.071     0.251
I2I U-Net             0.885    0.930    0.750    0.156      0.106      0.263      0.053     0.050     0.094
SW U-Net              0.688    0.911    0.688    0.196      0.117      0.493      0.040     0.056     0.154

Table 3 Average prediction time in seconds for an image with the size of 384 × 384 × 1
Model Average time (s)
I2I attention U-Net 0.080
SW attention U-Net 3.190
I2I U-Net 0.062
SW U-Net 3.041

4.5 Prediction Time

Finally, the prediction time was measured for each considered CNN model and
prediction setup. Table 3 shows the average time for a single image prediction for
each considered model. Time measurements were performed on an NVIDIA GTX
1070 with 8 GB of GDDR5 memory, combined with an AMD Ryzen 5 5600X and
64 GB of DDR4 RAM. All the experiments were conducted with TensorFlow 2.9.1
and the Keras [13] library, and the GPU was utilized for computations.

5 Discussion

Based on the visual assessment presented in the top panel of Fig. 6, it can be observed
that the probability maps generated by the models used in the image-to-image (I2I)
setup are more precise and less blurry compared to those produced by the sliding
window (SW) setup. The probability for each class is higher for the I2I models,

particularly for the cell centers, where the probabilities outputted by the SW models
are notably weaker. This suggests that the sliding window models are less confident
in their predictions, potentially due to the lack of global information related to the
cells and their neighbors. This observation is further supported by the attention maps
displayed in Fig. 7 for the Attention U-Net model, where the attention for the cell
centers in the I2I setup is more concentrated compared to the SW counterparts.
However, this shortcoming is primarily limited to the cell centers, as all model outputs
exhibit similar levels of confidence in the case of the cell edges. As a result, this leads
to comparable cell segmentation outcomes.
The numerical evaluation supports this finding, particularly when examining the
accuracy measures for image segmentation, as shown in Table 1. Although the DICE
scores vary by no more than 3.4% (and no less than 1.5%) between the worst and
best-performing variants, the CNN models used in the I2I setup display slightly
better results than their SW counterparts, with an average difference in segmentation
scores of less than 0.5% for the Attention U-Net and around 2% for the U-Net.
This small difference may be attributed to the application of the watershed transform
to the edge probability maps, which resolves discontinuous edges resulting from
the threshold-based post-processing that is typically applied to cell edge probability
maps in state-of-the-art methods.
When the accuracy measures derived from cell morphometry are considered, the
image-to-image setup still outperforms the sliding window approach for both variants
of the U-Net model. However, the Attention U-Net is more accurate. The advantage
of the model is, on average, 0.15% for the Pearson correlation coefficient of cell sizes,
6.7% for the mean absolute error of the number of neighboring cells, and almost 46%
for the relative hexagonality error.
Finally, the experiments confirmed that using a CNN model in a sliding window
setup incurs a substantial computational overhead. Specifically, it increased the prediction
time 40–50 times compared to the image-to-image setup (see Table 3).
An indirect comparison of our results with the corneal endothelium image segmentation
results reported by other authors is promising. In particular, our proposed I2I
Attention U-Net scored the average cell-based DICE coefficient of 0.979, edge-
based DICE of 0.876, and MHD of 0.400 on a challenging Rotterdam dataset. The
corresponding scores for I2I U-Net were 0.980, 0.880, and 0.384. Vigueras-Guillen et
al. [14] utilized a dataset of 50 images and corresponding masks, achieving an average
DICE coefficient (for cells) of 0.981 and an MHD of 0.22, which is a comparable
result in terms of cell-based DICE. In another study by [15], the authors proposed
a method to segment the relatively simple Alizarine dataset [16]. With the best-fit
algorithm [17], the DICE coefficient (for cells) on this dataset was equal to 0.94 and
the MHD to 0.14, which is worse than our result in terms of the cell-based DICE.
The lower MHD scores were the result of the postprocessing applied with the best-fit
method. Without the best-fit postprocessing, they achieved a DICE coefficient of 0.62
and an MHD of 1.26. Our method avoids such postprocessing, reducing the complexity
and time required for segmentation while maintaining excellent results.

6 Conclusions

This study has shown that applying a marker-driven watershed transform to post-
process the cell edges probability maps in the CNN-based corneal endothelium image
segmentation can significantly improve the method’s sensitivity to discontinuous
edges. This approach allows replacing the sliding window setup with an image-to-
image setup, decreasing computational overhead and shortening prediction times
while maintaining similar or even better segmentation accuracy. Additionally, our
results indicate that the Attention U-Net outperforms the classical U-Net in terms of
cell segmentation quality measured by cell morphometric parameters. These
findings demonstrate the potential of our proposed method for efficient and accurate
corneal endothelium image segmentation, which can have practical applications in
diagnosing and treating various eye diseases.

References

1. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 9351 of LNCS, pp. 234–241. Springer (2015)
2. Fabijańska, A.: Segmentation of corneal endothelium images using a U-Net-based convolutional neural network. Artif. Intell. Med. 88, 1–13 (2018). https://doi.org/10.1016/j.artmed.2018.04.004
3. Daniel, M., Atzrodt, L., Bucher, F., Wacker, K., Böhringer, S., Reinhard, T., Böhringer, D.: Automated segmentation of the corneal endothelium in a large set of "real-world" specular microscopy images using the U-Net architecture. Sci. Rep. 9, 4752 (2019). https://doi.org/10.1038/s41598-019-41034-2
4. Vigueras-Guillén, J.P., Sari, B., Goes, S.F., Lemij, H.G., van Rooij, J., Vermeer, K.A., van Vliet, L.J.: Fully convolutional architecture versus sliding-window CNN for corneal endothelium cell segmentation. BMC Biomed. Eng. 1, 4 (2019). https://doi.org/10.1186/s42490-019-0003-2
5. Kucharski, A., Fabijańska, A.: CNN-watershed: a watershed transform with predicted markers for corneal endothelium image segmentation. Biomed. Signal Process. Control 68, 102805 (2021). https://doi.org/10.1016/j.bspc.2021.102805
6. Vigueras-Guillén, J.P., van Rooij, J., van Dooren, B.T.H., Lemij, H.G., Islamaj, E., van Vliet, L.J., Vermeer, K.A.: DenseUNets with feedback non-local attention for the segmentation of specular microscopy images of the corneal endothelium with guttae (2022). https://doi.org/10.48550/ARXIV.2203.01882. arXiv:2203.01882
7. Zhang, Y., Higashita, R., Fu, H., Xu, Y., Zhang, Y., Liu, H., Zhang, J., Liu, J.: A multi-branch hybrid transformer network for corneal endothelial cell segmentation. In: de Bruijne, M., Cattin, P.C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C. (eds.) Medical Image Computing and Computer Assisted Intervention—MICCAI 2021, pp. 99–108. Springer International Publishing, Cham (2021). https://doi.org/10.1007/978-3-030-87193-2_10
8. Selig, B., Vermeer, K.A., Rieger, B., Hillenaar, T., Hendriks, C.L.L.: Fully automatic evaluation of the corneal endothelium from in vivo confocal microscopy. BMC Med. Imaging 15(1), 13 (2015). https://doi.org/10.1186/s12880-015-0054-3
9. Oktay, O., Schlemper, J., Folgoc, L.L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N.Y., Kainz, B., Glocker, B., Rueckert, D.: Attention U-Net: learning where to look for the pancreas (2018). https://doi.org/10.48550/ARXIV.1804.03999. arXiv:1804.03999
10. Li, C., Tam, P.: An iterative algorithm for minimum cross entropy thresholding. Pattern Recogn. Lett. 19(8), 771–776 (1998). https://doi.org/10.1016/s0167-8655(98)00057-9
11. Dubuisson, M.-P., Jain, A.: A modified Hausdorff distance for object matching. In: Proceedings of the 12th International Conference on Pattern Recognition, vol. 1, pp. 566–568 (1994). https://doi.org/10.1109/ICPR.1994.576361
12. Freedman, D., Pisani, R., Purves, R.: Statistics, 4th edn. W. W. Norton & Company, New York (2007)
13. Sha, Y.: keras-unet-collection (2021). https://github.com/yingkaisha/keras-unet-collection. https://doi.org/10.5281/zenodo.5449801
14. Vigueras-Guillén, J.P., Sari, B., Goes, S.F., Lemij, H.G., van Rooij, J., Vermeer, K.A., van Vliet, L.J.: Fully convolutional architecture versus sliding-window CNN for corneal endothelium cell segmentation. BMC Biomed. Eng. 1(1) (2019). https://doi.org/10.1186/s42490-019-0003-2
15. Nurzynska, K.: Deep learning as a tool for automatic segmentation of corneal endothelium images. Symmetry 10(3), 60 (2018). https://doi.org/10.3390/sym10030060
16. Ruggeri, A., Scarpa, F., Luca, M.D., Meltendorf, C., Schroeter, J.: A system for the automatic estimation of morphometric parameters of corneal endothelium in alizarine red-stained images. Br. J. Ophthalmol. 94(5), 643–647 (2010). https://doi.org/10.1136/bjo.2009.166561
17. Piórkowski, A.: Best-fit segmentation created using flood-based iterative thinning. In: Advances in Intelligent Systems and Computing, pp. 61–68. Springer International Publishing (2016). https://doi.org/10.1007/978-3-319-47274-4_7
Tissue Pattern Classification with CNN in
Histological Images

Krzysztof Siemion, Lukasz Roszkowiak, Jakub Zak, Antonina Pater, and Anna Korzynska

Abstract Tissue pattern is an important factor in the morphological evaluation of tissue samples. It can be decisive in disease discrimination or in establishing disease subtypes. Tissue architecture is generally described by a pathologist as classical, hypocellular, or hypercellular. This article presents a study that establishes a classification convolutional neural network model for tissue compactness assessment. The VGG16 network was trained to classify image patches in order to create reliable heatmaps. Image augmentation, class-specific sampling, and hyperparameter tuning were used to prevent overfitting and increase the accuracy of the model. Based on the current results, it can be concluded that differentiation between hypo- and hypercellular tissue (compactness) is possible with the VGG16 deep learning classification model. We hope to find a correlation between features related to tissue compactness and subtypes of the analysed disease that could become diagnostic markers or prognostic factors.

Keywords Biomedical engineering · Digital pathology · Deep learning · Image classification

1 Introduction

Tissue architecture is critical for cell homeostasis and physiological functions [1].
It can also be a crucial factor in disease discrimination or in establishing disease
subtypes. Typically, tissue architecture is described by a pathologist as hypocellular,
hypercellular, or classical. Such classification might aid the expert in the evaluation of the

K. Siemion · L. Roszkowiak · J. Zak · A. Pater · A. Korzynska
Nalecz Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences,
Ks. Trojdena 4 st., 02-109 Warsaw, Poland
Ks. Trojdena 4 st., 02-109 Warsaw, Poland
K. Siemion (B)
Medical Pathomorphology Department, Medical University of Bialystok, Bialystok, Poland
e-mail: [email protected]


tissue and, in combination with other features, help the expert pathologist in disease
differentiation, diagnosis, and prognosis.
This article presents a study used to establish a classification model for tissue
compactness. The model was tested on a set of digital whole slide images from patients
with inflammatory spindle cell lesions (ISCLs) [2], which are treated as a heterogeneous
group of diseases, and the biology of many of them remains not fully understood.
The histomorphology of these tumors seems to be diverse enough to examine whether
a correlation exists between features related to tissue compactness and subtypes of
the analysed lesions. To achieve this goal, a convolutional neural network (CNN) was
trained to classify compactness in the images of the tissue samples.

1.1 Related Works

Genetic and epigenetic factors have a major influence on cancer cell phenotype and
tumor architecture [3]. Other research teams have presented different approaches to
determining spatial contexts and tissue architecture, such as the fast Fourier transform [4]
or spatially resolved transcriptomics [5], and applied them to multiple tissues, e.g.
muscle [6]. Some scientists have also provided insight into how the tissue fixation
protocol affects tissue architecture [7].
To date, there is no study focusing strictly on the automatic classification of
tissue architecture. The method presented in this study could be used in the future to
aid in the differentiation of multiple diseases.

2 Materials and Methods

2.1 Disease

A group of "inflammatory myofibroblastic lesions" consists of neoplastic lesions,
e.g. inflammatory myofibroblastic tumor (IMT), inflammatory fibroid polyp,
inflammatory liposarcoma, and inflamed gastrointestinal stromal tumor, and reactive
lesions, called inflammatory pseudotumor (IPT) [8]. The IMT is an intermediate-grade
neoplasm, which often recurs after surgical excision and rarely gives metastases [9, 10].
In contrast, IPT almost never recurs after resection and does not metastasize [2].
ISCLs can occur as a mass in every localization of the human body. IMTs occur
most frequently in the abdominal cavity, less often in the lung, the genital tract, the
head and neck region [11]. IPTs can occur in almost every anatomical location [12].
Both tumors consist of proliferating fibroblasts and/or myofibroblasts infiltrated by
lymphocytes, plasma cells, eosinophils, and histiocytes [9].
Genetic tests, such as gene sequencing or fluorescence in situ hybridization, are the
most reliable methods for differentiating the two diseases. Unfortunately,

Fig. 1 Dataset sample distribution annotated by expert; compactness classes: hypercellular in blue,
hypocellular in green, classical in yellow for patch size of 256 × 256

these methods are costly in terms of money and resources. Finding a correlation between
tissue compactness features and subtypes of the analysed disease would provide efficient
diagnostic markers or prognostic factors.
Three basic histologic patterns of ISCLs are distinguished: hypocellular (scle-
rotic, scar-like), classical (nodular fasciitis-like, myxoid) and hypercellular (com-
pact, proliferating) [8, 13, 14]. The presence of giant cells, myxoid intercellular
content, ganglion-like cells, lymphovascular invasion, necrosis, high mitotic activity
and increased cellularity are considered as adverse factors that worsen the prognosis
after tumor resection [8, 15, 16]. That is why assessment of the histologic pattern
by the deep neural network may improve histopathologial diagnostics.

2.2 Image Data

The study dataset consisted of histological slides from patients with ISCLs collected
from the archives of the Academic Center of Pathomorphological and Genetic-
Molecular Diagnostics in Bialystok (Poland). The study obtained the consent of
the bioethics committee at the Medical University of Bialystok—number APK.002.
339.2020. The hematoxylin and eosin stained microscopic slides were digitized
with a Hamamatsu NanoZoomer SQ slide scanner. In total, 85 whole-slide images from
77 patients were collected. Then, arbitrarily selected 3200 × 1800 pixel images
were annotated by an expert pathologist with masks representing each histological
tissue architecture type (example presented in Fig. 2). The annotated images were
then subdivided into smaller fragments (patches) for deep learning (DL) model training.
They were divided into patches of size 256 × 256, 128 × 128 and 64 × 64 pixels,
which resulted in about 9, 35 and 150 thousand images, respectively. The distribution
of samples of size 256 × 256 pixels containing the hypocellular, hypercellular and
classical tissue architecture classes is presented in Fig. 1.
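To illustrate the scale of this patch extraction step, the following is a minimal sketch of non-overlapping tiling; the function name and use of NumPy are our assumptions, as the paper does not publish its implementation:

import numpy as np

def extract_patches(image, patch_size):
    # Split an H x W x C image into non-overlapping patch_size x patch_size
    # tiles; border regions that do not fill a complete tile are discarded.
    h, w = image.shape[:2]
    return [image[y:y + patch_size, x:x + patch_size]
            for y in range(0, h - patch_size + 1, patch_size)
            for x in range(0, w - patch_size + 1, patch_size)]

# One 3200 x 1800 annotated image yields floor(3200/256) * floor(1800/256) = 84
# tiles of size 256 x 256, consistent in scale with the roughly 9 thousand
# patches reported for the full set of annotated images.
image = np.zeros((1800, 3200, 3), dtype=np.uint8)  # dummy RGB image
print(len(extract_patches(image, 256)))  # -> 84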

2.3 Dataset Parameters

The available annotations in the dataset are relatively rough, due to the uncertainty of the
data. We established a crucial parameter, called class_tresh (class threshold), whose
value determines the class assignment of a sample patch. Moreover, the labeled
regions do not correspond directly to the patch splitting, as the annotations were made in
bigger images (see Sect. 2.2). The set value of class_tresh is the minimal ratio of the area
of an image patch taken up by an annotation label. For example, when class_tresh is
set to 0.2, the label has to take up more than 20% of the area of the image patch to be
included in the considered class.
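As a minimal sketch of this thresholding rule (the names and the exact tie-handling are our assumptions, not the authors' code), the class assignment can be expressed as follows:

import numpy as np

def assign_class(label_patch, n_classes, class_tresh=0.5):
    # label_patch: integer mask of the patch, 0 = unannotated,
    # 1..n_classes = tissue compactness classes. Returns the class whose
    # annotation covers more than class_tresh of the patch area, else None.
    area = label_patch.size
    for c in range(1, n_classes + 1):
        if (label_patch == c).sum() / area > class_tresh:
            return c
    return None

patch = np.zeros((256, 256), dtype=np.uint8)
patch[:, :160] = 2                       # class 2 covers 62.5% of the patch
print(assign_class(patch, n_classes=3))  # -> 2 (with class_tresh = 0.5)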
Due to the progressive manner of dataset creation during project development, at the
first stage of our research the number of examples in our dataset was very limited,
and we tried lowering the threshold to increase the number of examples in
each class. Nevertheless, the logical choice is to set class_tresh to 0.5,
and that is the value set at the current stage of the research.
The dataset, composed of image patches with different tissue architecture, was
quantitatively heterogeneous, with the classical architecture in the majority, as can be
seen in Fig. 1. In the case of a strongly biased dataset, if the validation set is too small, a
situation might occur where all samples in the validation set derive from one
class (the majority one). This inevitably pushes the updated weights in the direction
of assigning every sample to the majority class. To avoid this problem, instead of
fully randomly, we assigned the samples to each (train/validation/test) set with
consideration of their classes. The other employed strategy was to limit the sets
to the number of examples in the least populated class. Thus, all subsets contained
equally distributed classes with randomly selected examples.
Lastly, we arbitrarily opted for a 50/10/40 (train/validation/test) dataset split. The
test set was held out, while the train and validation sets were used for cross-validation.
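A minimal sketch of this balanced, stratified split (the function name, seed handling and rounding are our assumptions) could look as follows:

import numpy as np

def balanced_split(labels, fractions=(0.5, 0.1, 0.4), seed=0):
    # Class-stratified train/validation/test split; each class is capped at
    # the size of the least populated class, then the capped, shuffled
    # indices are divided according to `fractions`.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    limit = counts.min()  # limit every class to the least populated one
    train, val, test = [], [], []
    for c in classes:
        idx = rng.permutation(np.where(labels == c)[0])[:limit]
        n_train = int(fractions[0] * limit)
        n_val = int(fractions[1] * limit)
        train.extend(idx[:n_train])
        val.extend(idx[n_train:n_train + n_val])
        test.extend(idx[n_train + n_val:])
    return train, val, test

labels = [0] * 500 + [1] * 1200 + [2] * 800
train, val, test = balanced_split(labels)
print(len(train), len(val), len(test))  # -> 750 150 600 (250/50/200 per class)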

2.4 The Deep Learning Model & Implementation

In these experiments, we used the well-known VGG16 model [17], with its input and
output layers modified to fit our purposes. The model, with about 14 million
trainable parameters, was initialized with random weights. A stochastic gradient
descent (SGD) optimizer was used with default learning rate and decay parameters,
together with the typical “categorical crossentropy” loss function. The batch
size was set to 16 in all of the experiments.
The model, training and inference were implemented in Python with the Keras/
TensorFlow framework. The model was obtained from an open-access GitHub repository.1

1 https://github.com/qubvel/classification_models.
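The paper links the model source above; as an illustration only, an equivalent setup can be sketched with tf.keras.applications. The lightweight classification head below is our assumption, chosen so that the parameter count roughly matches the ~14 million reported; the authors' exact head is not specified:

import tensorflow as tf

# VGG16 convolutional base (about 14.7M parameters) with a small head
# producing 3 compactness classes from 256 x 256 RGB patches.
base = tf.keras.applications.VGG16(weights=None, include_top=False,
                                   input_shape=(256, 256, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.SGD(),  # default lr and decay
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_data, validation_data=val_data, batch_size=16, epochs=...)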

Fig. 2 Classification model with sample input image; classes: hypercellular, classical and hypocel-
lular

Table 1 Comparison of tested augmentation schemes

Augmentation              Flips  Rotate  RandomCrop  Elastic  ColorJitter  CLAHE  Brightness mod.
Simple                    v      v       v
Med                       v      v       v           v
med+colorJitter           v      v       v           v        v
med+CLAHE                 v      v       v           v                     v
Heavy                     v      v       v           v        v            v
no_deform                 v      v       v                    v            v      v
no_deform+no_colorJitter  v      v       v                                 v      v

2.5 Data Augmentation

In situations where very few examples are available, data augmentation can improve
the achieved results. In this study, the Albumentations2 package was used to perform
image modification. We proposed several augmentation strategies, calling them:
simple, med (medium) and heavy. We tested the influence of geometric deformations
and ColorJitter on our data by implementing additional augmentation schemes:
med+colorJitter, med+CLAHE, no_deform and no_deform+no_colorJitter. For clear
comparison, all of the tested augmentation schemes are presented in
Table 1.
Simple The simple augmentation consisted of only vertical and horizontal flips,
image rotation and random cropping. This kind of augmentation has a low impact
on image content, as it does not alter pixel values. Nevertheless, it can
improve the training capabilities of the model.
Medium (med) In addition to the simple augmentation, this strategy included several
further distortions: grid distortion, optical distortion and elastic deformation. This
2 https://github.com/albumentations-team/albumentations.

realizes multiple geometric transformations, changing the location and shape of
the objects in the images.
Med+colorJitter The colorJitter is realised by randomly changing the
brightness, contrast, saturation and hue of an image. Adding this transform
extends the scope of changes in the images from geometry to pixel intensity. The
high variability of color and brightness in histopathology images is one of the main
issues with this type of data, hence adding this kind of transform should increase the
model's generalization ability.
Med+CLAHE Contrast Limited Adaptive Histogram Equalization (CLAHE) is a
variant of adaptive histogram equalization in which the contrast amplification is
limited so as to reduce the problem of noise amplification [18]. This modification
tends to normalize the images, thereby improving generalization.
Heavy The proposed heavy augmentation consists of multiple distortions in the
geometry and brightness of images, along with Contrast Limited Adaptive Histogram
Equalization (CLAHE). Such strong manipulation of data can sometimes have a
negative impact on training because of the additional uncertainty it introduces.
no_deform The distortions caused by geometric transformations are sometimes
very drastic and might hinder the learning process. With this augmentation
approach we wanted to test whether the applied distortions really improve
the training process and knowledge generalisation. This approach works
as an ablation test for augmentation techniques.
no_deform+no_colorJitter Simple augmentation with added CLAHE and brightness
modification. The geometric transformations and color modifications are
omitted. Like no_deform, this approach works as an ablation test for augmentation techniques.
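As an illustration, the simple and med strategies could be composed with Albumentations roughly as below; the probabilities, crop size and distortion limits are our assumptions, since the paper specifies only which transform families each strategy contains:

import albumentations as A

simple = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Rotate(limit=90, p=0.5),
    A.RandomCrop(height=224, width=224),  # illustrative crop size
])

med = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Rotate(limit=90, p=0.5),
    A.RandomCrop(height=224, width=224),
    # geometric distortions added on top of the simple strategy
    A.GridDistortion(p=0.3),
    A.OpticalDistortion(p=0.3),
    A.ElasticTransform(p=0.3),
])

# usage: augmented = med(image=image)["image"]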

2.6 Training Scenarios for Compactness Classification

We organized the dataset so that all patches containing an annotation (with
sufficient class_tresh) were treated as either classical, hypo- or hypercellular, while
the rest of the samples were considered an “other” class. This resulted in a heavily
biased dataset, with the “other” class in the majority. We tackled this problem by
limiting the number of samples per class during training (to the number of
examples in the least populated class) to make the dataset more balanced.
Moreover, we tested two different approaches, with and without the inclusion of
negative examples in the dataset. The 4-class approach included 3 positive classes
and one negative class. In the other approach, the scenario was reduced to a 3-class
model predicting only the classical, hypo- and hypercellular classes.

3 Results

First, we tested the capability of the VGG16 model to capture the knowledge sufficient
for the task. In this experiment we did not use any data augmentation, since we wanted
to try to overfit the model, see Fig. 3. The results in Table 2 confirm that the
model is able to learn the subtle differences in tissue compactness, but regularization
techniques are necessary to tackle the problem of overfitting.
Second, we conducted a comparative analysis for different class_tresh values.
The evaluation was performed with 3-fold cross-validation and the 3-class model. The
resulting accuracy of models with different class_tresh parameter values showed a
proportional increase, as seen in Table 3.
Third, we compared the proposed augmentation strategies. Table 4 shows
that medium augmentation gives the best numerical results according to the evaluation
performed on the test set. We tested two tile sizes and achieved consistent results. For
this evaluation the same 3-class model was used with a 0.5 class_tresh parameter and
the same limit of samples.
Fourth, the results of average compactness classification accuracy with 2 different
training schemes are presented in Table 5. Distinct models with different numbers of
explicitly treated classes are compared. For this evaluation the class_tresh parameter
value was set to 0.5, and the same augmentation strategy, limit of samples, and
train/val/test split were used to achieve comparable results.

Fig. 3 Example of the VGG16 model overfitting to the ISCL dataset, as presented with the train and
validation sets. Around the 18th epoch the model overfits, as the validation loss is no longer decreasing

Table 2 Overfit results for two tile-sizes (where acc stands for accuracy)
           Train         Validation    Test          Evaluation on train set
tile_size  Loss   Acc    Loss   Acc    Loss   Acc    binary_acc  Acc
64         0.00   1.00   0.16   0.89   0.46   0.89   0.95        1.00
128        0.00   1.00   0.15   0.91   0.36   0.90   0.95        1.00

Table 3 Comparison of classification accuracy with different class_tresh parameter value. Given
values are means acquired on test set with 3-fold cross-validation
class_tresh Accuracy Loss Binary accuracy Precision
0.90 97.07 6.05 98.05 97.09
0.75 96.84 6.88 97.90 96.85
0.50 95.57 9.23 97.05 95.60
0.33 94.83 10.65 96.42 95.55
0.10 93.47 11.37 95.26 95.83
0.00 82.35 14.41 88.27 70.91

Table 4 Comparison of augmentation schemes. Given values are means acquired on the test set with
3-fold cross-validation. Header description: epochs—number of epochs before the training was
terminated; acc—accuracy on the test set; bin_acc—binary accuracy on the test set
tile_size  augmentation              Epochs  Acc    Loss   bin_acc  Precision
128        Simple                    39      95.08  12.10  96.71    95.10
           Med                       49      95.39  9.60   96.92    95.43
           med+colorJitter           69      93.89  12.70  95.93    93.95
           med+CLAHE                 71      94.92  10.21  96.62    94.96
           Heavy                     77      94.38  10.36  96.25    94.44
           no_deform                 73      92.62  14.67  95.09    92.74
           no_deform+no_colorJitter  56      94.22  11.21  96.13    94.21
256        Simple                    49      91.79  15.72  94.56    91.97
           Med                       63      95.83  9.91   97.24    95.95
           med+colorJitter           104     89.29  18.13  92.87    89.43
           med+CLAHE                 73      89.00  18.32  92.64    89.12
           Heavy                     104     95.63  8.99   97.10    95.70
           no_deform                 133     91.63  17.92  94.39    91.66
           no_deform+no_colorJitter  83      95.81  8.83   97.23    95.89

Table 5 Results of average compactness classification accuracy of models trained with different
training scenarios. Four randomly selected images were used for F-score evaluation. The 3-class
model discriminates between the classical, hypo- and hypercellular compactness; the 4-class model
includes an explicit “other” class in the dataset; TP—true positive, FP—false positive, FN—false
negative, TN—true negative. The best results are marked in bold
Class tile_size epochs train_acc val_acc test_acc test_bin_acc TP FP FN TN F-score
4 64 127 89.00 90.85 90.84 95.42 1929 601 671 9399 75.20
128 111 92.19 91.86 91.98 96.01 504 177 148 2321 75.62
256 153 89.82 93.14 92.54 96.28 123 28 20 585 83.67
3 64 53 92.15 93.39 93.59 95.74 2153 2024 447 7976 63.54
128 72 95.27 95.83 94.30 96.20 565 406 87 2092 69.62
256 63 91.11 97.04 94.70 96.50 131 106 12 507 68.95

4 Discussion

ISCLs can occur as masses in any localization of the human body, including the
inflammatory myofibroblastic tumors, which are intermediate-grade neoplasms
characterized by a high recurrence rate after excision and low metastatic potential [9]. Three
basic histologic patterns can be distinguished, i.e. hypocellular, classical and hypercellular
[8, 13, 14]. Increased cellularity and the presence of myxoid content in the classical
morphology can worsen the prognosis of a patient [8]. That is why assessment
of the histologic pattern by a deep neural network may improve histopathological
diagnostics.
We encountered two main problems with the automatic compactness classification.
The first was the strong bias in the number of examples in each class, with the classical
tissue far more common in our samples. This created a vast discrepancy in the training
data. To avoid this problem, instead of fully randomly, we assigned the samples to each
(train, val, test) subset with consideration of their classes. All of the subsets contained
equally distributed classes, and were limited to the number of examples in the least
populated class. This resulted in balanced datasets that provided consistently good
classification results.
The second problem was compactness classification in locations where tissues with
different patterns mingle. This creates intertwined class labels (there were image
patches with multiple types of compactness), which might confuse the learning
model. Two solutions might work in this situation: (1) lowering the size of the
image patch; (2) developing a model that outputs a continuous value corresponding
to the ratio of the area taken by each class. Lowering the size of the image patch
should give more precise results, and this was partially confirmed by the achieved
results.
To cope with the mingled classes in patches, we established a crucial parameter,
called class_tresh, whose value determines the class assignment of a sample
patch. The set value of class_tresh is the minimal ratio of the area of an image patch
taken up by an annotation label. In general, the model's accuracy increased with higher
class_tresh parameter values (see Table 3). This confirmed that the model could achieve
better results with fewer samples containing more reliable information.
The integration of augmentation during training of the model significantly
improved the results. It allowed for longer training without suffering from overfitting;
sufficiently extended training gradually increased the accuracy of the model.
According to the augmentation comparison presented in Table 4, the medium
(med) augmentation strategy gives the best numerical results in the evaluation
performed on the test set. By applying different augmentation schemes we tested
the influence of different augmentation methods on our type of data. Based on the
achieved results we concluded that color manipulation (implemented as colorJitter)
in histology images has a negative influence on classification results. It may
introduce too drastic changes to images that contain very subtle information.

Fig. 4 Example of inference on a sample image, generating separate heatmaps for each class. The
localization of the hypercellular class can be clearly seen in the result image. Heatmaps (soft
predictions) are presented with the “hot” colormap, where white equals the maximum value

This consequently makes it more difficult to classify tissue architecture, hence the lower
numerical results of these more complex augmentation strategies.
On the other hand, geometric deformations improve the results, as they increase
the generalisation of the learning process. The brightness and contrast manipulation
also has a positive impact on the learning process. To sum up, the heavy augmentation
combines the augmentation techniques that are beneficial for tissue architecture
classification. The numerical results are comparable between the med and heavy
augmentations, as is the time taken per epoch. In the final experiments we decided to
use the heavy augmentation, as more complex augmentation has a better chance of
preventing overfitting and because this method allowed for longer effective training.
Most importantly, we compared training scenarios with different numbers of classes
explicitly given to the model. The main difference was how the negative samples
were provided, and we tried to answer the question of whether they actually improve
the classification accuracy. The model that was trained on patches with only

the relevant classes (3-class model), without negative examples, achieved the best accuracy.
An example of inference in comparison to the labels is shown in Fig. 4.
Our hypothesis was that the introduction of the 4th class (“other”) might increase
the generality of the model, but the high variability of tissue in that class might
be misleading for the trained model. As seen in Table 5, the 4-class model achieved
an accuracy comparable to the 3-class model. However, according to the patch-based
evaluation of the test set images, the number of false positives significantly decreased.
The achieved results in automatic classification of tissue architecture were satisfactory,
yet they were obtained only on the ISCL dataset. In the future we plan to
further develop this method and test it with different kinds of tissue.

5 Conclusions

To sum up, based on the current results, we can conclude that the differentiation
between classical, hypo- and hypercellular tissue compactness is possible with the
application of the VGG16 deep learning classification model. The achieved accuracy
was promising, although we hope to produce even better results by increasing the
number of samples used for training the classification model. In the future we hope
to find a correlation of features related to tissue compactness with markers of
prognostic or predictive factors of ISCLs.

Acknowledgements Ethics approval Consent of the Bioethics Committee at the Medical Uni-
versity of Bialystok—number APK.002.339.2020.

Funding This work has received support from Nalecz Institute of Biocybernetics and Biomedical
Engineering Polish Academy of Sciences statutory financing. The research is partially funded by two
subsidies from the Medical University of Bialystok: SUB/1/DN/22/002/1155 and SUB/1/DN/21/
002/1194.

References

1. Nelson, C.M., Bissell, M.J.: Of extracellular matrix, scaffolds, and signaling: tissue architecture
regulates development, homeostasis, and cancer. Ann. Rev. Cell Dev. Biol. 22(1), 287–309
(2006)
2. Kutok, et al.: Inflammatory pseudotumor of lymph node and spleen: an entity biologically
distinct from inflammatory myofibroblastic tumor. Hum. Pathol. 32(12), 1382–1387 (2001)
3. Almagro, J., Messal, H.A., Elosegui-Artola, A., van Rheenen, J., Behrens, A.: Tissue architec-
ture in tumor initiation and progression. Trends Cancer 8(6), 494–505 (2022)
4. Zak, J., Siemion, K., Roszkowiak, L., Korzynska, A.: Fourier transform layer for fast fore-
ground segmentation in samples’ images of tissue biopsies. In: Biocybernetics and Biomedical
Engineering—Current Trends and Challenges, pp. 118–125. Springer International Publishing
(2021)

5. Chang, Y., He, F., Wang, J., Chen, S., Li, J., Liu, J., Yu, Y., Su, L., Ma, A., Allen, C., Lin,
Y., Sun, S., Liu, B., Otero, J., Chung, D., Fu, H., Li, Z., Xu, D., Ma, Q.: Define and visualize
pathological architectures of human tissues from spatially resolved transcriptomics using deep
learning (2021)
6. Morris, T.A., Eldeen, S., Tran, R.D.H., Grosberg, A.: A comprehensive review of computational
and image analysis techniques for quantitative evaluation of striated muscle tissue architecture.
Biophys. Rev. 3(4), 041302 (2022)
7. Singhal, P.: Evaluation of histomorphometric changes in tissue architecture in relation to alter-
ation in fixation protocol—An in vitro study. J. Clin. Diagn. Res. (2016)
8. Siemion, K., Reszec-Gielazyn, J., Kisluk, J., Roszkowiak, L., Zak, J., Korzynska, A.: What do
we know about inflammatory myofibroblastic tumors?—A systematic review. Adv. Med. Sci.
67(1), 129–138 (2022)
9. Antonescu, C.R., et al.: WHO classification of tumours. Soft tissue and bone tumours, Inter-
national Agency for Research on Cancer (2020)
10. Gros, L., Tos, A.P.D., Jones, R.L., Digklia, A.: Inflammatory myofibroblastic tumour: state of
the art. Cancers 14(15), 3662 (2022)
11. Lindberg, M.R.: Diagnostic Pathology: soft Tissue Tumors E-Book. Elsevier Health Sciences
(2019)
12. Zhu, et al.: Pulmonary inflammatory myofibroblastic tumor versus IgG4-related inflammatory
pseudotumor: differential diagnosis based on a case series. J. Thorac. Disease 9(3), 598–609
(2017)
13. Shenawi, H.A., Al-Shaibani, S.A., Saad, S.K.A., Al-Sindi, F., Al-Sindi, K., Shenawi, N.A.,
Naguib, Y., Yaghan, R.: An extremely rare case of malignant jejunal mesenteric inflammatory
myofibroblastic tumor in a 61-year-old male patient: a case report and literature review. Front.
Med. 9 (2022)
14. Khatri, A., Agrawal, A., Sikachi, R., Mehta, D., Sahni, S., Meena, N.: Inflammatory myofi-
broblastic tumor of the lung. Adv. Respir. Med. 86(1), 27–35 (2018)
15. Coffin, C.M., Hornick, J.L., Fletcher, C.D.M.: Inflammatory myofibroblastic tumor. Am. J.
Surg. Pathol. 31(4), 509–520 (2007)
16. Bennett, J.A., Nardi, V., Rouzbahman, M., Morales-Oyarvide, V., Nielsen, G.P., Oliva, E.:
Inflammatory myofibroblastic tumor of the uterus: a clinicopathological, immunohistochemi-
cal, and molecular analysis of 13 cases highlighting their broad morphologic spectrum. Modern
Pathol. 30(10), 1489–1503 (2017)
17. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recog-
nition (2014). arXiv:1409.1556
18. Pizer, S.M., Amburn, E.P., Austin, J.D., Cromartie, R., Geselowitz, A., Greer, T., ter
Haar Romeny, B., Zimmerman, J.B., Zuiderveld, K.: Adaptive histogram equalization and
its variations. Comput. Vis. Graph. Image Process. 39(3), 355–368 (1987)
Robust Multiresolution and Multistain
Background Segmentation in Whole
Slide Images

Artur Jurgas, Marek Wodzinski, Manfredo Atzori, and Henning Müller

Abstract Background segmentation is an important step in the analysis of histopathological
images. It allows one to remove irrelevant regions and focus on the tissue of
interest. However, background segmentation is challenging due to the variability of
stain colors and intensity levels across different images, modalities, and magnification
levels. In this paper, we present a learning-based model for histopathology background
segmentation based on convolutional neural networks. We compare two multiresolution
approaches to deal with the variability of magnification in histopathology
images: (i) a model that uses upscaling of smaller patches of the image, and (ii) a model
simultaneously trained on multiple resolution levels. Our model performs solidly
across resolutions and stain dyes (H&E and IHC), achieving good results on a publicly
available dataset. The quantitative scores, in terms of the Dice score, are close
to 94.71. The qualitative analysis shows strong performance on previously unseen
cases from different distributions and various dyes. We freely release the model,
weights, and ground-truth annotations to promote open science and reproducible
research.

Keywords Computational pathology · Deep learning · Digital pathology ·
Segmentation · Whole-slide images · WSI

A. Jurgas · M. Wodzinski · M. Atzori · H. Müller
University of Applied Sciences Western Switzerland (HES-SO Valais), Information Systems
Institute, Sierre, Switzerland
A. Jurgas (B) · M. Wodzinski
Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering,
AGH University of Science and Technology, Krakow, Poland
e-mail: [email protected]
M. Atzori
Department of Neuroscience, University of Padova, Padova, Italy
H. Müller
Medical Faculty, University of Geneva, Geneva, Switzerland

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
P. Strumiłło et al. (eds.), The Latest Developments and Challenges in Biomedical
Engineering, Lecture Notes in Networks and Systems 746,
https://doi.org/10.1007/978-3-031-38430-1_3

1 Introduction

Background segmentation is a basic step in most preprocessing tasks for whole slide
images (WSIs), which are digital scans of tissue slides used for cancer diagnosis
and prognosis. Segmentation aims to separate the foreground tissue regions from
the background glass regions, which can reduce computational costs and improve
accuracy for subsequent analysis such as classification, detection, grading, and
registration [10]. However, background segmentation of WSIs is challenging due to
variations in tissue appearance, staining quality, illumination conditions, and scanning
artifacts. Existing methods for background segmentation of WSIs are based either
on handcrafted features or on deep learning models. To our knowledge, the currently
existing methods are limited to H&E (hematoxylin and eosin) staining. Moreover,
most existing methods for background segmentation are either not publicly available
or require manual tuning of parameters for different datasets.
Many studies have explored histopathology segmentation, a technique to identify
different regions in tissue images [1, 5, 13]. However, most of these studies focus
on segmenting nuclei, which are small and distinct structures in the tissue.
Segmenting the whole tissue is also challenging because it involves large areas
that have low contrast and are often similar to the background, especially when
immunohistochemistry (IHC) staining is used. This type of staining is much less
normalized and quality controlled [7, 17]. Even for H&E staining, the differences in
dyes are problematic [16]. Therefore, existing methods for segmentation cannot be
easily adapted to whole tissue segmentation.
Some previous studies [12] used a conventional method to segment tissue regions
of interest (ROI) from histopathological images. However, this method works well
only for images stained with H&E, and not for those stained with immunohistochemistry
(IHC). Moreover, our goal is different from theirs. We want to segment
the background from the tissue, not just the ROI within the tissue. This is important
for some preprocessing techniques that require selecting the entire tissue area,
including its folds and artifacts.
Recent studies have demonstrated that deep learning can achieve remarkable
results in histopathology, enabling more accurate and detailed predictions for various
diseases [6, 11, 13, 15]. However, most of the existing work focuses on specific types
of tissue, such as epidermal tissue [14]. While there is some research on background
segmentation [2], the proposed models are not publicly available or accessible,
limiting their reproducibility and applicability.
In this paper, we propose a deep learning-based pipeline for background segmen-
tation of WSIs that can handle diverse types of tissues and stains without requiring any
prior information or user intervention. Our framework consists of two main compo-
nents: a patch-level segmentation network that predicts foreground probability maps
for small patches extracted from WSIs, and a slide-level fusion during inference that
combines the patch-level predictions into a final binary mask for the whole slide.
We evaluate our framework on a public dataset from the ACROBAT challenge, covering
mainly breast cancer patients. We show that our framework achieves solid performance
and high generalizability across different tissues. Additionally, we show solid
performance at multiple resolutions of those images, as well as for both H&E and IHC
staining.
One of the applications of our background segmentation method is to improve
the quality and efficiency of other algorithms that process histopathology images.
Software tools like QuPath [3] and HistoQC [8] are widely used for various tasks
such as tissue detection, annotation, classification, and quantification, and have proven
to benefit histopathology analysis [4]. However, these tools often rely on manual or
semi-automatic methods to remove the background regions from the images, which
can be time-consuming and inconsistent. Our background segmentation method can
help streamline this process.
The source code, ground-truth segmentation masks, and model weights will be
made publicly available [9].

2 Materials and Methods

2.1 Dataset

The ground truth used for training is defined by manual segmentation of nine different
tissues from the ACROBAT dataset [18]. The dataset is a collection of tissues in a
pyramidal TIFF format with varying resolutions at each level of the pyramid. Starting
at 10x, the authors provide 7 to 9 lower resolutions per image, with a downsampling
factor of 2 between them. The second level, which made up most of the training data,
has an average resolution of 12146 ± 2189 pixels along the X axis and 24007 ± 4854
pixels along the Y axis. Each slide at this resolution can potentially generate up to
4449 unique, non-overlapping patches of size 256 by 256 pixels. We utilize a random
sampling strategy: in each iteration, we sample 256 patches at random locations. This
means that over the course of training there is some overlap between those patches.
The final number of patches is not fixed and depends on how long we train the network.
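As a quick sanity check on the reported figure, dividing the average second-level slide area by the patch area reproduces the 4449 patches per slide (this arithmetic is our own, not from the paper):

# average second-level resolution divided by the 256 x 256 patch area
avg_x, avg_y, patch = 12146, 24007, 256
print((avg_x * avg_y) // (patch * patch))  # -> 4449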
The dataset includes various artifacts such as markers, scratches, out-of-focus
regions and coverslips. It also contains two types of staining: H&E and IHC. We used
three distinct IHC dyes during training and testing. We selected this dataset because
it represents the real-world challenges of image analysis. We used eight images for
training, one image for validation, and four images, each in its two staining variants,
for qualitative evaluation without prior manual segmentations. We aimed to achieve
high-quality segmentation by using a patch-based approach for both training and
inference. This allows us to exploit the heterogeneity of the tissue structures across
different slides and to generalize better with less data (Table 1).
We used an Nvidia Tesla V100 graphics card configured with a 300 W TDP and 32 GB
of memory, hosted on the PLGrid HPC cluster Prometheus. We did not utilize the tensor
cores as of the time of writing, which means the inference times in Table 2 could be
further improved.

Table 1 Models’ performance during training


Dataset patches Model Dice
Training Multiresolution 96.14
Validation Multiresolution 92.27
Training Upscaled 96.31
Validation Upscaled 91.68

Table 2 Models’ performance on validation image on different resolution levels


Multiresolution model Upscaled model
Pyramid level Dice Latency [mm:ss] Dice Latency [mm:ss]
2nd 93.66 1:08 94.43 1:06
3rd 95.09 0:16 93.21 0:49
4th 95.61 0:03 92.85 0:44
5th 94.62 0:01 81.00 0:47
6th 94.57 0:00 11.16 0:45
Mean dice 94.71 74.53

2.2 Model

We present an encoder-decoder convolutional neural network (CNN) as our model
for image segmentation, shown in Fig. 2. The model architecture and the training
pipeline are illustrated in Fig. 1. We trained our model using binary cross-entropy
(BCE) loss and the Adam optimizer, and we measured its performance on both training
and validation datasets using the soft Dice score.
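For reference, a minimal NumPy sketch of the soft Dice score used for evaluation (the epsilon smoothing term is our assumption):

import numpy as np

def soft_dice(pred, target, eps=1e-6):
    # Soft Dice between a foreground probability map and a binary ground
    # truth; "soft" because raw probabilities are used without thresholding.
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[0.9, 0.8], [0.1, 0.2]])
target = np.array([[1.0, 1.0], [0.0, 0.0]])
print(round(float(soft_dice(pred, target)), 2))  # -> 0.85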
We adopted a UNet-like architecture that consists of an encoder-decoder structure
with both short and long skip connections. The short skip connections allow the
network to preserve spatial information across different levels of abstraction, while
the long skip connections enable the network to recover fine details from the encoder
output. We present the architecture details of the model in Table 3.
Our training pipeline involves extracting image patches of size 256 × 256 from: (i)
different resolutions of the image pyramid, or (ii) only the second level of the pyramid,
and then inferring on other levels by extracting smaller patches proportional to the
downsampling level and upscaling them using bilinear interpolation to 256 × 256
pixels.
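A minimal sketch of variant (ii), extracting a proportionally smaller patch from a downsampled level and upscaling it back (the names and OpenCV usage are our assumptions):

import cv2  # opencv-python, used here only for bilinear resizing

def patch_for_level(level_image, x, y, levels_down, base_patch=256):
    # levels_down: number of downsampling steps (factor 2 each) below the
    # level the model was trained on. A proportionally smaller patch covers
    # the same apparent tissue scale once upscaled back to base_patch.
    size = base_patch // (2 ** levels_down)   # e.g. 128 px one level down
    patch = level_image[y:y + size, x:x + size]
    return cv2.resize(patch, (base_patch, base_patch),
                      interpolation=cv2.INTER_LINEAR)  # bilinear upscaling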
Our random sampling strategy depends on previously generated probability maps.
We apply Gaussian blurring with a 3 × 3 kernel to the segmentation maps and then
subtract the result from the original mask. This gives us a blurred edge of the
segmentation mask. Then we transform this output so that the background and
foreground of the tissue each have a 25% sampling probability and the edge 50%.
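Our reading of this sampling scheme can be sketched as follows (the exact transform from the edge band to the stated probabilities is our interpretation):

import numpy as np
import cv2

def sampling_probability_map(mask):
    # Blur the binary tissue mask with a 3 x 3 Gaussian kernel and subtract
    # it from the original; non-zero values remain only along the mask edge.
    mask = mask.astype(np.float32)
    edge = np.abs(mask - cv2.GaussianBlur(mask, (3, 3), 0)) > 0
    # Edge pixels are weighted at 50%, background and foreground at 25% each;
    # normalizing yields a distribution for drawing patch locations.
    prob = np.where(edge, 0.50, 0.25)
    return prob / prob.sum()

# drawing 256 patch centers per iteration:
# p = sampling_probability_map(mask).ravel()
# idx = np.random.default_rng().choice(p.size, size=256, p=p)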

Table 3 Model architecture with each convolution kernel size, the output shape of the layer, and
the layer’s number of parameters
Layer type Kernel shape Output shape Param #
UNet – [1, 1, 256, 256] –
+ Sequential – [1, 1, 128, 128] –
+ Conv2d [4, 4] [1, 1, 128, 128] 17
+ GroupNorm – [1, 1, 128, 128] 2
+ LeakyReLU – [1, 1, 128, 128] –
+ Sequential – [1, 32, 64, 64] –
+ ResidualBlock [3, 3] [1, 32, 128, 128] 9,760
+ Conv2d [4, 4] [1, 32, 64, 64] 16,416
+ GroupNorm – [1, 32, 64, 64] 64
+ LeakyReLU – [1, 32, 64, 64] –
+ Sequential – [1, 64, 32, 32] –
+ ResidualBlock [3, 3] [1, 64, 64, 64] 57,792
+ ResidualBlock [3, 3] [1, 64, 64, 64] 78,272
+ Conv2d [4, 4] [1, 64, 32, 32] 65,600
+ GroupNorm – [1, 64, 32, 32] 128
+ LeakyReLU – [1, 64, 32, 32] –
+ Sequential – [1, 128, 16, 16] –
+ ResidualBlock [3, 3] [1, 128, 32, 32] 230,272
+ ResidualBlock [3, 3] [1, 128, 32, 32] 312,192
+ Conv2d [4, 4] [1, 128, 16, 16] 262,272
+ GroupNorm – [1, 128, 16, 16] 256
+ LeakyReLU – [1, 128, 16, 16] –
+ Sequential – [1, 128, 32, 32] –
+ ResidualBlock [3, 3] [1, 128, 16, 16] 312,192
+ ResidualBlock [3, 3] [1, 128, 16, 16] 312,192
+ ConvTranspose2d [4, 4] [1, 128, 32, 32] 262,272
+ GroupNorm – [1, 128, 32, 32] 256
+ LeakyReLU – [1, 128, 32, 32] –
+ Sequential – [1, 64, 64, 64] –
+ ResidualBlock [3, 3] [1, 64, 32, 32] 160,192
+ ConvTranspose2d [4, 4] [1, 64, 64, 64] 65,600
+ GroupNorm – [1, 64, 64, 64] 128
+ LeakyReLU – [1, 64, 64, 64] –
+ Sequential – [1, 32, 128, 128] –
+ ResidualBlock [3, 3] [1, 32, 64, 64] 40,160
+ ConvTranspose2d [4, 4] [1, 32, 128, 128] 16,416
+ GroupNorm – [1, 32, 128, 128] 64
+ LeakyReLU – [1, 32, 128, 128] –
+ Sequential – [1, 1, 256, 256] –
+ ResidualBlock [3, 3] [1, 1, 128, 128] 346
+ ConvTranspose2d [4, 4] [1, 1, 256, 256] 17
+ GroupNorm – [1, 1, 256, 256] 2
+ LeakyReLU – [1, 1, 256, 256] –
+ Sequential – [1, 1, 256, 256] –
+ Conv2d [1, 1] [1, 1, 256, 256] 2

Fig. 1 A schematic diagram of the proposed deep learning pipeline. The pipeline consists of two
stages: training and inference. Each block has marked inputs described in the legend

Fig. 2 A schematic illustration of the UNet-like model used in this study. The model consists of
an encoder-decoder architecture with both short skip connections described in ResidualBlock and
long skip connections based on concatenating feature maps

During the validation and inference steps, we employ an aggregator of individual
patches, merging them together with a 20% overlap. In this overlap region, we
average the pixel values.
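A minimal sketch of such overlap-averaged aggregation (the function and variable names are ours):

import numpy as np

def aggregate(patch_preds, origins, out_shape, patch_size=256):
    # Place each patch prediction at its (y, x) origin and average pixel
    # values wherever the overlapping patches intersect.
    acc = np.zeros(out_shape, dtype=np.float32)
    weight = np.zeros(out_shape, dtype=np.float32)
    for pred, (y, x) in zip(patch_preds, origins):
        acc[y:y + patch_size, x:x + patch_size] += pred
        weight[y:y + patch_size, x:x + patch_size] += 1.0
    return acc / np.maximum(weight, 1.0)

# with a 20% overlap, the sampling stride is patch_size * 0.8 ~ 205 pixels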

3 Results

Figures 3, 4 and 5 illustrate some examples of our method's output. We compared
the results of the upscaled and multiresolution models in Tables 1 and 2. The scores
reported in those tables are the average scores of all patches in each batch.
The results in Table 1 indicate that the multiresolution model had a similar
performance on the training dataset but a slightly higher performance on the validation
dataset than the single-resolution model. This indicates that the multiresolution
model reduced overfitting and improved generalization. Figures 4 and 5 illustrate
some examples of image reconstruction from both models on the test set; Table 4
shows expanded descriptions.

4 Discussion and Conclusions

We proposed two methods for patch-based segmentation of histopathology images:
(i) training and testing on a single pyramid level with patch resampling at inference
time, and (ii) training and testing on multiple pyramid levels without patch
resampling. We found that the second method achieved better performance across

Fig. 3 Visualization of segmentation on the validation dataset. Segmentation output is marked
with green color. Ground-truth masks are marked with red color. Overlap of the prediction with
ground-truth is marked with orange color

Fig. 4 Exemplary visualization from the test dataset (IHC staining). Segmentation output is marked
with green color

different resolutions and significantly reduced the inference time. Our method is a
general approach for histopathology slide analysis that does not target any specific
tissue regions. Instead, it aims to segment the background of the slide by inverting
the segmentation of other structures, such as cells, nuclei, glands, vessels, etc.
According to Table 2, the multiresolution model achieved substantially better
performance at lower magnification levels, which is supported by comparing the
visualizations in Figs. 5 and 6. The multiresolution model also had a faster inference
time than the single-resolution model, which is beneficial for practical applications.
The second method required a larger model with a wider receptive field, but it
is still feasible to deploy even on IoT-class hardware such as the Nvidia Jetson. This
allows for deployment on modern WSI digitization hardware or integration into
most computational histopathology software.
Additionally, the first method incurred extra computational costs due to its two
interpolation steps, and it suffered from information loss and artifact introduction
when upscaling patches from lower resolutions. The difference in performance
between the two methods can be intuitively explained by considering that the second
method used more information from the original image and did not introduce any
artificial structures due to patch interpolation.

Fig. 5 Exemplary visualization from the test dataset (H&E staining). Segmentation output is
marked with green color

A common problem in histopathology is the variability of staining techniques and
dyes used to visualize different tissue structures and biomarkers. Different laboratories
may use different protocols, reagents, and equipment to perform immunohistochemistry
staining, which can result in inconsistent and non-reproducible results.
This poses a challenge for developing computational models that can analyze
histopathological images and extract meaningful features from them. Our model
addresses this challenge by being stain-invariant, meaning that it can work on images
stained with a wide range of different dyes without requiring any pre-processing or
normalization steps. This makes our model more robust and generalizable to different
datasets and applications. Segmentations of the same tissue dyed with IHC and H&E
staining are presented in Figs. 4 and 5.

Table 4 Qualitative analysis of the model’s output in the test set. Each case’s number matches the
number in the original dataset. The images will be published as the supplementary material in the
associated repository
Case from dataset Output commentary
4 Small artifacts from aggregation in blurry
regions in H&E variant. They disappear while
going down the magnification levels. On the
3rd level, artifacts from scratches appear. IHC
variant has no artifacts, even though they are
present in the image
5 While correctly segmenting tissue, there is a
larger artifact area in the HE variant. It
disappears on lower resolution levels. IHC
variant is segmented similarly to H&E one
8 Solid performance on both HE and IHC
variants. A little noise in the largest IHC
sample where coverslip is present in the
original, though most of the coverslip is
properly not segmented by the model. The
artifact disappears at lower magnifications
14 Solid HE segmentation. Similar performance in
IHC, although some noise is present on the
highest resolution in less visible regions of the
tissue
29 Solid HE segmentation. The coverslip on the
IHC variant has been partly segmented
30 Solid HE and IHC segmentation. The coverslip
on the IHC variant was properly not segmented

One of the limitations of our method is the sensitivity of the model to the resolution
of the input images. Our model was trained on images with a fixed set of
magnifications, and it may not perform well on images with a very different
magnification level. Another limitation is the possibility that artifacts that are large
or significantly different from those in our dataset influence the segmentation results.
Particularly at higher resolutions, such artifacts could be segmented as part of the
background, leading to inaccurate or incomplete segmentation.
In this paper, we proposed a novel method for fast and generalizable background
segmentation in histopathological images. We tackled the problem from two
perspectives: upscaling of lower-resolution images, and training the model on all
magnification levels. We demonstrated that our method can achieve high accuracy
and robustness on various tissues with different resolution levels and staining dyes.

Fig. 6 Exemplary visualization from the test dataset (H&E staining) on the upscaling model.
Segmentation output is marked with green color

Compared to other works like [12], our method does not require adjusting any
hyperparameters. We also showed that our deep learning model can learn to segment
tissues effectively even with a small amount of data, thanks to the patch-based approach.
Our method can be useful for preprocessing histopathological images for further
analysis and diagnosis.

Acknowledgements This work was done as a part of the IMI BigPicture project (IMI945358).
We gratefully acknowledge Poland's high-performance computing infrastructure PLGrid (HPC
Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational
grant no. PLG/2023/016239.

References

1. Al-Kofahi, Y., Lassoued, W., Lee, W., et al.: Improved automatic detection and segmentation
of cell nuclei in histopathology images. IEEE Trans. Biomed. Eng. 57(4), 841–852 (2010)
2. Bándi, P., Balkenhol, M., van Ginneken, B., et al.: Resolution-agnostic tissue segmentation in
whole-slide histopathology images with convolutional neural networks. PeerJ 7, e8242 (2019)
3. Bankhead, P., Loughrey, M.B., Fernández, J.A., et al.: QuPath: open source software for digital
pathology image analysis. Sci. Rep. 7(1), 16878 (2017)
4. Chen, Y., Zee, J., Smith, A., et al.: Assessment of a computerized quantitative quality control
tool for whole slide images of kidney biopsies. J. Pathol. 253(3), 268–278 (2021)
5. Cui, Y., Zhang, G., Liu, Z., et al.: A deep learning algorithm for one-step contour aware nuclei
segmentation of histopathology images. Med. Biol. Eng. Comput. 57(9), 2027–2043 (2019)
6. Ehteshami Bejnordi, B., Veta, M., Johannes van Diest, P., et al.: Diagnostic assessment of
deep learning algorithms for detection of lymph node metastases in women with breast cancer.
JAMA 318(22), 2199–2210 (2017)
7. Elias, J.M., Gown, A.M., Nakamura, R.M., et al.: Special report: quality control in immuno-
histochemistry: report of a workshop sponsored by the biological stain commission. Am. J.
Clin. Pathol. 92(6), 836–843 (1989)
8. Janowczyk, A., Zuo, R., Gilmore, H., et al.: HistoQC: an open-source quality control tool for
digital pathology slides. JCO Clin. Cancer Inform. 3, 1–7 (2019)
9. Jurgas, A.: Jarartur/pcbbe23-histseg: multiresolution and multistain background segmentation
in WSIs (2023)
10. Levy, J.J., Jackson, C.R., Haudenschild, C.C., et al.: PathFlow-MixMatch for whole slide image
registration: an investigation of a segment-based scalable image registration method (2020)
11. Litjens, G., Sánchez, C.I., Timofeeva, N., et al.: Deep learning as a tool for increased accuracy
and efficiency of histopathological diagnosis. Sci. Rep. 6, 26286 (2016)
12. Muñoz-Aguirre, M., Ntasis, V.F., Rojas, S., et al.: PyHIST: a histological image segmentation
tool. Plos Comput. Biol. 16(10), e1008349 (2020)
13. Naylor, P., Laé, M., Reyal, F., et al.: Segmentation of nuclei in histopathology images by deep
regression of the distance map. IEEE Trans. Med. Imaging 38(2), 448–459 (2019)
14. Oskal, K.R.J., Risdal, M., Janssen, E.A.M., et al.: A U-net based approach to epidermal tissue
segmentation in whole slide histopathological images. SN Appl. Sci. 1(7), 672 (2019)
15. Shelhamer, E., Long, J., Darrell, T.: Fully convolutional networks for semantic segmentation.
IEEE Trans. Pattern Anal. Mach. Intell. 39(4), 640–651 (2017)
16. Tellez, D., Litjens, G., Bándi, P., et al.: Quantifying the effects of data augmentation and stain
color normalization in convolutional neural networks for computational pathology. Med. Image
Anal. 58, 101544 (2019)
17. Tsutsumi, Y.: Pitfalls and caveats in applying chromogenic immunostaining to histopatholog-
ical diagnosis. Cells 10(6), 1501 (2021)
18. Weitz, P., Valkonen, M., Solorzano, L., et al.: ACROBAT—A multi-stain breast cancer histolog-
ical whole-slide-image data set from routine diagnostics for computational pathology (2022).
arXiv:2211.13621
Impact of Visual Image Quality on
Lymphocyte Detection Using YOLOv5
and RetinaNet Algorithms

A. Polejowska, M. Sobotka, M. Kalinowski, M. Kordowski, and T. Neumann

Abstract Lymphocytes, a type of leukocyte, play a vital role in the immune system.
The precise quantification, spatial arrangement and phenotypic characterization of
lymphocytes within haematological or histopathological images can serve as a
diagnostic indicator of a particular lesion. Artificial neural networks employed for the
detection of lymphocytes can not only support the work of histopathologists but also
enable better disease monitoring and faster analysis of the general immune system
condition. In this study, the impact of visual quality on the performance of
state-of-the-art algorithms for detecting lymphocytes in medical images was examined.
Two datasets were used, and image modifications such as blur, sharpness, brightness,
and contrast adjustments were applied to assess the performance of the YOLOv5
and RetinaNet models. The study revealed that the visual quality of images exerts
a substantial impact on the effectiveness of deep learning methods in detecting
lymphocytes accurately. These findings have significant implications for deep
learning approaches used in digital pathology.

Keywords Digital pathology · Lymphocyte detection · YOLOv5 · RetinaNet ·
Image quality · Image degradation · Histopathology images · Tumor

1 Introduction

Histopathological images, which are images of tissue samples taken from a patient's
body, are an invaluable tool in the diagnosis and treatment of diseases. Accurate
detection of lymphocytes, which are a type of immune cell, in these images is

A. Polejowska (B) · M. Sobotka · M. Kalinowski · M. Kordowski · T. Neumann
Department of Biomedical Engineering, Faculty of Electronics, Telecommunications and
Informatics and BioTechMed Center, Gdańsk University of Technology, ul. Narutowicza 11/12,
80-233 Gdańsk, Poland
e-mail: [email protected]
T. Neumann
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
P. Strumiłło et al. (eds.), The Latest Developments and Challenges in Biomedical
Engineering, Lecture Notes in Networks and Systems 746,
https://doi.org/10.1007/978-3-031-38430-1_4
Another Random Scribd Document
with Unrelated Content
Sometimes I attend the Theatre. This is divided into boxes, which
families hire for a year. If the play be uninteresting, they visit each
other's box, and pass the evening in conversation. It is diverting to
observe the gentlemen take from their pockets a flint and steel for
the purpose of lighting their cigars, and then to extend the favor of a
light to the ladies; and sometimes the whole theatre seems as if
filled with fire-flies.

Immediately on rising, a Mexican takes a small cup of chocolate with


a little bread and a glass of water. At ten, they take what they call
breakfast—it is in fact equivalent to a dinner, consisting not of tea or
coffee, but of meats, sweetmeats and wine. At about three, dinner is
served. At six or seven, they again take chocolate; and at ten, an
enormous supper is laid of hot meats, &c. equal to a third dinner. At
these meals, three or four dishes of meats, with very few
vegetables, are brought on in various courses—the olla podrida, a
mixture of meats, fruits, and vegetables boiled together—always
constitutes a part of the first course—frijoles—beans boiled—
invariably precede the sweetmeats, of which the Mexicans are
extremely fond. Perhaps this is the reason why good teeth are
seldom seen in Mexico.

* * * * *

23d November, 1825. I have stated that few parties are given in
Mexico. Balls are sometimes held by the American and English
Legations. If, on these occasions, fifty ladies attend, it is considered
a prodigious number to assemble together. The expenses of
preparation which they incur are enormous, and deter many,
however devoted they may be to pleasure, from partaking in
frequent diversions of this kind. Society, too, has not acquired that
equilibrium which the democratical institutions of the country must
produce eventually. A powerful aristocracy, as may reasonably be
supposed, still exists in the capital—time alone will level this—it will
die with the present generation, taking for granted that the
republicanism of Mexico will be permanent. Aristocracy, of course,
reduces the highest class of society to a limited number, so that a
large assemblage of ladies here would be thought small in the
United States.

At whatever hour you invite company, it will not collect before nine,
and the most fashionable appear between ten and eleven. The
music soon invites them to the waltz, or to the Spanish country-
dance, both of which are graceful, and perhaps voluptuous, when
danced, as in Mexico, to the music of guitars or of bandolines. They
dance upon brick floors—there are none other in Mexican houses—
generally bare, but foreigners have introduced the more comfortable
fashion of covering them with canvass; and as the steps are simple,
without the hopping and restlessness of our cotillons or quadrilles, it
is not so unpleasant as would be supposed; they glide over the
pavement without much exertion. The dancing continues, not
uninterruptedly as with us, but at intervals, until twelve o'clock,
when the ladies are conducted to the supper table, which must be
loaded with substantial as well as sweet things. After supper,
dancing is continued, and the company begins to disperse between
one and two in the morning, and sometimes not until near daybreak.

None of the wealthy families have followed the example set them by
foreigners. They give no balls or dinners. Although I have now been
here six months, I have never dined in a Mexican house in the city.
Their hospitality consists in this: they place their houses and all they
possess at your disposal, and are the better pleased the oftener you
visit them, but they rarely, if ever, offer you refreshments of any
kind. It is said that they are gratified if you will dine with them
unceremoniously, but they never invite you.

31st December, 1825. I can scarcely persuade myself that to-morrow


will be New-Year's day. The weather is most delightful. We are now
sitting with our windows open—at night too. About a fortnight ago
the mornings were uncomfortably cool; but the sun at mid-day is
always hot. What a delightful climate! And we are now eating the
fruits of a northern mid-summer. We have always had fresh oranges
since our arrival. A week since we had green peas; and to-day five
different kinds of fruit appeared upon our table—oranges, apples,
walnuts, granadites de China, and chirimoyas—the last, la reina de
los frutos, (the queen of fruit,) tasting like strawberries and cream.
The markets contain numerous other sorts. Our friends at home are
now gathering around the glowing coals, or treading the snow
without. We see the former in the kitchen only—the latter on the
valcanoes which tower in the distance.

* * * * *

7th December, 1827. A letter from home affords me the satisfaction


of knowing that our friends generally continue to enjoy good health,
and are subject to none other than the ordinary ills of life, such as
cut-throat weather, squalling brats, or a twinge or two of gout or
rheumatism. These are evils which humanity is decreed to suffer
throughout the world; but in Mexico we are more exempt from most
of them than elsewhere. The sun now shines twelve hours of every
day, and either the moon or stars give light to the other twelve.
Such will the weather continue to be until May or June, when the
rains fall with such regularity and certainty, that very slight
observation enables us to know when to go out, or to shelter
ourselves. The mornings now are only a little cool, although we are
in mid-winter; and our tables are supplied with fruit as bountifully as
in the months of July and August. Our other ills are in like manner
trivial. We are sometimes ennuyés for want of society, but books,
and sometimes a game of chess, enable us to live without being
driven to the commission of suicide. And as a dernier resort, we
throw ourselves into the arms of Morpheus, this being the peculiar
delightful climate for sleep—no mosquitos, nor extremes of heat or
cold. The thermometer ordinarily ranges at about 70° of Fahrenheit.
SCENES FROM AN UNPUBLISHED DRAMA,

BY EDGAR A. POE.

I.

ROME. A Lady's apartment, with a window open and looking into a garden. Lalage, in
deep mourning, reading at a table on which lie some books and a hand mirror. In the
back ground Jacinta (a servant maid) leans carelessly upon a chair.

Lalage. Jacinta! is it thou?

Jacinta (pertly.) Yes, Ma'am, I'm here.

Lalage. I did not know, Jacinta, you were in waiting.


Sit down!—let not my presence trouble you—
Sit down!—for I am humble, most humble.

Jacinta (aside.) 'Tis time.

(Jacinta seats herself in a side-long manner upon the chair,


resting her elbows upon the back, and regarding her mistress
with a contemptuous look. Lalage continues to read.)

Lalage. "It in another climate, so he said,


Bore a bright golden flower, but not i' this soil!"

(pauses—turns over some leaves, and resumes.)

"No lingering winters there, nor snow, nor shower—


But Ocean ever to refresh mankind
Breathes the shrill spirit of the western wind."
Oh, beautiful!—most beautiful!—how like
To what my fevered soul doth dream of Heaven!
O happy land! (pauses.) She died!—the maiden died!
O still more happy maiden who could'st die!
Jacinta!

(Jacinta returns no answer, and Lalage presently resumes.)

Again!—a similar tale


Told of a beauteous dame beyond the sea!
Thus speaketh one Ferdinand in the words of the play—
"She died full young"—one Bossola answers him—
"I think not so!—her infelicity
Seem'd to have years too many"—Ah luckless lady!
Jacinta! (still no answer.)
Here's a far sterner story
But like—oh! very like in its despair—
Of that Egyptian queen, winning so easily
A thousand hearts—losing at length her own.
She died. Thus endeth the history—and her maids
Lean over her and weep—two gentle maids
With gentle names—Eiros and Charmion!
Rainbow and Dove!——Jacinta!

Jacinta (pettishly.) Madam, what is it?

Lalage. Wilt thou, my good Jacinta, be so kind


As go down in the library and bring me
The Holy Evangelists.

Jacinta. Pshaw! (exit.)

Lalage. If there be balm


For the wounded spirit in Gilead it is there!
Dew in the night time of my bitter trouble
Will there be found—"dew sweeter far than that
Which hangs like chains of pearl on Hermon hill."

(re-enter Jacinta, and throws a volume on the table.)

Jacinta. There, ma'am's, the book. Indeed she is very troublesome. (aside.)

Lalage (astonished.) What didst thou say, Jacinta? Have I done aught
To grieve thee or to vex thee?—I am sorry.
For thou hast served me long and ever been
Trust-worthy and respectful. (resumes her reading.)

Jacinta. I can't believe
She has any more jewels—no—no—she gave me all. (aside.)

Lalage. What didst thou say, Jacinta? Now I bethink me
Thou hast not spoken lately of thy wedding.
How fares good Ugo?—and when is it to be?
Can I do aught?—is there no farther aid
Thou needest, Jacinta?

Jacinta. Is there no farther aid?
That's meant for me. (aside.) I'm sure, Madam, you need not
Be always throwing those jewels in my teeth.

Lalage. Jewels! Jacinta,—now indeed, Jacinta,
I thought not of the jewels.

Jacinta. Oh! perhaps not!
But then I might have sworn it. After all,
There's Ugo says the ring is only paste,
For he's sure the Count Castiglione never
Would have given a real diamond to such as you;
And at the best I'm certain, Madam, you cannot
Have use for jewels now. But I might have sworn it. (exit.)
(Lalage bursts into tears and leans her head upon the table—
after a short pause raises it.)

Lalage. Poor Lalage!—and is it come to this?
Thy servant maid!—but courage!—'tis but a viper
Whom thou hast cherished to sting thee to the soul! (taking up the mirror.)
Ha! here at least's a friend—too much a friend
In earlier days—a friend will not deceive thee.
Fair mirror and true! now tell me (for thou canst)
A tale—a pretty tale—and heed thou not
Though it be rife with woe. It answers me.
It speaks of sunken eyes, and wasted cheeks,
And Beauty long deceased—remembers me
Of Joy departed—Hope, the Seraph Hope,
Inurned and entombed!—now, in a tone
Low, sad, and solemn, but most audible,
Whispers of early grave untimely yawning
For ruin'd maid. Fair mirror and true!—thou liest not!
Thou hast no end to gain—no heart to break—
Castiglione lied who said he loved——
Thou true—he false!—false!—false!

(while she speaks a monk enters her apartment, and approaches unobserved.)

Monk. Refuge thou hast
Sweet daughter! in Heaven. Think of eternal things!
Give up thy soul to penitence, and pray!

Lalage (arising hurriedly.) I cannot pray!—My soul is at war with God!
The frightful sounds of merriment below
Disturb my senses—go! I cannot pray—
The sweet airs from the garden worry me!
Thy presence grieves me—go!—thy priestly raiment
Fills me with dread—thy ebony crucifix
With horror and awe!

Monk. Think of thy precious soul!

Lalage. Think of my early days!—think of my father
And mother in Heaven! think of our quiet home,
And the rivulet that ran before the door!
Think of my little sisters!—think of them!
And think of me!—think of my trusting love
And confidence—his vows—my ruin—think! think!
Of my unspeakable misery!——begone!
Yet stay! yet stay!—what was it thou saidst of prayer
And penitence? Didst thou not speak of faith
And vows before the throne?

Monk. I did.

Lalage. 'Tis well.
There is a vow were fitting should be made—
A sacred vow, imperative, and urgent,
A solemn vow!

Monk. Daughter, this zeal is well!

Lalage. Father, this zeal is any thing but well!
Hast thou a crucifix fit for this thing?
A crucifix whereon to register
A vow—a vow. (he hands her his own.)
Not that—Oh! no!—no!—no! (shuddering.)
Not that! Not that!—I tell thee, holy man,
Thy raiments and thy ebony cross affright me!
Stand back! I have a crucifix myself,—
I have a crucifix! Methinks 'twere fitting
The deed—the vow—the symbol of the deed—
And the deed's register should tally, father! (draws a cross-handled dagger and raises it on high.)
Behold the cross wherewith a vow like mine
Is written in Heaven!

Monk. Thy words are madness, daughter!
And speak a purpose unholy—thy lips are livid—
Thine eyes are wild—tempt not the wrath divine—
Pause ere too late—oh be not—be not rash!
Swear not the oath—oh swear it not!

Lalage. 'Tis sworn!

II.

ROME. An apartment in a palace. Politian and Baldazzar, his friend.

Baldazzar.——Arouse thee now, Politian!
Thou must not—nay indeed, indeed, thou shalt not
Give way unto these humors. Be thyself!
Shake off the idle fancies that beset thee,
And live, for now thou diest!

Politian. Not so, Baldazzar,
I live—I live.

Baldazzar. Politian, it doth grieve me
To see thee thus.

Politian. Baldazzar, it doth grieve me
To give thee cause for grief, my honored friend.
Command me, sir, what wouldst thou have me do?
At thy behest I will shake off that nature
Which from my forefathers I did inherit,
Which with my mother's milk I did imbibe,
And be no more Politian, but some other.
Command me, sir.

Baldazzar. To the field then—to the field,
To the senate or the field.

Politian. Alas! Alas!
There is an imp would follow me even there!
There is an imp hath followed me even there!
There is——what voice was that?

Baldazzar. I heard it not.
I heard not any voice except thine own,
And the echo of thine own.

Politian. Then I but dreamed.

Baldazzar. Give not thy soul to dreams: the camp—the court
Befit thee—Fame awaits thee—Glory calls—
And her the trumpet-tongued thou wilt not hear
In hearkening to imaginary sounds
And phantom voices.

Politian. It is a phantom voice,
Didst thou not hear it then?

Baldazzar. I heard it not.

Politian. Thou heardst it not!——Baldazzar, speak no more
To me, Politian, of thy camps and courts.
Oh! I am sick, sick, sick, even unto death,
Of the hollow and high sounding vanities
Of the populous Earth! Bear with me yet awhile!
We have been boys together—school-fellows—
And now are friends—yet shall not be so long.
For in the eternal city thou shalt do me
A kind and gentle office, and a Power—
A Power august, benignant, and supreme—
Shall then absolve thee of all farther duties
Unto thy friend.

Baldazzar. Thou speakest a fearful riddle
I will not understand.

Politian. Yet now as Fate
Approaches, and the hours are breathing low,
The sands of Time are changed to golden grains,
And dazzle me, Baldazzar. Alas! Alas!
I cannot die, having within my heart
So keen a relish for the beautiful
As hath been kindled within it. Methinks the air
Is balmier now than it was wont to be—
Rich melodies are floating in the winds—
A rarer loveliness bedecks the earth—
And with a holier lustre the quiet moon
Sitteth in Heaven.—Hist! hist! thou canst not say
Thou hearest not now, Baldazzar!

Baldazzar. Indeed I hear not.

Politian. Not hear it!—listen now,—listen!—the faintest sound
And yet the sweetest that ear ever heard!
A lady's voice!—and sorrow in the tone!
Baldazzar, it oppresses me like a spell!
Again!—again!—how solemnly it falls
Into my heart of hearts! that voice—that voice
I surely never heard—yet it were well
Had I but heard it with its thrilling tones
In earlier days!

Baldazzar. I myself hear it now.
Be still!—the voice, if I mistake not greatly,
Proceeds from yonder lattice—which you may see
Very plainly through the window—that lattice belongs,
Does it not? unto this palace of the Duke.
The singer is undoubtedly beneath
The roof of his Excellency—and perhaps
Is even that Alessandra of whom he spoke
As the betrothed of Castiglione,
His son and heir.

Politian. Be still!—it comes again!

Voice (very faintly.)
And is thy heart so strong
As for to leave me thus
Who hath loved thee so long
In wealth and wo among?
And is thy heart so strong
As for to leave me thus?
Say nay—say nay!

Baldazzar. The song is English, and I oft have heard it
In merry England—never so plaintively—
Hist—hist! it comes again!

Voice (more loudly.)
Is it so strong
As for to leave me thus,
Who hath loved thee so long
In wealth and wo among?
And is thy heart so strong
As for to leave me thus?
Say nay—say nay!

Baldazzar. 'Tis hush'd and all is still!

Politian. All is not still.

Baldazzar. Let us go down.

Politian. Go down, Baldazzar! go!

Baldazzar. The hour is growing late—the Duke awaits us,—
Thy presence is expected in the hall
Below. What ails thee, Earl Politian?

Voice (distinctly.)
Who hath loved thee so long,
In wealth and wo among,
And is thy heart so strong?
Say nay!—say nay!

Baldazzar. Let us descend!—'tis time. Politian, give
These fancies to the wind. Remember, pray,
Your bearing lately savored much of rudeness
Unto the Duke. Arouse thee! and remember!

Politian. Remember? I do. Lead on! I do remember. (going.)
Let us descend. Baldazzar! Oh I would give,
Freely would give the broad lands of my earldom
To look upon the face hidden by yon lattice,
To gaze upon that veiled face, and hear
Once more that silent tongue.

Baldazzar. Let me beg you, sir,
Descend with me—the Duke may be offended.
Let us go down I pray you.

Voice (loudly.) Say nay!—say nay!

Politian (aside.) 'Tis strange!—'tis very strange—methought the voice
Chimed in with my desires and bade me stay! (approaching the window.)
Sweet voice! I heed thee, and will surely stay.
Now be this Fancy, by Heaven, or be it Fate,
Still will I not descend. Baldazzar, make
Apology unto the Duke for me,
I go not down to-night.

Baldazzar. Your lordship's pleasure
Shall be attended to. Good night, Politian.

Politian. Good night, my friend, good night.

III.

The Gardens of a Palace—Moonlight. Lalage and Politian.

Lalage. And dost thou speak of love
To me, Politian?—dost thou speak of love
To Lalage?—ah wo—ah wo is me!
This mockery is most cruel—most cruel indeed!

Politian. Weep not! oh, weep not thus—thy bitter tears
Will madden me. Oh weep not, Lalage—
Be comforted. I know—I know it all,
And still I speak of love. Look at me, brightest,
And beautiful Lalage, and listen to me!
Thou askest me if I could speak of love,
Knowing what I know, and seeing what I have seen.
Thou askest me that—and thus I answer thee—
Thus on my bended knee I answer thee. (kneeling.)
Sweet Lalage, I love thee—love thee—love thee;
Thro' good and ill—thro' weal and wo I love thee.
Not mother, with her first born on her knee,
Thrills with intenser love than I for thee.
Not on God's altar, in any time or clime,
Burned there a holier fire than burneth now
Within my spirit for thee. And do I love? (arising.)
Even for thy woes I love thee—even for thy woes—
Thy beauty and thy woes.

Lalage. Alas, proud Earl,
Thou dost forget thyself, remembering me!
How, in thy father's halls, among the maidens
Pure and reproachless of thy princely line,
Could the dishonored Lalage abide?
Thy wife, and with a tainted memory—
My seared and blighted name, how would it tally
With the ancestral honors of thy house,
And with thy glory?

Politian. Speak not—speak not of glory!
I hate—I loathe the name; I do abhor
The unsatisfactory and ideal thing.
Art thou not Lalage and I Politian?
Do I not love—art thou not beautiful—
What need we more? Ha! glory!—now speak not of it!
By all I hold most sacred and most solemn—
By all my wishes now—my fears hereafter—
By all I scorn on earth and hope in heaven—
There is no deed I would more glory in,
Than in thy cause to scoff at this same glory
And trample it under foot. What matters it—
What matters it, my fairest, and my best,
That we go down unhonored and forgotten
Into the dust—so we descend together.
Descend together—and then—and then perchance——

Lalage. Why dost thou pause, Politian?

Politian. And then perchance
Arise together, Lalage, and roam
The starry and quiet dwellings of the blest,
And still——

Lalage. Why dost thou pause, Politian?

Politian. And still together—together.

Lalage. Now Earl of Leicester!
Thou lovest me, and in my heart of hearts
I feel thou lovest me truly.

Politian. Oh, Lalage! (throwing himself upon his knee.)
And lovest thou me?

Lalage. Hist!—hush! within the gloom
Of yonder trees methought a figure past—
A spectral figure, solemn, and slow, and noiseless—
Like the grim shadow Conscience, solemn and noiseless. (walks across and returns.)
I was mistaken—'twas but a giant bough
Stirred by the autumn wind. Politian!

Politian. My Lalage—my love! why art thou moved?
Why dost thou turn so pale? Not Conscience' self,
Far less a shadow which thou likenest to it,
Should shake the firm spirit thus. But the night wind
Is chilly—and these melancholy boughs
Throw over all things a gloom.

Lalage. Politian!
Thou speakest to me of love. Knowest thou the land
With which all tongues are busy—a land new found—
Miraculously found by one of Genoa—
A thousand leagues within the golden west;
A fairy land of flowers, and fruit, and sunshine,
And crystal lakes, and over-arching forests,
And mountains, around whose towering summits the winds
Of Heaven untrammelled flow—which air to breathe
Is Happiness now, and will be Freedom hereafter
In days that are to come?

Politian. O, wilt thou—wilt thou
Fly to that Paradise—my Lalage, wilt thou
Fly thither with me? There Care shall be forgotten,
And Sorrow shall be no more, and Eros be all.
And life shall then be mine, for I will live
For thee, and in thine eyes—and thou shalt be
No more a mourner—but the radiant Joys
Shall wait upon thee, and the angel Hope
Attend thee ever; and I will kneel to thee,
And worship thee, and call thee my beloved,
My own, my beautiful, my love, my wife,
My all;—oh, wilt thou—wilt thou, Lalage,
Fly thither with me?

Lalage. A deed is to be done—
Castiglione lives!

Politian. And he shall die! (exit.)

Lalage (after a pause.) And—he—shall—die!——alas!
Castiglione die? Who spoke the words?
Where am I?—what was it he said?—Politian!
Thou art not gone—thou art not gone, Politian!
I feel thou art not gone—yet dare not look,
Lest I behold thee not; thou couldst not go
With those words upon thy lips—O, speak to me!
And let me hear thy voice—one word—one word,
To say thou art not gone,—one little sentence,
To say how thou dost scorn—how thou dost hate
My womanly weakness. Ha! ha! thou art not gone—
O speak to me! I knew thou wouldst not go!
I knew thou wouldst not, couldst not, durst not go.
Villain, thou art not gone—thou mockest me!
And thus I clutch thee—thus!——He is gone, he is gone—
Gone—gone. Where am I?——'tis well—'tis very well!
So that the blade be keen—the blow be sure,
'Tis well, 'tis very well—alas! alas! (exit.)

LOGIC.

Among ridiculous conceits may be selected par excellence, the
thought of a celebrated Abbé—"that the heart of man being
triangular, and the world spherical in form, it was evident that all
worldly greatness could not fill the heart of man." The same person
concluded, "that since among the Hebrews the same word expresses
death and life, (a point only making the difference,) it was therefore
plain that there was little difference between life and death." The
chief objection to this is, that no one Hebrew word signifies life and
death.

AN ADDRESS ON EDUCATION,

AS CONNECTED WITH THE PERMANENCE OF OUR REPUBLICAN INSTITUTIONS.

Delivered before the Institute of Education of Hampden Sidney College, at its
Anniversary Meeting, September the 24th, 1835, on the invitation of that body,—by
Lucian Minor, Esq. of Louisa.

[Published by request of the Institute.]

Mr. President, and Gentlemen of the Institute:

I am to offer you, and this large assembly, some thoughts upon
EDUCATION, as a means of preserving the Republican Institutions of
our country.

The sentiment of the Roman Senate, who, upon their general's
return with the shattered remains of a great army from an almost
annihilating defeat, thanked and applauded him for not despairing of
the Republic, has, in later times, been moulded into an apothegm of
political morality; and few sayings, of equal dignity, are now more
hackneyed, than that "A good citizen will never despair of the
commonwealth."

I shall hope to escape the anathema, and the charge of disloyalty to
our popular institutions, implied in the terms of this apothegm, if I
doubt, somewhat, its unqualified truth; when you consider how
frequently omens of ruin, overclouding the sky of our country, have
constrained the most unquestionable republican patriot's heart to
quiver with alarm, if not to sink in despair.

When a factious minority, too strong to be punished as traitors,
treasonably refuse to rally under their country's flag, in defence of
her rights and in obedience to her laws; when a factious majority, by
partial legislation, pervert the government to the ends of self-
aggrandizement or tyranny; when mobs dethrone justice, by
assuming to be her ministers, and rush madly to the destruction of
property or of life; when artful demagogues, playing upon the
credulity or the bad passions of a confiding multitude, sway them to
measures the most adverse to the public good; or when a popular
chief (though he were a Washington) contrives so far to plant his will
in the place of law and of policy, that the people approve or
condemn both measures and men, mainly if not solely, by his
judgment or caprice; and when all history shews these identical
causes (the offspring of ignorance and vice) to have overthrown
every proud republic of former times;—then, surely, a Marcus Brutus
or an Algernon Sidney,—the man whose heart is the most
irrevocably sworn to liberty, and whose life, if required, would be a
willing sacrifice upon her altars—must find the most gloomy
forebodings often haunting his thoughts, and darkening his hopes.

Indeed, at the best, it is no trivial task, to conduct the affairs of a
great people. Even in the tiny republics of antiquity, some twenty of
which were crowded into a space less than two-thirds of Virginia,—
government was no such simple machine, as some fond enthusiasts
would have us believe it might be. The only very simple form of
government, is despotism. There, every question of policy, every
complicated problem of state economy, every knotty dispute
respecting the rights or interests of individuals or of provinces, is at
once solved by the intelligible and irreversible sic volo of a Nicholas
or a Mohammed. But in republics, there are passions to soothe;
clashing interests to reconcile; jarring opinions to mould into one
result, for the general weal. To effect this, requires extensive and
accurate knowledge, supported by all the powers of reasoning and
persuasion, in discussing not only systems of measures, but their
minutest details, year after year, before successive councils, in
successive generations: and supposing the machinery of Legislative,
Executive, and Judiciary to be so simple or so happily adjusted, that
an idiot might propel it, and a school-lad with the first four rules of
arithmetic—or even "a negro boy with his knife and tally stick"[1]—
might regulate its movements and record their results; still, those
other objects demand all the comprehension and energies of no
contracted or feeble mind. Nor are these qualities needful only to the
actual administrators of the government. Its proprietors, the people,
must look both vigilantly and intelligently to its administration: for so
liable is power to continual abuse; so perpetually is it tending to
steal from them to their steward or their agent; that if they either