Lecture Notes in Networks and Systems 746
Paweł Strumiłło
Artur Klepaczko
Michał Strzelecki
Dorota Bociąga Editors
The Latest
Developments
and Challenges
in Biomedical
Engineering
Proceedings of the 23rd Polish
Conference on Biocybernetics and
Biomedical Engineering, Lodz, Poland,
September 27–29, 2023
Lecture Notes in Networks and Systems
Volume 746
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland
Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA,
School of Electrical and Computer Engineering—FEEC, University of
Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering,
Bogazici University, Istanbul, Türkiye
Derong Liu, Department of Electrical and Computer Engineering, University of
Illinois at Chicago, Chicago, USA
Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of
Alberta, Alberta, Canada
Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering,
KIOS Research Center for Intelligent Systems and Networks, University of Cyprus,
Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong,
Kowloon, Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest
developments in Networks and Systems—quickly, informally and with high quality.
Original research reported in proceedings and post-proceedings represents the core
of LNNS.
Volumes published in LNNS embrace all aspects and subfields of, as well as new
challenges in, Networks and Systems.
The series contains proceedings and edited volumes in systems and networks,
spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor
Networks, Control Systems, Energy Systems, Automotive Systems, Biological
Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems,
Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems,
Robotics, Social Systems, Economic Systems and other. Of particular value to both
the contributors and the readership are the short publication timeframe and
the world-wide distribution and exposure which enable both a wide and rapid
dissemination of research output.
The series covers the theory, applications, and perspectives on the state of the art
and future developments relevant to systems and networks, decision making, control,
complex processes and related areas, as embedded in the fields of interdisciplinary
and applied sciences, engineering, computer science, physics, economics, social, and
life sciences, as well as the paradigms and methodologies behind them.
Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago.
All books published in the series are submitted for consideration in Web of Science.
For proposals from Asia please contact Aninda Bose ([email protected]).
Paweł Strumiłło · Artur Klepaczko ·
Michał Strzelecki · Dorota Bociąga
Editors
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2024
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
We are honored to hand over to the readers the Proceedings of the 23rd Polish Conference on Biocybernetics and Biomedical Engineering, which was held in Lodz from September 27 to 29, 2023. The conference was organized by the Committee
of Biocybernetics and Biomedical Engineering of the Polish Academy of Sciences
and hosted by the Lodz University of Technology. Due to the complex and multi-
disciplinary area of issues covered by biomedical engineering, two TUL units were
involved in the organization of this conference, namely the Institute of Electronics and
the Institute of Materials Science and Engineering. The conference is a continuation
of the cyclical, biennial meetings of the biomedical engineers’ community, which
attract scientists and industry representatives from various fields of engineering, IT,
biomaterials, biotechnology, and medicine.
The ongoing and dynamic advancement of AI-based data processing and anal-
ysis methods is playing an increasingly vital role in medicine. These methods find
application in various areas, such as disease diagnosis, prediction, and monitoring,
particularly through the utilization of image data analysis algorithms. Other areas of
application include personalized medicine, where multimodal patient data is acquired
and analyzed, as well as robot-assisted surgery and clinical decision support.
These Proceedings contain 35 publications on the above issues as well as other
relevant hot topics regarding the most important challenges of modern biomedical
engineering. The papers are organized in the following five chapters:
• Biomedical Imaging & Analysis
• Modeling and Machine Learning
• Signal Processing
• Telemonitoring & Measurement
• Biomaterials and Implants.
The editors would like to express their gratitude to the authors for their submissions
and to all the reviewers for their meticulous evaluation of the papers and valuable
comments, which undoubtedly enhanced the scientific merit of the accepted papers.
We believe that through this collaborative effort, these Proceedings will serve as
a significant scientific resource for the biocybernetics and biomedical engineering
community.
Contents
Signal Processing
Using Frequency Correction of Stethoscope Recordings to Improve
Classification of Respiratory Sounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Adam Biniakowski, Krzysztof Szarzyński, and Tomasz Grzywalski
Bioimpedance Spectroscopy—Niche Applications in Medicine:
Systematic Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Ilona Karpiel, Mirella Urzeniczok, and Ewelina Sobotnicka
Evaluation of Neurological Disorders in Isokinetic Dynamometry
and Surface Electromyography Activity of Biceps and Triceps
Muscles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Anna Roksela, Anna Poświata, Jarosław Śmieja, Dominika Kozak,
Katarzyna Bienias, Jakub Ślaga, and Michał Mikulski
EMG Mapping Technique for Pinch Meter Robot Extension . . . . . . . . . . . 339
Marcel Smolinski, Michal Mikulski, and Jaroslaw Śmieja
Data Glove for the Recognition of the Letters of the Polish Sign
Language Alphabet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Jakub Piskozub and Paweł Strumiłło
Modified CNN-Watershed for Corneal Endothelium Segmentation …
A. Kucharski and A. Fabijańska
Abstract This paper considers the problem of corneal endothelium image segmen-
tation using a method that combines a CNN model with a watershed transform.
Specifically, the CNN first predicts cell bodies, edges, and centers. Next, cell centers are used as markers that guide the watershed transform, which is performed on the cell edge probability maps inferred by the CNN to outline cell edges. Different variants
of the method are considered. Specifically, a downscaled U-Net is compared with
the Attention U-Net in the image-to-image and sliding window setup. Results show
that using a marker-driven watershed transform to post-process cell edge probability
maps allows for replacing the sliding window setup with an image-to-image setup,
reducing prediction time while maintaining similar or better segmentation accuracy.
Also, when used as a backbone, Attention U-Net outperforms classical U-Net in
determining cell morphometric parameters with high accuracy.
1 Introduction
Over time, segmentation methods for corneal endothelial cells have progressed from
unsupervised to supervised machine learning systems. Recently, supervised machine
learning methods using deep learning solutions based on convolutional neural net-
works (CNNs) have become state-of-the-art. The U-Net model [1] has emerged as
a popular choice for this task, with the downscaled version applied to overlapping
image tiles in a patch-based setup outperforming the full image-based approach [2,
3]. Using a CNN for overlapping image tiles cut out of an image increases the number
of training samples, improving the model’s ability to detect weak and blurred cell
edges due to a smaller image region being analyzed [4]. However, despite these ben-
efits, these methods still have limitations in determining weak and fuzzy boundaries.
They also result in additional computational overhead and increased inference time.
Recent studies have addressed these limitations by integrating convolutional
encoder-decoder models with conventional image processing methods, including
the watershed transform, to improve the post-processing of cell boundaries [5, 6].
Another alternative, explored in [7], is extending the U-Net model with built-in sub-modules to better adapt to weak and fuzzy cell boundaries. This paper adopts this
method and proposes an image segmentation technique for corneal endothelium that
utilizes the Attention U-Net in conjunction with CNN-Watershed [5]. This approach
addresses the challenges posed by weak and discontinuous cell edges. Specifically,
we apply a marker-based watershed transform to the cell edge probability maps
inferred by the Attention U-Net to outline endothelial cells precisely. We also test
the approach in patch-based and full-image scenarios to determine whether addi-
tional computational overhead from the patch-based setup can be avoided using the
watershed transform in the post-processing step.
2 Dataset Details
The study utilized the Rotterdam dataset [8], which is publicly available and consists
of 52 confocal corneal microscopy images. The resolution of these images varies
from 324 × 385 pixels to 763 × 525 pixels. To handle varying sizes, we resized
each image to 384 × 384 pixels. Although the dataset does not include ground truth
results for cell edges, it does contain manually marked cell centers. We used these
centers to generate ground truth segmentation results by applying marker-guided
watershed segmentation, followed by manual correction as required. The resulting
segmentation masks contain three classes: cell bodies, edges, and centers. Sample
images from the Rotterdam dataset used in this study are displayed in Fig. 1.
3 Methods
Fig. 1 A sample corneal endothelial image from the Rotterdam dataset. a a grayscale microscopy
image, b cell edges, c manually marked cell centers, d a mask that depicts cell edges, cell centers,
and cell bodies, with each class represented by a different color
Fig. 2 Flow diagram of the method (recoverable diagram labels: prediction, cell centers, binarization, labelling)
The process is summarized in Fig. 2. Firstly, the CNN predicts the positions of
cell bodies, edges, and centers. Next, the predicted cell centers are used as markers
that guide the watershed segmentation. Unlike the traditional setup, the watershed transform is not performed on the image gradient but on the cell edge probability maps inferred by the CNN. The resulting watershed dams created around
the markers are always continuous and correspond to the resulting cell edges.
The baseline CNN-watershed approach [5] employs the downscaled U-Net model
in a sliding window setup. In our study, we modify the baseline approach by using
the Attention U-Net in both the image-to-image (I2I) and the sliding window (SW)
setup.
In the I2I approach, a CNN was trained to output a segmentation mask directly for
a given input image. Input images of size 384 × 384 pixels were considered. This
approach considers global information about the image, which can be helpful in cases
where the cells are irregularly shaped or arranged in complex patterns.
For the SW approach, input images were divided into overlapping tiles of size
64 × 64 pixels. A CNN model was trained to perform image segmentation in each
window, and the predicted segmentation masks were merged to obtain a seamless cell
edge probability map. Incorporating multiple image tiles during the prediction stage
increases computational complexity. Furthermore, the SW approach may encounter
challenges when dealing with large cell clusters or variations in cell size and shape
within an image.
U-Net The configuration of the U-Net model varied depending on the setup. The
input to the SW U-Net model (see Fig. 3) was an image of size 64 × 64 × 1 pixels
and 384 × 384 × 1 pixels for the I2I model. The contracting path included (three
for the SW and four for the I2I architecture) downsampling levels with three 2D
convolutional layers (with a filter size 3 × 3) per level, each followed by a ReLU
activation function. The number of filters in each downsampling level was (32, 64,
128) for the SW and (64, 128, 256, 512) for the I2I. Max-pooling 2D with a pool
size of 2 × 2 was performed after each downsampling block (except the last one).
The expansive path had three for the SW and four for the I2I upsampling levels, with
three Conv2D layers per level. The number of filters in each upsampling level was
[128, 64, 32], and a ReLU activation function followed each convolutional layer.
After each upsampling layer, the corresponding feature maps from the contracting
and expansive paths were concatenated and passed through three 2D convolutional
layers, each followed by a ReLU activation function. The number of filters in each
upsampling level was (128, 64, 32) for the SW and (512, 256, 128, 64) for the I2I. The
final layer of the model was a 2D convolutional layer with one filter of size 1 × 1 and
softmax activation, which outputted the predicted segmentation mask. The number
of output labels was three, with the probabilities of the cell edges, cell centers, and
cell bodies.
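For illustration, a minimal Keras sketch of the downscaled SW variant is given below. The block structure and filter counts follow the description above; the exact number of pooling steps and all identifiers are assumptions rather than the authors' code, and the I2I variant is obtained analogously with a 384 × 384 × 1 input and (64, 128, 256, 512) filters.

```python
# Minimal sketch of the downscaled SW U-Net described above (assumptions:
# three 3x3 convolutions with ReLU per block, 2x2 max-pooling between levels,
# upsampling with skip concatenation, and a 3-class softmax head).
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    for _ in range(3):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(64, 64, 1), filters=(32, 64, 128), n_classes=3):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    # Contracting path.
    for i, f in enumerate(filters):
        x = conv_block(x, f)
        if i < len(filters) - 1:          # no pooling after the last block
            skips.append(x)
            x = layers.MaxPooling2D(2)(x)
    # Expansive path with skip connections.
    for f, skip in zip(reversed(filters[:-1]), reversed(skips)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return Model(inputs, outputs)

sw_model = build_unet()                                      # SW: 64 x 64 x 1 input
i2i_model = build_unet((384, 384, 1), (64, 128, 256, 512))   # I2I variant
```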
Attention U-Net The Attention U-Net [9] was derived from the baseline U-Nets for
both I2I and SW setups. The contracting path remained unaltered, while the expansive
path was modified to include an attention mechanism that helped the model focus
on critical features in the input. At each upsampling block, the attention mechanism
was incorporated by concatenating the relevant feature maps from the contracting
and expansive paths and subsequently utilizing a weighting mechanism to assess the
significance of the features in the concatenation. The additive attention mechanism was used (see Fig. 4).
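A sketch of such an additive attention gate, in the spirit of [9], is shown below; it assumes the gating signal has already been brought to the spatial size of the skip connection, and all names are illustrative.

```python
# Sketch of an additive attention gate (assumption: `gate` has already been
# upsampled to the spatial size of `skip`).
from tensorflow.keras import layers

def attention_gate(skip, gate, inter_channels):
    theta = layers.Conv2D(inter_channels, 1)(skip)        # project skip features
    phi = layers.Conv2D(inter_channels, 1)(gate)          # project gating signal
    f = layers.Activation("relu")(layers.Add()([theta, phi]))
    alpha = layers.Conv2D(1, 1, activation="sigmoid")(f)  # attention coefficients
    # Re-weight the skip features before concatenation in the expansive path;
    # the single-channel attention map broadcasts over all skip channels.
    return skip * alpha
```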
The Rotterdam dataset was augmented to increase its variability using a set of geo-
metric transformations with randomized parameters. The following transformations
were applied to each image in the dataset:
– A shear operation introduces distortion along both axes to mimic natural defor-
mation in the corneal endothelium (random value between –0.3 and 0.3).
– A scale transformation to simulate cell size and spacing variations (random scaling
factor from 0.5 to 1.5).
– A rotation transformation to simulate random rotational changes in the micro-
scope’s imaging plane (random angle from –0.15 to 0.15).
– A vertical flipping operation with a 50% probability and a horizontal flipping
operation with a 50% probability to increase dataset variability.
After each training epoch, the transformation parameters were randomized again.
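A possible realization of these transformations with scikit-image is sketched below. It assumes the rotation and shear ranges are expressed in radians and that the same transform is applied to the image and its mask; this is our reading, not the authors' stated implementation.

```python
# Sketch of the described geometric augmentation (assumptions: ranges in
# radians, nearest-neighbour interpolation for the mask).
import numpy as np
from skimage.transform import AffineTransform, warp

def augment(image, mask, rng=np.random.default_rng()):
    tform = AffineTransform(
        scale=rng.uniform(0.5, 1.5),        # cell size / spacing variation
        rotation=rng.uniform(-0.15, 0.15),  # imaging-plane rotation
        shear=rng.uniform(-0.3, 0.3),       # distortion mimicking deformation
    )
    image = warp(image, tform.inverse, preserve_range=True)
    mask = warp(mask, tform.inverse, order=0, preserve_range=True)
    if rng.random() < 0.5:                  # vertical flip, 50% probability
        image, mask = image[::-1], mask[::-1]
    if rng.random() < 0.5:                  # horizontal flip, 50% probability
        image, mask = image[:, ::-1], mask[:, ::-1]
    return image, mask
```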
3.5 Training
The I2I and SW models were trained using the Adam optimizer with a learning rate
of 1e-4 and utilized categorical cross-entropy loss as the loss function.
For the image-to-image Attention U-Net and U-Net models, the training was
conducted for 100 epochs with a batch size of 4 and 34 steps per epoch.
Meanwhile, the sliding-window models were trained for 100 epochs with a batch
size of 128 and 102 steps per epoch. The training dataset was augmented using the
data augmentation process described in Sect. 3.4, and patches of size 64 × 64 were
extracted from the augmented images. A total of 13,000 patches were extracted per
training epoch.
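In code, this training configuration reduces to a standard compile-and-fit call. The sketch below reuses the model built in the earlier sketch; `train_gen` and `val_gen` stand for the augmented data pipelines and are assumptions, not the authors' code.

```python
# Training sketch for the quoted settings (Adam, learning rate 1e-4,
# categorical cross-entropy); the generators yield (image, one-hot mask) batches.
import tensorflow as tf

i2i_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
# I2I models: batch size 4, 34 steps per epoch; SW models: batch size 128, 102 steps.
i2i_model.fit(train_gen, validation_data=val_gen, epochs=100, steps_per_epoch=34)
```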
Trained models generated three probability maps for cell borders, centers, and bodies
(see Fig. 5). Cell bodies were not used for further processing.
While I2I Attention U-Net and U-Net models process a whole image simultane-
ously, SW Attention U-Net and U-Net models were applied to consecutive image
patches of size 64 × 64 in a sliding window setup. The patches overlapped by a stride
of 4 pixels in horizontal and vertical directions to obtain seamless probability maps.
The model’s predictions for overlapping patch regions were averaged.
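A sketch of this overlap-averaging inference is shown below; the per-patch prediction loop is written for clarity rather than speed, and all names are illustrative.

```python
# Sliding-window inference with 64 x 64 tiles, stride 4, and averaging of
# overlapping predictions into seamless probability maps.
import numpy as np

def predict_sliding_window(model, image, tile=64, stride=4, n_classes=3):
    h, w = image.shape[:2]
    prob_sum = np.zeros((h, w, n_classes), dtype=np.float32)
    counts = np.zeros((h, w, 1), dtype=np.float32)
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patch = image[y:y + tile, x:x + tile][None, ..., None]  # (1, 64, 64, 1)
            prob_sum[y:y + tile, x:x + tile] += model.predict(patch, verbose=0)[0]
            counts[y:y + tile, x:x + tile] += 1.0
    return prob_sum / np.maximum(counts, 1.0)   # average overlapping predictions
```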
Predicted cell centers were then used to generate markers for marker-controlled watershed segmentation. To transform the cell center probability maps into 1-pixel seeds, a mean filter of size 3 × 3 was first applied. Then, the results were binarized via the minimum cross-entropy approach proposed in [10]. Next, morphological erosion with a square structuring element of size 3 × 3 pixels was applied to the binary image. Finally, centroid positions were calculated for each connected region to generate 1-pixel markers, and marker-controlled watershed segmentation was performed on the cell border probability maps output by the CNN models, using the prepared seeds.
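This post-processing step can be sketched with scikit-image as below; threshold_li implements the minimum cross-entropy method of [10], and the remaining calls mirror the steps just described (function and variable names are illustrative).

```python
# Sketch of the marker generation and marker-controlled watershed step.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_li
from skimage.measure import label, regionprops
from skimage.morphology import erosion, square
from skimage.segmentation import watershed

def segment_cells(center_prob, edge_prob):
    smoothed = uniform_filter(center_prob, size=3)      # 3x3 mean filter
    binary = smoothed > threshold_li(smoothed)          # minimum cross-entropy [10]
    binary = erosion(binary, square(3))                 # 3x3 square erosion
    markers = np.zeros(center_prob.shape, dtype=np.int32)
    for i, region in enumerate(regionprops(label(binary)), start=1):
        r, c = map(int, region.centroid)                # 1-pixel seed per region
        markers[r, c] = i
    # Flood on the edge probability map instead of an image gradient.
    return watershed(edge_prob, markers)
```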
Fig. 5 Probability maps outputted by I2I U-Net models. a an original image, b cell bodies, c cell
borders, d cell centers
4 Results
The quality of endothelial cell segmentation was evaluated visually and quantita-
tively using image segmentation accuracy measures and measures derived from the
cells’ morphology. The assessment was performed using a three-fold cross-validation
approach. The available corneal endothelium image data for each setup was randomly
divided into two approximately equal subsets. Two subsets were used to train the
models, while the third subset was used to evaluate their performance. The assess-
ment was repeated thrice with different training and testing fold configurations.
Visual results of the CNN-Watershed used in the I2I and SW setups and running on top of the U-Net and the Attention U-Net models are presented in Fig. 6. Specifically, the
top panel presents the resulting probability maps, with intensities of different colors
denoting probabilities of cell edges, cell centers, and cell bodies. The middle panel
presents the resulting cell edges overlaid on an original sample image. Finally, the
bottom panel compares the ground truth edges in green with the inferred edges in
red. Overlapping edges are shown in white.
Additionally, Fig. 7 visualizes the resulting attention maps of the Attention U-Net
used in the sliding window and image-to-image setup.
The DICE coefficient (see Eq. 1) was computed between the resulting P and the
ground truth T edges. Prior to the DICE calculation, the edges, which were only one
pixel wide, were dilated using a square structural element of size 4 × 4.
$$\mathrm{DICE} = \frac{2\,|T \cap P|}{|T| + |P|} \tag{1}$$
The longest distance between a pixel of ground truth edges and the nearest pixel of
predicted edges was quantified with the modified Hausdorff distance [11]. The ideal
case MHD value between two images is 0. High MHD values suggest the presence
of false or missing edges.
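For reference, both measures can be computed as in the sketch below, assuming binary edge maps as input; the MHD follows the definition of [11] as the maximum of the two directed mean distances.

```python
# Sketch of the evaluation measures: DICE on edges dilated with a 4x4 square,
# and the modified Hausdorff distance (MHD) of Dubuisson and Jain [11].
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import dilation, square

def dice_edges(gt_edges, pred_edges):
    t = dilation(gt_edges.astype(bool), square(4))
    p = dilation(pred_edges.astype(bool), square(4))
    return 2.0 * np.logical_and(t, p).sum() / (t.sum() + p.sum())

def modified_hausdorff(gt_edges, pred_edges):
    # Mean distance from each edge pixel of one set to the nearest pixel of the
    # other set; MHD is the maximum of the two directed mean distances.
    d_to_pred = distance_transform_edt(~pred_edges.astype(bool))
    d_to_gt = distance_transform_edt(~gt_edges.astype(bool))
    return max(d_to_pred[gt_edges.astype(bool)].mean(),
               d_to_gt[pred_edges.astype(bool)].mean())
```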
The summary of cell segmentation accuracy measures obtained for each consid-
ered version of the CNN-watershed for each testing fold is shown in Table 1. The
best scores are shown in bold.
Fig. 6 Visual results of segmentation of corneal endothelial images. Results obtained with a I2I
Attention U-Net, b I2I U-Net, c SW attention U-Net, d SW U-Net. Top—results obtained by CNNs.
Middle—predicted edges overlaid on the original image. Bottom—comparison between the ground
truth and edges generated by the CNN-watershed algorithm. The edges identified as ground truth
are highlighted in red, the edges generated by the CNN-watershed algorithm are highlighted in
green, and overlapping edges are highlighted in white
Fig. 7 Visual results of attention maps outputted by SW and I2I attention U-Nets. a Attention maps
obtained with SW Attention U-Net (pixels value from 0.62 to 0.93), b attention maps obtained with
I2I Attention U-Net (pixels value from 0.47 to 0.68), c an original image, d a target ground truth
(red: cell bodies, green: cell boundaries, blue: cell centers)
Table 1 Image segmentation accuracy measures for each testing fold (F1, F2, F3). DSC—the DICE coefficient, MHD—the modified Hausdorff distance

Model                  DSC                       MHD
                       F1      F2      F3        F1      F2      F3
I2I Attention U-Net    0.875   0.869   0.848     0.352   0.379   0.480
SW Attention U-Net     0.877   0.871   0.848     0.390   0.374   0.585
I2I U-Net              0.885   0.883   0.860     0.329   0.345   0.477
SW U-Net               0.869   0.873   0.831     0.382   0.368   0.636
The Pearson correlation coefficient (PCC) [12], the mean absolute error of the number of cell neighbors (MAE_N), and the relative error of cell hexagonality (RE_H) were used to assess the resulting cell morphology.
To compare the number of cell neighbors in the ground truth T and predicted P
images, the mean absolute error between the number of neighbors (see Eq. 2) was
calculated. The reference number of cells and their positions in both images were
based on the T image.
$$MAE_N = \frac{1}{N_T}\sum_{i=1}^{N_T} |T_i - P_i| \tag{2}$$
To measure the degree of correlation between the sizes (in pixels) of corresponding
cells in the T and P images, the Pearson correlation coefficient (PCC) was utilized.
A strong correlation is indicated by PCC values within [−1; −0.5] ∪ [0.5; 1.0], while a perfect match is attained when the PCC value is equal to either 1 or −1.
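A sketch of these morphometric measures is given below; it assumes the per-cell neighbor counts and cell sizes have already been matched between T and P, and the hexagonality reading (fraction of six-neighbor cells) is our assumption, since RE_H is not defined explicitly above.

```python
# Sketch of the morphometric measures over matched per-cell quantities.
import numpy as np
from scipy.stats import pearsonr

def mae_neighbors(t_neighbors, p_neighbors):
    t, p = np.asarray(t_neighbors), np.asarray(p_neighbors)
    return np.mean(np.abs(t - p))                 # Eq. (2)

def size_correlation(t_sizes, p_sizes):
    r, _ = pearsonr(t_sizes, p_sizes)             # PCC of corresponding cell sizes
    return r

def hexagonality_error(t_neighbors, p_neighbors):
    # Assumed reading of RE_H: relative error of the fraction of cells with
    # exactly six neighbors.
    hex_t = np.mean(np.asarray(t_neighbors) == 6)
    hex_p = np.mean(np.asarray(p_neighbors) == 6)
    return abs(hex_t - hex_p) / hex_t
```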
The summary of the cell morphometry accuracy measures obtained for each con-
sidered version of the CNN-watershed for each testing fold is shown in Table 2. The
best scores are shown in bold.
Table 2 Morphometric parameters accuracy measures for each testing fold (F1, F2, F3). PCC—the Pearson correlation coefficient, MAE_N—the mean absolute error between the number of neighbors, RE_H—the relative hexagonality error

Model                  PCC                       MAE_N                     RE_H
                       F1      F2      F3        F1      F2      F3        F1      F2      F3
I2I Attention U-Net    0.898   0.915   0.748     0.136   0.111   0.239     0.035   0.055   0.048
SW Attention U-Net     0.765   0.909   0.729     0.319   0.123   0.480     0.085   0.071   0.251
I2I U-Net              0.885   0.930   0.750     0.156   0.106   0.263     0.053   0.050   0.094
SW U-Net               0.688   0.911   0.688     0.196   0.117   0.493     0.040   0.056   0.154
Table 3 Average prediction time in seconds for an image with the size of 384 × 384 × 1
Model Average time (s)
I2I attention U-Net 0.080
SW attention U-Net 3.190
I2I U-Net 0.062
SW U-Net 3.041
Finally, the prediction time was measured for each considered CNN model and
prediction setup. Table 3 shows the average time for a single image prediction for
each considered model. Time measurements were performed on an NVIDIA GTX 1070 GPU with 8 GB of GDDR5 memory, combined with an AMD Ryzen 5 5600X CPU and 64 GB of DDR4 RAM. All experiments were conducted with TensorFlow 2.9.1 and the Keras [13] library, and the GPU was used for computations.
5 Discussion
Based on the visual assessment presented in the top panel of Fig. 6, it can be observed
that the probability maps generated by the models used in the image-to-image (I2I)
setup are more precise and less blurry compared to those produced by the sliding
window (SW) setup. The probability for each class is higher for the I2I models,
particularly for the cell centers, where the probabilities outputted by the SW models
are notably weaker. This suggests that the sliding window models are less confident
in their predictions, potentially due to the lack of global information related to the
cells and their neighbors. This observation is further supported by the attention maps
displayed in Fig. 7 for the Attention U-Net model, where the attention for the cell
centers in the I2I setup is more concentrated compared to the SW counterparts.
However, this shortcoming is primarily limited to the cell centers, as all model outputs
exhibit similar levels of confidence in the case of the cell edges. As a result, this leads
to comparable cell segmentation outcomes.
The numerical evaluation supports this finding, particularly when examining the
accuracy measures for image segmentation, as shown in Table 1. Although the DICE
scores vary by no more than 3.4% (and no less than 1.5%) between the worst and
best-performing variants, the CNN models used in the I2I setup display slightly
better results than their SW counterparts, with an average difference in segmentation
scores of less than 0.5% for the Attention U-Net and around 2% for the U-Net.
This small difference may be attributed to the application of the watershed transform
to the edge probability maps, which resolves discontinuous edges resulting from
the threshold-based post-processing that is typically applied to cell edge probability
maps in state-of-the-art methods.
When the accuracy measures derived from cell morphometry are considered, the
image-to-image setup still outperforms the sliding window approach for both variants
of the U-Net model. However, the Attention U-Net is more accurate. The advantage
of the model is, on average, 0.15% for the Pearson correlation coefficient of cell sizes,
6.7% for the mean absolute error of the number of neighboring cells, and almost 46%
for the relative hexagonality error.
Finally, the experiments confirmed that using a CNN model in a sliding window setup is a substantial computational hurdle. Specifically, it increased the prediction
time 40–50 times compared to the image-to-image setup (see Table 3).
An indirect comparison of our results with the corneal endothelium image segmentation results reported by selected authors is favorable. In particular, our proposed I2I
Attention U-Net scored the average cell-based DICE coefficient of 0.979, edge-
based DICE of 0.876, and MHD of 0.400 on a challenging Rotterdam dataset. The
corresponding scores for I2I U-Net were 0.980, 0.880, and 0.384. Vigueras-Guillen et
al. [14] utilized a dataset of 50 images and corresponding masks, achieving an average
DICE coefficient (for cells) of 0.981 and an MHD of 0.22, which is a comparable
result in terms of cell-based DICE. In another study by [15], the authors proposed
a method to segment the relatively simple Alizarine dataset [16]. With the best-fit
algorithm [17], the DICE coefficient (for cells) on this dataset was 0.94 and the MHD 0.14, which is worse than we achieved in terms of the cell-based DICE. Their lower MHD scores resulted from postprocessing with the best-fit method; without it, they achieved a DICE coefficient of 0.62 and an MHD of 1.26. Our method avoids such postprocessing, reducing the complexity and time required for segmentation while maintaining excellent results.
6 Conclusions
This study has shown that applying a marker-driven watershed transform to post-
process the cell edges probability maps in the CNN-based corneal endothelium image
segmentation can significantly improve the method’s sensitivity to discontinuous
edges. This approach allows replacing the sliding window setup with an image-to-
image setup, decreasing computational overhead and shortening prediction times
while maintaining similar or even better segmentation accuracy. Additionally, our
results indicate that Attention U-Net outperforms the classical U-Net regarding cell segmentation quality measured in terms of cell morphometric parameters. These
findings demonstrate the potential of our proposed method for efficient and accurate
corneal endothelium image segmentation, which can have practical applications in
diagnosing and treating various eye diseases.
References
1. Ronneberger, O., Fischer, P., Brox, T.: U-net: convolutional networks for biomedical image
segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI),
vol. 9351 of LNCS, pp. 234–241. Springer (2015)
2. Fabijańska, A.: Segmentation of corneal endothelium images using a u-net-based convolutional
neural network. Artif. Intell. Med. 88, 1–13 (2018). https://doi.org/10.1016/j.artmed.2018.04.
004
3. Daniel, M., Atzrodt, L., Bucher, F., Wacker, K., Böhringer, S., Reinhard, T., Böhringer, D.:
Automated segmentation of the corneal endothelium in a large set of “real-world” specular
microscopy images using the u-net architecture. Sci. Rep. 9, 4752 (2019). https://doi.org/10.
1038/s41598-019-41034-2
4. Vigueras-Guillén, J.P., Sari, B., Goes, S.F., Lemij, H.G., van Rooij, J., Vermeer, K.A., van Vliet,
L.J.: Fully convolutional architecture versus sliding-window CNN for corneal endothelium cell
segmentation. BMC Biomed. Eng. 1, 4 (2019). https://doi.org/10.1186/s42490-019-0003-2
5. Kucharski, A., Fabijańska, A.: CNN-watershed: a watershed transform with predicted markers
for corneal endothelium image segmentation. Biomed. Signal Process. Control 68, 102805
(2021). https://doi.org/10.1016/j.bspc.2021.102805
6. Vigueras-Guillén, J.P., van Rooij, J., van Dooren, B.T.H., Lemij, H.G., Islamaj, E., van Vliet,
L.J., Vermeer, K.A.: Denseunets with feedback non-local attention for the segmentation of
specular microscopy images of the corneal endothelium with guttae (2022). https://doi.org/10.
48550/ARXIV.2203.01882. arxiv:2203.01882
7. Zhang, Y., Higashita, R., Fu, H., Xu, Y., Zhang, Y., Liu, H., Zhang, J., Liu, J.: A multi-branch
hybrid transformer network for corneal endothelial cell segmentation. In: de Bruijne, M, Cattin,
P.C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C. (eds.) Medical Image Computing
and Computer Assisted Intervention—MICCAI 2021, pp. 99–108. Springer International Pub-
lishing, Cham (2021). https://doi.org/10.1007/978-3-030-87193-2_10
8. Selig, B., Vermeer, K.A., Rieger, B., Hillenaar, T., Hendriks, C.L.L.: Fully automatic evaluation
of the corneal endothelium from in vivo confocal microscopy. BMC Med. Imaging 15(1), 13
(2015). https://doi.org/10.1186/s12880-015-0054-3
9. Oktay, O., Schlemper, J., Folgoc, L.L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N.Y., Kainz, B., Glocker, B., Rueckert, D.: Attention u-net: learning where to look
for the pancreas (2018). https://doi.org/10.48550/ARXIV.1804.03999. arxiv:1804.03999
10. Li, C., Tam, P.: An iterative algorithm for minimum cross entropy thresholding. Pattern Recogn.
Lett. 19(8), 771–776 (1998). https://doi.org/10.1016/s0167-8655(98)00057-9
11. Dubuisson, M.-P., Jain, A.: A modified hausdorff distance for object matching. In: Proceedings
of 12th International Conference on Pattern Recognition, vol. 1, pp. 566–568 (1994). https://
doi.org/10.1109/ICPR.1994.576361
12. Freedman, D., Pisani, R., Purves, R.: Statistics (International Student Edition), 4th edn. W.W. Norton & Company, New York
13. Sha, Y.: Keras-u-net-collection (2021). https://github.com/yingkaisha/keras-unet-collection.
https://doi.org/10.5281/zenodo.5449801
14. Vigueras-Guillén, J.P., Sari, B., Goes, S.F., Lemij, H.G., van Rooij, J., Vermeer, K.A., van Vliet,
L.J.: Fully convolutional architecture versus sliding-window CNN for corneal endothelium cell
segmentation. BMC Biomed. Eng. 1(1) (2019). https://doi.org/10.1186/s42490-019-0003-2
15. Nurzynska, K.: Deep learning as a tool for automatic segmentation of corneal endothelium images. Symmetry 10(3), 60 (2018). https://doi.org/10.3390/sym10030060. https://www.mdpi.com/2073-8994/10/3/60
16. Ruggeri, A., Scarpa, F., Luca, M.D., Meltendorf, C., Schroeter, J.: A system for the automatic
estimation of morphometric parameters of corneal endothelium in Alizarine red-stained images.
Br. J. Ophthalmol. 94(5), 643–647 (2010). https://doi.org/10.1136/bjo.2009.166561. https://bjo.bmj.com/content/94/5/643
17. Piórkowski, A.: Best-fit segmentation created using flood-based iterative thinning. In: Advances
in Intelligent Systems and Computing, pp. 61–68. Springer International Publishing (2016).
https://doi.org/10.1007/978-3-319-47274-4_7
Tissue Pattern Classification with CNN in
Histological Images
1 Introduction
Tissue architecture is critical for cell homeostasis and physiological functions [1]. It can also be a crucial factor in disease discrimination or in establishing subtypes of a disease. Typically, tissue architecture is described by a pathologist as hypocellular, hypercellular or classical. Such a classification, in combination with other features, might aid the expert in the evaluation of the tissue. It might also help the expert pathologist in disease differentiation, diagnosis and prognosis.
This article presents a study aimed at establishing a classification model for tissue compactness. The model was tested on a set of digital whole slide images from patients with inflammatory spindle cell lesions (ISCLs) [2], which are treated as a heterogeneous group of diseases whose biology in many cases remains not fully understood. The histomorphology of these tumors seems diverse enough to examine whether features related to tissue compactness correlate with subtypes of the analysed lesions. To achieve that goal, a convolutional neural network (CNN) was trained to classify compactness in images of the tissue samples.
Genetic and epigenetic factors have a major influence on cancer cell phenotype and tumor architecture [3]. Other research teams have presented different approaches to determining spatial contexts and tissue architecture, such as the fast Fourier transform [4] or spatially resolved transcriptomics [5], and have applied them to multiple tissues, e.g. muscle [6]. Some scientists have also provided insight into how the tissue fixation protocol affects tissue architecture [7].
To date, there is no study focusing strictly on the automatic classification of tissue architecture. The method presented in this study could be used in the future to aid in the differentiation of multiple diseases.
2.1 Disease
Fig. 1 Dataset sample distribution annotated by expert; compactness classes: hypercellular in blue,
hypocellular in green, classical in yellow for patch size of 256 × 256
these methods are expensive in terms of money and resources. Finding a correlation of tissue compactness features with subtypes of the analysed disease would provide efficient diagnostic markers or prognostic factors.
Three basic histologic patterns of ISCLs are distinguished: hypocellular (scle-
rotic, scar-like), classical (nodular fasciitis-like, myxoid) and hypercellular (com-
pact, proliferating) [8, 13, 14]. The presence of giant cells, myxoid intercellular
content, ganglion-like cells, lymphovascular invasion, necrosis, high mitotic activity
and increased cellularity are considered as adverse factors that worsen the prognosis
after tumor resection [8, 15, 16]. That is why assessment of the histologic pattern
by a deep neural network may improve histopathological diagnostics.
The study dataset consisted of histological slides from patients with ISCLs collected from the archives of the Academic Center of Pathomorphological and Genetic-Molecular Diagnostics in Bialystok (Poland). The study obtained the consent of the bioethics committee at the Medical University of Bialystok—number APK.002.339.2020. The hematoxylin and eosin stained microscopic slides were digitized with a Hamamatsu NanoZoomer SQ slide scanner. In total, 85 whole-slide images of 77 patients were collected. Then, arbitrarily selected 3200 × 1800 pixel images were annotated by an expert pathologist with masks representing each histological tissue architecture type (example presented in Fig. 2). The annotated images were then subdivided into smaller fragments (patches) for deep learning (DL) model training. They were divided into patches of size 256 × 256, 128 × 128 and 64 × 64 pixels, which resulted in about 9, 35 and 150 thousand images, respectively. The distribution of samples of size 256 × 256 pixels containing the hypocellular, hypercellular and classical tissue architecture classes is presented in Fig. 1.
The available annotations in the dataset are relatively rough due to the uncertainty of the data. We established a crucial parameter, called class_tresh (class threshold), whose value decides the class assignment of a sample patch. Moreover, the labeled regions do not correspond directly to the patch splitting, as the annotations were made in bigger images (see Sect. 2.2). The set value of class_tresh is the minimal ratio of the area of an image patch covered by an annotation label. For example, when class_tresh is set to 0.2, the label has to cover more than 20% of the area of the image patch for the patch to be included in the considered class.
Due to the progressive manner of dataset creation during project development, at the first stage of our research the number of examples in our dataset was very limited, and we wanted to try lowering the threshold to increase the number of examples in each class. Nevertheless, the logical assumption would be to set class_tresh to 0.5, and that is the value set at the current stage of the research.
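The rule can be summarized by the sketch below, which assumes integer-coded annotation masks; the class ids and names are illustrative.

```python
# Sketch of the class_tresh rule: a patch gets a class only if the corresponding
# annotation label covers more than class_tresh of the patch area.
import numpy as np

CLASSES = {1: "hypocellular", 2: "hypercellular", 3: "classical"}  # assumed ids

def assign_label(mask_patch, class_tresh=0.5):
    area = mask_patch.size
    for class_id, name in CLASSES.items():
        if np.count_nonzero(mask_patch == class_id) / area > class_tresh:
            return name
    return "other"   # no annotation covers a sufficient fraction of the patch
```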
The dataset, comprising image patches with different tissue architectures, was quantitatively heterogeneous, with the classical architecture in the majority, as can be seen in Fig. 1. In the case of a strongly biased dataset, if the validation set is too small, a situation might occur where all samples in the validation set derive from one class (the one in the majority). This inevitably pushes the updated weights towards assigning every sample to the majority class. To avoid this problem, instead of fully randomly, we assigned the samples to each (train/validation/test) set with consideration of their classes. The other employed strategy was to limit the sets to the number of examples in the least populated class. Thus, all subsets contained equally distributed classes with randomly selected examples.
Lastly, we arbitrarily opted for a 50/10/40 (train/validation/test) dataset split. The test set was separated, while the train and validation sets were used for cross-validation.
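A sketch of this class-aware splitting is shown below; the list-based representation of samples is an assumption made for illustration.

```python
# Sketch of the class-aware 50/10/40 split: each class is capped at the size of
# the least populated class before splitting.
import random
from collections import defaultdict

def balanced_split(samples, seed=0):
    # samples: iterable of (patch, label) pairs
    by_class = defaultdict(list)
    for patch, label in samples:
        by_class[label].append((patch, label))
    limit = min(len(v) for v in by_class.values())
    rng = random.Random(seed)
    train, val, test = [], [], []
    for items in by_class.values():
        rng.shuffle(items)
        items = items[:limit]                     # cap at least populated class
        n_train, n_val = int(0.5 * limit), int(0.1 * limit)
        train += items[:n_train]
        val += items[n_train:n_train + n_val]
        test += items[n_train + n_val:]           # remaining ~40%
    return train, val, test
```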
In these experiments, we used the well-known VGG16 model [17], whose input and output layers were modified to fit our purposes. The model, with about 14 million trainable parameters, was initialized with random weights. Primarily, a stochastic gradient descent (SGD) optimizer was used with default learning rate and decay parameters, and a typical loss function—categorical cross-entropy—was used. The batch size was set to 16 in all of the experiments.
The model, training and inference were implemented in Python with the Keras TensorFlow framework. The model was obtained from an open-access GitHub repository.1
1 https://github.com/qubvel/classification_models.
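A comparable model can be assembled directly with tf.keras.applications, as sketched below; this is an assumed equivalent of the repository model referenced in the footnote, not the authors' exact code.

```python
# Sketch of the modified VGG16 classifier (randomly initialized, SGD with
# default parameters, categorical cross-entropy, softmax head for 3 classes).
import tensorflow as tf
from tensorflow.keras import layers

def build_vgg16(input_shape=(256, 256, 3), n_classes=3):
    base = tf.keras.applications.VGG16(
        weights=None, include_top=False, input_shape=input_shape)  # random init
    x = layers.GlobalAveragePooling2D()(base.output)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.SGD(),  # default lr and decay
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_vgg16()
# model.fit(train_ds, validation_data=val_ds, batch_size=16, epochs=...)
```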
Fig. 2 Classification model with sample input image; classes: hypercellular, classical and hypocel-
lular
In situations where there are very few examples, data augmentation can improve the achieved results. In this study, the Albumentations2 package was used to perform image modification. We proposed several augmentation strategies, called: simple, med (medium) and heavy. We tested the influence of geometric deformations and ColorJitter on our data by implementing additional augmentation schemes: med+colorJitter, med+CLAHE, no_deform and no_deform+no_colorJitter. For a clear comparison, all tested augmentation schemes are presented in Table 1.
Simple The simple augmentation consisted of only vertical and horizontal flips, image rotation and random cropping. This kind of augmentation has a low impact on the image content, as it does not alter pixel values. Nevertheless, it can improve the training capabilities of the model.
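The simple strategy can be expressed as an Albumentations pipeline as sketched below; the probabilities and the crop size (shown for 256-pixel patches) are illustrative assumptions, not the authors' settings.

```python
# Sketch of the "simple" strategy with Albumentations; the med strategy would
# additionally append A.GridDistortion, A.OpticalDistortion and A.ElasticTransform.
import albumentations as A

simple = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Rotate(limit=90, p=0.5),
    A.RandomCrop(height=224, width=224, p=1.0),  # crop size is illustrative
])
# augmented_patch = simple(image=patch)["image"]
```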
Medium (med) In addition to the simple augmentation, this strategy included several further distortions: grid distortion, optical distortion and elastic deformation. This
2 https://github.com/albumentations-team/albumentations.
We have organized the dataset so that all patches containing an annotation (with sufficient class_tresh) were treated as either classical, hypo- or hypercellular, while the rest of the samples were considered as the "other" class. This resulted in a heavily biased dataset, with the "other" class in the majority. We tackled this problem by limiting the number of samples per class during training (to the number of examples in the least populated class) to make the dataset more balanced.
Moreover, we tested two different approaches, with and without the inclusion of negative examples in the dataset. First, the 4-class approach included three positive classes and one negative class. In the other approach, the 3-class scenario was reduced to predicting only the classical, hypo- and hypercellular classes.
3 Results
First, we tested whether the VGG16 model has sufficient capacity for the task. In this experiment we did not use any data augmentation, since we wanted to try to overfit the model (see Fig. 3). The results in Table 2 confirm that the model is able to learn the subtle differences in tissue compactness, but regularization techniques are necessary to tackle the problem of overfitting.
Second, we conducted a comparative analysis for different class_tresh values. The evaluation was performed with 3-fold cross-validation and the 3-class model. The resulting accuracy of the models increased proportionally with the class_tresh parameter value, as seen in Table 3.
Third, we compared the proposed augmentation strategies. Table 4 shows that the medium augmentation gives the best numerical results according to the evaluation performed on the test set. We tested two tile sizes and achieved consistent results. For this evaluation the same 3-class model was used with a 0.5 class_tresh parameter and the same limit of samples.
Fourth, the results of average compactness classification accuracy with two different training schemes are presented in Table 5. Distinct models with different numbers of explicitly treated classes are compared. For this evaluation the class_tresh parameter was set to 0.5, and the same augmentation strategy, limit of samples, and train/val/test split were used to achieve comparable results.
Fig. 3 Example of the VGG16 model overfitting to the ISCL dataset, as presented with the train and validation sets. Around the 18th epoch the model overfits, as the validation loss is no longer decreasing
Table 2 Overfit results for two tile sizes (acc stands for accuracy)

tile_size   Train          Validation     Test                         Evaluation on train set
            Loss    Acc    Loss    Acc    Loss    Acc    binary_acc    Acc
64          0.00    1.00   0.16    0.89   0.46    0.89   0.95          1.00
128         0.00    1.00   0.15    0.91   0.36    0.90   0.95          1.00
Table 3 Comparison of classification accuracy with different class_tresh parameter value. Given
values are means acquired on test set with 3-fold cross-validation
class_tresh Accuracy Loss Binary accuracy Precision
0.90 97.07 6.05 98.05 97.09
0.75 96.84 6.88 97.90 96.85
0.50 95.57 9.23 97.05 95.60
0.33 94.83 10.65 96.42 95.55
0.10 93.47 11.37 95.26 95.83
0.00 82.35 14.41 88.27 70.91
Table 4 Comparison of augmentation schemes. Given values are means acquired on the test set with 3-fold cross-validation. Header description: epochs—number of epochs before the training was terminated; acc—accuracy on the test set; bin_acc—binary accuracy on the test set
tile_size augmentation Epochs Acc Loss bin_acc Precision
128 Simple 39 95.08 12.10 96.71 95.10
Med 49 95.39 9.60 96.92 95.43
med+colorJitter 69 93.89 12.70 95.93 93.95
med+CLAHE 71 94.92 10.21 96.62 94.96
Heavy 77 94.38 10.36 96.25 94.44
no_deform 73 92.62 14.67 95.09 92.74
no_deform+no_colorJitter 56 94.22 11.21 96.13 94.21
256 Simple 49 91.79 15.72 94.56 91.97
Med 63 95.83 9.91 97.24 95.95
med+colorJitter 104 89.29 18.13 92.87 89.43
med+CLAHE 73 89.00 18.32 92.64 89.12
Heavy 104 95.63 8.99 97.10 95.70
no_deform 133 91.63 17.92 94.39 91.66
no_deform+no_colorJitter 83 95.81 8.83 97.23 95.89
Table 5 Results of average compactness classification accuracy of models trained with different training scenarios. Four randomly selected images were used for F-score evaluation. The 3-class model discriminates classical, hypo- and hypercellular compactness; the 4-class model includes an explicit "other" class in the dataset. TP—true positive, FP—false positive, FN—false negative, TN—true negative. The best results are marked in bold
Class tile_size epochs train_acc val_acc test_acc test_bin_acc TP FP FN TN F-score
4 64 127 89.00 90.85 90.84 95.42 1929 601 671 9399 75.20
128 111 92.19 91.86 91.98 96.01 504 177 148 2321 75.62
256 153 89.82 93.14 92.54 96.28 123 28 20 585 83.67
3 64 53 92.15 93.39 93.59 95.74 2153 2024 447 7976 63.54
128 72 95.27 95.83 94.30 96.20 565 406 87 2092 69.62
256 63 91.11 97.04 94.70 96.50 131 106 12 507 68.95
4 Discussion
ISCLs can occur as masses in every localization of the human body including the
inflammatory myofibroblastic tumors that are intermediate-grade neoplasms charac-
terized by high recurrence rate after excision and low metastatic potential [9]. Three
basic histologic patterns can be distinguished, i.e. hypocellular, classical and hyper-
cellular [8, 13, 14]. Increased cellularity and presence of myxoid content in the clas-
sical morphology can worsen the prognosis of a patient [8]. That is why assessment
of the histologic pattern by a deep neural network may improve histopathological
diagnostics.
We encountered two main problems with the automatic compactness classification. The first was the strong bias in the number of examples in each class, with classical tissue being far more common in our samples. It created a vast discrepancy in the training data. To avoid this problem, instead of fully randomly, we assigned the samples to each (train/val/test) subset with consideration of their classes. All of the subsets contained equally distributed classes and were limited to the number of examples in the least populated class. This resulted in balanced datasets that provided consistently good classification results.
The second problem was compactness classification in locations where tissues with different patterns are mingled. This creates intertwined class labels (there were image patches with multiple types of compactness), which might consequently confuse the learning model. There are two solutions that might work in this situation: (1) lowering the size of the image patch; (2) developing a model that outputs a continuous value corresponding to the ratio of area taken by each class. Lowering the size of the image patch should give more precise results, and this was partially confirmed by the achieved results.
To cope with the mingled classes in patches, we established a crucial parameter, called class_tresh, whose value decides the class assignment of a sample patch. The set value of class_tresh is the minimal ratio of the area of an image patch covered by an annotation label. In general, the model's accuracy increased with a higher class_tresh parameter value (see Table 3). This confirmed that the model can achieve better results with fewer samples containing more reliable information.
The integration of augmentation during training of the model significantly improved the results. It allowed for longer training without suffering from overfitting. The training should be sufficiently extended, gradually increasing the accuracy of the model.
According to the augmentation comparison presented in Table 4, the medium (med) augmentation strategy gives the best numerical results according to the evaluation performed on the test set. By applying different augmentation schemes, we wanted to test the influence of different augmentation methods on our type of data. Based on the achieved results, we have concluded that color manipulation (implemented as colorJitter) in histology images has a negative influence on classification results. It might introduce too drastic changes to images that contain very subtle information.
Fig. 4 Example of inference on a sample image, generating separate heatmaps for each class. The localization of the hypercellular class can be clearly seen in the result image. Heatmaps (soft predictions) are presented with the "hot" colormap, where white equals the maximum value
This consequently makes it more difficult to classify tissue architecture, hence the lower numerical results of these more complex augmentation strategies.
On the other hand, geometric deformations improve the results as they increase the generalisation of the learning process. Also, brightness and contrast manipulation has a positive impact on the learning process. To sum up, the heavy augmentation combines the augmentation techniques that are beneficial for tissue architecture classification. The numerical results, as well as the time taken per epoch, are comparable between the med and heavy augmentations. In the final experiments we decided to use the heavy augmentation, as a more complex augmentation has a better chance of preventing overfitting and because this method allowed for longer effective training.
Most importantly, we compared training scenarios with different numbers of classes explicitly given to the model. The main difference was how the negative samples were provided, and we tried to answer the question of whether they actually improve the classification accuracy. The model that was trained on patches with only
relevant classes (3-class model), without negative examples, achieved the best accuracy. An example of inference compared to the labels is shown in Fig. 4.
Our hypothesis was that the introduction of the 4th class ("other") might increase the generality of the model, but the high variability of tissue in that class might be misleading for the trained model. As seen in Table 5, the 4-class model achieved an accuracy comparable to the 3-class model. However, according to the patch-based evaluation of the test set images, the number of false positives significantly decreases.
The achieved results in automatic classification of tissue architecture were satisfactory, yet they were obtained only on the ISCL dataset. In the future we plan to further develop this method and test it with different kinds of tissue.
5 Conclusions
To sum up, based on the current results, we can conclude that the differentiation between classical, hypo- and hypercellular tissue compactness is possible with the application of the VGG16 deep learning classification model. The achieved accuracy was promising, although we hope to produce even better results by increasing the number of samples used in training the classification model. In the future we hope to find correlations of features related to tissue compactness with markers of prognostic or predictive factors of ISCLs.
Acknowledgements Ethics approval Consent of the Bioethics Committee at the Medical Uni-
versity of Bialystok—number APK.002.339.2020.
Funding This work has received support from Nalecz Institute of Biocybernetics and Biomedical
Engineering Polish Academy of Sciences statutory financing. The research is partially funded by two
subsidies from the Medical University of Bialystok: SUB/1/DN/22/002/1155 and SUB/1/DN/21/
002/1194.
References
1. Nelson, C.M., Bissell, M.J.: Of extracellular matrix, scaffolds, and signaling: tissue architecture
regulates development, homeostasis, and cancer. Ann. Rev. Cell Dev. Biol. 22(1), 287–309
(2006)
2. Kutok, et al.: Inflammatory pseudotumor of lymph node and spleen: an entity biologically
distinct from inflammatory myofibroblastic tumor. Hum. Pathol. 32(12), 1382–1387 (2001)
3. Almagro, J., Messal, H.A., Elosegui-Artola, A., van Rheenen, J., Behrens, A.: Tissue architec-
ture in tumor initiation and progression. Trends Cancer 8(6), 494–505 (2022)
4. Zak, J., Siemion, K., Roszkowiak, L., Korzynska, A.: Fourier transform layer for fast fore-
ground segmentation in samples’ images of tissue biopsies. In: Biocybernetics and Biomedical
Engineering—Current Trends and Challenges, pp. 118–125. Springer International Publishing
(2021)
5. Chang, Y., He, F., Wang, J., Chen, S., Li, J., Liu, J., Yu, Y., Su, L., Ma, A., Allen, C., Lin,
Y., Sun, S., Liu, B., Otero, J., Chung, D., Fu, H., Li, Z., Xu, D., Ma, Q.: Define and visualize
pathological architectures of human tissues from spatially resolved transcriptomics using deep
learning (2021)
6. Morris, T.A., Eldeen, S., Tran, R.D.H., Grosberg, A.: A comprehensive review of computational
and image analysis techniques for quantitative evaluation of striated muscle tissue architecture.
Biophys. Rev. 3(4), 041302 (2022)
7. Singhal, P.: Evaluation of histomorphometric changes in tissue architecture in relation to alter-
ation in fixation protocol—An invitro study. J. Clin. Diagn. Res. (2016)
8. Siemion, K., Reszec-Gielazyn, J., Kisluk, J., Roszkowiak, L., Zak, J., Korzynska, A.: What do
we know about inflammatory myofibroblastic tumors?—A systematic review. Adv. Med. Sci.
67(1), 129–138 (2022)
9. Antonescu, C.R., et al.: WHO classification of tumours. Soft tissue and bone tumours, Inter-
national Agency for Research on Cancer (2020)
10. Gros, L., Tos, A.P.D., Jones, R.L., Digklia, A.: Inflammatory myofibroblastic tumour: state of
the art. Cancers 14(15), 3662 (2022)
11. Lindberg, M.R.: Diagnostic Pathology: soft Tissue Tumors E-Book. Elsevier Health Sciences
(2019)
12. Zhu, et al.: Pulmonary inflammatory myofibroblastic tumor versus IgG4-related inflammatory
pseudotumor: differential diagnosis based on a case series. J. Thorac. Disease 9(3), 598–609
(2017)
13. Shenawi, H.A., Al-Shaibani, S.A., Saad, S.K.A., Al-Sindi, F., Al-Sindi, K., Shenawi, N.A.,
Naguib, Y., Yaghan, R.: An extremely rare case of malignant jejunal mesenteric inflammatory
myofibroblastic tumor in a 61-year-old male patient: a case report and literature review. Front.
Med. 9 (2022)
14. Khatri, A., Agrawal, A., Sikachi, R., Mehta, D., Sahni, S., Meena, N.: Inflammatory myofi-
broblastic tumor of the lung. Adv. Respir. Med. 86(1), 27–35 (2018)
15. Coffin, C.M., Hornick, J.L., Fletcher, C.D.M.: Inflammatory myofibroblastic tumor. Am. J.
Surg. Pathol. 31(4), 509–520 (2007)
16. Bennett, J.A., Nardi, V., Rouzbahman, M., Morales-Oyarvide, V., Nielsen, G.P., Oliva, E.:
Inflammatory myofibroblastic tumor of the uterus: a clinicopathological, immunohistochemi-
cal, and molecular analysis of 13 cases highlighting their broad morphologic spectrum. Modern
Pathol. 30(10), 1489–1503 (2017)
17. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recog-
nition. arXiv preprint arXiv:1409.1556 (2014)
18. Pizer, S.M., Amburn, E.P., Austin, J.D., Cromartie, R., Geselowitz, A., Greer, T., ter
Haar Romeny, B., Zimmerman, J.B., Zuiderveld, K.: Adaptive histogram equalization and
its variations. Comput. Vis. Graph. Image Process. 39(3), 355–368 (1987)
Robust Multiresolution and Multistain
Background Segmentation in Whole
Slide Images
1 Introduction
Background segmentation is a basic step in most preprocessing tasks for whole slide
images (WSIs), which are digital scans of tissue slides used for cancer diagnosis
and prognosis. Segmentation aims to separate the foreground tissue regions from
the background glass regions, which can reduce computational costs and improve
accuracy for subsequent analysis such as classification, detection, grading, and regis-
tration [10]. However, background segmentation of WSIs is challenging due to vari-
ations in tissue appearance, staining quality, illumination conditions, and scanning
artifacts. Existing methods for background segmentation of WSIs are based either
on handcrafted features or on deep learning models. To the best of our knowledge, the currently
available methods are limited to H&E (hematoxylin and eosin) staining. Moreover,
most existing methods for background segmentation are either not publicly available
or require manual tuning of parameters for different datasets.
Many studies have explored histopathology segmentation, a technique used
to identify different regions in tissue images [1, 5, 13]. However, most of these
studies focus on segmenting nuclei, which are small and distinct structures in the
tissue. Segmenting the whole tissue is also challenging because it involves large areas
that have low contrast and are often similar to the background, especially when
immunohistochemistry (IHC) staining is used. This type of staining is far less
standardized and quality-controlled [7, 17]. Even for H&E staining, the differences in
dyes are problematic [16]. Therefore, existing methods for segmentation cannot be
easily adapted to whole tissue segmentation.
Some previous studies [12] used a conventional method to segment tissue regions
of interest (ROI) from histopathological images. However, this method works well
only for images stained with H&E, and not for those stained with immunohisto-
chemistry (IHC). Moreover, our goal is different from theirs. We want to segment
the background from the tissue, not just the ROI within the tissue. This is impor-
tant for some preprocessing techniques that require selecting the entire tissue area,
including its folds and artifacts.
Recent studies have demonstrated that deep learning can achieve remarkable
results in histopathology, enabling more accurate and detailed predictions for various
diseases [6, 11, 13, 15]. However, most of the existing work focuses on specific types
of tissue, such as epidermal tissue [14]. While there is some research in background
segmentation [2], the proposed models are not publicly available or accessible, lim-
iting their reproducibility and applicability.
In this paper, we propose a deep learning-based pipeline for background segmen-
tation of WSIs that can handle diverse types of tissues and stains without requiring any
prior information or user intervention. Our framework consists of two main compo-
nents: a patch-level segmentation network that predicts foreground probability maps
for small patches extracted from WSIs, and a slide-level fusion during inference that
combines the patch-level predictions into a final binary mask for the whole slide.
We evaluate our framework on a public dataset from the ACROBAT challenge, covering
mainly breast cancer patients. We show that our framework achieves solid perfor-
mance and high generalizability across different tissues. Additionally, we show solid
performance at multiple resolutions of those images, as well as with both H&E and IHC
staining.
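To make the two-stage design concrete, the following minimal sketch shows how patch-level foreground probabilities could be fused into a slide-level binary mask. The function names, the single-channel input (consistent with the 1-channel input shape in Table 3, though preprocessing is not detailed here), and the simple tiling-and-thresholding fusion are illustrative assumptions, not the released implementation.

```python
import numpy as np
import torch

PATCH = 256  # patch size used throughout the paper

def predict_patch(model: torch.nn.Module, patch: np.ndarray) -> np.ndarray:
    """Return a foreground-probability map for one 256x256 single-channel patch."""
    x = torch.from_numpy(patch).float().unsqueeze(0).unsqueeze(0)  # [1, 1, H, W]
    with torch.no_grad():
        prob = torch.sigmoid(model(x))                             # logits -> probabilities
    return prob.squeeze().numpy()

def segment_slide(model: torch.nn.Module, slide: np.ndarray, thr: float = 0.5) -> np.ndarray:
    """Tile the slide into patches, predict each one, and fuse the predictions
    into a single slide-level binary background/foreground mask."""
    model.eval()
    h, w = slide.shape
    prob_map = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            prob_map[y:y + PATCH, x:x + PATCH] = predict_patch(
                model, slide[y:y + PATCH, x:x + PATCH])
    return prob_map > thr  # final binary mask for the whole slide
```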
One of the applications of our background segmentation method is to improve
the quality and efficiency of other algorithms that process histopathology images.
Software tools like QuPath [3] and HistoQC [8] are widely used for various tasks
such as tissue detection, annotation, classification, and quantification, and have been shown
to benefit histopathology analysis [4]. However, these tools often rely on manual or
semi-automatic methods to remove the background regions from the images, which
can be time-consuming and inconsistent. Our background segmentation method can
help streamline this process.
The source code, ground-truth segmentation masks, and model weights will be
made publicly available [9].
2.1 Dataset
The ground truth used for training is defined by manual segmentation of nine different
tissues from the ACROBAT dataset [18]. The dataset is a collection of tissue slides in a
pyramidal TIFF format with varying resolutions at each level of the pyramid. Starting
at 10×, the authors provide 7 to 9 lower resolutions per image, each downsampled by a
factor of 2 relative to the previous level. The second level, which made up most of the
training data, has an average resolution of 12146 ± 2189 pixels along the X axis and
24007 ± 4854 pixels along the Y axis. Each slide at this resolution can potentially
generate up to 4449 unique, non-overlapping patches of size 256 by 256 pixels. We
utilize a random sampling strategy: in each iteration, we sample 256 patches at random
locations. This means that over the course of training some of those patches overlap.
The final number of patches is therefore not fixed and depends on how long we train the network.
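A minimal sketch of such a random patch-sampling step is shown below, assuming OpenSlide is used to read the pyramidal TIFFs; the function name and the chosen pyramid level are illustrative only.

```python
import random
import numpy as np
import openslide

PATCH, PATCHES_PER_ITER = 256, 256

def sample_patches(path: str, level: int = 1):
    """Yield PATCHES_PER_ITER randomly located RGB patches from one pyramid level."""
    slide = openslide.OpenSlide(path)
    width, height = slide.level_dimensions[level]
    scale = slide.level_downsamples[level]           # level -> level-0 coordinate factor
    for _ in range(PATCHES_PER_ITER):
        x = random.randint(0, width - PATCH)
        y = random.randint(0, height - PATCH)
        # read_region expects the top-left corner in level-0 coordinates
        region = slide.read_region((int(x * scale), int(y * scale)), level, (PATCH, PATCH))
        yield np.asarray(region.convert("RGB"))      # drop the alpha channel
```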
The dataset includes various artifacts such as markers, scratches, out-of-focus
regions, and coverslips. It also contains two types of staining: H&E and IHC. We use
three distinct IHC dyes during training and testing. We selected this dataset because
it represents the real-world challenges of image analysis. We used eight images for
training, one image for validation, and four images, each in its two staining variants,
for qualitative evaluation without prior manual segmentations. We aimed to achieve
high-quality segmentation by using a patch-based approach for both training and
inference. This allows us to exploit the heterogeneity of the tissue structures across
different slides and to generalize better with less data (Table 1).
We used an Nvidia Tesla V100 graphics card configured with a 300 W TDP and 32 GB
of memory, hosted on the PLGrid HPC cluster Prometheus. We did not utilize the tensor
cores at the time of writing, which means the inference times in Table 2 could be
further improved.
2.2 Model
Table 3 Model architecture with each convolution kernel size, the output shape of the layer, and
the layer’s number of parameters
Layer type Kernel shape Output shape Param #
UNet – [1, 1, 256, 256] –
+ Sequential – [1, 1, 128, 128] –
+ Conv2d [4, 4] [1, 1, 128, 128] 17
+ GroupNorm – [1, 1, 128, 128] 2
+ LeakyReLU – [1, 1, 128, 128] –
+ Sequential – [1, 32, 64, 64] –
+ ResidualBlock [3, 3] [1, 32, 128, 128] 9,760
+ Conv2d [4, 4] [1, 32, 64, 64] 16,416
+ GroupNorm – [1, 32, 64, 64] 64
+ LeakyReLU – [1, 32, 64, 64] –
+ Sequential – [1, 64, 32, 32] –
+ ResidualBlock [3, 3] [1, 64, 64, 64] 57,792
+ ResidualBlock [3, 3] [1, 64, 64, 64] 78,272
+ Conv2d [4, 4] [1, 64, 32, 32] 65,600
+ GroupNorm – [1, 64, 32, 32] 128
+ LeakyReLU – [1, 64, 32, 32] –
+ Sequential – [1, 128, 16, 16] –
+ ResidualBlock [3, 3] [1, 128, 32, 32] 230,272
+ ResidualBlock [3, 3] [1, 128, 32, 32] 312,192
+ Conv2d [4, 4] [1, 128, 16, 16] 262,272
+ GroupNorm – [1, 128, 16, 16] 256
+ LeakyReLU – [1, 128, 16, 16] –
+ Sequential – [1, 128, 32, 32] –
+ ResidualBlock [3, 3] [1, 128, 16, 16] 312,192
+ ResidualBlock [3, 3] [1, 128, 16, 16] 312,192
+ ConvTranspose2d [4, 4] [1, 128, 32, 32] 262,272
+ GroupNorm – [1, 128, 32, 32] 256
+ LeakyReLU – [1, 128, 32, 32] –
+ Sequential – [1, 64, 64, 64] –
+ ResidualBlock [3, 3] [1, 64, 32, 32] 160,192
+ ConvTranspose2d [4, 4] [1, 64, 64, 64] 65,600
+ GroupNorm – [1, 64, 64, 64] 128
+ LeakyReLU – [1, 64, 64, 64] –
+ Sequential – [1, 32, 128, 128] –
+ ResidualBlock [3, 3] [1, 32, 64, 64] 40,160
+ ConvTranspose2d [4, 4] [1, 32, 128, 128] 16,416
+ GroupNorm – [1, 32, 128, 128] 64
+ LeakyReLU – [1, 32, 128, 128] –
+ Sequential – [1, 1, 256, 256] –
+ ResidualBlock [3, 3] [1, 1, 128, 128] 346
+ ConvTranspose2d [4, 4] [1, 1, 256, 256] 17
+ GroupNorm – [1, 1, 256, 256] 2
+ LeakyReLU – [1, 1, 256, 256] –
+ Sequential – [1, 1, 256, 256] –
+ Conv2d [1, 1] [1, 1, 256, 256] 2
Fig. 1 A schematic diagram of the proposed deep learning pipeline. The pipeline consists of two
stages: training and inference. Each block has marked inputs described in the legend
Fig. 2 A schematic illustration of the UNet-like model used in this study. The model consists of
an encoder-decoder architecture with short skip connections, realized within each ResidualBlock,
and long skip connections based on concatenating feature maps
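For illustration, a simplified PyTorch sketch of such a UNet-like architecture is given below. It follows the building blocks listed in Table 3 (4×4 strided convolutions for downsampling, 3×3 residual blocks, GroupNorm with LeakyReLU, 4×4 transposed convolutions for upsampling, and concatenation-based long skips); the two 3×3 convolutions with a 1×1 projection shortcut appear consistent with the parameter counts in the table, but the number of blocks, group sizes, and channel widths are simplified here and do not reproduce the table exactly.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 conv + GroupNorm layers with a 1x1 projection shortcut (short skip)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.GroupNorm(1, out_ch),   # group count of 1 is an assumption
            nn.LeakyReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.GroupNorm(1, out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.LeakyReLU()

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

def down(in_ch, out_ch):
    # residual block followed by a strided 4x4 convolution that halves H and W
    return nn.Sequential(ResidualBlock(in_ch, in_ch),
                         nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
                         nn.GroupNorm(1, out_ch), nn.LeakyReLU())

def up(in_ch, out_ch):
    # residual block followed by a transposed 4x4 convolution that doubles H and W
    return nn.Sequential(ResidualBlock(in_ch, in_ch),
                         nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
                         nn.GroupNorm(1, out_ch), nn.LeakyReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.d1, self.d2, self.d3 = down(1, 32), down(32, 64), down(64, 128)
        self.u3 = up(128, 64)
        self.u2 = up(64 + 64, 32)       # input widened by the concatenated long skip
        self.u1 = up(32 + 32, 1)
        self.head = nn.Conv2d(1, 1, 1)  # final 1x1 convolution

    def forward(self, x):               # x: [B, 1, 256, 256]
        e1 = self.d1(x)
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        d3 = self.u3(e3)
        d2 = self.u2(torch.cat([d3, e2], dim=1))   # long skip connection
        d1 = self.u1(torch.cat([d2, e1], dim=1))   # long skip connection
        return self.head(d1)                       # foreground logits, [B, 1, 256, 256]
```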
3 Results
Figures 3, 4 and 5 illustrate some examples of our method's output. We compared
the results of the upscaling and multiresolution models in Tables 1 and 2. The scores
reported in those tables are the average scores over all patches in each batch.
The results in Table 1 indicate that the multiresolution model had a similar per-
formance on the training dataset but a slightly higher performance on the valida-
tion dataset than the single-resolution model. This suggests that the multiresolution
model reduced overfitting and improved generalization. Figures 4 and 5 illustrate
some examples of image reconstruction from both models on the test set; Table 4
provides expanded descriptions.
Fig. 4 Exemplary visualization from the test dataset (IHC staining). The segmentation output is
marked in green
different resolutions and significantly reduced the inference time. Our method is a
general approach for histopathology slide analysis that does not target any specific
regions of tissues. Instead, it aims to segment the background of the slide by inverting
the segmentation of other structures, such as cells, nuclei, glands, vessels, etc.
According to Table 2, the multiresolution model achieved substantially better perfor-
mance on lower magnification levels, which is supported by comparing the visualizations
in Figs. 5 and 6. The multiresolution model also had a faster inference time than the
single-resolution model, which is beneficial for practical applications.
The second method required a larger model with a wider receptive field, but it
is still feasible to deploy even on IoT-class hardware such as the Nvidia Jetson. This
allows for deployment on modern WSI digitization hardware or integration into
most computational histopathology software.
Moreover, the first method incurred additional computational costs due to its two
interpolation steps. It also suffered from information loss and artifact
introduction when upscaling patches from lower resolutions. The difference in per-
formance between the two methods can be intuitively explained by considering that
the second method used more information from the original image and did not intro-
duce any artificial structures through patch interpolation.
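The following sketch contrasts the two inference strategies under that reading: the upscaling variant resamples the patch twice, whereas the multiresolution model consumes the patch at its native size. The segment functions and their signatures are hypothetical placeholders for the model's forward pass.

```python
import torch
import torch.nn.functional as F

def segment_upscaling(model, patch: torch.Tensor, train_size: int = 256) -> torch.Tensor:
    """First method: a patch [1, 1, h, w] from a lower magnification is resampled twice."""
    h, w = patch.shape[-2:]
    up = F.interpolate(patch, size=(train_size, train_size),
                       mode="bilinear", align_corners=False)   # interpolation step 1
    mask = torch.sigmoid(model(up))
    return F.interpolate(mask, size=(h, w), mode="nearest")    # interpolation step 2

def segment_multiresolution(model, patch: torch.Tensor) -> torch.Tensor:
    """Second method: the model was trained on all pyramid levels, so the patch
    is segmented at its native resolution without any resampling."""
    return torch.sigmoid(model(patch))
```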
Fig. 5 Exemplary visualization from the test dataset (H&E staining). The segmentation output is
marked in green
Table 4 Qualitative analysis of the model's output on the test set. Each case's number matches the
number in the original dataset. The images will be published as supplementary material in the
associated repository
Case from dataset Output commentary
4 Small artifacts from aggregation appear in blurry
regions in the H&E variant. They disappear while
going down the magnification levels. On the
3rd level, artifacts from scratches appear. The IHC
variant shows no segmentation artifacts, even
though artifacts are present in the image
5 While correctly segmenting the tissue, there is a
larger artifact area in the H&E variant. It
disappears at lower resolution levels. The IHC
variant is segmented similarly to the H&E one
8 Solid performance on both H&E and IHC
variants. A little noise in the largest IHC
sample, where a coverslip is present in the
original, though most of the coverslip is
correctly excluded by the model. The
artifact disappears at lower magnifications
14 Solid H&E segmentation. Similar performance in
IHC, although some noise is present at the
highest resolution in less visible regions of the
tissue
29 Solid H&E segmentation. The coverslip on the
IHC variant has been partly segmented
30 Solid H&E and IHC segmentation. The coverslip
on the IHC variant was correctly excluded from the segmentation
One of the limitations of our method is the sensitivity of the model to the res-
olution of the input images. Our model was trained on images with a fixed set of
magnifications, and it may not perform well on images with a very different magni-
fication level. Another limitation is that artifacts that are large or significantly different
from those in our dataset may influence the segmentation results. Particularly at
higher resolutions, such artifacts could be segmented as part of the background,
leading to inaccurate or incomplete segmentation.
In this paper, we proposed a novel method for fast and generalizable background
segmentation in histopathological images. We tackled the problem from two per-
spectives: upscaling of lower-resolution images, and training the model on all
magnification levels. We demonstrated that our method can achieve high accuracy
and robustness on various tissues with different resolution levels and staining dyes.
Fig. 6 Exemplary visualization from the test dataset (H&E staining) for the upscaling model. The
segmentation output is marked in green
Compared to other works such as [12], our method does not require adjusting any hyper-
parameters. We also showed that our deep learning model can learn to segment tissue
effectively even with a small amount of data, thanks to the patch-based approach.
Our method can be useful for preprocessing histopathological images for further
analysis and diagnosis.
Acknowledgements This work was done as a part of the IMI BigPicture project (IMI945358).
We gratefully acknowledge Poland’s high-performance computing infrastructure PLGrid (HPC
Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational
grant no. PLG/2023/016239.
References
1. Al-Kofahi, Y., Lassoued, W., Lee, W., et al.: Improved automatic detection and segmentation
of cell nuclei in histopathology images. IEEE Trans. Biomed. Eng. 57(4), 841–852 (2010)
2. Bándi, P., Balkenhol, M., van Ginneken, B., et al.: Resolution-agnostic tissue segmentation in
whole-slide histopathology images with convolutional neural networks. PeerJ 7, e8242 (2019)
3. Bankhead, P., Loughrey, M.B., Fernández, J.A., et al.: QuPath: open source software for digital
pathology image analysis. Sci. Rep. 7(1), 16878 (2017)
4. Chen, Y., Zee, J., Smith, A., et al.: Assessment of a computerized quantitative quality control
tool for whole slide images of kidney biopsies. J. Pathol. 253(3), 268–278 (2021)
5. Cui, Y., Zhang, G., Liu, Z., et al.: A deep learning algorithm for one-step contour aware nuclei
segmentation of histopathology images. Med. Biol. Eng. Comput. 57(9), 2027–2043 (2019)
6. Ehteshami Bejnordi, B., Veta, M., Johannes van Diest, P., et al.: Diagnostic assessment of
deep learning algorithms for detection of lymph node metastases in women with breast cancer.
JAMA 318(22), 2199–2210 (2017)
7. Elias, J.M., Gown, A.M., Nakamura, R.M., et al.: Special report: quality control in immuno-
histochemistry: report of a workshop sponsored by the biological stain commission. Am. J.
Clin. Pathol. 92(6), 836–843 (1989)
8. Janowczyk, A., Zuo, R., Gilmore, H., et al.: HistoQC: an open-source quality control tool for
digital pathology slides. JCO Clin. Cancer Inform. 3, 1–7 (2019)
9. Jurgas, A.: Jarartur/pcbbe23-histseg: multiresolution and multistain background segmentation
in WSIs (2023)
10. Levy, J.J., Jackson, C.R., Haudenschild, C.C., et al.: PathFlow-MixMatch for whole slide image
registration: an investigation of a segment-based scalable image registration method (2020)
11. Litjens, G., Sánchez, C.I., Timofeeva, N., et al.: Deep learning as a tool for increased accuracy
and efficiency of histopathological diagnosis. Sci. Rep. 6, 26286 (2016)
12. Muñoz-Aguirre, M., Ntasis, V.F., Rojas, S., et al.: PyHIST: a histological image segmentation
tool. PLoS Comput. Biol. 16(10), e1008349 (2020)
13. Naylor, P., Laé, M., Reyal, F., et al.: Segmentation of nuclei in histopathology images by deep
regression of the distance map. IEEE Trans. Med. Imaging 38(2), 448–459 (2019)
14. Oskal, K.R.J., Risdal, M., Janssen, E.A.M., et al.: A U-net based approach to epidermal tissue
segmentation in whole slide histopathological images. SN Appl. Sci. 1(7), 672 (2019)
15. Shelhamer, E., Long, J., Darrell, T.: Fully convolutional networks for semantic segmentation.
IEEE Trans. Pattern Anal. Mach. Intell. 39(4), 640–651 (2017)
16. Tellez, D., Litjens, G., Bándi, P., et al.: Quantifying the effects of data augmentation and stain
color normalization in convolutional neural networks for computational pathology. Med. Image
Anal. 58, 101544 (2019)
17. Tsutsumi, Y.: Pitfalls and caveats in applying chromogenic immunostaining to histopatholog-
ical diagnosis. Cells 10(6), 1501 (2021)
18. Weitz, P., Valkonen, M., Solorzano, L., et al.: ACROBAT—A multi-stain breast cancer histolog-
ical whole-slide-image data set from routine diagnostics for computational pathology (2022).
arXiv:2211.13621
Impact of Visual Image Quality on
Lymphocyte Detection Using YOLOv5
and RetinaNet Algorithms
Abstract Lymphocytes, a type of leukocyte, play a vital role in the immune system.
The precise quantification, spatial arrangement and phenotypic characterization of
lymphocytes within haematological or histopathological images can serve as a diag-
nostic indicator of a particular lesion. Artificial neural networks, employed for the
detection of lymphocytes, can not only support the work of histopathol-
ogists but also enable better disease monitoring and faster analysis of the general
immune system condition. In this study, the impact of visual quality on the perfor-
mance of state-of-the-art algorithms for detecting lymphocytes in medical images
was examined. Two datasets were used, and image modifications such as blur, sharp-
ness, brightness, and contrast were applied to assess the performance of YOLOv5
and RetinaNet models. The study revealed that the visual quality of images exerts
a substantial impact on the effectiveness of the deep learning methods in detect-
ing lymphocytes accurately. These findings have significant implications for deep
learning approaches used in digital pathology.
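For illustration, a minimal Pillow-based sketch of the kinds of image-quality modifications named in the abstract (blur, sharpness, brightness, and contrast) is shown below; the function name and the enhancement factors are arbitrary and are not the levels used in the study.

```python
from PIL import Image, ImageEnhance, ImageFilter

def quality_variants(path: str) -> dict:
    """Return modified copies of an image: blurred, sharpened, brightened,
    and contrast-reduced versions."""
    img = Image.open(path).convert("RGB")
    return {
        "blur": img.filter(ImageFilter.GaussianBlur(radius=2)),
        "sharpness": ImageEnhance.Sharpness(img).enhance(2.0),    # factor > 1 sharpens
        "brightness": ImageEnhance.Brightness(img).enhance(1.5),  # factor > 1 brightens
        "contrast": ImageEnhance.Contrast(img).enhance(0.5),      # factor < 1 lowers contrast
    }
```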
1 Introduction
Histopathological images, which are images of tissue samples taken from a patient’s
body, are an invaluable tool in the diagnosis and treatment of diseases. Accurate
detection of lymphocytes, which are a type of immune cells, in these images is