Assessment of deep learning technique for fully au
Introduction: This study aimed to assess the precision of an open-source, clinician-trained, and user-friendly convolutional neural network-based model for automatically segmenting the mandible. Methods: A total of 55 cone-beam computed tomography scans that met the inclusion criteria were collected and divided into test and training groups. The MONAI (Medical Open Network for Artificial Intelligence) Label active learning tool extension was used to train the automatic model. To assess the model's performance, 15 cone-beam computed tomography scans from the test group were inputted into the model. The ground truth was obtained from manual segmentation data. Metrics including the Dice similarity coefficient, 95% Hausdorff distance, precision, recall, and segmentation times were calculated. In addition, surface deviations and volumetric differences between the automated and manual segmentation results were analyzed. Results: The automated model showed a high level of similarity to the manual segmentation results, with a mean Dice similarity coefficient of 0.926 ± 0.014. The Hausdorff distance was 1.358 ± 0.466 mm, whereas the mean recall and precision values were 0.941 ± 0.028 and 0.941 ± 0.022, respectively. There were no statistically significant differences in the arithmetic mean of the surface deviation for the entire mandible and 11 different anatomic regions. In terms of volumetric comparisons, the difference between the 2 groups was 1.62 mm³, which was not statistically significant. Conclusions: The automated model was found to be suitable for clinical use, demonstrating a high degree of agreement with the reference manual method. Clinicians can use open-source software to develop custom automated segmentation models tailored to their specific needs. (Am J Orthod Dentofacial Orthop 2025;167:242-9)
The application of artificial intelligence and machine learning has significantly advanced the capabilities of computer systems in mimicking human cognitive functions such as problem-solving and decision-making. Among these technologies, deep learning, and specifically convolutional neural networks (CNNs), have proven to be particularly effective in image analysis tasks. CNNs, a subset of deep learning, are specialized neural network architectures commonly employed in image analysis.1 In the field of dentistry, CNN technology is applied to identify pathologic structures such as cysts and tumors in the jaws, diagnose dental caries, and classify teeth.2-4 In recent years, it has also been used for the automatic segmentation of craniofacial structures and teeth based on cone-beam computed tomography (CBCT) scans.5,6

The segmentation of CBCT scans is a crucial step in the digital clinical workflow. In applications that require precise planning, such as dental implant surgeries and 3-dimensional (3D) virtual surgical planning, the region of interest is isolated and extracted from the surrounding hard and soft tissues through 3D segmentation processes.7-9 The segmentation process can be done in 3 ways: manual, automatic, and semiautomatic. The most commonly used traditional method involves the operator manually delineating the region of interest on each slice. This method is time-consuming and may produce inaccurate results if performed by an operator with limited experience. In contrast, segmentation with

a Department of Orthodontics, Faculty of Dentistry, Muğla Sıtkı Koçman University, Muğla, Turkey.
b Department of Orthodontics, Gulhane Faculty of Dentistry, University of Health Sciences, Ankara, Turkey.
c Department of Orthodontics, Faculty of Dentistry, Çanakkale Onsekiz Mart University, Çanakkale, Turkey.
All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and none were reported.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Address correspondence to: Ebru Yurdakurban, Department of Orthodontics, Faculty of Dentistry, Muğla Sıtkı Koçman University, Muğla, Turkey, 48000; e-mail, [email protected].
Submitted, June 2024; revised, August 2024; accepted, September 2024.
0889-5406/$36.00
© 2024 by the American Association of Orthodontists. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
https://doi.org/10.1016/j.ajodo.2024.09.006
Yurdakurban, Süküt, and Duran 243
CNN-based models eliminates the potential for operator error and speeds up the clinical workflow by providing quick results.10,11

Accurate manual segmentation of the mandible can be challenging and time-consuming because of the complex anatomy of the region and image artifacts. The presence of metal-containing amalgam fillings, implant materials, or orthodontic brackets further complicates the delineation of anatomic boundaries. Moreover, the variability in density within anatomic structures with different bone densities, such as the coronoid process and corpus, and the low-contrast imaging of the thin condylar process reduce the precision of segmentation.12-14 In recent years, automatic segmentation models have been developed to overcome the limitations of manual segmentation. Verhelst et al15 evaluated the effectiveness of a layered deep learning-based mandibular segmentation model using full-skull CBCT scans of orthognathic surgery patients, regardless of whether additional user refinement was applied. The results indicated that the algorithms of the 3D U-Net architecture reduced operator error and achieved excellent accuracy. Lo Giudice et al16 developed a model that automatically segmented the mandible using the CNN method and compared its effectiveness with the manual method. The researchers found that the CNN-based model performed segmentation as accurately as an experienced image reader. Both studies highlighted that the model was significantly faster than traditional methods. Pankert et al17 reported that the augmented 2-step CNN for virtual surgical planning provided highly accurate, fast, objective, and reproducible results in mandibular segmentation. Recently, researchers have developed CNN-based tools for multiclass segmentation of dentomaxillofacial computed tomography and CBCT images. Dot et al18 introduced a new tool that allows for fully automated segmentation of 5 dentofacial structures. The researchers stated that this publicly available tool provides accurate results, as evidenced by its high Dice similarity coefficient (DSC) and normalized surface distance. Because of the advantages offered by CNN-based models in the segmentation process, researchers are currently focusing on the development of new automatic models, and their use is becoming widespread.

CNN-based models for automatic segmentation provide numerous benefits. However, creating these models demands a high level of technical expertise and deep learning knowledge. In addition, most commercial models are not open-source and require a license fee for extended usage. Taking these factors into account, this study aimed to assess the precision of an open-source CNN model, trained by clinicians and specifically designed for the automatic segmentation of the mandible, to meet clinical demands.

MATERIAL AND METHODS

The study was conducted using retrospective archival records. Before the study, ethical approval was obtained from the University of Health Sciences Gulhane Scientific Research Ethics Committee (decision no. 2024-62). All study protocols were conducted in compliance with the Helsinki Declaration.

The CBCT scans were selected from the digital imaging and communications in medicine archive of patients admitted to the Department of Orthodontics for orthognathic surgical planning between 2013 and 2022. The CBCT images were acquired by an expert using the same device (HiRes 3D-Plus dental; LargeV, Beijing, China), featuring a field of view measuring 200 × 170 mm, a voltage of 60 kV, a current of 6 mA, and an isotropic voxel size of 0.3 mm³. The patients assumed a resting position throughout the scanning process.

The inclusion criteria were as follows: absence of any image artifacts, a leveled and aligned mandibular dental arch, completed permanent dentition, absence of impacted teeth, no prior operations involving the mandible, and no pathologic formations (eg, tumors, cysts, or craniofacial deformities). Of the 185 CBCT images that met these criteria, 55 were selected for inclusion in the study.

The authors trained a new CNN-based segmentation model to evaluate the automatic segmentation method. A total of 55 CBCT datasets were randomly allocated as follows: 35 for model training, 5 for validation, and 15 for testing. The automatic model was trained with the Medical Open Network for Artificial Intelligence (MONAI) Label active learning tool extension available in the 3D Slicer software (version 4.12; www.slicer.org; Harvard). The deep learning framework was run using Python (version 3.9).

The 35 CBCT scans designated for training were transferred to the software. Two labels were created to distinguish the mandible from other structures, such as the maxilla and cranial base, within the wide field of view. The label assigned to the mandible for segmentation was defined as foreground, whereas the label for the remaining anatomic structures was defined as background. The Hounsfield unit threshold was set at 200-2000 to mask the cortical and cancellous bones. The outer borders of the cortical bone were used as a reference to delineate the mandible. The mandible was manually labeled by 2 operators using tools such as level tracing, color, and erase in the coronal, axial, and sagittal slices. The lower edge of the corpus mandible,
American Journal of Orthodontics and Dentofacial Orthopedics February 2025 Vol 167 Issue 2
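The bone-window masking step described in the Methods can be illustrated with a minimal NumPy sketch. This is not the authors' 3D Slicer pipeline, and it assumes HU-calibrated voxel intensities (CBCT gray values are only approximately HU); `bone_mask` is a hypothetical helper name:

```python
import numpy as np

def bone_mask(volume_hu: np.ndarray, lo: float = 200.0, hi: float = 2000.0) -> np.ndarray:
    """Binary mask of voxels whose intensity falls in the 200-2000 bone window."""
    return (volume_hu >= lo) & (volume_hu <= hi)

# Toy 2x2x2 "scan": only the values inside the 200-2000 window survive.
scan = np.array([[[-1000.0, 150.0], [250.0, 1800.0]],
                 [[2100.0, 500.0], [0.0, 300.0]]])
mask = bone_mask(scan)
print(int(mask.sum()))  # 4 voxels fall inside the window
```

Such a mask only roughly isolates bone; as in the study, the final mandible label still needs manual delineation of the cortical borders.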
Fig 1. Preparation of the mandibular model for digital superimposition: A, Mandible obtained by fully automatic segmentation; B, Creation of the horizontal cutting plane and extraction of the teeth and surrounding alveolar bone; C, Determination of the landmarks; D, Separation of the mandible into 11 different anatomic regions.
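The surface-deviation comparison for which the models in Fig 1 are prepared can be approximated by nearest-neighbor distances between the vertex clouds of the two meshes. This is a simplified SciPy sketch, not the Meshmixer analysis used in the study; `mean_surface_deviation` is a hypothetical helper:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_deviation(ref_pts: np.ndarray, test_pts: np.ndarray) -> float:
    """Mean nearest-neighbor distance (mm) from test vertices to reference vertices."""
    d, _ = cKDTree(ref_pts).query(test_pts)  # distance to closest reference point
    return float(d.mean())

ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
test = ref + np.array([0.0, 0.1, 0.0])  # shifted copy: every vertex is 0.1 mm away
print(round(mean_surface_deviation(ref, test), 3))  # 0.1
```

A vertex-to-vertex distance slightly overestimates the true point-to-surface deviation; dedicated mesh tools sample the distance to the nearest triangle instead.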
as well as the anterior, posterior, and superior edges of the ramus, mandibular angle, coronoid process, condylar process, condyle neck, and mental foramen borders, were marked with the level tracing tool.

The validation set was used to optimize and evaluate the performance of the model at regular intervals during the training process. The model, trained locally using CBCT scans obtained for orthognathic surgery planning, is not publicly available. To assess the effectiveness of the trained model, 15 CBCT datasets were transferred to it, and the automatic segmentation process was completed.

The data obtained from the manual segmentation method were considered the ground truth (GT) and used as a reference to assess the accuracy of the automatic segmentation model. The 15 CBCT datasets reserved for testing were imported into the 3D Slicer software. The manual segmentation was performed based on the consensus decisions of 2 operators (E.Y. and Y.S.). When a consensus could not be reached, a third operator (G.S.D.) was consulted to resolve any discrepancies.

The performance of the automatic model was assessed using several metrics: DSC, 95% Hausdorff distance, precision, and recall. These metrics were computed from the true positive, false positive, true negative, and false negative pixel counts. Specifically, a true positive is a foreground pixel correctly labeled as foreground, a false positive is a background pixel incorrectly labeled as foreground, a true negative is a background pixel correctly labeled as background, and a false negative is a foreground pixel that was missed and labeled as background. In addition, the volumetric differences between the segmentation data obtained from the manual and automated methods were calculated to assess accuracy further.

The segmentation data were exported from the software in the standard tessellation language file format to analyze contour differences between the 3D segmentation models obtained through the automatic and manual methods. The reference and test models were first imported into 3D modeling software (Meshmixer, version 3.5; Autodesk, San Rafael, Calif). To ensure accuracy in the surface superimposition, image scattering and distortion caused by metal fillings and brackets on the mandibular teeth were removed, eliminating these artifacts (Fig 1, A). A horizontal cutting plane was established by marking the cusp tips of the
ranging 0-1.22. The model achieved an average recall value of 0.941 ± 0.028. Similarly, the model achieved a high precision value close to 1, with an average of 0.941 ± 0.022 (Table II).

The arithmetic mean of surface deviations calculated for the entire mandible and for 11 different anatomic regions was not statistically significantly different (P >0.05) (Table III). The lowest mean deviation was observed in the symphysis and angulus regions (0.11 ± 0.08 mm), whereas the highest deviation (maximum, 0.56 mm) was found in the right coronoid process (Table III).

A statistically significant difference was observed between the segmentation times for the automatic and manual methods (P <0.0001) (Table IV). The average segmentation time for the manual method was approximately 1 hour, whereas for the automatic method, it was approximately 1 minute. In terms of volumetric comparisons, the mean mandibular volume was 59.95 ± 15.67 mm³ for the GT and 61.56 ± 15.31 mm³ for the automatic model. The difference between the 2 groups was 1.62 ± 3.14 mm³ and was not statistically significant (P = 0.077). The ICC analyses, which assessed the reliability of measurements made by the consensus decisions of the 2 observers, demonstrated high reliability, with ICC values ranging 0.89-0.95.

Table II. Descriptive statistics of evaluation metrics

Metric      Mean ± SD        Min-Max       25th-75th percentiles
Dice        0.926 ± 0.014    0.883-0.950   0.925-0.932
Hausdorff   1.358 ± 0.466    1.012-2.480   1.067-1.364
Recall      0.941 ± 0.028    0.867-0.972   0.934-0.960
Precision   0.914 ± 0.022    0.889-0.967   0.899-0.919

SD, standard deviation; Min, minimum; Max, maximum.

Table III. Arithmetic surface deviation analysis of the different anatomic regions

Anatomic region           Minimum (mm)   Maximum (mm)   Mean ± SD (mm)
Angulus, right            0              0.29           0.11 ± 0.08
Angulus, left             0.01           0.29           0.11 ± 0.08
Coronoid process, right   0.05           0.56           0.22 ± 0.12
Coronoid process, left    0.01           0.37           0.21 ± 0.11
Ramus, right              0.07           0.38           0.25 ± 0.10
Ramus, left               0.03           0.25           0.15 ± 0.15
Condyle, right            0.03           0.45           0.17 ± 0.12
Condyle, left             0.02           0.32           0.13 ± 0.09
Corpus, right             0.01           0.32           0.19 ± 0.09
Corpus, left              0.01           0.22           0.12 ± 0.07
Symphysis                 0.01           0.26           0.11 ± 0.08
Total                     0.05           0.23           0.16 ± 0.06

Note. Arithmetic means of surface deviations of the mandible and 11 different anatomic regions were not statistically significantly different (P >0.05). SD, standard deviation.
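The volumetric comparison reported above reduces to counting foreground voxels and multiplying by the voxel volume. This is an illustrative sketch assuming the study's 0.3-mm isotropic voxels, not the software's own volume computation; `segmented_volume` is a hypothetical helper:

```python
import numpy as np

VOXEL_MM = 0.3  # isotropic voxel edge length from the scan protocol

def segmented_volume(mask: np.ndarray, voxel_mm: float = VOXEL_MM) -> float:
    """Volume of a binary segmentation in mm^3: voxel count x voxel volume."""
    return float(mask.sum()) * voxel_mm ** 3

mask = np.ones((10, 10, 10), dtype=bool)   # 1000 foreground voxels
print(round(segmented_volume(mask), 3))    # 27.0 (1000 x 0.3^3 mm^3)
```

The volumetric difference between two segmentations is then simply the difference of the two results.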
Table IV. Comparisons of segmentation times between manual and fully automated methods

Method      Minimum     Maximum     Mean ± SD          25th-75th percentiles
Manual      37:01 min   61:03 min   47:25 ± 8:54 min   41:35-53:54 min
Automated   0:56 min    1:40 min    1:17 ± 0:17 min    1:20-1:29 min

Note. A statistically significant difference was observed between the segmentation times for the automatic and manual methods (P <0.0001). SD, standard deviation.
The application of deep learning in developing automated segmentation models offers the potential for substantial reductions in the time required for manual segmentation. In our study, the average time for manual segmentation of the mandible was approximately 47 minutes, whereas the proposed automated method reduced this time to just 1 minute. Jindanil et al12 developed a novel artificial intelligence tool for automatic segmentation of the mandibular incisive canal on CBCT scans, which proved to be 284 times faster than manual segmentation. Dot et al18 reported that their open-source tool enabled fully automatic segmentation of 5 different dentofacial structures in an average of 178 ± 100 seconds. Automated segmentation methods not only significantly cut down the time and effort needed for digital workflows but also reduce workforce requirements and the need for specialized personnel. Moreover, these methods enhance segmentation accuracy by eliminating operator errors, particularly those made by less experienced clinicians. The integration of automated segmentation into clinical practice thus represents a significant advancement in efficiency and precision.

The development of new CNN architectures and algorithms offers significant advantages over existing methods, but their development often requires extensive technical expertise and specialized equipment. However, the model training tool used in our study does not require advanced technical knowledge and is accessible to less experienced practitioners. This CNN-based method provides an open-source, cost-free model training tool, eliminating the expenses associated with software licensing fees. In addition, the mandibular morphology of patients from different ethnic backgrounds varies in both dimensions and shape.29-31 These anatomic differences may affect the accuracy of automatic segmentation models. Therefore, it is beneficial for clinicians to develop and train their own models based on their specific clinical needs, using datasets that reflect the diverse anatomic characteristics of their patient population. This approach ensures that the segmentation models are better aligned with the unique variations in mandibular morphology encountered in their practice.

This study has several limitations that should be considered. First, the dataset used for model training was acquired using a single imaging device with fixed scanning parameters. This uniformity may introduce bias, as the model's performance could vary when applied to CBCT images obtained with different devices or scanning parameters, potentially affecting accuracy and consistency. Second, the automatic segmentation results do not account for hollow anatomic structures, such as the mandibular canal and foramen linguale, because the model was trained primarily on the external borders of the mandible. For evaluations that require detailed analysis of these radiolucent internal structures, adjustments to the model training would be necessary to incorporate recognition of these features. In addition, the study did not include patient records with pathologic conditions such as cysts, tumors, fractures, or significant anatomic variations. As a result, the generalizability of the model's segmentation results is limited. Future research could enhance the model's applicability by incorporating datasets that include a broader range of anatomic variations and pathologic conditions, thereby improving the model's robustness and suitability for diverse clinical scenarios.

CONCLUSIONS

In this study, we evaluated the performance of a CNN-based automatic mandibular segmentation model trained by clinicians using open-source software. The model demonstrated high accuracy and reliability, showing strong agreement with the reference manual method. Clinicians can leverage this open-source software to develop custom automated segmentation models tailored to their specific needs. This approach enhances digital clinical workflows by offering fast and precise segmentation results, thereby improving both efficiency and accuracy in clinical practice.

AUTHOR CREDIT STATEMENT

Ebru Yurdakurban contributed to conceptualization, data curation, formal analysis, investigation, methodology, resources, visualization, original draft preparation, and manuscript review and editing; Yağızalp Süküt contributed to conceptualization, data curation, formal analysis, investigation, methodology, resources, software, visualization, and original draft preparation; and Gökhan Serhat Duran contributed to conceptualization, project administration, supervision, validation, and manuscript review and editing.

REFERENCES

1. Gupta J, Pathak S, Kumar G. Deep learning (CNN) and transfer learning: a review. J Phys Conf Ser 2022;2273:012029.
2. Lee S, Oh SI, Jo J, Kang S, Shin Y, Park JW. Deep learning for early dental caries detection in bitewing radiographs. Sci Rep 2021;11:16807.
3. Tian S, Dai N, Zhang B, Yuan F, Yu Q, Cheng X. Automatic classification and segmentation of teeth on 3D dental model using hierarchical deep learning networks. IEEE Access 2019;7:84817-28.
4. Yu D, Hu J, Feng Z, Song M, Zhu H. Deep learning based diagnosis for cysts and tumors of jaw with massive healthy samples. Sci Rep 2022;12:1855.
5. Gillot M, Baquero B, Le C, Deleat-Besson R, Bianchi J, Ruellas A, et al. Automatic multi-anatomical skull structure segmentation of cone-beam computed tomography scans using 3D UNETR. PLoS One 2022;17:e0275033.
6. Jang TJ, Kim KC, Cho HC, Seo JK. A fully automated method for 3D individual tooth identification and segmentation in dental CBCT. IEEE Trans Pattern Anal Mach Intell 2022;44:6562-8.
7. Pohlenz P, Blessmann M, Blake F, Gbara A, Schmelzle R, Heiland M. Major mandibular surgical procedures as an indication for intraoperative imaging. J Oral Maxillofac Surg 2008;66:324-9.
8. Bayrakdar SK, Orhan K, Bayrakdar IS, Bilgir E, Ezhov M, Gusarev M, et al. A deep learning approach for dental implant planning in cone-beam computed tomography images. BMC Med Imaging 2021;21:86.
9. Deng HH, Liu Q, Chen A, Kuang T, Yuan P, Gateno J, et al. Clinical feasibility of deep learning-based automatic head CBCT image segmentation and landmark detection in computer-aided surgical simulation for orthognathic surgery. Int J Oral Maxillofac Surg 2023;52:793-800.
10. Vaitiekunas M, Jegelevicius D, Sakalauskas A, Grybauskas S. Automatic method for bone segmentation in cone beam computed tomography data set. Appl Sci 2019;10:236.
11. Lo Giudice A, Ronsivalle V, Grippaudo C, Lucchese A, Muraglie S, Lagravere MO, et al. One step before 3D printing-evaluation of imaging software accuracy for 3-dimensional analysis of the mandible: a comparative study using a surface-to-surface matching technique. Materials (Basel) 2020;13:2798.
12. Jindanil T, Marinho-Vieira LE, de-Azevedo-Vaz SL, Jacobs R. A unique artificial intelligence-based tool for automated CBCT segmentation of mandibular incisive canal. Dentomaxillofac Radiol 2023;52:20230321.
13. Wang L, Chen KC, Gao Y, Shi F, Liao S, Li G, et al. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization. Med Phys 2014;41:043503.
14. Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, et al. Mandible segmentation of dental CBCT scans affected by metal artifacts using coarse-to-fine learning model. J Pers Med 2021;11:560.
15. Verhelst P-J, Smolders A, Beznik T, Meewis J, Vandemeulebroucke A, Shaheen E, et al. Layered deep learning for automatic mandibular segmentation in cone-beam computed tomography. J Dent 2021;114:103786.
16. Lo Giudice A, Ronsivalle V, Spampinato C, Leonardi R. Fully automatic segmentation of the mandible based on convolutional neural networks (CNNs). Orthod Craniofac Res 2021;24:100-7.
17. Pankert T, Lee H, Peters F, Hölzle F, Modabber A, Raith S. Mandible segmentation from CT data for virtual surgical planning using an augmented two-stepped convolutional neural network. Int J Comput Assist Radiol Surg 2023;18:1479-88.
18. Dot G, Chaurasia A, Dubois G, Charles S, Haghighat S, Azimian S, et al. DentalSegmentator: robust deep learning-based CT and CBCT image segmentation. J Dent 2024;147:105130.
19. Tao B, Yu X, Wang W, Wang H, Chen X, Wang F, et al. A deep learning-based automatic segmentation of zygomatic bones from cone-beam computed tomography images: a proof of concept. J Dent 2023;135:104582.
20. Shamir RR, Duchin Y, Kim J, Sapiro G, Harel N. Continuous Dice coefficient: a method for evaluating probabilistic segmentations. arXiv 2019;arXiv:1906.11031.
21. Zhao C, Shi W, Deng Y. A new Hausdorff distance for image matching. Pattern Recognit Lett 2005;26:581-6.
22. Taha AA, Hanbury A. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Med Imaging 2015;15:29.
23. Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, et al. Automatic segmentation of mandible from conventional methods to deep learning-a review. J Pers Med 2021;11:629.
24. Egger J, Pfarrkirchner B, Gsaxner C, Lindner L, Schmalstieg D, Wallner J. Fully convolutional mandible segmentation on a valid ground-truth dataset. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:656-60.
25. Yan M, Guo J, Tian W, Yi Z. Symmetric convolutional neural network for mandible segmentation. Knowl Based Syst 2018;159:63-71.
26. Vinayahalingam S, Berends B, Baan F, Moin DA, van Luijn R, Berge S, et al. Deep learning for automated segmentation of the temporomandibular joint. J Dent 2023;132:104475.
27. Hunter A, Kalathingal S. Diagnostic imaging for temporomandibular disorders and orofacial pain. Dent Clin North Am 2013;57:405-18.
28. Brosset S, Dumont M, Bianchi J, Ruellas A, Cevidanes L, Yatabe M, et al. 3D auto-segmentation of mandibular condyles. Annu Int Conf IEEE Eng Med Biol Soc 2020;2020:1270-3.
29. Puişoru M, Forna N, Fatu AM, Fatu R, Fatu C. Analysis of mandibular variability in humans of different geographic areas. Ann Anat 2006;188:547-54.
30. Bee MT, Rabban M, Sethi H, Tran T, Baker C, Forbes B. Variability in the location of the mandibular foramen in African-American and Caucasian populations of male and female skulls. FASEB J 2008;22:771-3.
31. Buck TJ, Vidarsdottir US. A proposed method for the identification of race in sub-adult skeletons: a geometric morphometric analysis of mandibular morphology. J Forensic Sci 2004;49:1159-64.