
Classification of Dental Radiographs Using Deep Learning.

Jose E Cejudo1, Akhilanand Chaurasia2,3, Ben Feldberg1, Joachim Krois1,2, Falk Schwendicke1,2.   

Abstract

OBJECTIVES: To retrospectively assess radiographic data and to prospectively classify radiographs as panoramic, bitewing, periapical, or cephalometric images, we compared three deep learning architectures for their classification performance.
METHODS: Our dataset consisted of 31,288 panoramic, 43,598 periapical, 14,326 bitewing, and 1176 cephalometric radiographs from two centers (Berlin/Germany; Lucknow/India). For a subset of images L (32,381 images), image classifications were available and manually validated by an expert. The remaining subset of images U was iteratively annotated using active learning, with ResNet-34 being trained on L, least-confidence informative sampling being performed on U, and the most uncertain image classifications from U being reviewed by a human expert and iteratively used for re-training. We then employed a baseline convolutional neural network (CNN), a residual network (another ResNet-34, pretrained on ImageNet), and a capsule network (CapsNet) for classification. Early stopping was used to prevent overfitting. Evaluation of the model performances followed stratified k-fold cross-validation. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to provide visualizations of the weighted activation maps.
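The least-confidence sampling step described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the model has already produced class probabilities for the unlabeled pool U, and simply ranks samples so that the least confident predictions are sent to the human expert first.

```python
import numpy as np

def least_confidence_ranking(probs):
    """Rank unlabeled samples by least-confidence uncertainty.

    probs: (n_samples, n_classes) array of predicted class probabilities.
    Returns sample indices ordered from most to least uncertain,
    i.e. smallest top-class probability first.
    """
    top_prob = probs.max(axis=1)      # model's confidence per sample
    uncertainty = 1.0 - top_prob      # least-confidence score
    return np.argsort(-uncertainty)   # most uncertain first

# Toy example: three "radiographs", four image classes.
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],  # confident -> reviewed last
    [0.40, 0.35, 0.15, 0.10],  # ambiguous -> reviewed first
    [0.70, 0.20, 0.05, 0.05],
])
order = least_confidence_ranking(probs)
# order[0] == 1: the most ambiguous prediction goes to expert review
```

In the workflow described in the abstract, the top-ranked images would be labeled by the expert and appended to L before re-training.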
RESULTS: All three models showed high accuracy (>98%), with significantly higher accuracy, F1-score, precision, and sensitivity of ResNet than baseline CNN and CapsNet (p < 0.05). Specificity was not significantly different. ResNet achieved the best performance, with small variance and the fastest convergence. Misclassification was most common between bitewings and periapicals. For bitewings, model activation was most notable in the inter-arch space; for periapicals, interdentally; for panoramics, on bony structures of the maxilla and mandible; and for cephalometrics, on the viscerocranium.
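The Grad-CAM activation maps referred to above follow a simple recipe: weight each feature map of the last convolutional layer by the global-average-pooled gradient of the target class score, sum over channels, and apply a ReLU. A minimal NumPy sketch of that recipe (assuming the activations and gradients have already been extracted from the network) might look like this:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM heatmap.

    activations: (C, H, W) feature maps from the last conv layer.
    gradients:   (C, H, W) gradients of the class score w.r.t. those maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP of gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize for visualization
    return cam
```

Upsampled to the input resolution and overlaid on the radiograph, such a heatmap highlights the regions (e.g. the inter-arch space in bitewings) that drove the classification.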
CONCLUSIONS: Regardless of the models, high classification accuracies were achieved. Image features considered for classification were consistent with expert reasoning.


Keywords:  artificial intelligence; classification; deep learning; dental; machine learning; radiographs; teeth

Year:  2021        PMID: 33916800     DOI: 10.3390/jcm10071496

Source DB:  PubMed          Journal:  J Clin Med        ISSN: 2077-0383            Impact factor:   4.241


  3 in total

1.  Artificial Intelligence for Classifying and Archiving Orthodontic Images.

Authors:  Shihao Li; Zizhao Guo; Jiao Lin; Sancong Ying
Journal:  Biomed Res Int       Date:  2022-01-27       Impact factor: 3.411

2.  Benchmarking Deep Learning Models for Tooth Structure Segmentation.

Authors:  L Schneider; L Arsiwala-Scheppach; J Krois; H Meyer-Lueckel; K K Bressem; S M Niehues; F Schwendicke
Journal:  J Dent Res       Date:  2022-06-09       Impact factor: 8.924

3.  Robust Estimation of the Chronological Age of Children and Adolescents Using Tooth Geometry Indicators and POD-GP.

Authors:  Katarzyna Zaborowicz; Tomasz Garbowski; Barbara Biedziak; Maciej Zaborowicz
Journal:  Int J Environ Res Public Health       Date:  2022-03-03       Impact factor: 3.390

