Shinpei Matsuda, Takashi Miyamoto, Hitoshi Yoshimura, Tatsuhito Hasegawa.
Abstract
Forensic dental examination has played an important role in personal identification (PI). However, PI has essentially been based on traditional visual comparisons of ante- and postmortem dental records and radiographs, and there is no globally accepted PI method based on digital technology. Although many effective image recognition models have been developed, they have been underutilized in forensic odontology. The aim of this study was to verify the usefulness of PI with paired orthopantomographs obtained over relatively short intervals using convolutional neural network (CNN) technologies. Thirty pairs of orthopantomographs obtained on different days were analyzed in terms of the accuracy of dental PI based on six well-known CNN architectures: VGG16, ResNet50, Inception-v3, InceptionResNet-v2, Xception, and MobileNet-v2. Each model was trained and tested using paired orthopantomographs, and both pretraining and fine-tuning transfer learning methods were validated. Higher validation accuracy was achieved with fine-tuning than with pretraining, and with fine-tuning every architecture reached a detection accuracy of 80.0% or more. The VGG16 model achieved the highest accuracy (100.0%) with both pretraining and fine-tuning. This study demonstrated the usefulness of CNNs for PI using small numbers of orthopantomographic images, and it also showed that VGG16 was the most useful of the six tested CNN architectures.
Year: 2020 PMID: 32782269 PMCID: PMC7419525 DOI: 10.1038/s41598-020-70474-4
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
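The abstract names two transfer-learning regimes. On the usual reading of these terms ("pretraining" = the ImageNet-trained convolutional base is frozen and only a new classification head is trained; "fine-tuning" = the base weights are also updated), a minimal Keras sketch could look like the following. The 30-way softmax (one class per participant), the 224×224 input size, and the optimizer settings are assumptions, not details taken from the paper.

```python
# Minimal sketch of the two transfer-learning regimes named in the
# abstract, assuming their usual meanings: "pretraining" = frozen
# ImageNet base + new head; "fine-tuning" = base weights also updated.
# The 30-way output, input size, and optimizer are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 30             # one class per participant (assumption)
INPUT_SHAPE = (224, 224, 3)  # VGG16's default ImageNet input size

def build_model(fine_tune: bool) -> tf.keras.Model:
    base = VGG16(weights="imagenet", include_top=False, input_shape=INPUT_SHAPE)
    base.trainable = fine_tune  # False: train the head only ("pretraining")
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(
        # smaller learning rate when updating pretrained weights
        optimizer=tf.keras.optimizers.Adam(1e-4 if fine_tune else 1e-3),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

The validation table below is consistent with this split: with a frozen base, five of the six architectures stay below 30% (chance is 1/30 ≈ 3.3%), while fine-tuning lifts all six to 80% or more.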
Characteristics of participants in the study.
| No | Gender | Age | Tooth extraction sitesᵃ | Number of tooth extractions | Number of teeth before extraction | Number of teeth after extraction | Interval between initial and final panoramic radiography (days) |
|---|---|---|---|---|---|---|---|
| 1 | F | 33 | 38 | 1 | 28 | 27 | 34 |
| 2 | F | 29 | 38, 48 | 2 | 30 | 28 | 49 |
| 3 | M | 28 | 38, 48 | 2 | 30 | 28 | 20 |
| 4 | F | 32 | 18, 28, 38, 48 | 4 | 32 | 28 | 309 |
| 5 | F | 25 | 18, 28, 38, 48 | 4 | 32 | 28 | 59 |
| 6 | F | 29 | 38, 48 | 2 | 30 | 28 | 84 |
| 7 | F | 21 | 38, 48 | 2 | 32 | 30 | 14 |
| 8 | M | 18 | 38 | 1 | 32 | 31 | 5 |
| 9 | M | 24 | 38, 48 | 2 | 30 | 28 | 35 |
| 10 | F | 21 | 28 | 1 | 31 | 30 | 119 |
| 11 | F | 33 | Not applicable | 0 | 27 | 27 | 199 |
| 12 | F | 33 | 38, 48 | 2 | 29 | 27 | 41 |
| 13 | M | 15 | 38, 48 | 2 | 32 | 30 | 78 |
| 14 | F | 36 | 28, 38, 48 | 3 | 31 | 28 | 49 |
| 15 | M | 27 | 18, 48 | 2 | 31 | 29 | 35 |
| 16 | M | 25 | 28, 38, 48 | 3 | 31 | 28 | 89 |
| 17 | F | 63 | 48 | 1 | 13 | 12 | 32 |
| 18 | F | 20 | 18, 28, 38, 48 | 4 | 30 | 26 | 149 |
| 19 | F | 28 | 38 | 1 | 31 | 30 | 21 |
| 20 | F | 24 | 38, 48 | 2 | 32 | 30 | 21 |
| 21 | M | 86 | 38 | 1 | 1 | 0 | 188 |
| 22 | M | 29 | 38 | 1 | 32 | 31 | 7 |
| 23 | M | 27 | 48 | 1 | 30 | 29 | 13 |
| 24 | M | 34 | 18, 17, 27 | 3 | 31 | 28 | 4 |
| 25 | M | 25 | 18, 38, 48 | 3 | 31 | 28 | 40 |
| 26 | F | 64 | 38 | 1 | 30 | 29 | 15 |
| 27 | F | 18 | 38, 48 | 2 | 32 | 30 | 30 |
| 28 | M | 42 | 18, 28, 38, 48 | 4 | 31 | 27 | 134 |
| 29 | M | 21 | 38, 48 | 2 | 31 | 29 | 36 |
| 30 | M | 20 | 38, 48 | 2 | 27 | 25 | 12 |
| Average | | 31.00 | | 2.03 | 29.00 | 26.97 | 64.03 |
| Standard deviation | | 15.21 | | 1.07 | 6.34 | 6.08 | 70.29 |
ᵃTooth numbering system proposed by the Fédération dentaire internationale (FDI).
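For reference, the summary rows above can be reproduced directly from the column values; a minimal sketch for the interval column, using Python's statistics module (the table's standard deviations match the sample standard deviation, i.e. the n−1 denominator):

```python
# Reproduces the summary row for the interval column of the table above.
# Values are copied from rows 1-30 of the table.
import statistics

intervals = [34, 49, 20, 309, 59, 84, 14, 5, 35, 119, 199, 41, 78, 49, 35,
             89, 32, 149, 21, 21, 188, 7, 13, 4, 40, 15, 30, 134, 36, 12]

print(f"mean = {statistics.mean(intervals):.2f}")   # 64.03
print(f"sd   = {statistics.stdev(intervals):.2f}")  # 70.29 (sample SD, n-1)
```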
Validation accuracies for each CNN architecture and transfer learning method.
| Method | VGG16 (%) | ResNet50 (%) | Inception-v3 (%) | InceptionResNet-v2 (%) | Xception (%) | MobileNet-v2 (%) |
|---|---|---|---|---|---|---|
| Pretraining | 100.0 | 3.3 | 20.0 | 26.7 | 20.0 | 16.7 |
| Fine-tuning | 100.0 | 93.3 | 83.3 | 96.7 | 80.0 | 80.0 |
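All six backbones in this table ship with keras.applications, so they can be instantiated and compared uniformly. A sketch follows; the shared 224×224 input size is an assumption (all six accept it when include_top=False), and in a full comparison each backbone would receive the same classification head as in the earlier sketch.

```python
# Sketch of treating the six architectures in the table uniformly via
# keras.applications. Data pipeline and training loop are omitted; the
# input size is an assumption.
from tensorflow.keras.applications import (
    VGG16, ResNet50, InceptionV3, InceptionResNetV2, Xception, MobileNetV2)

ARCHITECTURES = {
    "VGG16": VGG16,
    "ResNet50": ResNet50,
    "Inception-v3": InceptionV3,
    "InceptionResNet-v2": InceptionResNetV2,
    "Xception": Xception,
    "MobileNet-v2": MobileNetV2,
}

for name, ctor in ARCHITECTURES.items():
    base = ctor(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3))
    print(f"{name}: {base.count_params():,} backbone parameters")
```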
Figure 1. The loss (blue line) and accuracy (red line) during training. The x-axis shows epochs, the left y-axis shows loss, and the right y-axis shows accuracy (%). Focusing on the red line, the training and testing accuracies converge after 30 epochs. Although the training data are classified perfectly, the testing data are sometimes misclassified.
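Curves like those in Figure 1 are typically drawn from the History object returned by Keras's model.fit; a minimal matplotlib sketch following the caption's dual-axis layout (the `history` argument is illustrative, not from the paper):

```python
# Sketch of drawing curves like Figure 1 from the History object that
# model.fit() returns: loss on the left axis, accuracy (%) on the right.
import matplotlib.pyplot as plt

def plot_training(history):
    epochs = range(1, len(history.history["loss"]) + 1)
    fig, ax_loss = plt.subplots()
    ax_acc = ax_loss.twinx()  # second y-axis, as in the caption
    ax_loss.plot(epochs, history.history["loss"], "b-", label="loss")
    ax_acc.plot(epochs, [100 * a for a in history.history["accuracy"]],
                "r-", label="accuracy")
    ax_loss.set_xlabel("epoch")
    ax_loss.set_ylabel("loss")
    ax_acc.set_ylabel("accuracy (%)")
    fig.tight_layout()
    plt.show()
```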
Figure 2. The design of this study was based on paired orthopantomographs analyzed with six convolutional neural network architectures. The paired orthopantomographs of participant No. 1 are shown as (A) the "before" orthopantomograph and (B) the "after" orthopantomograph; tooth extraction sites are indicated by white arrows.