Cansu Buyuk1, Nurullah Akkaya2, Belde Arsan3, Gurkan Unsal4, Secil Aksoy4, Kaan Orhan5,6,7.
Abstract
The study aimed to generate a fused deep learning algorithm that detects and classifies the relationship between the mandibular third molar and the mandibular canal on orthopantomographs. Radiographs (n = 1880) were randomly selected from the hospital archive. Two dentomaxillofacial radiologists annotated the data via MATLAB and classified them into four groups according to the overlap of the root of the mandibular third molar and the mandibular canal. Each radiograph was segmented using a U-Net-like architecture, and the segmented images were classified by AlexNet. Accuracy, the weighted intersection over union score, the dice coefficient, specificity, sensitivity, and area under the curve were used to quantify the performance of the models. In addition, three dental practitioners were asked to classify the same test data; their success rate was assessed using the intraclass correlation coefficient. The segmentation network achieved a global accuracy of 0.99 and a weighted intersection over union score of 0.98; the average dice score over all images was 0.91. The classification network achieved an accuracy of 0.80, per-class sensitivities of 0.74, 0.83, 0.86, and 0.67, per-class specificities of 0.92, 0.95, 0.88, and 0.96, and an AUC score of 0.85. The most successful dental practitioner achieved a success rate of 0.79. The fused segmentation and classification networks produced encouraging results, with the final model achieving almost the same classification performance as the dental practitioners. Better diagnostic accuracy of the combined artificial intelligence tools may help to improve the prediction of risk factors, especially for recognizing such anatomical variations.
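The dice coefficient and weighted intersection-over-union reported for the segmentation network are standard mask-overlap measures. A minimal numpy sketch of how such scores are typically computed (illustrative function names; this is not the authors' MATLAB implementation):

```python
import numpy as np

def dice_score(pred, target):
    # Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def weighted_iou(pred, target, labels):
    # Per-class IoU, weighted by each class's pixel frequency in the target
    pred, target = np.asarray(pred), np.asarray(target)
    total = target.size
    score = 0.0
    for c in labels:
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        iou = np.logical_and(p, t).sum() / union if union else 1.0
        score += (t.sum() / total) * iou
    return score
```

Both functions take label masks of equal shape; averaging `dice_score` over the test set corresponds to the per-image average dice reported above.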
Keywords: deep learning; mandibular canal; panoramic radiography; segmentation; third molar
Year: 2022 PMID: 36010368 PMCID: PMC9407570 DOI: 10.3390/diagnostics12082018
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. Examples of the classification of the relationship between the mandibular third molar (blue area) and the mandibular canal (red area). (A) represents class I, where there is no contact between the root tip and the MC. (B) represents class II, where M3 is in direct contact with the superior cortical boundary of the MC. (C) represents class III, where M3 and the MC are superimposed, and (D) represents class IV, where the root tip lies below the inferior cortical line of the MC.
Distribution of the data for the training and testing groups. n denotes the number of OPGs (input to U-Net), whereas n* denotes the number of image patches (input to AlexNet).

| Group | U-Net (n) | AlexNet (n*), class I | Class II | Class III | Class IV |
|---|---|---|---|---|---|
| Training | 1504 | 598 | 671 | 850 | 217 |
| Test | 376 | 149 | 168 | 213 | 27 |
| Total | 1880 (100%) | 747 (25.82%) | 839 (29.00%) | 1063 (36.74%) | 244 (8.43%) |
Figure 2. The workflow for the segmentation and classification of the mandibular third molar and mandibular canal. The first schema represents the encoder-decoder-style U-Net; its output feeds the AlexNet architecture shown in the second diagram.
Per class sensitivity, specificity, positive predictive value, and negative predictive value results.
|  | Class I | Class II | Class III | Class IV |
|---|---|---|---|---|
| Sensitivity | 0.74 | 0.83 | 0.86 | 0.67 |
| Specificity | 0.92 | 0.95 | 0.88 | 0.96 |
| Positive predictive value | 0.79 | 0.88 | 0.80 | 0.68 |
| Negative predictive value | 0.90 | 0.94 | 0.93 | 0.96 |
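The four rows of this table follow from a one-vs-rest confusion matrix computed per class. A minimal numpy sketch (illustrative, not the study's code):

```python
import numpy as np

def per_class_metrics(y_true, y_pred, classes):
    # One-vs-rest sensitivity, specificity, PPV, and NPV for each class
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {}
    for c in classes:
        tp = np.sum((y_true == c) & (y_pred == c))  # true positives
        tn = np.sum((y_true != c) & (y_pred != c))  # true negatives
        fp = np.sum((y_true != c) & (y_pred == c))  # false positives
        fn = np.sum((y_true == c) & (y_pred != c))  # false negatives
        out[c] = {
            "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
            "specificity": tn / (tn + fp) if tn + fp else 0.0,
            "ppv": tp / (tp + fp) if tp + fp else 0.0,
            "npv": tn / (tn + fn) if tn + fn else 0.0,
        }
    return out
```

Applied to the ground-truth and predicted class labels of the test patches, this yields one set of four figures per class, as tabulated above.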
Figure 3. Examples of mandibular canal and mandibular third molar overlaps misclassified by the algorithm, and the characteristics likely to cause misclassification: (A) ill-defined superior border of the MC; (B) narrowing of the MC; (C) dilacerated root morphology of M3; (D) ghost image of the opposite inferior border of the mandible; (E) superimposition of the lateral border of the tongue; (F) failed segmentation; (G) bifid MC; (H) superimposition of the mylohyoid ridge.