Yangdong Lin, Miao He.
Abstract
To further the study of three-dimensional oral cone beam computed tomography (CBCT), a deep learning approach to the diagnosis of oral and maxillofacial surgical diseases was investigated. A deep learning-based classification algorithm for oral and maxillofacial surgical diseases (Deep Diagnosis of Oral and Maxillofacial diseases, DDOM) is proposed. In this method, DDOM performs patient-level classification, lesion segmentation, and tooth segmentation, and can effectively process patients' three-dimensional oral CBCT data. The segmentation results show that the proposed method can effectively segment individual teeth in CBCT images. The vertical magnification error of tooth CBCT images was evident, with an average magnification rate of 7.4%. By fitting a correction equation relating the distance R (from the tooth center to the FOV center) to the vertical magnification rate of the CBCT image, the magnification error of the measured tooth length could be reduced from 7.4% to 1.0%: from the tooth length measured on the CBCT image, the distance R, and the vertical magnification rate, a value closer to the true tooth size can be obtained. These results demonstrate that deep learning applied to three-dimensional oral CBCT can effectively assist doctors in three respects: patient diagnosis, lesion localization, and surgical planning.
Year: 2021 PMID: 34594483 PMCID: PMC8478532 DOI: 10.1155/2021/4676316
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
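The abstract's magnification correction can be sketched as follows. The paper's actual correction equation relating R to the vertical magnification rate is not reproduced in this record, so the function below assumes a simple multiplicative magnification model; the function name and the example numbers are illustrative, not taken from the paper.

```python
def corrected_tooth_length(measured_mm: float, magnification_rate: float) -> float:
    """Undo an assumed multiplicative vertical magnification.

    measured_mm: tooth length measured on the CBCT image.
    magnification_rate: vertical magnification rate (e.g. 0.074 for the
    7.4% average reported in the abstract); in the paper this rate is
    predicted from R, the distance from the tooth center to the FOV center.
    """
    return measured_mm / (1.0 + magnification_rate)

# A tooth imaged at the reported 7.4% average magnification:
print(corrected_tooth_length(10.74, 0.074))  # → 10.0
```

Under this model, dividing the image measurement by (1 + rate) recovers the true length exactly when the rate is known; the residual 1.0% error in the paper reflects the accuracy of predicting the rate from R.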
Figure 1. Inception module.
Figure 2. Dimension reduction of the inception module.
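The dimension reduction in Figure 2 refers to 1×1 convolutions that shrink the channel count before the expensive larger-kernel branches of an Inception module. A small sketch of the parameter savings; the channel counts below are hypothetical (GoogLeNet-style), not taken from the paper:

```python
def conv_params(in_ch: int, out_ch: int, kernel: int) -> int:
    """Weight count of a kernel x kernel convolution (bias ignored)."""
    return in_ch * out_ch * kernel * kernel

# 5x5 branch applied directly to a 192-channel input:
naive = conv_params(192, 32, 5)
# Same branch with a 1x1 reduction to 16 channels first:
reduced = conv_params(192, 16, 1) + conv_params(16, 32, 5)
print(naive, reduced)  # → 153600 15872
```

The roughly tenfold reduction in weights is why the module stays affordable even with several parallel branches.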
Algorithm 1. First-stage algorithm of patient classification.
Algorithm 2. ASNet algorithm.
Figure 3. Results of different methods using different amounts of labeled data on the test set.
Results of model simplification experiment (columns: proportion of labeled data).
| Method | 10% | 30% | 50% | 100% |
|---|---|---|---|---|
| Supervised ASNet without adv | 0.689 | 0.821 | 0.847 | 0.863 |
| Supervised ASNet with adv | 0.685 | 0.823 | 0.889 | 0.879 |
| ASNet without adv | 0.812 | 0.856 | 0.871 | 0.892 |
| ASNet | 0.832 | 0.865 | 0.871 | 0.906 |
Results of different methods using different amounts of labeled data on the test set (columns: proportion of labeled data).
| Method | 10% | 30% | 50% | 100% |
|---|---|---|---|---|
| Supervised ASNet without adv | 0.702 | 0.799 | 0.823 | 0.863 |
| Supervised ASNet with adv | 0.735 | 0.845 | 0.861 | 0.875 |
| ASNet without adv | 0.819 | 0.854 | 0.871 | 0.871 |
| ASNet | 0.837 | 0.865 | 0.872 | 0.906 |
Results of model simplification experiment (columns: proportion of labeled data).
| Method | 10% | 30% | 50% | 100% |
|---|---|---|---|---|
| Supervised ASNet without adv | 0.692 | 0.812 | 0.847 | 0.889 |
| Supervised ASNet with adv | 0.695 | 0.823 | 0.863 | 0.889 |
| ASNet without adv | 0.827 | 0.863 | 0.872 | 0.893 |
| ASNet | 0.837 | 0.865 | 0.872 | 0.906 |
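The tables report a single score per method and labeling budget. This excerpt does not name the metric, but for segmentation such scores are commonly Dice coefficients; a minimal sketch of that assumed metric on binary masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / denom if denom else 1.0

pred = np.array([1, 1, 0, 0])
gt = np.array([1, 0, 0, 0])
print(round(dice_score(pred, gt), 3))  # → 0.667
```

Whatever the exact metric, the pattern in all three tables is the same: the semi-supervised ASNet with adversarial training holds up best when only 10-30% of the data is labeled.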