Ruiyang Ren, Haozhe Luo, Chongying Su, Yang Yao, Wen Liao.
Abstract
Artificial intelligence (AI) has become an increasingly important part of daily life and is widely applied in medical science, with medical imaging among its major applications. As a core component of AI, machine learning models are increasingly used in medical diagnosis and treatment as technology and imaging facilities advance. Convolutional neural networks in particular are gaining popularity in dental, oral and craniofacial imaging, as they are continually applied to a broader spectrum of scientific studies. This manuscript reviews the fundamental principles and rationale behind machine learning, summarizes its research progress and recent applications in dental, oral and craniofacial imaging, discusses the problems that remain to be resolved, and evaluates the prospects for future development of this field.
Keywords: Dental, oral and craniofacial imaging; Machine learning; Oral cancer; Orthodontics
Year: 2021 PMID: 34046262 PMCID: PMC8136280 DOI: 10.7717/peerj.11451
Source DB: PubMed Journal: PeerJ ISSN: 2167-8359 Impact factor: 2.984
Figure 1. The popular branches of artificial intelligence used in medical imaging.
Figure 2. The fundamental machine learning procedure to achieve a final model.
Figure 3. The main machine learning algorithms used in medical image processing.
Figure 4. An example of a machine learning method (artificial neural network) utilized in orthodontic treatment design.
(A) The data-processing workflow of the artificial neural network, which provides detailed guidance for extraction and anchorage patterns. (B) The main input data and the structure of the three-layer neural network for tooth extraction prediction. Reprinted from Li et al. (2019).
Applications of ML methods in dental, oral and craniofacial imaging.
| Field | Subfield | Type of ML | Study summary |
|---|---|---|---|
| Orthodontics | Landmark identification | Active shape model (ASM) | The algorithm captures variations in region shape and grey profile, based on segmentation of lateral cephalograms. High image quality and tedious manual work are required ( |
| | | Customized open-source CNN deep learning algorithm (Keras & Google TensorFlow) | The study uses high-quality training data for supervised learning. With a large set of 1,792 lateral cephalograms, the algorithm demonstrates precision comparable to experienced examiners ( |
| | | You-Only-Look-Once version 3 (YOLOv3) | The study uses 1,028 cephalograms, covering both hard- and soft-tissue landmarks, as training data. Mean detection errors between AI and manual examination are not clinically significant, and reproducibility appears better than manual identification ( |
| | | Hybrid: 2D active shape model (ASM) & 3D knowledge-based models | The study uses a holistic ASM search to obtain initial 2D cephalogram projections, then applies 3D approaches for landmark identification. With 2D preprocessing, the accuracy and speed of landmark annotation are improved ( |
| | | Entire-image-based CNN, patch-based CNN & variational autoencoder | With only a small amount of CT data, the four-step hierarchical method reaches higher accuracy than previous deep learning research on 3D landmark annotation. The mean point-to-point error is 3.63 mm (Yun et al., 2020) |
| | | VGG-Net | The study trained VGG-Net with a large number of diverse shaded 2D images, each with different lighting and shooting angles. The VGG-Net is able to reconstruct stereoscopic craniofacial morphological structure ( |
| | Determination of cervical vertebral stages (CVS) | k-nearest neighbors (k-NN), naive Bayes (NB), decision tree, artificial neural network (ANN), support vector machine (SVM), random forest (RF), and logistic regression (Log.Regr.) | The seven AI algorithms differ in precision: ANN shows the highest stability, while Log.Regr. and k-NN show the lowest accuracy. On balance, ANN is recommended for CVS determination ( |
| | Teeth-extraction decision | A two-layer neural network | The process consists of three steps: an initial decision on tooth extraction, the choice of differential extraction, and determination of the specific teeth to extract. The network produces a detailed extraction plan for orthodontic treatment ( |
| Oral cancer | Detection of oral cancers | Texture-map-based branch-collaborative deep CNN | The deep CNN is used for both cancer detection and localization; detection sensitivity and specificity reach 93.14% and 94.75%, respectively ( |
| | | AlexNet, VGG-16, VGG-19, ResNet-50 & a proposed CNN | The study compares five CNNs for automated OSCC grading; the proposed CNN performs best, with an accuracy of 97.5% ( |
| | | Regression-based deep CNN with 2 partitioned layers, GoogLeNet Inception-v3 CNN architecture | The deep learning method is applied to hyperspectral images; as the training set grows from 100 to 500 images, tissue classification accuracy (benign vs. cancerous) increases by 4.5% ( |
| | Cancer margin assessment | SVM, random forest, 6-layer 1-D CNN | Fiber probes are used to collect FLIm data for the ML methods. Random forest performs best at dividing tissue regions (healthy, benign, and cancerous), showing potential for surgical tumor visualization ( |
| | Prognosis of oral cancer | 3-D residual CNN (rCNN) | The study uses three types of input: CT images, radiotherapy dose distributions, and oral cancer contours. The rCNN extracts features from CT images to predict post-therapeutic xerostomia, with a best accuracy of 76% ( |
| | | Deep learning method, AlexNet architecture | The system is applied to contrast-enhanced CT to assess cervical lymph node metastasis in patients with oral cancer. Diagnostic results show little difference between manual and automated evaluation ( |
| | | Back propagation (BP), | Three ML approaches are used to predict cancer patients' survival time. PGA-BP performs best, predicting average survival time with an error of less than 2 years ( |
| Dental endodontics | Detection of dental caries | CNN, the basic DeepLab network, DeepLabV3+ model | The dental plaque detection model was trained on natural photos using a CNN framework and transfer learning, with photos of deciduous teeth taken before and after application of a plaque-disclosing agent. Results show the AI model is more accurate ( |
| | Root morphology | CNN, the standard DIGITS algorithm | The study analyzed a total of 760 CBCT and panoramic radiographs of mandibular first molars. Root image blocks were segmented and processed by the deep learning system, which showed high accuracy in the differential diagnosis of distal root forms (single or multiple) of the mandibular first molar ( |
| | Periapical lesions | Deep CNN | A deep CNN evaluated CBCT images of 153 periapical lesions and detected 142 of them; it is able to determine lesion location and volume and to detect periapical pathosis from CBCT images ( |
| | | Deep learning approach based on a U-Net architecture | The study detected periapical lesions by segmenting CBCT images; the DLS reaches a lesion-detection accuracy of 0.93 ( |
| Periodontology | | CNN, the GoogLeNet Inception-v3 architecture | The study used panoramic and CBCT images to detect three types of odontogenic cystic lesions (OCLs) with a CNN and transfer learning. Results suggest that CBCT-based training performs better than panoramic-image-based training ( |
| | | Deep CNN architecture and a self-trained network | The study applied a deep CNN for diagnosis and prediction of periodontally compromised teeth (PCT); diagnostic accuracy on premolars is higher than on molars ( |
| Orthognathic surgery | Facial attractiveness | CNN, VGG-16 architecture | The study reviewed photos of 146 orthognathic patients before and after treatment, assessed their facial attractiveness and apparent age with the CNN, and found that most patients' appearance improved after treatment ( |
| | | CNN, VGG-16 architecture | Full-face and lateral photos of patients with left cleft lip and of controls were assessed for facial attractiveness. Results show that the CNN scores attractiveness similarly to manual evaluation ( |
| Others | | CNN | Combined with AI, CBCT images can also be used to measure bone mineral density in the implant area, evaluate bone mass in the surgical area, and assist in constructing a static guide plate system ( |
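Nearly every row above relies on convolutional neural networks. As an illustrative sketch only, not code from any reviewed study, the core operation of a CNN layer can be written in plain NumPy: a small learned kernel slides across the image, and a ReLU nonlinearity keeps the positive responses. Here the kernel is hand-set as a vertical-edge detector and the 6×6 "image" is a toy stand-in for a radiograph; all names are hypothetical.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the basic operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the kernel with one image patch, summed
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: zero out negative responses."""
    return np.maximum(x, 0.0)

# Toy 6x6 image: dark left half, bright right half
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# 3x2 vertical-edge kernel: responds where brightness increases left-to-right
kernel = np.array([[-1.0, 1.0]] * 3)

feature_map = relu(conv2d(image, kernel))
# The feature map is nonzero only along the dark-to-bright boundary
```

Real architectures such as VGG-16 or ResNet-50 stack dozens of such layers, with kernel values learned from labeled training images rather than set by hand, which is why the studies above need hundreds to thousands of annotated radiographs.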