
Development of a deep learning model for automatic localization of radiographic markers of proposed dental implant site locations.

Mona Alsomali1, Shatha Alghamdi1, Shahad Alotaibi2, Sara Alfadda3, Najwa Altwaijry4, Isra Alturaiki5, Asma'a Al-Ekrish6.   

Abstract

Objectives: To develop a deep learning artificial intelligence (AI) model that automatically localizes radiographic stent gutta-percha (GP) markers in cone beam computed tomography (CBCT) images in order to identify proposed implant sites within the images, and to test the performance of the newly developed model.
Materials and Methods: Thirty-four CBCT datasets were used for initial model training, validation, and testing. The datasets were those of patients who had undergone a CBCT examination while wearing a radiographic stent for implant treatment planning. The datasets were exported in Digital Imaging and Communications in Medicine (DICOM) format and imported into the Horos® software. Each GP marker was labelled manually for object detection and recognition by the deep learning model by drawing rectangles around the markers in all axial images, and the labelled images were then split into training, validation, and test sets: the axial sections of 30 CBCT datasets were randomly divided into training and validation sets, and four CBCT datasets were reserved for testing the performance of the model. Descriptive statistics were calculated for the number of GP markers present and the numbers of correct and incorrect identifications of GP markers.
Results: The AI model had an 83% true positive rate for identification of the GP markers. Of the areas labelled by the AI model as GP markers, 28% were not truly GP markers, but the overall false positive rate was 2.8%.
Conclusion: An AI model for localization of GP markers in CBCT images identified most of the GP markers, but 2.8% of the results were false positives and 17% of the GP markers were missed. Axial images alone are not sufficient to train an accurate AI model for this task.
© 2022 The Authors.


Keywords:  Algorithms; Artificial intelligence; Cone beam computed tomography; Deep learning; Dental implants; Stents

Year:  2022        PMID: 35935725      PMCID: PMC9346930          DOI: 10.1016/j.sdentj.2022.01.002

Source DB:  PubMed          Journal:  Saudi Dent J        ISSN: 1013-9052


Introduction

Dental implants have become the standard of care for restoring missing teeth. When multiple implants are needed, considerable time may be required to prepare an ideal implant treatment plan, which may delay surgical implant placement. The use of machine learning (ML) methods, a branch of artificial intelligence (AI), and especially artificial neural networks (ANN), may help formulate the treatment plan in a shorter period of time and thus expedite implant placement (Amato, López et al. 2013). Artificial intelligence has been used to aid in the performance of numerous dental tasks. In a systematic review of 50 studies reporting the use of AI programs in dentomaxillofacial radiology, the studies mainly involved automated localization of cephalometric landmarks, diagnosis of osteoporosis, classification/segmentation of maxillofacial cysts and/or tumors, and identification of periodontitis/periapical disease (Hung, Montalvao et al. 2020). Other published studies have reported the development and use of AI-based systems in dental implantology. Polášková et al. (2013) presented a web-based tool which used patient history and clinical data, together with preset threshold levels for various parameters, to decide whether implants may be placed, whether bone grafting is needed, and how long after grafting implants should be placed (Polášková, Feberová et al. 2013). Sadighpour et al. (2014) developed an ANN model which used a number of input factors to decide on the type of prosthesis (fixed or removable) and its specific design for rehabilitation of the edentulous maxilla (Sadighpour, Rezaei et al. 2014). Lee et al. (2012) applied a decision-making system (fuzzy recognition map) for implant abutment selection (Lee, Yang et al. 2012).
Additionally, Szejka et al. (2011, 2013) developed an interactive reasoning system which requires the dentist to select the region of interest within a 3D bone model based on computed tomography (CT) images, then aids in selection of the optimum implant length and design (Szejka et al., 2011, Szejka et al., 2013). Furthermore, AI has been used for implant placement in other areas of the body. In a study performed on 27 subjects, a fully convolutional deep learning model was used to determine the position and orientation of the articular marginal plane of the proximal humerus from CT scans (Kulyk, Vlachopoulos et al. 2018). Carrillo et al. (2017) generated, in a fully automatic manner, surgical plans for corrective osteotomies of malunited radius bones (Carrillo, Vlachopoulos et al. 2017). However, none of the previous studies demonstrated the use of AI to automatically place simulated implants in the optimum position and angulation within CT images of the jaws during implant treatment planning. Therefore, the overall goal of the present research group's project is to develop a deep learning AI model that automatically places simulated implants of the optimum size in the optimum prosthetically driven position and orientation within the bone in cone beam CT (CBCT) images. Such a model would expedite and streamline implant treatment planning, especially in cases which require numerous implants. The first step towards this goal is to accurately localize the proposed implant sites in CBCT images. Therefore, the aim of the present study was to use axial CBCT sections to develop an AI model that automatically localizes markers in radiographic stents in order to identify proposed implant sites within CBCT images.
This is the first phase in a multi-phase development and validation process in which an AI model will be developed using an increasing number of planes of image sections in all three dimensions to identify GP marker positions in CBCT images.

Materials and methods

This was an experimental study implemented at King Saud University College of Dentistry (KSUCD) and the College of Computer and Information Sciences (CCIS). Because retrospective patient CBCT images were used to train the AI model, ethical approval was obtained from the King Saud University College of Medicine Institutional Review Board (Project No. E-20-4914). Thirty-four CBCT datasets were used for initial model training, validation, and testing. The CBCT datasets were those of patients who had a CBCT examination performed while wearing a radiographic stent for implant treatment planning. The list of patients was obtained from two sources: the records of the dental laboratories and the prosthodontic and implant clinics for patients who had radiographic stents requested, and a survey of Oral and Maxillofacial Radiologists (OMFR) requesting lists of their patients whose CBCT interpretation reports indicated that the patient was wearing a radiographic stent. The inclusion criterion was any retrievable CBCT dataset of a patient imaged with a radiographic stent for implant placement purposes. The exclusion criteria were CBCT datasets with artifacts that degraded the image of the edentulous area, cases in which the radiographic stent was not well fitted in the patient's mouth, and cases in which the implant site required a bone graft. All 34 cases were organized in an Excel sheet and coded A01, A02, A03, …, A34. All the CBCT datasets were accessed in the Romexis® 3D software program (Planmeca Romexis® 5.2.0.R, Helsinki, Finland) within the KSUCD server. The datasets were exported anonymized in Digital Imaging and Communications in Medicine (DICOM) format using the original voxel size and stored on both a hard disk and Google Drive for backup. The CBCT datasets were imported into the Horos® software, and each GP marker was labelled manually by drawing rectangles around the GP markers in all the axial images which demonstrated the marker.
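The case coding and hold-out split described above can be sketched as follows. This is a minimal illustration only: the `GPLabel` record and the fixed random seed are our own assumptions, not the authors' code.

```python
import random
from dataclasses import dataclass

@dataclass
class GPLabel:
    """One manually drawn rectangle around a GP marker in an axial image."""
    case_id: str      # case code, e.g. "A07"
    slice_index: int  # axial section number within the dataset
    x: int            # top-left corner of the rectangle (pixels)
    y: int
    width: int
    height: int

# The 34 cases were coded A01 ... A34.
cases = [f"A{i:02d}" for i in range(1, 35)]

# Hold out 4 cases for testing; the remaining 30 supply the axial
# images for training and validation (seed chosen for repeatability).
rng = random.Random(0)
shuffled = rng.sample(cases, len(cases))
test_cases, train_val_cases = shuffled[:4], shuffled[4:]

print(len(train_val_cases), len(test_cases))  # 30 4
```

Splitting by case rather than by image keeps all axial sections of one patient on the same side of the split, so the test set is never contaminated by slices of a patient seen during training.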
These labelled images were then used to train the model on localization of the GP markers. Many AI models for object detection are available in the literature. In the present study, the model used to detect the GP markers was Mask R-CNN (He, Gkioxari et al. 2017), a state-of-the-art object detection deep learning neural network. Transfer learning, a method enabling reuse of a model trained on one dataset for a new dataset, was used to train the model on our data. The labelled datasets were then converted into a comma-separated values (CSV) file for further processing by the model. Data preprocessing is an essential step in any prediction model; the data were preprocessed to normalize the grey density values into the range [-1, 1], which is suitable for the machine learning model. Afterwards, the images were used to train the model to automatically detect the GP markers. The model was trained with the Keras open-source software library on the Google Colab platform. Training of the AI model was done by backpropagation, which optimizes the weights by using the chain rule to propagate the gradient of the loss function backwards through the model. The AI model was trained on 30 cases with a total of 16,272 images; these images were randomly divided into training and validation sets, with 90.2% of the images used for training and 9.8% for validation. The remaining four cases were used to test the model's performance using all of their axial sections. Fig. 1a demonstrates the manually identified GP markers used as the reference, and Fig. 1b demonstrates the AI identification of the GP markers. Descriptive statistics were calculated for the number of GP markers present and the numbers of correct and incorrect identifications of GP markers.
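The grey-value normalization step can be illustrated with a short sketch. A per-slice min-max rescale is assumed here; the paper states only that values were normalized into [-1, 1], not the exact formula used.

```python
import numpy as np

def normalize_to_unit_range(axial_slice: np.ndarray) -> np.ndarray:
    """Linearly rescale a slice's grey density values into [-1, 1]."""
    lo, hi = float(axial_slice.min()), float(axial_slice.max())
    if hi == lo:
        # A flat slice carries no contrast; map it to all zeros.
        return np.zeros_like(axial_slice, dtype=np.float32)
    scaled = (axial_slice.astype(np.float32) - lo) / (hi - lo)  # -> [0, 1]
    return scaled * 2.0 - 1.0                                   # -> [-1, 1]

demo = np.array([[0, 500], [1000, 2000]])
out = normalize_to_unit_range(demo)
print(out.min(), out.max())  # -1.0 1.0
```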
Fig. 1

(a) Sample of CBCT axial section of the maxilla demonstrating boxes placed manually for identification of the GP markers; the manual labelling appears as dark blue boxes (marked by white arrows). (b) The same section is seen with the AI localization of the GP markers; the areas identified by the AI algorithm appear as lighter blue boxes (marked by the arrowheads). A correct identification of a GP marker is seen marked by the closed arrowhead. The restorations in the upper left incisors were incorrectly identified as GP markers by the AI model (marked by open arrowhead). The GP marker in the area of upper right premolar was not identified by the AI model.

The objective of this study was to build a predictive AI model with a sensitivity greater than 80%. Based on the literature and pilot testing, we expected a sensitivity of 88% in this project. Using the G*Power tool (version 3.1.9.2) with an effect size of 8%, a significance level of 0.05, and a desired statistical power of 80%, we determined a minimum sample size of 135 GP markers. This number of GP markers can be observed in a sample of 3–4 cases (on average there are 48 GP markers per case). We decided to use 3–4 cases as the testing set and a sample approximately ten times larger to train the AI model. The total sample size of 34 cases was considered sufficient to build and validate an adequately accurate predictive model.
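The reported G*Power calculation can be approximated with an exact one-sided binomial power analysis (H0: sensitivity = 80% vs. H1: sensitivity = 88%, α = 0.05, target power 80%). This is our own reconstruction, not the authors' calculation; the minimum it finds is comparable to the reported 135, with the exact figure depending on whether the exact binomial test or a normal approximation is used.

```python
from math import comb

def binom_pmf(n: int, p: float) -> list[float]:
    """Probability mass function of Binomial(n, p) as a list over 0..n."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def power_one_sided(n: int, p0: float = 0.80, p1: float = 0.88,
                    alpha: float = 0.05) -> float:
    """Power of the exact one-sided test of H0: p = p0 vs H1: p = p1."""
    pmf0 = binom_pmf(n, p0)
    # Smallest critical count k with P(X >= k | p0) <= alpha.
    tail, k_crit = 0.0, n + 1
    for k in range(n, -1, -1):
        if tail + pmf0[k] > alpha:
            break
        tail += pmf0[k]
        k_crit = k
    pmf1 = binom_pmf(n, p1)
    return sum(pmf1[k_crit:])

# Exact binomial power is not monotone in n, so scan a range
# rather than stopping at the first n that reaches 80% power.
n_required = min(n for n in range(50, 300) if power_one_sided(n) >= 0.80)
print(n_required)
```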

Results

Table 1 demonstrates the number of sections and GP markers in each dataset used for testing the AI model, and the numbers of correct and incorrect identifications. A total of 50 image sections containing 193 images of GP markers, and 2284 sections which did not contain a GP marker, were included in the testing data. Of the 193 existing images of GP markers, 83% were correctly identified by the algorithm. Furthermore, of the 223 areas labelled by the AI model as GP markers, 28% were not truly GP markers. However, if each section without a GP marker (n = 2284) was considered as one potential site for identification of the presence or absence of a GP marker, then the false positive rate of the AI model was 2.8%.
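The percentages above follow directly from the totals in Table 1 (193 labelled markers, 160 correct detections, 63 false detections, 2284 marker-free sections); a quick arithmetic check:

```python
# Totals from Table 1 of the study.
n_markers = 193        # manually labelled GP markers in the test set
true_positives = 160   # markers correctly identified by the model
false_detections = 63  # areas mistakenly labelled as GP markers
empty_sections = 2284  # axial sections containing no GP marker

tp_rate = true_positives / n_markers
miss_rate = 1 - tp_rate
# Share of all AI detections (160 + 63 = 223) that were not real markers.
false_discovery_rate = false_detections / (true_positives + false_detections)
# False detections per marker-free section.
false_positive_rate = false_detections / empty_sections

print(f"{tp_rate:.0%} {miss_rate:.0%} "
      f"{false_discovery_rate:.0%} {false_positive_rate:.1%}")
# -> 83% 17% 28% 2.8%
```

Note that the 28% and 2.8% figures describe the same 63 false detections against two different denominators: all AI detections versus all marker-free sections.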
Table 1

The CBCT examinations used as the testing dataset, the numbers of image sections and GP markers used for testing the AI model, and the numbers of correct and incorrect identifications achieved by the AI model.

Case code | Sections with no markers | Axial section no. | GP markers in section (manual) | Correctly identified by AI | Missed by AI | Areas mistakenly identified as GP by AI
A31   | 643  | 368 | 2   | 2   | 0  | 0
      |      | 366 | 2   | 2   | 0  | 0
      |      | 382 | 2   | 2   | 0  | 0
      |      | 373 | 2   | 2   | 0  | 0
      |      | 360 | 2   | 2   | 0  | 0
      |      | 386 | 2   | 2   | 0  | 0
      |      | 352 | 2   | 2   | 0  | 0
      |      | 374 | 2   | 2   | 0  | 0
A32   | 372  | 039 | 4   | 4   | 0  | 1
      |      | 067 | 4   | 4   | 0  | 0
      |      | 056 | 4   | 4   | 0  | 0
      |      | 082 | 4   | 4   | 0  | 0
      |      | 058 | 4   | 4   | 0  | 0
      |      | 089 | 4   | 4   | 0  | 0
      |      | 048 | 4   | 4   | 0  | 0
      |      | 083 | 4   | 4   | 0  | 0
      |      | 090 | 4   | 4   | 0  | 0
A33   | 632  | 292 | 6   | 5   | 1  | 1
      |      | 338 | 7   | 4   | 3  | 3
      |      | 319 | 1   | 1   | 0  | 1
      |      | 336 | 7   | 4   | 3  | 3
      |      | 327 | 6   | 3   | 3  | 2
      |      | 330 | 7   | 4   | 3  | 3
      |      | 341 | 7   | 4   | 3  | 3
      |      | 307 | 4   | 4   | 0  | 2
      |      | 343 | 7   | 5   | 2  | 2
      |      | 302 | 5   | 5   | 0  | 2
      |      | 287 | 5   | 2   | 3  | 2
      |      | 360 | 6   | 3   | 3  | 2
      |      | 274 | 4   | 2   | 2  | 1
      |      | 354 | 7   | 4   | 3  | 1
      |      | 285 | 5   | 4   | 1  | 2
      |      | 309 | 4   | 4   | 0  | 2
      |      | 282 | 4   | 4   | 0  | 2
      |      | 288 | 6   | 5   | 1  | 2
      |      | 311 | 4   | 4   | 0  | 2
A34   | 637  | 299 | 3   | 3   | 0  | 1
      |      | 281 | 3   | 3   | 0  | 2
      |      | 313 | 3   | 3   | 0  | 0
      |      | 264 | 2   | 1   | 1  | 4
      |      | 308 | 3   | 3   | 0  | 2
      |      | 276 | 3   | 3   | 0  | 2
      |      | 284 | 3   | 3   | 0  | 3
      |      | 303 | 3   | 3   | 0  | 2
      |      | 269 | 3   | 2   | 1  | 2
      |      | 297 | 3   | 3   | 0  | 2
      |      | 281 | 3   | 3   | 0  | 2
      |      | 316 | 1   | 1   | 0  | 0
      |      | 295 | 3   | 3   | 0  | 1
      |      | 304 | 3   | 3   | 0  | 1
Total | 2284 | 50 sections | 193 | 160 | 33 | 63

Discussion

This study presents the first AI model developed for identification of GP markers used to localize prospective dental implant sites within CBCT images. The present algorithm correctly identified most of the GP markers; we consider a false positive rate of 2.8% and a missed-marker rate of 17% reasonable for a newly developed AI model. However, we are aiming for higher model accuracy by using another deep learning algorithm. A possible reason for the above result may be that the axial images used for training the algorithm did not include a clear and distinct shape of the GP marker in the superior-inferior, buccal-lingual, and mesial-distal perspectives. Additionally, the axial images did not demonstrate the relationship of the GP marker to the bone, a relationship which may aid an AI model in correctly identifying the markers. As far as the authors are aware, the radiographic stents containing the GP markers were produced by the same laboratory using the same type of acrylic, but it was evident from the CBCT images that the GP markers had variable diameters and lengths. However, it is not likely that the variable sizes of the markers had an adverse effect on the resultant model's accuracy, because such variability was present in both the training and the test images. At the time of writing, and to the authors' knowledge, there are no other reported AI models for identification of fiducial markers in CT or CBCT images of the maxillofacial region. A systematic review of AI applications in oral and maxillofacial radiology reported the use of AI with CT and/or CBCT for detection of the odontoid process, segmentation and measurement of maxillofacial lesions, classification of jaw lesions and tooth types, identification of the root canal, and localization of 3D cephalometric landmarks (Hung, Montalvao et al. 2020).
However, the nature of the landmarks being detected, and the techniques used by previous researchers to localize the anatomic landmarks, differed from those used in the present study. Neelapu et al. (2018) applied bone segmentation, standardized the position of the image volumes, extracted contours, and detected landmarks based on the definitions on the contours and a template matching algorithm (Neelapu, Kharbanda et al. 2018). Codari et al. (2017) applied thresholding to segment the regions of interest, registered the images by choosing the most inferior point in the mandibular bone to systematize all the CBCTs, and used an adaptive cluster-based and intensity-based algorithm (Codari, Caffini et al. 2017). Gupta et al. (2015) used an anatomical reference as a "seed point" and then applied a knowledge-based algorithm (Gupta, Kharbanda et al. 2015). Montúfar et al. (2018a) computed digitally reconstructed projections and then selected an anatomical structure manually to initialize an active shape model (Montúfar et al., 2018a). Montúfar et al. (2018b) used a knowledge-based local landmark search after initializing an active shape model (Montúfar et al., 2018b). Shahidi et al. (2014) used adaptive thresholding and volume matching and then applied feature-based and voxel-similarity-based algorithms (Shahidi, Bahrampour et al. 2014). As such, it may be seen that the AI models described in previous studies were used to identify anatomical landmarks which were known to be present within the dataset and have characteristic relationships to the surrounding anatomy. The AI model in the present study, on the other hand, was used to identify the presence or absence of fiducial (GP) markers and to search for the markers anywhere within the CBCT volume. Furthermore, the target of localization for the present AI model (GP markers) was highly variable in number and in relationship to the surrounding anatomy.
Therefore, due to the above-mentioned differences between the functions of the AI models, it is not possible to compare the performance of the present AI model with that of previous models reported in the literature. The present research team is currently working on further refinement of the algorithm in the second phase of the research, which uses additional sectional images from the coronal and sagittal planes that show the shape of the marker more clearly and include the apical bone together with the GP markers within the labelled areas. Also, to reduce the time required for labelling the GP markers and bone, the number of sections which include the full length of the GP marker may be reduced by exporting the CBCT images using a voxel size of 0.4 mm, which has been reported to provide accuracy similar to that of 0.2 mm when used for dental implant site analysis (Torres, Campos et al. 2012).

Conclusion

An AI model for localization of GP markers in CBCT images was able to identify most of the GP markers, but 2.8% of the results were false positives and 17% of the GP markers were missed. Use of only axial images for training an AI program for localization of GP markers is not sufficient to achieve accurate model performance.

CRediT authorship contribution statement

Mona Alsomali: Investigation, Formal analysis, Writing – original draft, Visualization. Shatha Alghamdi: Investigation, Formal analysis, Writing – original draft, Visualization. Shahad Alotaibi: Methodology, Software, Validation. Sara Alfadda: Methodology, Investigation, Validation, Resources, Writing – review & editing, Supervision. Najwa Altwaijry: Methodology, Software, Validation, Resources, Writing – review & editing, Supervision. Isra Alturaiki: Methodology, Software, Validation, Resources, Writing – review & editing. Asma'a Al-Ekrish: Conceptualization, Methodology, Investigation, Validation, Formal analysis, Resources, Writing – review & editing, Supervision, Project administration.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References (8 in total)

1.  Accuracy of linear measurements in cone beam computed tomography with different voxel sizes.

Authors:  Marianna Guanaes Gomes Torres; Paulo Sérgio Flores Campos; Nilson Pena Neto Segundo; Marcus Navarro; Iêda Crusoé-Rebello
Journal:  Implant Dent       Date:  2012-04       Impact factor: 2.454

2.  A knowledge-based algorithm for automatic detection of cephalometric landmarks on CBCT images.

Authors:  Abhishek Gupta; Om Prakash Kharbanda; Viren Sardana; Rajiv Balachandran; Harish Kumar Sardana
Journal:  Int J Comput Assist Radiol Surg       Date:  2015-04-07       Impact factor: 2.924

3.  Hybrid approach for automatic cephalometric landmark annotation on cone-beam computed tomography volumes.

Authors:  Jesús Montúfar; Marcelo Romero; Rogelio J Scougall-Vilchis
Journal:  Am J Orthod Dentofacial Orthop       Date:  2018-07       Impact factor: 2.650

4.  Computer-aided cephalometric landmark annotation for CBCT data.

Authors:  Marina Codari; Matteo Caffini; Gianluca M Tartaglia; Chiarella Sforza; Giuseppe Baselli
Journal:  Int J Comput Assist Radiol Surg       Date:  2016-06-29       Impact factor: 2.924

5.  Automatic 3-dimensional cephalometric landmarking based on active shape models in related projections.

Authors:  Jesús Montúfar; Marcelo Romero; Rogelio J Scougall-Vilchis
Journal:  Am J Orthod Dentofacial Orthop       Date:  2018-03       Impact factor: 2.650

6.  Automatic localization of three-dimensional cephalometric landmarks on CBCT images by extracting symmetry features of the skull.

Authors:  Bala Chakravarthy Neelapu; Om Prakash Kharbanda; Viren Sardana; Abhishek Gupta; Srikanth Vasamsetti; Rajiv Balachandran; Harish Kumar Sardana
Journal:  Dentomaxillofac Radiol       Date:  2018-01-03       Impact factor: 2.419

7.  The use and performance of artificial intelligence applications in dental and maxillofacial radiology: A systematic review.

Authors:  Kuofeng Hung; Carla Montalvao; Ray Tanaka; Taisuke Kawai; Michael M Bornstein
Journal:  Dentomaxillofac Radiol       Date:  2019-08-14       Impact factor: 2.419

8.  The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images.

Authors:  Shoaleh Shahidi; Ehsan Bahrampour; Elham Soltanimehr; Ali Zamani; Morteza Oshagh; Marzieh Moattari; Alireza Mehdizadeh
Journal:  BMC Med Imaging       Date:  2014-09-16       Impact factor: 1.930

