Literature DB >> 32677567

Automated Measurement of Lumbar Lordosis on Radiographs Using Machine Learning and Computer Vision.

Brian H Cho1,2, Deepak Kaji1,2, Zoe B Cheung1, Ivan B Ye1, Ray Tang1, Amy Ahn1, Oscar Carrillo1, John T Schwartz1, Aly A Valliani1, Eric K Oermann1, Varun Arvind1, Daniel Ranti1, Li Sun1, Jun S Kim1, Samuel K Cho1.   

Abstract

STUDY DESIGN: Cross-sectional database study.
OBJECTIVE: To develop a fully automated artificial intelligence and computer vision pipeline for assisted evaluation of lumbar lordosis.
METHODS: Lateral lumbar radiographs were used to develop a segmentation neural network (n = 629). After synthetic augmentation, 70% of these radiographs were used for network training, while the remaining 30% were used for hyperparameter optimization. A computer vision algorithm was deployed on the segmented radiographs to calculate lumbar lordosis angles. A test set of radiographs was used to evaluate the validity of the entire pipeline (n = 151).
RESULTS: The U-Net segmentation achieved a test dataset Dice score of 0.821, an area under the receiver operating characteristic curve of 0.914, and an accuracy of 0.862. The computer vision algorithm identified the L1 and S1 vertebrae on 84.1% of the test set at an average speed of 0.14 seconds/radiograph. Of the 151 test set radiographs, 50 were randomly chosen for surgeon measurement. Compared with those measurements, our algorithm achieved a mean absolute error of 8.055° and a median absolute error of 6.965° (not statistically significant, P > .05).
CONCLUSION: This study is the first to use artificial intelligence and computer vision in a combined pipeline to rapidly measure a sagittal spinopelvic parameter without prior manual surgeon input. The pipeline measures angles with no statistically significant differences from manual measurements by surgeons. This pipeline offers clinical utility in an assistive capacity, and future work should focus on improving segmentation network performance.


Keywords:  angle measurement; artificial intelligence; computer-assisted; lordosis; lumbar; machine learning; neural networks; radiographic image interpretation; radiography; sagittal balance; spinopelvic parameters

Year:  2019        PMID: 32677567      PMCID: PMC7359685          DOI: 10.1177/2192568219868190

Source DB:  PubMed          Journal:  Global Spine J        ISSN: 2192-5682


Introduction

Spinal alignment is increasingly being recognized as a key quantitative assessment of spinal health and is associated with various spinal disorders such as adolescent idiopathic scoliosis, adult spinal deformity, and degenerative spondylolisthesis.[1-4] Malalignment, and the resulting compensatory response to maintain upright posture, places additional strain on key spinal, pelvic, and lower extremity structures that can cause arthritis and pain.[5,6] Restoring proper alignment in the coronal and sagittal planes is therefore essential for improving biomechanical efficiency and preventing further progression of disease.[5,7,8] In clinical practice, preoperative radiographic assessment of spinal alignment is conducted by measuring key angles and distances using various landmarks and comparing them with established alignment targets.[3,6] Poor sagittal alignment in particular has recently been associated with negative health-related quality of life (HRQoL).[2,9] Sagittal alignment is characterized by 5 key radiographic parameters: cervical lordosis (CL), thoracic kyphosis (TK), pelvic incidence minus lumbar lordosis (PI-LL), sagittal vertical axis (SVA), and pelvic tilt (PT).[3,6] While surgeons require these manually acquired radiographic measurements for presurgical planning, the process is both time-consuming and prone to rater-dependent error.[10,11] Furthermore, previous studies have demonstrated significant differences between standing and supine angle measurements.[4,12] Intraoperative surgical measurements better reflect postoperative sagittal balance because rigid fixation from instrumentation can prevent the passive corrections in spinopelvic parameters that occur in the supine position.[13] Therefore, automated tools for rapid intraoperative calculation of sagittal parameters may be useful for evaluating alignment correction and improving surgical decision making.
Machine learning, deep learning in particular, is being deployed on medical data to triage patients, automate preoperative planning, and predict outcomes for surgeons.[14-16] Within the field of orthopedics, segmentation using neural networks is a particularly promising technique for automatically identifying bony structures in medical images. Spurred by advances in computer vision techniques, multiple groups have attempted to apply artificial intelligence and other computational methods to radiographs to combat the numerous sources of variability inherent to radiographic angle measurement. While many studies have attempted coronal Cobb angle calculations, they required a large amount of a priori annotation by physicians on the input radiographs. This level of input may help reduce variability but has little intraoperative value and precludes robust spinal curvature assessment at scale.[17,18] Other groups attempted to work directly on unannotated radiographs to identify landmarks but did not measure any sagittal angles.[19,20] The purpose of this study was to develop a novel, fully automated machine learning pipeline that reliably measures lumbar lordosis (LL) from radiographic images. Our methods do not require any a priori feature engineering or landmark identification. We report a combined segmentation and computer vision pipeline that measures lumbar lordosis in 0.14 seconds with potential perioperative value.

Materials and Methods

Materials

A total of 780 radiographs were collected from patients who received a lateral lumbar X-ray at our orthopedics department over a 1-year period. All radiographs were standardized and taken by an X-ray technician with the patient standing in a neutral, weight-bearing position. Only 1 radiograph was selected per patient to reduce the potential bias of training the model on multiple X-rays from a single patient. Any patients with prior spine surgery or spine instrumentation were excluded. Standing lateral X-rays were used because they are higher quality and better standardized than intraoperative X-rays, increasing the probability of successfully training the model. Binary masks were generated by manually annotating every vertebral body in each radiograph using Photoshop (Adobe Systems, San Jose, CA) (Figure 1). This study was approved by our institutional review board.
Figure 1.

Example of a raw radiograph and its corresponding manually generated binary mask.


Model Training and Optimization

Radiographs were preprocessed using adaptive histogram equalization to improve contrast and normalize signal intensity.[21] The dataset was split into 629 learning images (80% of the total data) and 151 test images (20%) reserved for final performance testing. The learning data were then synthetically augmented to 12,580 images using a custom script and further split into 70% training and 30% validation data (Figure 2). The augmentations included flipping, randomly rotating, and randomly cropping the radiographs to incorporate the natural variations inherent to clinical radiographs into the training dataset.
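The preprocessing and augmentation steps above can be sketched in a few lines. This is a minimal illustration using NumPy/SciPy, not the authors' custom script; the function names, rotation range, and crop margins are our assumptions (a simple global histogram equalization stands in for the adaptive variant):

```python
import numpy as np
from scipy import ndimage

def equalize(img, nbins=256):
    """Histogram equalization (simplified, global stand-in for CLAHE)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins,
                               range=(img.min(), img.max() + 1e-8))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                   # normalize CDF to [0, 1]
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

def augment(img, rng):
    """One synthetic variant: random flip, rotation, and crop-zoom."""
    out = np.fliplr(img) if rng.random() < 0.5 else img
    out = ndimage.rotate(out, rng.uniform(-10, 10),  # small random tilt (deg)
                         reshape=False, mode="nearest")
    m = int(rng.integers(0, img.shape[0] // 10 + 1)) # random crop margin
    out = out[m:out.shape[0] - m, m:out.shape[1] - m]
    zoom = (img.shape[0] / out.shape[0], img.shape[1] / out.shape[1])
    return ndimage.zoom(out, zoom)                   # resize back to original

rng = np.random.default_rng(0)
radiograph = rng.random((64, 64))                    # toy image stand-in
sample = augment(equalize(radiograph), rng)
```

In practice, each of the 629 learning images would be passed through `augment` repeatedly (here, roughly 20 variants per image) to reach the reported 12,580 training images.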
Figure 2.

Overview of data workflow for training and testing the U-Net. Augmentation involved flips, random rotations, and random zooms. Each dataset was randomized prior to splitting.

We utilized U-Net, a well-established convolutional neural network (CNN) architecture for segmentation, to generate bone segmentations of the radiographs by optimizing Dice similarity coefficient (DSC) loss.[22,23] The DSC is a standard metric for evaluating segmentation accuracy that measures the overlap between CNN-generated segmentations and manually generated masks. While a loss function defined by simple pixel accuracy may be more intuitive, the DSC outperforms these more naive approaches for segmentation problems.[23] The final model was trained with a batch size of 20 on an NVIDIA GeForce GTX 1080-Ti GPU for 200 epochs over 24.4 hours. Beginning with raw radiographs, the algorithm first segments the image. It then automatically identifies the L1 and S1 vertebrae from the segmentation and approximates their superior endplates to calculate the LL angle (Figure 3). Failure to properly identify L1 and S1 was treated as an algorithm failure, and no Cobb angle was measured for these cases. The algorithm was written in Python (version 3.5) and Keras (version 2.2.0).[24,25]
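As a concrete illustration of the DSC loss being optimized, a minimal NumPy sketch (the `smooth` term, a common stabilizer for empty masks, is our assumption rather than a detail reported by the authors):

```python
import numpy as np

def dice_coefficient(pred, target, smooth=1.0):
    """Dice similarity coefficient between a predicted probability map and a
    binary ground-truth mask: 2|A ∩ B| / (|A| + |B|)."""
    p, t = pred.ravel(), target.ravel()
    intersection = np.sum(p * t)
    return (2.0 * intersection + smooth) / (np.sum(p) + np.sum(t) + smooth)

def dice_loss(pred, target):
    """Loss minimized during segmentation training: 1 - DSC."""
    return 1.0 - dice_coefficient(pred, target)

mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                       # toy ground-truth vertebral mask
print(dice_coefficient(mask.copy(), mask)) # perfect overlap → 1.0
```

Unlike plain pixel accuracy, the DSC is insensitive to the large background regions of a radiograph, so a network cannot score well simply by predicting "no bone" everywhere.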
Figure 3.

Overview of algorithm workflow for automatic lumbar lordosis angle calculation. (A) The raw radiograph is captured and preprocessed. Bony segmentation is generated from the raw radiograph with the trained U-Net. The L1 and S1 slopes are identified from the segmented image with a computer vision algorithm. (B) Overlay of the L1 and S1 slopes on the raw radiograph demonstrates proper slope placement and accurate angle estimation.

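Once the two superior endplates have been fitted, the angle step in the workflow above reduces to elementary trigonometry. A sketch under the assumption that each endplate is summarized by a fitted line slope (dy/dx) in image coordinates; the slope values below are made up for illustration:

```python
import math

def lordosis_angle(slope_l1, slope_s1):
    """Lumbar lordosis as the angle (in degrees) between the L1 and S1
    superior-endplate lines, given their slopes in image coordinates."""
    return abs(math.degrees(math.atan(slope_s1) - math.atan(slope_l1)))

# Hypothetical endplates tilted -15° (L1) and +40° (S1) from horizontal:
ll = lordosis_angle(math.tan(math.radians(-15)),
                    math.tan(math.radians(40)))  # ≈ 55°
```

This is the standard Cobb construction: the LL angle is the angle subtended between the L1 and S1 superior-endplate lines.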

Statistical Analysis

Segmentation performance was evaluated using the DSC and the area under the receiver operating characteristic curve (AUC). The algorithm-generated angles were compared with manual angle measurements from a chief resident (JK), a spine fellow (SL), and an attending surgeon (SKC) on 50 randomly selected radiographs from the test dataset using Welch's 2-sample t test. Each surgeon measured every radiograph twice over a 2-week period to evaluate intra- and interrater reliability using the intraclass correlation coefficient (ICC) with a 2-way random effects model.[26] Measurements from surgeon 3 (SKC) were used as the gold standard for comparison. All statistical analysis was performed with R (version 3.4.4).[27]
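The comparison test can be reproduced in outline. The paper used R, but the same Welch's 2-sample t test (which does not assume equal variances) is available in SciPy; the angle arrays below are synthetic placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-ins for 42 paired LL measurements (degrees).
surgeon_angles = rng.normal(loc=45.0, scale=12.0, size=42)
algorithm_angles = surgeon_angles + rng.normal(loc=0.0, scale=8.0, size=42)

# Welch's 2-sample t test: equal_var=False selects the Welch variant.
t_stat, p_value = stats.ttest_ind(algorithm_angles, surgeon_angles,
                                  equal_var=False)
print(f"t = {t_stat:.3f}, P = {p_value:.3f}")
```

A P value above .05 here would mirror the study's finding of no statistically significant difference between algorithm and surgeon measurements.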

Results

Automatic Segmentation of Vertebral Bodies

The final U-Net achieved a training DSC of 0.966 and validation DSC of 0.923 with an overall training time of approximately 24.4 hours. The U-Net performed well on segmenting the test dataset, with a test DSC of 0.821, AUC of 0.914, and accuracy of 0.862 (Figure 4).
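The reported DSC, AUC, and accuracy can all be computed directly from pixel-level predictions. A sketch assuming a thresholded probability map, with the AUC obtained via the Mann-Whitney rank identity rather than any particular library:

```python
import numpy as np
from scipy import stats

def pixel_accuracy(pred, target, thresh=0.5):
    """Fraction of pixels whose thresholded prediction matches the mask."""
    return np.mean((pred.ravel() >= thresh) == (target.ravel() >= 0.5))

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    ranks = stats.rankdata(scores)
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy example: 4 pixels, 2 foreground (label 1) and 2 background (label 0).
labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
auc = roc_auc(scores, labels)  # → 0.75
```

The AUC summarizes ranking quality across all thresholds, which is why it can exceed the single-threshold accuracy (0.914 vs 0.862 on the test set here).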
Figure 4.

Receiver operating characteristic (ROC) of the U-Net for the test dataset. The overall test area under the ROC curve (AUC) was 0.914 and the overall test accuracy was 0.862. The dotted line denotes AUC = 0.50.


Calculation of Lumbar Lordosis Angle From Segmentations

The algorithm measured LL Cobb angles for 127 of the 151 radiographs in the test dataset, an overall success rate of 84.1%, at an average speed of 0.14 seconds/radiograph. From the 151-image test set, 50 radiographs were randomly chosen for surgeon measurement; of these, the algorithm succeeded in measuring angles for 42 images, a success rate of 84%. Manual measurements among the surgeons demonstrated excellent intra- and interrater reliability, consistent with previous studies evaluating surgeon reliability of radiographic spine angle measurements, with an overall ICC of 0.958 (95% CI: 0.931-0.976).[10,11,28-30] The intrarater ICCs for surgeons 1, 2, and 3 were 0.933 (95% CI: 0.878-0.963), 0.984 (95% CI: 0.970-0.991), and 0.970 (95% CI: 0.945-0.984), respectively. The algorithm had higher variability in the mean absolute difference (MAD) from gold standard measurements, with a standard deviation of 12.989°, compared with 3.232° and 3.152° for surgeons 1 (JK) and 2 (SL), respectively (Table 1). However, the algorithm still achieved good accuracy, with a median absolute angle difference of 8.055° relative to the overall surgeon average, which was not statistically different (P = .372). Relative to the gold standard measurements, the median absolute angle difference was 6.965°, also not statistically different (P = .161).
Table 1.

Absolute Angle Difference Performance Metrics.

Operator      Minimum   Q1      Median   Mean     Q3       Maximum   SD       P^a
Relative to gold standard (deg)
  Algorithm   0.668     3.810   6.965    13.441   21.857   50.528    12.989   .161
  Surgeon 1   0.300     1.762   4.050     4.474    6.650   14.000     3.232   .224
  Surgeon 2   0.100     1.375   3.050     3.529    4.825   18.400     3.152   .460
Relative to overall surgeon average (deg)
  Algorithm   0.187     3.815   8.055    13.069   19.834   54.395    13.126   .372

a P values computed using Welch’s 2-sample t test.

The sorted bar plot of the raw angle differences between the algorithm and gold standard measurements demonstrated higher rates of overestimation than underestimation (Figure 5). A subpopulation analysis revealed much lower variation in absolute angle difference in images with 6 segmented vertebral bodies compared with those with 7 or more (Figure 6).
Figure 5.

Sorted bar plot of predicted angle error compared to the gold standard measurements (n = 42). The algorithm overestimated in 26 radiographs and underestimated in 16 radiographs.

Figure 6.

Box-whisker plot of absolute angle difference for radiographs with 6 and 7+ vertebral bodies segmented.


Discussion

Accurate measurement of radiographic parameters is essential for proper assessment of sagittal alignment and surgical planning.[31,32] The present study demonstrates the first fully automated system for the assessment of sagittal alignment on routine lumbar imaging. The pipeline demonstrates strong segmentation quality (assessed by DSC) and accurate spinopelvic measurement when compared with 3 orthopedic surgeons. While the advent of digital radiographs and computer-assisted measurement software has simplified the process of acquiring these parameters, they still require extensive manual input from the surgeon, increasing demands on the surgeon and introducing potential for interrater variability.[28,30,33] Previous work on automatic extraction of spinal parameters focused mostly on computed tomography and magnetic resonance data, which contain higher-resolution data with less noise and allow for 3-dimensional reconstruction.[34-37] These imaging modalities allow for high-fidelity segmentations and measurement of additional parameters such as apical vertebral rotation, but they are not routinely used for monitoring or intraoperative imaging because of radiation exposure and high cost, limiting their clinical utility.[38-40] Therefore, this study aimed to evaluate the effectiveness of a fully automated, rapid LL Cobb angle measuring algorithm on lateral lumbar radiographs.
Previous groups have utilized machine learning models such as faster region-based convolutional neural networks (faster R-CNN) as well as traditional computer vision techniques to automatically localize the spine on radiographs.[17,41-44] However, many of these studies were limited by small sample sizes, owing to the lack of a widely available source of labeled data, and utilized single vertebra-level segmentation, increasing the complexity of the model and thus the potential for error.[17,41] One systematic review of Cobb angle measurement also noted that all previous computerized approaches, even those deemed "automatic," require landmark identification or other user input to generate each measurement, reducing scalability.[45] Recent work by Al Arif et al[46] demonstrated how a high-performance cervical vertebra segmentation algorithm that achieved an average DSC of 0.944 suffered a decrease in performance to a DSC of 0.840 when integrated into a fully automatic workflow, owing to accumulation of error in the multimodel pipeline. In comparison, our model segmented the entire radiograph at once and still achieved a comparable test DSC of 0.821 on lateral lumbar radiographs, which contain additional physiological radiopaque artifacts, such as bowel gas and panniculus, that increase segmentation difficulty. Most of the algorithm failures were due to segmentation of too few vertebral bodies (<5 lumbar + 1 sacrum) or missegmentation of separate vertebral bodies as a single, fused body (Figure 7). The importance of segmentation quality was also demonstrated in our subpopulation analysis (Figure 6), where the variability in absolute angle difference was much higher in radiographs with a higher than expected number of segmented vertebral bodies (7+ vs 6).
Figure 7.

Examples of U-Net segmentation failures. (A) Image and corresponding segmentation characterized by L1 segmentation failure. (B) Image and corresponding segmentation characterized by fused L2 and L3 vertebrae.

Segmentation allows the algorithm to determine where the vertebral bodies are located in a radiograph, just as spinal surgeons identify the vertebral bodies before identifying the end plates to use for measurement. While this may seem like a trivial task, the large variability in posture and vertebral shape, as well as the presence of various radiopaque artifacts, makes this a difficult problem for traditional computer vision techniques. Classical computer vision algorithms perform robustly on well-constrained visual tasks and require little or no training data, but they tend to fail on images with high complexity such as radiographs. Deep learning algorithms and CNNs, on the other hand, can tolerate higher complexity but require training on large datasets to identify refined features. While our dataset is significantly larger than those reported in other spine segmentation studies, training a neural network to directly predict the angle from raw radiographs would require an unreasonably large dataset. Segmentation networks are therefore necessary to reduce the complexity of the input image so that robust computer vision algorithms can be applied. Further improving the robustness of segmentation will be essential for improving the performance of future algorithms, as the computer vision techniques rely on the algorithm-generated segmentation to determine which landmarks to use for measurement. Our overall median absolute angle difference of 8.055° is larger than the error margins of the surgeons (Table 1).
While the t test showed that the algorithm measurements were not significantly different from surgeon measurements, demonstrating good accuracy, the standard deviation was much higher for the algorithm, demonstrating lower precision. Given this lower precision relative to surgeon measurement, the algorithm may find perioperative clinical utility in an assistive capacity. We envision that it could be integrated into manual tools for digital radiograph measurement to provide a visualized default measurement suggestion similar to that shown in Figure 8. The surgeon could then adjust the interactive measurement visualization as needed from the automatically generated starting point, reducing surgeon input compared with fully manual measurement. Even when the measurement suggestion is inaccurate, the algorithm often still succeeds in locating one of the necessary end plates (Figure 8B). Thus, deployment of this algorithm could provide clinical utility despite its lower precision by bridging manual and fully automated measurement. Before the measurement of PI − LL can be fully automated with clinically acceptable error, the absolute error and standard deviation must be reduced until they are small enough not to affect management. While there have been attempts at automating the measurement of Cobb angles in adolescent idiopathic scoliosis,[17,47,48] there is an unfortunate lack of literature on automating the measurement of other radiographic parameters, such as pelvic incidence and sagittal alignment, that are important for grading and surgical planning.[49] Further work in providing near real-time measurements from radiographs may increase the utility of these measurements in the outpatient as well as pre-, intra-, and postoperative settings, leading to improved screening and/or surgical outcomes.
Figure 8.

Computer-generated visualizations of end plate location and angle measurement for accurate and inaccurate algorithm measurements. (A) Accurate algorithm measurements corresponding to gold standard measurements of 61.1° (left) and 32.8° (right). (B) Inaccurate algorithm measurements corresponding to gold standard measurements of 37.7° (left) and 66.4° (right).

This study has several limitations. First, radiographs with any implants were excluded. This exclusion criterion simplified the process of generating masks, which allowed us to create a database large enough for robust training, optimization, and testing. However, it limits the utility of our tool in the setting of postoperative follow-up or revision, as a U-Net trained solely on radiographs without implants would most likely include the highly radiopaque implant in the segmentation. A large number of image/mask pairs with advanced augmentation techniques may be necessary to account for the high variability in the types of implants used and the levels at which they are placed.[46] Second, all the radiographic data originated from a single hospital system, which may not reflect the differences in X-ray machines and acquisition techniques used at other institutions that could introduce additional artifacts into the radiographs. Additionally, this study did not stratify the radiographs by severity of adult spinal deformity, which has been shown to increase the MAD for the LL Cobb angle between human observers.[11] Severe deformities of the vertebral bodies that are not represented in the training data may degrade the quality of the segmentation, decreasing the performance of the algorithm. Finally, our comparison analysis excluded radiographs on which the algorithm failed to identify the L1 and S1 vertebrae. This may have biased our analysis, as we do not know how the algorithm would have performed on them had the segmentation quality been higher.

Conclusion

This is the first published fully automatic algorithm that measures the LL Cobb angle on lateral lumbar radiographs. Deep learning, in combination with computer vision, is a promising tool for automating the measurement of various radiographic parameters. Our algorithm measures the LL Cobb angle accurately, producing values that are not statistically different from manual measurements made by surgeons and suggesting potential clinical utility in an assistive capacity. As techniques in bone segmentation and computer vision improve, these tools may prove useful in preoperative and intraoperative surgical decision making. Future work should focus on improving the quality of segmentation to increase the reliability of measurements. We envision this will be possible with multilabel segmentation networks that incorporate the femoral heads, which would allow calculation of additional important parameters such as pelvic tilt, pelvic incidence, and PI − LL.