Abin Abraham1, Nicholas L Kavoussi2, Wilson Sui2, Cosmin Bejan3, John A Capra1,4,5, Ryan Hsi2. 1. Department of Biological Sciences, Vanderbilt Genetics Institute, and Center for Structural Biology, Vanderbilt University, Nashville, Tennessee, USA. 2. Department of Urology, Vanderbilt University Medical Center, Nashville, Tennessee, USA. 3. Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA. 4. Bakar Computational Health Sciences Institute, University of California, San Francisco, California, USA. 5. Department of Epidemiology and Biostatistics, University of California, San Francisco, California, USA.
Abstract
Objectives: To assess the accuracy of machine learning models in predicting kidney stone composition using variables extracted from the electronic health record (EHR).

Materials and Methods: We identified kidney stone patients (n = 1296) with both stone composition and 24-hour (24H) urine testing. We trained machine learning models (XGBoost [XG] and logistic regression [LR]) to predict stone composition using 24H urine data and EHR-derived demographic and comorbidity data. Models predicted either binary (calcium vs noncalcium stone) or multiclass (calcium oxalate, uric acid, hydroxyapatite, or other) stone types. We evaluated performance using area under the receiver operating characteristic curve (ROC-AUC) and accuracy, and we identified the top predictors for each task.

Results: For discriminating binary stone composition, XG outperformed LR with higher accuracy (91% vs 71%), with a ROC-AUC of 0.80 for both models. The top predictors used by these models were the supersaturations of uric acid and calcium phosphate and urinary ammonium. For multiclass classification, LR outperformed XG in both accuracy (64% vs 56%) and ROC-AUC (0.79 vs 0.59), and urine pH had the highest predictive utility. Overall, 24H urine analyte data contributed more to the models' predictions of stone composition than EHR-derived variables.

Conclusion: Machine learning models can predict calcium stone composition. LR outperforms XG in multiclass stone classification. Demographic and comorbidity data alone are predictive of stone composition; however, including 24H urine data improves performance. Further optimization could enable earlier directed medical therapy for kidney stone patients.
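The evaluation setup described in the abstract can be sketched as follows. This is an illustrative Python sketch, not the study's code: the data are synthetic, the feature names (supersaturation of uric acid, supersaturation of calcium phosphate, urinary ammonium) merely echo the abstract's top predictors, and scikit-learn's GradientBoostingClassifier stands in for XGBoost. The comparison by ROC-AUC mirrors the paper's binary (calcium vs noncalcium) task.

```python
# Illustrative sketch only: synthetic data, hypothetical feature names,
# and GradientBoostingClassifier as a stand-in for XGBoost.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Columns: [ss_uric_acid, ss_calcium_phosphate, urinary_ammonium] (synthetic).
X = rng.normal(size=(n, 3))
# Simulate a binary label (1 = calcium stone) from a logistic link.
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, model in [
    ("gradient boosting (XGBoost stand-in)", GradientBoostingClassifier(random_state=0)),
    ("logistic regression", LogisticRegression()),
]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    results[name] = auc
    print(f"{name}: ROC-AUC = {auc:.2f}")
```

On real 24H urine and EHR data, the same loop would swap in an XGBoost classifier and a multiclass target, with `roc_auc_score(..., multi_class="ovr")` for the four-way task.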