Ying Zhu (1), Gyorgy J Simon (2), Elizabeth C Wick (3), Yumiko Abe-Jones (4), Nader Najafi (4), Adam Sheka (5), Roshan Tourani (1), Steven J Skube (5), Zhen Hu (1), Genevieve B Melton (6)
1. Institute for Health Informatics, University of Minnesota, Twin Cities, Minneapolis, MN.
2. Institute for Health Informatics, University of Minnesota, Twin Cities, Minneapolis, MN; Department of Medicine, University of Minnesota, Twin Cities, Minneapolis, MN.
3. Surgery, University of Minnesota, Twin Cities, Minneapolis, MN.
4. Department of Surgery, University of California San Francisco, San Francisco, CA.
5. Medicine, University of California San Francisco, San Francisco, CA.
6. Institute for Health Informatics, University of Minnesota, Twin Cities, Minneapolis, MN; Medicine, University of California San Francisco, San Francisco, CA. Electronic address: gmelton@umn.edu.
Abstract
BACKGROUND: Surgical complications have tremendous consequences and costs. Complication detection is important for quality improvement, but traditional manual chart review is burdensome, and automated mechanisms are needed to make it more efficient. To assess the between-site generalizability of a machine learning algorithm, automated surgical site infection (SSI) detection algorithms developed at one center were tested at a second, distinct center.
STUDY DESIGN: NSQIP patients had electronic health record (EHR) data extracted at one center (University of Minnesota Medical Center, Site A) over a 4-year period for model development and internal validation, and at a second center (University of California San Francisco, Site B) over a subsequent 2-year period for external validation. Models for automated NSQIP detection of superficial, organ/space, and total SSI within 30 days postoperatively were validated using area under the curve (AUC) scores and corresponding 95% confidence intervals (CIs).
RESULTS: For the 8,883 patients (Site A) and 1,473 patients (Site B), AUC scores were not statistically different for any outcome: superficial SSI (external AUC 0.804; internal 95% CI, 0.784-0.874), organ/space SSI (external AUC 0.905; internal 95% CI, 0.867-0.941), and total SSI (external AUC 0.855; internal 95% CI, 0.854-0.908). False negative rates decreased with increasing case review volume and would be amenable to a strategy in which cases with low predicted probabilities of SSI are excluded from chart review.
CONCLUSIONS: SSI detection machine learning algorithms developed at one site generalized to another institution. SSI detection models are practically applicable to accelerate and focus chart review.
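The evaluation and triage strategy described in the abstract — rank-based AUC scoring of a detection model, and excluding cases below a predicted-probability threshold from chart review while tracking how many true SSIs would be missed — can be sketched on synthetic data. Everything below (the scoring distribution, the 5% SSI rate, and the threshold values) is an illustrative assumption, not the study's actual model or cohort:

```python
import random

def auc(labels, scores):
    """Rank-based AUC: probability a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def review_workload(labels, scores, threshold):
    """Cases at/above threshold go to chart review; count SSIs missed below it."""
    reviewed = sum(1 for s in scores if s >= threshold)
    missed = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    return reviewed, missed

random.seed(0)
# Synthetic cohort: ~5% SSI rate; positives score higher on average.
labels = [1 if random.random() < 0.05 else 0 for _ in range(2000)]
scores = [min(1.0, max(0.0, random.gauss(0.6 if y else 0.3, 0.15))) for y in labels]

print(f"AUC = {auc(labels, scores):.3f}")
for t in (0.1, 0.3, 0.5):
    reviewed, missed = review_workload(labels, scores, t)
    print(f"threshold {t}: review {reviewed} charts, miss {missed} SSIs")
```

Raising the threshold shrinks the chart-review workload but can only increase the number of missed SSIs, which mirrors the trade-off the authors report: false negative rates fall as case review volume rises.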