BACKGROUND: Identifying pneumonia using diagnosis codes alone may be insufficient for research on clinical decision-making. Natural language processing (NLP) may enable the inclusion of cases missed by diagnosis codes. OBJECTIVES: This article (1) develops an NLP tool that identifies the clinical assertion of pneumonia in physician emergency department (ED) notes, and (2) compares classification by diagnosis codes versus NLP against a gold standard of manual chart review to identify patients initially treated for pneumonia. METHODS: From a national population of ED visits occurring between 2006 and 2012 across the Veterans Affairs health system, we extracted 811 physician documents containing search terms for pneumonia for training and 100 random documents for validation. Two reviewers annotated span- and document-level classifications of the clinical assertion of pneumonia. An NLP tool using a support vector machine was trained on the enriched documents. We extracted diagnosis codes assigned in the ED and at hospital discharge and calculated performance characteristics of diagnosis codes, NLP, and NLP plus diagnosis codes against manual review in the training and validation sets. RESULTS: Among the training documents, 51% contained clinical assertions of pneumonia; in the validation set, 9% were classified with pneumonia, of which 100% contained pneumonia search terms. After enrichment with search terms, the NLP system alone demonstrated a recall/sensitivity of 0.72 (training) and 0.55 (validation) and a precision/positive predictive value (PPV) of 0.89 (training) and 0.71 (validation). ED-assigned diagnosis codes demonstrated lower recall/sensitivity (0.48 and 0.44) but higher precision/PPV (0.95 in training, 1.0 in validation); the NLP system identified more "possible-treated" cases than diagnostic coding. An approach combining NLP and ED-assigned diagnosis code classifications achieved the best performance (sensitivity 0.89 and PPV 0.80).
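As a rough illustration of the document-level classifier described above, the sketch below trains a support vector machine on toy stand-in note snippets. The feature representation (TF-IDF unigrams and bigrams) and the library (scikit-learn) are assumptions; the abstract does not specify them, and the real annotated training corpus is protected clinical text.

```python
# Hypothetical sketch of SVM-based document classification of the clinical
# assertion of pneumonia. Features and library are assumptions, not the
# study's actual configuration; snippets are invented stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for span-annotated ED note text.
train_docs = [
    "chest xray shows right lower lobe infiltrate consistent with pneumonia",
    "started ceftriaxone and azithromycin for community acquired pneumonia",
    "no infiltrate on imaging, pneumonia ruled out",
    "cough and fever, viral bronchitis suspected, no pneumonia",
]
train_labels = [1, 1, 0, 0]  # 1 = clinical assertion of pneumonia present

# TF-IDF bag-of-words (uni/bigrams) feeding a linear SVM.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(train_docs, train_labels)

pred = clf.predict(["chest xray shows infiltrate, started ceftriaxone for pneumonia"])
print(pred[0])  # 1 = document asserts pneumonia
```

In the study the classifier was applied only to documents pre-filtered by pneumonia search terms, which is why enrichment matters: the validation set shows that documents without search terms contributed no pneumonia cases.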
CONCLUSION: System-wide application of NLP to clinical text can increase capture of initial diagnostic hypotheses, an important inclusion when studying diagnosis and clinical decision-making under uncertainty. Schattauer GmbH Stuttgart.
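The combined NLP-plus-diagnosis-code classifier reported above can be pictured as a simple logical OR over the two binary classifications, scored against manual chart review. The OR rule and the toy labels below are illustrative assumptions, not the study's actual combination method or data.

```python
# Hedged sketch: pooling NLP and ED diagnosis-code classifications with a
# logical OR, then scoring sensitivity (recall) and PPV (precision) against
# manual review. All labels are invented for illustration.
def sensitivity_ppv(pred, gold):
    """Sensitivity = TP/(TP+FN); PPV = TP/(TP+FP)."""
    tp = sum(p and g for p, g in zip(pred, gold))
    fn = sum((not p) and g for p, g in zip(pred, gold))
    fp = sum(p and (not g) for p, g in zip(pred, gold))
    sens = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sens, ppv

gold     = [1, 1, 1, 1, 0, 0, 0, 0]  # manual chart review (reference standard)
nlp      = [1, 1, 0, 0, 1, 0, 0, 0]  # NLP assertion classifier
ed_codes = [1, 0, 1, 0, 0, 0, 0, 0]  # ED-assigned diagnosis codes

# Each source misses cases the other catches; OR-ing them raises sensitivity.
combined = [a or b for a, b in zip(nlp, ed_codes)]
print(sensitivity_ppv(combined, gold))  # (0.75, 0.75) on these toy labels
```

The toy numbers mirror the abstract's pattern: either source alone has sensitivity 0.5 here, while the union reaches 0.75, at some cost to precision relative to codes alone.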