Maria A Woodward1,2, Nenita Maganti1,3, Leslie M Niziol1, Sejal Amin4, Andrew Hou4, Karandeep Singh2,5. 1. Department of Ophthalmology and Visual Sciences, W. K. Kellogg Eye Center, University of Michigan, Ann Arbor, MI. 2. Institute for Healthcare Policy and Innovation, University of Michigan, Ann Arbor, MI. 3. Feinberg School of Medicine, Northwestern University, Chicago, IL. 4. Department of Ophthalmology, Henry Ford Health System, Detroit, MI. 5. Departments of Learning Health Systems and Internal Medicine, University of Michigan, Ann Arbor, MI.
Abstract
PURPOSE: The purpose of this article was to develop and validate a natural language processing (NLP) algorithm to extract qualitative descriptors of microbial keratitis (MK) from electronic health records. METHODS: In this retrospective cohort study, patients with MK diagnoses from 2 academic centers were identified using electronic health records. An NLP algorithm was created to extract MK centrality, depth, and thinning. A random sample of MK patient encounters (400 encounters from 100 patients) was used to train the algorithm and compared with expert chart review. The algorithm was evaluated in internal (n = 100) and external (n = 59) validation data sets in comparison with masked chart review. Outcomes were sensitivity and specificity of the NLP algorithm to extract qualitative MK features as compared with masked chart review performed by an ophthalmologist. RESULTS: Across data sets, gold-standard chart review found centrality was documented in 64.0% to 79.3% of charts, depth in 15.0% to 20.3%, and thinning in 25.4% to 31.3%. Compared with chart review, the NLP algorithm had a sensitivity of 80.3%, 50.0%, and 66.7% for identifying central MK; 85.4%, 66.7%, and 100% for deep MK; and 100.0%, 95.2%, and 100% for thin MK, in the training, internal, and external validation samples, respectively. Specificity was 41.1%, 38.6%, and 46.2% for centrality; 100%, 83.3%, and 71.4% for depth; and 93.3%, 100%, and not applicable (n = 0) in the external data for thinning, respectively. CONCLUSIONS: MK features are not documented consistently, showing a lack of standardization in recording MK examination elements. NLP shows promise but will be limited when clinical data are missing from the chart.
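The paper's actual extraction rules are not reproduced in the abstract. As an illustration only, a rule-based extractor for qualitative MK features and the sensitivity/specificity evaluation against chart review could be sketched as follows; the keyword patterns here are hypothetical stand-ins, not the authors' algorithm.

```python
import re

# Hypothetical keyword patterns for each MK feature (centrality, depth,
# thinning). These are illustrative only; the study's rules are not shown.
PATTERNS = {
    "central": re.compile(r"\b(central|paracentral)\b", re.I),
    "deep": re.compile(r"\b(deep|posterior stroma\w*)\b", re.I),
    "thin": re.compile(r"\bthin(ning|ned)?\b", re.I),
}

def extract_features(note: str) -> dict:
    """Flag each qualitative MK feature mentioned in a free-text exam note."""
    return {name: bool(p.search(note)) for name, p in PATTERNS.items()}

def sensitivity_specificity(predicted, gold):
    """Compare algorithm output with chart-review labels (lists of bools)."""
    tp = sum(p and g for p, g in zip(predicted, gold))
    tn = sum((not p) and (not g) for p, g in zip(predicted, gold))
    fp = sum(p and (not g) for p, g in zip(predicted, gold))
    fn = sum((not p) and g for p, g in zip(predicted, gold))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec
```

As the abstract notes, specificity for thinning was "not applicable (n = 0)" in the external set; the guard clauses above return NaN in exactly that situation, when a feature has no negative (or no positive) gold-standard cases.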