Hanna Suominen1, Maree Johnson2, Liyuan Zhou3, Paula Sanchez4, Raul Sirel5, Jim Basilakis6, Leif Hanlen7, Dominique Estival8, Linda Dawson9, Barbara Kelly10. 1. Machine Learning Research Group, NICTA, College of Engineering and Computer Science, The Australian National University, Faculty of Health, University of Canberra, and Department of Information Technology, University of Turku, Canberra, Australian Capital Territory, Australia. 2. Research Faculty of Health Sciences, Australian Catholic University, Sydney, New South Wales, Australia. 3. Machine Learning Research Group, NICTA, Canberra, Australian Capital Territory, Australia. 4. Centre for Applied Nursing Research (University of Western Sydney and South Western Sydney Local Health District), Sydney, New South Wales, Australia. 5. Institute of Estonian and General Linguistics, University of Tartu, Tartu, Estonia. 6. School of Computing, Engineering and Mathematics, University of Western Sydney, Sydney, New South Wales, Australia. 7. Machine Learning Research Group, NICTA, College of Engineering and Computer Science, The Australian National University, Faculty of Health, University of Canberra, Canberra, Australian Capital Territory, Australia. 8. The MARCS Institute, University of Western Sydney and Department of Linguistics, University of Sydney, Sydney, New South Wales, Australia. 9. Faculty of Social Sciences, University of Wollongong, Wollongong, New South Wales, Australia. 10. School of Languages and Linguistics, The University of Melbourne, Melbourne, Victoria, Australia.
Abstract
OBJECTIVE: We study the use of speech recognition and information extraction to generate drafts of Australian nursing-handover documents. METHODS: Speech recognition correctness and clinicians' preferences were evaluated using 15 recorder-microphone combinations, six documents, three speakers, Dragon Medical 11, and five survey/interview participants. Information extraction correctness was evaluated using 260 documents, six-class classification of each word, two annotators, and the CRF++ conditional random field toolkit. RESULTS: A noise-cancelling lapel microphone with a digital voice recorder gave the best correctness (79%) and was the option preferred by all but one participant. Although the participants liked the small size of this recorder, they preferred tablets that could also be used for document proofing and sign-off, among other tasks. Accented speech was harder to recognize than native speech, and a male speaker was recognized more accurately than a female speaker. Information extraction was excellent at filtering out irrelevant text (85% F1) and at identifying text relevant to two classes (87% and 70% F1). Mirroring the annotators' disagreements, the remaining three classes were confused with one another, which explains the modest macro-averaged F1 of 62%. DISCUSSION: We present evidence for the feasibility of speech recognition and information extraction to support clinicians in entering text and to unlock its content for computerized decision-making and surveillance in healthcare. CONCLUSIONS: The benefits of this automation include storing all information; making drafts available almost instantly to everyone with authorized access; and avoiding the information loss, delays, and misinterpretations inherent in using a ward clerk or transcription services.
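The macro-averaged F1 reported in the abstract is the unweighted mean of per-class F1 scores, so the large "irrelevant" class cannot mask confusion among the rarer handover classes. A minimal Python sketch of this computation (the labels here are illustrative placeholders, not the paper's actual six handover classes):

```python
def per_class_f1(gold, pred, label):
    """F1 for one label from parallel gold/predicted label sequences."""
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 over the labels seen in the gold data."""
    labels = sorted(set(gold))
    return sum(per_class_f1(gold, pred, lab) for lab in labels) / len(labels)

# Toy three-class example: per-class F1s are 0.5, 0.8, and 2/3.
gold = ["A", "A", "B", "B", "C", "C"]
pred = ["A", "B", "B", "B", "C", "A"]
print(round(macro_f1(gold, pred), 4))  # → 0.6556
```

Because each class contributes equally regardless of its frequency, a few hard-to-separate classes pull the macro average down even when overall word-level accuracy is high, which is consistent with the 62% figure above.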