Michael D Ringler, Brian C Goss, Brian J Bartholmai.
Abstract
Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 different radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialties, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3992 (1.9%) were material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). The proportion of errors and the fraction of material errors varied significantly among radiologists and between imaging subspecialties (p < .001). Errors were more common in cross-sectional reports, reports reinterpreting results of outside examinations, and procedural studies (all p < .001). Error rate decreased over time (p < .001), which suggests that a quality control program with regular feedback may reduce errors.
Keywords: PowerScribe; quality control; radiology report; report errors; speech recognition
Year: 2016 PMID: 26635322 DOI: 10.1177/1460458215613614
Source DB: PubMed Journal: Health Informatics J ISSN: 1460-4582 Impact factor: 2.681