Polina Kukhareva (1), Catherine Staes (2), Kevin W Noonan (3), Heather L Mueller (4), Phillip Warner (5), David E Shields (6), Howard Weeks (7), Kensaku Kawamoto (8).
1. Department of Biomedical Informatics and Knowledge Management and Mobilization, University of Utah, 421 Wakara Way, Suite #140, Salt Lake City, UT 84108, United States. Electronic address: polina.kukhareva@utah.edu.
2. Department of Biomedical Informatics and Knowledge Management and Mobilization, University of Utah, 421 Wakara Way, Suite #140, Salt Lake City, UT 84108, United States. Electronic address: catherine.staes@hsc.utah.edu.
3. University of Utah Medical Group, 127 S. 500 E., Suite #660, Salt Lake City, UT 84102, United States. Electronic address: kevin.noonan@hsc.utah.edu.
4. University of Utah Medical Group, 127 S. 500 E., Suite #660, Salt Lake City, UT 84102, United States. Electronic address: heather.mueller@hsc.utah.edu.
5. Department of Biomedical Informatics and Knowledge Management and Mobilization, University of Utah, 421 Wakara Way, Suite #140, Salt Lake City, UT 84108, United States. Electronic address: phillip.warner@utah.edu.
6. Department of Biomedical Informatics and Knowledge Management and Mobilization, University of Utah, 421 Wakara Way, Suite #140, Salt Lake City, UT 84108, United States. Electronic address: david.shields@utah.edu.
7. University of Utah Medical Group, 127 S. 500 E., Suite #660, Salt Lake City, UT 84102, United States; Department of Psychiatry, University of Utah, 501 Chipeta Way, Salt Lake City, UT 84108, United States. Electronic address: howard.weeks@hsc.utah.edu.
8. Department of Biomedical Informatics and Knowledge Management and Mobilization, University of Utah, 421 Wakara Way, Suite #140, Salt Lake City, UT 84108, United States. Electronic address: kensaku.kawamoto@utah.edu.
Abstract
OBJECTIVE: To develop evidence-based recommendations for single-reviewer validation of electronic phenotyping results in operational settings. MATERIALS AND METHODS: We conducted a randomized controlled study to evaluate whether electronic phenotyping results should be used to support manual chart review during single-reviewer electronic phenotyping validation (N=3104). We evaluated the accuracy, duration, and cost of manual chart review with and without the availability of electronic phenotyping results, including relevant patient-specific details. The cost of identifying an erroneous electronic phenotyping result was calculated from the personnel time required for the initial chart review and the subsequent adjudication of discrepancies between manual chart review results and electronic phenotype determinations. RESULTS: Providing electronic phenotyping results (vs. not providing them) was associated with improved overall accuracy of manual chart review (98.90% vs. 92.46%, p<0.001), decreased review duration per test case (62.43 vs. 76.78 s, p<0.001), and a non-significant reduction in the estimated marginal cost of identifying an erroneous electronic phenotyping result ($48.54 vs. $63.56, p=0.16). Agreement between chart review and electronic phenotyping results was higher when the phenotyping results were provided (Cohen's kappa 0.98 vs. 0.88, p<0.001). Consequently, while accuracy improved when the initial electronic phenotyping results were correct (99.74% vs. 92.67%, N=3049, p<0.001), there was a trend towards decreased accuracy when the initial electronic phenotyping results were erroneous (56.67% vs. 80.00%, N=55, p=0.07). Electronic phenotyping results provided the greatest benefit for the accurate identification of rare exclusion criteria. DISCUSSION: Single-reviewer chart review for validating electronic phenotyping can be conducted more accurately, quickly, and at lower cost when supported by electronic phenotyping results.
However, human reviewers tend to agree with electronic phenotyping results even when those results are wrong. Thus, the value of providing electronic phenotyping results depends on the accuracy of the underlying electronic phenotyping algorithm. CONCLUSION: We recommend using a mix of phenotyping validation strategies, with the balance of strategies based on the anticipated electronic phenotyping error rate, the tolerance for missed phenotyping errors, and the expertise, cost, and availability of the personnel involved in chart review and discrepancy adjudication.
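The two quantities the study reports, a marginal cost per detected phenotyping error (chart-review time plus discrepancy-adjudication time, converted to personnel cost) and Cohen's kappa for reviewer-algorithm agreement, reduce to simple arithmetic. The Python sketch below is illustrative only: the function names and all input figures are hypothetical placeholders, not the study's actual methods or data.

```python
def cost_per_detected_error(n_cases, review_seconds_per_case,
                            n_discrepancies, adjudication_seconds_per_case,
                            hourly_rate, n_errors_found):
    """Personnel cost of identifying one erroneous phenotyping result:
    (initial review time + discrepancy adjudication time) * hourly rate,
    divided by the number of true errors found."""
    review_hours = n_cases * review_seconds_per_case / 3600
    adjudication_hours = (n_discrepancies
                          * adjudication_seconds_per_case / 3600)
    total_cost = (review_hours + adjudication_hours) * hourly_rate
    return total_cost / n_errors_found


def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 reviewer-vs-algorithm agreement table:
    a = both positive, b = reviewer positive / algorithm negative,
    c = reviewer negative / algorithm positive, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n                      # raw agreement
    p_expected = ((a + b) * (a + c)               # chance agreement
                  + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)


# Hypothetical example: 1000 cases reviewed at 60 s each, 20 discrepancies
# adjudicated at 300 s each, $30/h reviewer, 10 confirmed errors found.
print(cost_per_detected_error(1000, 60, 20, 300, 30, 10))  # 55.0 ($/error)
print(round(cohens_kappa(40, 5, 5, 50), 3))                # 0.798
```

With the made-up inputs above, each confirmed error costs $55 of personnel time to find; the kappa function returns chance-corrected agreement, which is why it can be high (0.98 in the study) even when reviewers simply echo the algorithm's mistakes.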