Susrutha Kotwal1,2, Mehdi Fanai2,3, Wei Fu4, Zheyu Wang2,4,5, Anand K Bery6, Rodney Omron2,7, Nana Tevzadze3, Daniel Gold3, Brian T Garibaldi2,8, Scott M Wright1, David E Newman-Toker2,3,7. 1. Department of Medicine, Division of Hospital Medicine, Johns Hopkins Bayview Medical Center, Johns Hopkins University School of Medicine, Baltimore, MD, USA. 2. Center for Diagnostic Excellence, Armstrong Institute for Patient Safety and Quality, Johns Hopkins University School of Medicine, Baltimore, MD, USA. 3. Department of Neurology, Division of Neuro-Visual & Vestibular Disorders, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD, USA. 4. Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD, USA. 5. Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA. 6. Department of Medicine, Division of Neurology, The Ottawa Hospital, Ottawa, Canada. 7. Department of Emergency Medicine, Johns Hopkins University School of Medicine, Baltimore, MD, USA. 8. Division of Pulmonary and Critical Care Medicine, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
Abstract
OBJECTIVES: Diagnostic errors are pervasive in medicine and most often caused by clinical reasoning failures. Clinical presentations characterized by nonspecific symptoms with broad differential diagnoses (e.g., dizziness) are especially prone to such errors. METHODS: We hypothesized that novice clinicians could achieve proficiency in diagnosing dizziness by training with virtual patients (VPs). This was a prospective, quasi-experimental, pretest-posttest study (2019) at a single academic medical center. Internal medicine interns (intervention group) were compared to second- and third-year residents (control group). A case library of VPs with dizziness was developed from a clinical trial (AVERT-NCT02483429). The approach (VIPER - Virtual Interactive Practice to build Expertise using Real cases) consisted of brief lectures combined with 9 h of supervised deliberate practice. Residents were provided dizziness-related readings and teaching modules. Both groups completed pretests and posttests. RESULTS: For interns (n=22) vs. residents (n=18), pretest median diagnostic accuracy did not differ between groups (33% [IQR 18-46] vs. 31% [IQR 13-50], p=0.61), while posttest accuracy did (50% [IQR 42-67] vs. 20% [IQR 17-33], p=0.001). Pretest median appropriate imaging did not differ between groups (33% [IQR 17-38] vs. 31% [IQR 13-38], p=0.89), while posttest appropriateness did (65% [IQR 52-74] vs. 25% [IQR 17-36], p<0.001). CONCLUSIONS: Just 9 h of deliberate practice increased the diagnostic skills (both accuracy and testing appropriateness) of medicine interns evaluating real-world dizziness 'in silico' more than ∼1.7 years of residency training. Applying condensed educational experiences such as VIPER across a broad range of common presentations could significantly enhance diagnostic education and translate to improved patient care.