Kelly L Sloane1, Julie J Miller2, Amanda Piquet3, Brian L Edlow4, Eric S Rosenthal5, Aneesh B Singhal6. 1. Department of Neurology, Massachusetts General Hospital, Boston, MA, USA; Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA; Crescenz Veterans Affairs Medical Center, Philadelphia, PA, USA. Electronic address: Kelly.sloane@pennmedicine.upenn.edu. 2. Department of Neurology, Massachusetts General Hospital, Boston, MA, USA. Electronic address: Jmiller30@mgh.harvard.edu. 3. Department of Neurology, Massachusetts General Hospital, Boston, MA, USA; Department of Neurology, University of Colorado, Aurora, CO, USA. Electronic address: amanda.piquet@CUAnschutz.edu. 4. Department of Neurology, Massachusetts General Hospital, Boston, MA, USA. Electronic address: bedlow@mgh.harvard.edu. 5. Department of Neurology, Massachusetts General Hospital, Boston, MA, USA. Electronic address: erosenthal@mgh.harvard.edu. 6. Department of Neurology, Massachusetts General Hospital, Boston, MA, USA. Electronic address: asinghal@partners.org.
Abstract
BACKGROUND: For patients with acute, serious neurological conditions presenting to the emergency department (ED), prognostication is typically based on clinical experience, scoring systems, and patient comorbidities. Because estimating a poor prognosis influences caregiver decisions to withdraw life-sustaining therapy, we investigated the consistency of prognostication across a spectrum of neurology physicians. METHODS: Five acute neurological presentations (two with large hemispheric infarction, one with brainstem infarction, one with lobar hemorrhage, and one with hypoxic-ischemic encephalopathy) were selected for a department-wide prognostication simulation exercise. All had presented to our tertiary care hospital's ED, where a poor outcome was predicted by the ED neurology team within 24 hours of onset. Relevant clinical, laboratory, and imaging data available before ED prognostication were presented on a web-based platform to 120 providers blinded to the actual outcome. Each provider was asked to rank-order, from most to least likely, the predicted 90-day modified Rankin Scale (mRS) score. To determine the accuracy of individual outcome predictions, we compared the patient's actual 90-day mRS score to the highest-ranked predicted mRS score. Additionally, the group's "weighted" outcomes, accounting for the entire spectrum of mRS scores ranked by all respondents, were compared to the actual outcome for each case. Consistency was compared between pre-specified provider roles: neurology trainees versus faculty; non-vascular versus vascular faculty. RESULTS: Responses ranged from 106 to 110 per case. Individual predictions were highly variable, matching the actual mRS score in as few as 2% of respondents in one case and as many as 95% in another. However, as a group, the weighted outcome matched the actual mRS score in 3 of 5 cases (60%).
There was no significant difference between subgroups based on expertise (stroke/neurocritical care versus other) or experience (faculty versus trainee) in 4 of 5 cases. CONCLUSION: Acute neuro-prognostication is highly variable and often inaccurate among neurology providers. Differences in accuracy are not attributable to experience or subspecialty expertise. The aggregated outcome prediction from a group of providers ("the wisdom of the crowd") may be superior to that of individual providers.
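The "weighted" group outcome described in the Methods can be illustrated with a small sketch. The abstract does not specify the exact weighting scheme, so the Borda-count weights below (a score ranked first earns the most points, summed across all respondents) are an assumption for illustration only; respondent rankings are hypothetical.

```python
# Illustrative sketch of aggregating rank-ordered mRS predictions into a
# single group ("wisdom of the crowd") estimate. The paper's actual
# weighting scheme is not specified here; Borda-count weights are assumed.
from collections import defaultdict

def weighted_group_mrs(rankings):
    """Each ranking lists mRS scores (0-6) from most to least likely.
    Returns the mRS score with the highest total weight across respondents."""
    points = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for position, score in enumerate(ranking):
            points[score] += n - position  # first-ranked score gets weight n
    return max(points, key=points.get)

# Hypothetical rankings from three respondents for one case:
rankings = [
    [4, 5, 3, 6],  # respondent 1: mRS 4 judged most likely
    [5, 4, 6, 3],  # respondent 2
    [4, 3, 5, 2],  # respondent 3
]
print(weighted_group_mrs(rankings))  # prints 4
```

Under this scheme, a score that is consistently ranked near the top across many respondents can dominate even if it is not every individual's first choice, which is the mechanism behind the group estimate outperforming single predictions.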