Philip Hoelter1, Iris Muehlen2, Philipp Goelitz2, Vanessa Beuscher3, Stefan Schwab3, Arnd Doerfler2. 1. Department of Neuroradiology, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Schwabachanlage 6, 91054, Erlangen, Germany. philip.hoelter@uk-erlangen.de. 2. Department of Neuroradiology, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Schwabachanlage 6, 91054, Erlangen, Germany. 3. Department of Neurology, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Schwabachanlage 6, 91054, Erlangen, Germany.
Abstract
PURPOSE: Various software applications offer support in the diagnosis of acute ischemic stroke (AIS), yet it remains unclear whether the performance of these tools is comparable to one another. Our study aimed to evaluate three fully automated software applications for Alberta Stroke Program Early CT (ASPECT) scoring (Syngo.via Frontier ASPECT Score Prototype V2, Brainomix e-ASPECTS® and RAPID ASPECTS) in AIS patients.
METHODS: Retrospectively, 131 patients with large vessel occlusion (LVO) of the middle cerebral artery or the internal carotid artery, who underwent endovascular therapy (EVT), were included. Pre-interventional non-enhanced CT (NECT) datasets were assessed in random order using the automated ASPECT software and by three experienced neuroradiologists in consensus. The intraclass correlation coefficient (ICC), Bland-Altman analysis, and receiver operating characteristic (ROC) curves were applied for statistical analysis.
RESULTS: Median ASPECTS of the expert consensus reading was 8 (7-10). The highest correlation was between the expert read and Brainomix (r = 0.871 (0.818, 0.909), p < 0.001). Correlations between the expert read and Frontier V2 (r = 0.801 (0.719, 0.859), p < 0.001) and between the expert read and RAPID (r = 0.777 (0.568, 0.871), p < 0.001) were also high. There was a high correlation among the software tools (Frontier V2 and Brainomix: r = 0.830 (0.760, 0.880), p < 0.001; Frontier V2 and RAPID: r = 0.847 (0.693, 0.913), p < 0.001; Brainomix and RAPID: r = 0.835 (0.512, 0.923), p < 0.001). ROC curve analysis revealed comparable accuracy between the applications and the expert consensus reading (Brainomix: AUC = 0.759 (0.670-0.848), p < 0.001; Frontier V2: AUC = 0.752 (0.660-0.843), p < 0.001; RAPID: AUC = 0.734 (0.634-0.831), p < 0.001).
CONCLUSION: Overall, there is a convincing, though still improvable, degree of agreement between current automated ASPECT evaluation tools and expert evaluation with regard to ASPECT assessment in AIS.
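The two core statistics named in the methods, the intraclass correlation coefficient for rater-software agreement and the ROC area under the curve for dichotomized accuracy, can be sketched in pure Python. This is a minimal illustration on hypothetical ratings, not the study's actual analysis pipeline: the function names, the ICC(2,1) variant, and the example data are assumptions for demonstration only.

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings[i][j] is the score rater j gave subject i.
    Computed from the ANOVA mean squares (Shrout & Fleiss convention).
    """
    n, k = len(ratings), len(ratings[0])          # subjects, raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                       # between-subjects MS
    msc = ss_cols / (k - 1)                       # between-raters MS
    mse = ss_err / ((n - 1) * (k - 1))            # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


def auc_mann_whitney(scores, labels):
    """AUC via its rank interpretation: the probability that a randomly
    chosen positive case scores above a randomly chosen negative case,
    with ties counted as 0.5 (equivalent to Mann-Whitney U / (n_pos*n_neg))."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))


# Hypothetical data: 4 subjects scored by 2 "raters" (e.g. expert vs. tool).
# Identical columns give perfect agreement, so ICC(2,1) is exactly 1.0.
print(icc_2_1([[9, 9], [7, 7], [5, 5], [8, 8]]))   # -> 1.0

# Hypothetical ASPECTS values against a binary reference standard.
print(auc_mann_whitney([9, 8, 7, 3, 4, 9], [1, 1, 1, 0, 0, 1]))  # -> 1.0
```

In practice such analyses are usually run with established statistics packages; the point of the sketch is only to make explicit what the reported ICC and AUC figures quantify.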