OBJECTIVES: New chest compression detection technology allows for the recording and graphical depiction of clinical cardiopulmonary resuscitation (CPR) chest compressions. The authors sought to determine the inter-rater reliability of chest compression pattern classifications made by human raters, as well as the agreement between manual classification and automated classification by computer analysis.

METHODS: This was an analysis of chest compression patterns from cardiac arrest patients enrolled in the ongoing Resuscitation Outcomes Consortium (ROC) Continuous Chest Compressions Trial. Thirty CPR process files from patients in the trial were selected. Using written guidelines, research coordinators from each of eight participating ROC sites classified each chest compression pattern as 30:2 chest compressions, continuous chest compressions (CCC), or indeterminate. A computer algorithm for automated chest compression classification was also developed and applied to each case. Inter-rater agreement between manual classifications was tested using Fleiss's kappa. The criterion standard was defined as the classification assigned by the majority of manual raters, and agreement between the automated classification and this criterion standard was also tested.

RESULTS: The majority of the eight raters classified 12 chest compression patterns as 30:2, 12 as CCC, and six as indeterminate. Inter-rater agreement between manual classifications of chest compression patterns was κ = 0.62 (95% confidence interval [CI] = 0.49 to 0.74). The automated computer algorithm classified chest compression patterns as 30:2 (n = 15), CCC (n = 12), and indeterminate (n = 3). Agreement between automated and criterion standard manual classifications was κ = 0.84 (95% CI = 0.59 to 0.95).

CONCLUSIONS: In this study, good inter-rater agreement in the manual classification of CPR chest compression patterns was observed. Automated classification showed strong agreement with human ratings.
These observations support the consistency of manual CPR pattern classification as well as the use of automated approaches to chest compression pattern analysis.
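The agreement statistic reported above, Fleiss's kappa, can be computed directly from the per-case rating counts. The following is a minimal sketch in Python; the function name, data layout, and example counts are illustrative and are not taken from the study's data.

```python
def fleiss_kappa(ratings):
    """Fleiss's kappa for agreement among a fixed number of raters.

    ratings[i][j] = number of raters assigning subject i to category j.
    Assumes every subject was rated by the same number of raters.
    """
    N = len(ratings)          # number of subjects (e.g., CPR process files)
    n = sum(ratings[0])       # raters per subject (e.g., 8 coordinators)
    k = len(ratings[0])       # categories (e.g., 30:2, CCC, indeterminate)

    # Overall proportion of assignments falling in each category
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]

    # Observed agreement for each subject, then averaged
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P) / N

    # Chance-expected agreement
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical counts: 8 raters, 3 categories, perfect agreement on 2 cases
print(fleiss_kappa([[8, 0, 0], [0, 8, 0]]))  # → 1.0
```

Perfect agreement yields κ = 1, while agreement at or below the chance-expected level yields κ ≤ 0, which is the scale on which the study's values of 0.62 (manual inter-rater) and 0.84 (automated vs. criterion standard) are interpreted.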