
Do panels vary when assessing intrapartum adverse events? The reproducibility of assessments by hospital risk management groups.

D Kernaghan, G C Penney.

Abstract

INTRODUCTION: A national audit project, Scotland-wide Learning from Intrapartum Critical Events (SLICE), included local assessment of quality of care in cases of perinatal death and neonatal encephalopathy due to intrapartum events. Concerns had been raised about interobserver variation in case assessment by different panels. We therefore studied the extent of agreement and disagreement between assessment panels, and examined the areas in which agreement and disagreement tended to occur.
METHODS: Eight cases were randomly selected from all 42 cases identified during a 6-month period (1 January-1 July 2005). Each case was independently reviewed by three panels: the local hospital clinical risk-management group and two specially convened external panels. Panels assessed quality of care in three areas: admission assessment, recognition of the incident, and method and timing of delivery. Predefined standards of care were provided for these three areas. Panels were also asked to assess the overall quality of care.
RESULTS: For each area of care, agreement was lowest between the two external panels. Across panel pairs, the lowest levels of agreement were seen in the assessment of overall care (external panel 1 vs the hospital: 50% crude agreement, kappa = 0.24, AC1 = 0.36; external panel 1 vs external panel 2: 29% crude agreement, kappa = -0.11, AC1 = 0.1; external panel 2 vs the hospital: 47% crude agreement, kappa = 0.36, AC1 = 0.46). Agreement among all three panels was likewise lowest for the assessment of overall care (crude agreement 48%; kappa = 0.16, AC1 = 0.34).
CONCLUSION: Moderate to substantial agreement among the three panels was achieved for the three areas in which explicit standards were provided, suggesting that a systematic, standards-based approach to the analysis of adverse events in perinatal care improves the reproducibility of assessments.
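The abstract reports each comparison three ways: crude agreement, Cohen's kappa, and Gwet's AC1. Kappa can be paradoxically low even when raters agree on most cases, if most ratings fall into a single category (the paradox analysed in references 3 and 5 below), which is why AC1 is reported alongside it; the Landis-Koch benchmarks (reference 6) underlie wording such as "moderate to substantial agreement". The following Python sketch computes all three statistics for two raters. It is a minimal illustration, not the study's analysis: the panel ratings are invented, and it assumes at least two rating categories are used.

```python
from collections import Counter

# Landis & Koch (1977) benchmarks (reference 6 below) for interpreting
# chance-corrected agreement coefficients; values below 0 are "poor".
LANDIS_KOCH = [
    (0.00, "slight"), (0.21, "fair"), (0.41, "moderate"),
    (0.61, "substantial"), (0.81, "almost perfect"),
]

def interpret(coef):
    """Map an agreement coefficient to its Landis-Koch label."""
    label = "poor"
    for threshold, name in LANDIS_KOCH:
        if coef >= threshold:
            label = name
    return label

def agreement_stats(ratings_a, ratings_b):
    """Crude agreement, Cohen's kappa, and Gwet's AC1 for two raters."""
    n = len(ratings_a)
    cats = sorted(set(ratings_a) | set(ratings_b))
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)

    # Crude (observed) agreement: fraction of cases rated identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Kappa's chance agreement: product of each rater's marginal rates.
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in cats)
    kappa = (p_o - p_e) / (1 - p_e)

    # AC1's chance agreement uses the mean marginal per category, which
    # keeps the coefficient stable when one category dominates.
    pi = {c: (count_a[c] + count_b[c]) / (2 * n) for c in cats}
    p_e_g = sum(p * (1 - p) for p in pi.values()) / (len(cats) - 1)
    ac1 = (p_o - p_e_g) / (1 - p_e_g)

    return p_o, kappa, ac1

# Illustrative ratings for 8 cases (invented, not data from the study):
# each panel grades overall care as "adequate" or "suboptimal".
panel_1 = ["adequate"] * 6 + ["suboptimal"] * 2
panel_2 = ["adequate"] * 5 + ["suboptimal", "adequate", "suboptimal"]
p_o, kappa, ac1 = agreement_stats(panel_1, panel_2)
print(f"crude = {p_o:.2f}, kappa = {kappa:.2f} ({interpret(kappa)}), "
      f"ac1 = {ac1:.2f} ({interpret(ac1)})")
```

Running the sketch on the invented ratings gives crude agreement 0.75 but kappa 0.33 ("fair") versus AC1 0.60 ("moderate"): the same kappa-versus-AC1 gap seen in the abstract, driven by most ratings falling in one category.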

Year:  2006        PMID: 17074874      PMCID: PMC2565823          DOI: 10.1136/qshc.2006.018572

Source DB:  PubMed          Journal:  Qual Saf Health Care        ISSN: 1475-3898


  References: 7 in total

1.  Practice visits as a tool in quality improvement: mutual visits and feedback by peers compared with visits and feedback by non-physician observers.

Authors:  P van den Hombergh; R Grol; H J van den Hoogen; W J van den Bosch
Journal:  Qual Health Care       Date:  1999-09

2.  (Review) Agreement studies in obstetrics and gynaecology: inappropriateness, controversies and consequences.

Authors:  Cristina Costa Santos; Altamiro Costa Pereira; João Bernardes
Journal:  BJOG       Date:  2005-05       Impact factor: 6.531

3.  High agreement but low kappa: II. Resolving the paradoxes.

Authors:  D V Cicchetti; A R Feinstein
Journal:  J Clin Epidemiol       Date:  1990       Impact factor: 6.437

4.  (Review) Consensus development methods, and their use in clinical guideline development.

Authors:  M K Murphy; N A Black; D L Lamping; C M McKee; C F Sanderson; J Askham; T Marteau
Journal:  Health Technol Assess       Date:  1998       Impact factor: 4.014

5.  High agreement but low kappa: I. The problems of two paradoxes.

Authors:  A R Feinstein; D V Cicchetti
Journal:  J Clin Epidemiol       Date:  1990       Impact factor: 6.437

6.  The measurement of observer agreement for categorical data.

Authors:  J R Landis; G G Koch
Journal:  Biometrics       Date:  1977-03       Impact factor: 2.571

7.  (Review) Determining the contribution of asphyxia to brain damage in the neonate.

Authors:  James A Low
Journal:  J Obstet Gynaecol Res       Date:  2004-08       Impact factor: 1.730

  Cited by: 1 in total

1.  Examining agreement between clinicians when assessing sick children.

Authors:  John Wagai; John Senga; Greg Fegan; Mike English
Journal:  PLoS One       Date:  2009-02-27       Impact factor: 3.240

