
Evaluating the reliability of an injury prevention screening tool: Test-retest study.

Michael A Gittelman, Madeline Kincaid, Sarah Denny, Melissa Wervey Arnold, Michael FitzGerald, Adam C Carle, Constance A Mara.

Abstract

BACKGROUND: A standardized injury prevention (IP) screening tool can identify family risks and allow pediatricians to address risky behaviors. To assess behavior change on later screens, the tool must be reliable for an individual and, ideally, between household members. Little research has examined the reliability of safety screening tool questions. This study assessed the test-retest reliability of parent responses on an existing IP questionnaire and compared responses between parents in the same household.
METHODS: Investigators recruited parents of children 0 to 1 year of age during admission to a tertiary care children's hospital. When both parents were present, one was chosen as the "primary" respondent. After consenting, primary respondents completed the 30-question IP screening tool and were re-screened approximately 4 hours later to test individual reliability. The "second" parent, when present, completed the tool only once. All participants received a 10-dollar gift card. Cohen's Kappa was used to estimate test-retest reliability and inter-rater agreement. Standard criteria classify Kappa values of 0.0 to 0.40 as poor to fair, 0.41 to 0.60 as moderate, 0.61 to 0.80 as substantial, and 0.81 to 1.00 as almost perfect reliability.
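The Kappa statistic used above corrects observed agreement for agreement expected by chance. A minimal sketch of that calculation in plain Python follows; the yes/no response lists are hypothetical illustrations, not study data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two sets of ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters answered identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters answered independently,
    # based on each rater's marginal response frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no screening answers at test and 4-hour retest.
test   = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
retest = ["yes", "yes", "no", "no",  "no", "no", "yes", "yes"]
print(round(cohens_kappa(test, retest), 2))  # 0.5 -> "moderate" on the scale above
```

With 6 of 8 answers matching (observed agreement 0.75) and chance agreement 0.50, Kappa is 0.5, which falls in the "moderate" band of the scale quoted in the methods.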
RESULTS: One hundred five families participated; five were lost to follow-up. Thirty-two (30.5%) parent dyads completed the tool. Primary respondents were generally mothers (88%) and Caucasian (72%). Test-retest reliability for primary respondents was almost perfect, averaging 0.82 (SD = 0.13, range 0.49 to 1.00); 17 questions showed almost perfect and 11 showed substantial test-retest reliability. However, inter-rater agreement between household members on 12 objective questions was poor, averaging 0.35 (SD = 0.34, range -0.19 to 1.00); only one question showed almost perfect and two showed substantial inter-rater agreement.
CONCLUSIONS: The IP screening tool had excellent test-retest reliability for nearly all questions when completed by a single individual. However, when the reporter changes from pre- to post-intervention, observed differences may reflect poor inter-rater reliability or differing subjective experiences rather than true behavior change.

Year:  2016        PMID: 27488487     DOI: 10.1097/TA.0000000000001182

Source DB:  PubMed          Journal:  J Trauma Acute Care Surg        ISSN: 2163-0755            Impact factor:   3.313


  2 in total

1.  Considerations of a test-retest reliability study in injury prevention.

Authors:  Francisco J Bonilla-Escobar; Catalina Restrepo-Lopera; Juan Carlos Puyana
Journal:  J Trauma Acute Care Surg       Date:  2017-02       Impact factor: 3.313

2.  A quality improvement program in pediatric practices to increase tailored injury prevention counseling and assess self-reported changes made by families.

Authors:  Michael A Gittelman; Adam C Carle; Sarah Denny; Samantha Anzeljc; Melissa Wervey Arnold
Journal:  Inj Epidemiol       Date:  2018-04-10
