Carol Bova, Carol Jaffarian, Sybil Crawford, Jose Bernardo Quintos, Mary Lee, Susan Sullivan-Bolyai. Carol Bova, PhD, RN, is Professor and Carol Jaffarian, MS, RN, is Instructor, Graduate School of Nursing, University of Massachusetts Medical School, Worcester. Sybil Crawford, PhD, is Professor, Division of Preventive and Behavioral Medicine, Department of Medicine, University of Massachusetts Medical School, Worcester. Jose Bernardo Quintos, MD, is Division Chief, Pediatric Endocrinology, Hasbro Children's Hospital, Providence, Rhode Island. Mary Lee, MD, is Professor, Department of Pediatrics, University of Massachusetts Medical School, Worcester. Susan Sullivan-Bolyai, DNSc, CNS, RN, FAAN, is Associate Professor, College of Nursing, New York University.
Abstract
BACKGROUND: Measurement of intervention fidelity is an essential component of any scientifically sound intervention trial. However, few papers have proposed ways to integrate intervention fidelity data into the execution of these trials. OBJECTIVE: The purpose of this article is to describe the intervention fidelity process used in a randomized controlled trial of a human patient simulator intervention and how these data were used to monitor drift and provide feedback to improve the consistency of both intervention and control delivery over time in a multisite education intervention for parents of children with newly diagnosed Type 1 diabetes. METHODS: Intervention fidelity was measured for both the intervention and control conditions by direct observation, self-report of interventionist delivery, and parent participant receipt of educational information. Intervention fidelity data were analyzed after 50%, 75%, and 100% of the participants had been recruited and compared by group (treatment and control) and research site. RESULTS: The sample included 191 parents of young children newly diagnosed with Type 1 diabetes. Observation scores in both the intervention and control groups indicated a high level of intervention fidelity. Treatment receipt was also high and did not differ by treatment group. Teaching session attendance rates by site and session were significantly different at Time Point 1 (50% enrollment); following study staff retraining and reinforcement, there were no significant differences at Time Point 3 (100% enrollment). IMPLICATIONS: Results demonstrate the importance of monitoring intervention fidelity in both the intervention and control conditions over time and using these data to correct drift during the course of a multisite clinical trial.
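The milestone-based monitoring described in the METHODS and RESULTS (comparing attendance rates across sites at 50%, 75%, and 100% enrollment and triggering retraining when they diverge) can be sketched as a simple chi-square test of homogeneity. This is a minimal illustration only, not the authors' analysis code; the site names and attended/missed counts below are invented for demonstration.

```python
# Hedged sketch: flag intervention-fidelity drift at an enrollment milestone
# by testing whether session attendance differs across study sites.
# All counts are illustrative, not data from the trial.

def chi_square_stat(observed):
    """Chi-square statistic for an R x C contingency table of counts."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical (attended, missed) counts per site at the 50%-enrollment check:
sites_t1 = {"Site A": (40, 10), "Site B": (25, 25), "Site C": (45, 5)}
table = [list(counts) for counts in sites_t1.values()]

stat = chi_square_stat(table)
df = (len(table) - 1) * (len(table[0]) - 1)  # (3-1) * (2-1) = 2
CRITICAL_05_DF2 = 5.991  # chi-square critical value, alpha = 0.05, df = 2

if stat > CRITICAL_05_DF2:
    print(f"chi2 = {stat:.2f} (df={df}): attendance differs by site; retrain staff")
else:
    print(f"chi2 = {stat:.2f} (df={df}): no significant site differences")
```

Rerunning the same test at the 75% and 100% milestones with updated counts mirrors the paper's design: a significant result at Time Point 1 prompts retraining, and a nonsignificant result at Time Point 3 indicates the drift was corrected.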