William M Vollmer1, Beverly B Green2, Gloria D Coronado1.
Abstract
CONTEXT: Pragmatic trials lack the relatively tight quality control of traditional efficacy studies and hence may pose added analytic challenges owing to the practical realities faced in carrying them out. CASE DESCRIPTION: STOP CRC is a cluster randomized trial testing the effectiveness of automated, electronic medical record (EMR)-driven strategies to raise colorectal cancer (CRC) screening rates in safety net clinics. Screen-eligible participants were accrued during year 1 and followed for 12 months (measurement window) to assess completion of a fecal screening test. Control clinics implemented the intervention in year 2. IMPLEMENTATION CHALLENGES/ANALYTIC ISSUES: Due to limitations on how we could build the intervention tools, the overlap of the year 1 measurement windows with year 2 intervention rollout posed a potential for contamination of the primary outcome for control participants. In addition, a variety of factors led to a lack of synchronization of the measurement windows with actual intervention delivery. In both cases, the net impact of these factors would be to diminish the estimated impact of the intervention. PROPOSED SOLUTIONS: We dealt with the overlap issue by delaying the start of intervention rollout to control clinics in year 2 by 6 months and by truncating the measurement windows for intervention and control participants at this point. In addition we formulated three sensitivity analyses to help address the issue of asynchronization.Entities:
Keywords: 2014 Group Health Seattle Symposium; Electronic Health Records; Quality Improvement; cluster-randomized study; colorectal cancer screening; data analysis; methodology; pragmatic trials
Year: 2015 PMID: 26793738 PMCID: PMC4708092 DOI: 10.13063/2327-9214.1200
Source DB: PubMed Journal: EGEMS (Wash DC) ISSN: 2327-9214
Figure 1. Illustration of separate accrual and individual measurement windows
Note: Participants are accrued into the analysis sample during a common 12-month accrual period, while return of FIT kits is assessed over individual measurement windows measured from each participant's date of initial screen eligibility. Panel A depicts the original analysis plan, in which each measurement window lasted for 12 months. Panel B depicts the revised plan to truncate measurement windows at the shorter of either 12 months or the start of intervention rollout for control clinics in month 7 of study year 2.
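The truncation rule in Panel B can be expressed as a simple date computation: each participant's window ends at the earlier of 12 months after entry or the control-clinic rollout start. A minimal Python sketch follows; the function name, the rollout date, and the 365-day approximation of 12 months are illustrative assumptions, not details from the study.

```python
from datetime import date, timedelta

def measurement_window_end(entry_date: date, rollout_start: date) -> date:
    """Return the truncated end of a participant's measurement window:
    the earlier of ~12 months after entry or the start of intervention
    rollout to control clinics (illustrative sketch)."""
    nominal_end = entry_date + timedelta(days=365)  # ~12-month window
    return min(nominal_end, rollout_start)

# Hypothetical rollout start (month 7 of study year 2) and entry dates:
rollout = date(2015, 8, 1)
late_entry = date(2014, 10, 15)   # window would extend past rollout
early_entry = date(2014, 3, 1)    # full 12-month window fits before rollout
print(measurement_window_end(late_entry, rollout))   # truncated at rollout
print(measurement_window_end(early_entry, rollout))  # full 12 months kept
```

Applying the same truncation to both arms, as the authors describe, keeps the comparison symmetric while preventing control outcomes from being contaminated by the year 2 rollout.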
Timeline of Intervention Rollout
| 02-02-14 | 02-02-14 | 02-02-14 | 02-02-14 | 02-02-14 | 02-02-14 | 02-02-14 | 02-02-14 |
| 02-20-14 | 05-15-14 | 05-16-14 | 06-06-14 | 06-10-14 | 06-17-14 | 08-06-14 | 10-06-14 |
| 07-15-14 | 06-15-14 | 06-15-14 | 10-01-14 | 09-27-14 | 08-15-14 | 11-15-14 | 01-15-15 |
| 06-27-14 | 06-11-14 | 06-16-14 | 09-23-14 | 09-29-14 | 07-26-14 | 07-09-14 | 01-31-15 |
| 143 | 127 | 132 | 231 | 237 | 172 | 155 | 361 |
Notes:
Some networks consist of more than one participating intervention clinic.
All clinics were required to undergo a series of test protocols to confirm they were ready to have the CDS tool officially activated. For networks with multiple clinics, this was a common date for all clinics.
Conceptual Layout for Study as a Stepped Wedge Design
| | Baseline | Study Year 1 | Study Year 2 |
| Control Clinics | usual care | usual care | start-up |
| Intervention Clinics | usual care | start-up | steady state |