Literature DB >> 34982767

The impact of removing financial incentives and/or audit and feedback on chlamydia testing in general practice: A cluster randomised controlled trial (ACCEPt-able).

Jane S Hocking1, Anna Wood1,2, Meredith Temple-Smith2, Sabine Braat1, Matthew Law3, Liliana Bulfone4, Callum Jones1, Mieke van Driel5, Christopher K Fairley6,7, Basil Donovan3, Rebecca Guy3, Nicola Low8, John Kaldor3, Jane Gunn9.   

Abstract

BACKGROUND: Financial incentives and audit/feedback are widely used in primary care to influence clinician behaviour and increase quality of care. While observational data suggest a decline in quality when these interventions are stopped, their removal has not been evaluated in a randomised controlled trial (RCT), to our knowledge. This trial aimed to determine whether chlamydia testing in general practice is sustained when financial incentives and/or audit/feedback are removed. METHODS AND FINDINGS: We undertook a 2 × 2 factorial cluster RCT in 60 general practices in 4 Australian states targeting 49,525 patients aged 16-29 years for annual chlamydia testing. Clinics were recruited between July 2014 and September 2015 and were followed for up to 2 years or until 31 December 2016. Clinics were eligible if they were in the intervention group of a previous cluster RCT where general practitioners (GPs) received financial incentives (AU$5-AU$8) for each chlamydia test and quarterly audit/feedback reports of their chlamydia testing rates. Clinics were randomised into 1 of 4 groups: incentives removed but audit/feedback retained (group A), audit/feedback removed but incentives retained (group B), both removed (group C), or both retained (group D). The primary outcome was the annual chlamydia testing rate among 16- to 29-year-old patients, where the numerator was the number who had at least 1 chlamydia test within 12 months and the denominator was the number who had at least 1 consultation during the same 12 months. We undertook a factorial analysis in which we investigated the effects of removal versus retention of incentives (groups A + C versus groups B + D) and the effects of removal versus retention of audit/feedback (groups B + C versus groups A + D) separately. Of 60 clinics, 59 were randomised and 55 (91.7%) provided data (group A: 15 clinics, 11,196 patients; group B: 14, 11,944; group C: 13, 11,566; group D: 13, 14,819). Annual testing decreased from 20.2% to 11.7% (difference -8.8%; 95% CI -10.5% to -7.0%) in clinics with incentives removed and decreased from 20.6% to 14.3% (difference -7.1%; 95% CI -9.6% to -4.7%) where incentives were retained. The adjusted absolute difference in treatment effect was -0.9% (95% CI -3.5% to 1.7%; p = 0.2267).
Annual testing decreased from 21.0% to 11.6% (difference -9.5%; 95% CI -11.7% to -7.4%) in clinics where audit/feedback was removed and decreased from 19.9% to 14.5% (difference -6.4%; 95% CI -8.6% to -4.2%) where audit/feedback was retained. The adjusted absolute difference in treatment effect was -2.6% (95% CI -5.4% to -0.1%; p = 0.0336). Study limitations included an unexpected reduction in testing across all groups impacting statistical power, loss of 4 clinics after randomisation, and inclusion of rural clinics only.
CONCLUSIONS: Audit/feedback is more effective than financial incentives of AU$5-AU$8 per chlamydia test at sustaining GP chlamydia testing practices over time in Australian general practice. TRIAL REGISTRATION: Australian New Zealand Clinical Trials Registry ACTRN12614000595617.

Entities:  

Mesh:

Year:  2022        PMID: 34982767      PMCID: PMC8726492          DOI: 10.1371/journal.pmed.1003858

Source DB:  PubMed          Journal:  PLoS Med        ISSN: 1549-1277            Impact factor:   11.069


Introduction

Primary care plays a fundamental role in preventive healthcare, and strategies to improve its quality include financial incentives and audit/feedback [1]. Financial incentives aimed at modifying provider behaviour to improve quality and/or increase efficiency in primary care [2] have been used by the Australian Government since 1998, when the Practice Incentives Program was introduced for activities such as diabetes care [3]. The program provides less than 10% of the funding for general practitioners (GPs) [4]. In the UK, the Quality and Outcomes Framework was introduced into the GP contract by the government in 2004, accounting for about 25% of primary care clinics’ income [5]. Both schemes have been subject to debate about effectiveness [6-9] and have undergone modification, including withdrawal of some incentives and raising of payment thresholds for others [5,10,11]. While some observational data suggest a decline in provider activities and quality of care when incentives are removed [5,12], other data have shown little impact [13,14]. There is little information about the effect of incentive removal on provider activities and quality of care in the Australian general practice setting. Further, the impact of the removal of incentives has not, to our knowledge, been assessed in a randomised controlled trial (RCT).

Audit/feedback is widely used in primary care [15-17]. In audit/feedback, GPs’ professional practice is measured and compared with guidelines, targets, and/or peers, and the results are fed back to the GPs. Ideally, this prompts them to modify their practice where the feedback shows this is needed. While there is substantial RCT evidence that audit/feedback improves practice [18], observational data suggest that removing audit/feedback may reverse improvements. However, there is little evidence about the impact of removing audit/feedback on GP activities and quality of care in Australia, and to our knowledge no RCT evidence.
We had the unique opportunity to evaluate the impact of removing incentives and audit/feedback on the preventive activities of GPs in Australia by building on an existing trial—the Australian Chlamydia Control Effectiveness Pilot (ACCEPt) [19]. ACCEPt evaluated an intervention to increase chlamydia screening, a key preventive activity for young adults (<30 years) in Australian general practice [20]. The intervention included incentive payments for testing and audit/feedback on GPs’ testing performance. At the end of ACCEPt, we re-randomised intervention clinics in a 2 × 2 factorial cluster RCT to determine whether preventive activities such as chlamydia testing in general practice are sustained when incentives and/or audit/feedback are removed. Given that the intention of financial incentives and/or audit/feedback is to modify provider behaviour in order to improve quality and/or increase efficiency, our hypothesis was that chlamydia testing would decrease if these strategies were removed. We present the results of this new trial, ACCEPt-able, here.

Methods

ACCEPt-able was a 2 × 2 factorial cluster RCT and followed a published protocol [21]. We report the findings according to the CONSORT extension for cluster RCTs [22] (S1 CONSORT Checklist). There were no changes to trial recruitment, implementation, management, or follow-up methods, but in a change to the published protocol, we excluded from the primary analysis clinics that were unable to provide outcome data at the end of the trial (further detail provided below).

Study design and participants

The parent trial, ACCEPt, was a cluster RCT that evaluated the effectiveness of a chlamydia screening intervention on chlamydia prevalence, finishing in December 2015. ACCEPt was conducted across 4 Australian states (New South Wales, Victoria, South Australia, and Queensland). Full details are published elsewhere [19,23]. At the time of ACCEPt, opportunistic chlamydia testing was recommended annually for sexually active 16- to 29-year-olds in general practice [20]. How chlamydia testing was conducted varied between clinics: some clinics used GPs to initiate testing and others used practice nurses; some used clinician-collected specimens for testing, others allowed patients to self-collect specimens (e.g., urine specimens or high vaginal swabs) and leave them at the clinic for testing, and others required the patient to attend an external pathology collection centre for testing. In intervention clinics, individual GPs received financial incentives for each chlamydia test, ranging from AU$5 per test when up to 20% of 16- to 29-year-olds were tested each year to AU$8 per test when coverage exceeded 40%. These payments were electronically transferred to the clinic each quarter. This amount was consistent with the payment of AU$6 that GPs received at the time for completing immunisation schedules and corresponds to an annual payment of about AU$800, assuming an annual chlamydia testing rate of 20% and an average of 800 patients aged 16 to 29 years attending each clinic per year. This total amount, the payment frequency, and electronic transfer methods were consistent with those of other government-funded general-practice-based incentives at the time [4,24]. Intervention clinics also received audit/feedback, in which individual GPs were provided with a 1-page report that summarised their chlamydia testing rates for the previous quarter, including the number of patients aged 16 to 29 years who had consulted them, the number they tested, and the number who tested positive.
The report also included a statement of the total amount of incentive payments they would receive for that quarter’s testing. The report was given to individual GPs during a quarterly face-to-face visit with a research officer who explained the results and worked with the GP to identify strategies to help increase their testing rates. The intervention also included chlamydia education (hard-copy and online resources about chlamydia and its management that were given to all GPs and nurses in a face-to-face meeting with a research officer after randomisation) and computer alerts prompting testing. Not all clinics used the computer alerts. Guided by normalisation process theory, a member of the research team worked with each clinic to tailor the intervention to the resources of the clinic and to identify strategies to facilitate testing and embed it into routine practice [25]. Annual testing of 16- to 29-year-olds in intervention clinics increased from 8.2% to 20.1%, with a treatment effect odds ratio (OR) of 1.7 (95% CI 1.4 to 2.1) [19]. At the conclusion of ACCEPt, a research officer met with GPs in each intervention clinic, informed them about ACCEPt-able, invited them to participate, and obtained informed consent [21]. The intervention was allocated at the cluster level (clinic) because patients attending each clinic could consult with different GPs. Clinics were eligible if they were in the ACCEPt intervention arm. Patients aged 16–29 years were eligible for 1 chlamydia test per year unless they reported risk factors (e.g., new sex partner) or genital symptoms requiring further testing. This trial was approved by the Royal Australian College of General Practitioners National Research and Evaluation Ethics Committee (NREEC 14–004; 16 May 2014), and written consent was obtained from all GPs. During ACCEPt-able, we recruited and consented new GPs, who were also provided with the chlamydia education package. 
Clinics were recruited into ACCEPt-able immediately after completing ACCEPt, between July 2014 and September 2015. Clinics were followed up for 2 years or until 31 December 2016, whichever came first.

Randomisation and masking

Clinics were randomised using a computer-generated minimisation algorithm to maximise the balance across 2 variables—annual chlamydia testing rate among 16- to 29-year-olds in the clinic for 12 months prior to ACCEPt-able (<19% versus ≥19%, based on median testing rate) and number of 16- to 29-year-olds attending the clinic each year (<1,000 versus ≥1,000, based on the 67th percentile of the number of patients at each clinic, to ensure that groups were evenly distributed among relatively smaller and larger clinics because of the potential association of clinic size with patient quality of care [26]). The trial statistician was blinded to allocation. Blinding of clinics and GPs was not possible. Randomisation took place after clinics were recruited into ACCEPt-able and consented to participate. A research officer informed clinics and each GP of their allocation.
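The minimisation step described above can be sketched as a Pocock–Simon-style biased-coin allocation over the 2 balancing variables. This is an illustrative sketch only, under simplified assumptions: the function, the strata field names, and the 0.8 probability of choosing the least-imbalanced group are hypothetical, not taken from the trial's actual algorithm.

```python
import random

GROUPS = ["A", "B", "C", "D"]

def minimise(allocated, new_clinic, rng, p_best=0.8):
    """Allocate new_clinic to one of GROUPS, favouring balance.

    allocated:  list of (group, strata) for clinics already randomised,
                where strata is a dict of the 2 minimisation variables,
                e.g. {"high_testing": True, "large": False}.
    new_clinic: strata dict for the clinic being randomised.
    rng:        a random.Random instance (or compatible stub).
    """
    # Imbalance score per candidate group: how many already-allocated
    # clinics in that group share each stratum level with the new clinic.
    scores = {}
    for g in GROUPS:
        scores[g] = sum(
            1
            for grp, strata in allocated
            for var, level in new_clinic.items()
            if grp == g and strata[var] == level
        )
    best = min(scores, key=scores.get)
    # Biased coin: usually take the group that minimises imbalance,
    # otherwise pick one of the remaining groups at random.
    if rng.random() < p_best:
        return best
    return rng.choice([g for g in GROUPS if g != best])
```

With `p_best` near 1 the allocation is almost deterministic (for a run of identical clinics it simply cycles through the groups); the random element guards against the allocation sequence being predictable by clinic staff.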

Interventions

Clinics in ACCEPt-able were randomised into 1 of 4 arms: incentives removed but audit/feedback and visit retained (group A), audit/feedback and visit removed but incentives retained (group B), incentives and audit/feedback and visit removed (group C), or incentives and audit/feedback and visit retained (group D). All GPs within each clinic received the same intervention. The groups receiving audit/feedback received the same quarterly 1-page report as for the ACCEPt trial that summarised GPs’ chlamydia testing rate for the previous quarter and included a statement of the total amount of incentive payments they would receive for that quarter’s testing. The report was given during a quarterly face-to-face visit with a research officer who explained the results and worked with GPs to identify strategies to help increase their testing rates.

Outcomes

The primary outcome was annual chlamydia testing rate among 16- to 29-year-olds attending the clinic. The numerator was the number of patients aged 16–29 years who had at least 1 chlamydia test within 12 months; the denominator was the number of patients aged 16–29 years who had at least 1 consultation during the same 12 months. Testing data were extracted from each clinic’s electronic medical records using GRHANITE [27,28], a data extraction tool. The tool extracts consultation data including a unique non-identifying patient code, the age and sex of the patient, and chlamydia test results. Data were extracted for the 12 months prior to commencement in ACCEPt-able and during the intervention period.
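As a concrete illustration of this numerator/denominator definition, the sketch below recomputes an annual testing rate from simplified consultation records. The tuple layout and field names are assumptions for illustration; they are not the GRHANITE extract schema.

```python
from datetime import date

def annual_testing_rate(consults, window_start, window_end):
    """consults: iterable of (patient_id, consult_date, tested) tuples.

    Denominator: patients with >= 1 consultation in the window.
    Numerator:   those same patients with >= 1 chlamydia test in the window.
    """
    attended, tested = set(), set()
    for patient_id, consult_date, was_tested in consults:
        if window_start <= consult_date <= window_end:
            attended.add(patient_id)
            if was_tested:
                tested.add(patient_id)
    # tested is a subset of attended by construction.
    return len(tested) / len(attended) if attended else 0.0

# Three patients attend in 2015; only patient 1 is tested, so the rate is 1/3.
records = [
    (1, date(2015, 3, 1), True),
    (2, date(2015, 5, 2), False),
    (3, date(2015, 7, 9), False),
    (1, date(2015, 9, 1), False),   # repeat visit, counted once
    (4, date(2016, 2, 1), True),    # outside the window, ignored
]
rate = annual_testing_rate(records, date(2015, 1, 1), date(2015, 12, 31))
```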

Sample size

The sample size was determined by ACCEPt, which included 60 intervention clinics. We had 94% power to detect a 5% absolute decrease in annual chlamydia testing from 20% to 15% between any 2 groups. A 5% reduction represents a clinically relevant result—about 200,000 fewer 16- to 29-year-olds screened each year in Australia. Our calculations assumed an intra-cluster correlation coefficient (ICC) of 0.02 for testing rate [19], an average cluster size of 700 patients aged 16–29 years per clinic per year, and an alpha of 0.05.
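The stated power can be roughly reproduced with a standard two-proportion normal approximation, inflating the variance by the cluster design effect 1 + (m − 1) × ICC. The split of about 30 clinics per side of each factorial comparison is my assumption (the paper reports only the 60-clinic total), so this is a back-of-envelope check rather than the authors' exact calculation.

```python
from math import sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cluster_rct_power(p1, p2, clusters_per_arm, cluster_size, icc):
    """Approximate power for comparing two proportions in a cluster RCT,
    using a two-sided 5% significance level."""
    design_effect = 1 + (cluster_size - 1) * icc
    n_eff = clusters_per_arm * cluster_size / design_effect  # effective n per arm
    se = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_eff)
    z_alpha = 1.959964  # two-sided alpha = 0.05
    return norm_cdf(abs(p1 - p2) / se - z_alpha)

# 20% vs 15% testing, ICC 0.02, 700 patients per clinic per year,
# assumed ~30 clinics per side of a factorial comparison -> ~94% power.
power = cluster_rct_power(0.20, 0.15, clusters_per_arm=30, cluster_size=700, icc=0.02)
```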

Statistical analysis

We conducted a factorial analysis as our primary analysis. This investigated the effects of removal versus retention of incentives (groups A + C versus groups B + D) and audit/feedback (groups B + C versus groups A + D) separately on annual chlamydia testing over 2 years. We aimed to compare the groups according to intention-to-treat, but in a change to the published protocol [21], we excluded from the primary analysis clinics that were unable to provide outcome data at the end of the trial. For each intervention, we fitted generalised linear models, using generalised estimating equations to account for clustering at the clinic level, and assessed the impact of the intervention on chlamydia testing in year 2 compared with baseline. A logistic model generated ORs, and absolute differences were obtained from a model with an identity link function and binomial error distribution. These models also provided 95% confidence intervals and p-values and adjusted for the minimisation variables only (annual chlamydia testing rate among 16- to 29-year-olds in the clinic and number of 16- to 29-year-olds attending the clinic each year), as is recommended [29]. We also obtained post hoc the results of an adjusted model that, in addition to the minimisation factors, included the variables that were adjusted for in the ACCEPt trial (patient sex and age group and socio-economic status quintile of the clinic—the ‘fully adjusted model’) [19,30].
We undertook several post-hoc analyses: (i) we calculated absolute differences in addition to the planned ORs; (ii) we tested the assumption that there was no interaction effect between the 2 interventions and conducted an analysis by randomised group whereby the group that retained audit/feedback and incentives was the control (‘intervention group analysis’), as is recommended for reporting factorial trials [31]; (iii) we calculated the ICC for chlamydia testing using the primary analysis model with trial arm in the model; and (iv) we conducted factorial subgroup analyses by sex and age group (16–19, 20–24, and 25–29 years). The output was generated using SAS software, version 9.4, for Windows.
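As a sanity check on Table 2, the crude rates can be recomputed directly from the reported counts for the incentives comparison. Note that the paper's differences and ORs come from the GEE models described above, so its model-based estimates (e.g. a −8.8% decline rather than the crude −8.4%) will not match these raw figures exactly.

```python
# Counts from Table 2, incentives comparison (groups A + C vs groups B + D).
removed_base   = 4_592 / 22_762    # baseline, incentives removed
removed_year2  = 1_720 / 14_651    # year 2, incentives removed
retained_base  = 5_517 / 26_763    # baseline, incentives retained
retained_year2 = 3_009 / 21_076    # year 2, incentives retained

# Crude difference-in-differences: extra decline where incentives were removed.
crude_did = (removed_year2 - removed_base) - (retained_year2 - retained_base)
```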

Cost–consequence analysis

A cost–consequence analysis comparing costs and consequences for each combination of removing/retaining incentives and audit/feedback activities was conducted [32]. Costs (incentives, travel, staff time, and data extraction) and consequences (proportion of the target population tested) for the scenarios of removing versus retaining each intervention were obtained from trial data. The average saving per patient aged 16–29 years was calculated for removal of each intervention. The incremental cost of retaining each intervention per additional patient in the target population tested was calculated. As the trial was based in rural clinics, we conducted a sensitivity analysis to examine the potential costs and consequences for removing or retaining the interventions in metropolitan clinics, where travel costs and staff time for travel are likely to be reduced considerably.
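The incremental-cost calculation amounts to dividing the annual cost per target patient by the incremental proportion of the target population tested. The sketch below uses the rounded point estimates reported in Table 3, so it reproduces the published figures only to within rounding.

```python
def incremental_cost(cost_per_patient, extra_proportion_tested):
    """Cost of retaining an intervention per additional patient tested."""
    return cost_per_patient / extra_proportion_tested

# Audit/feedback cost AU$5.88 per target patient per year in rural clinics
# (AU$3.02 in the metropolitan sensitivity analysis), and retaining it was
# associated with an extra 3.1% of the target population tested.
rural = incremental_cost(5.88, 0.031)   # ~AU$190 per additional patient tested
metro = incremental_cost(3.02, 0.031)   # ~AU$97 per additional patient tested
```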

Results

Of 60 clinics, 59 agreed to participate in ACCEPt-able. No clinics withdrew, but 4 clinics had technical problems with data extraction and their data were unavailable, leaving 55 (91.7%) clinics in the analysis (Fig 1). The intervention period ranged from 0.2 years to 2 years, with a mean duration of 1.5 years (SD 0.4). Three clinics participated for less than 1 year (2 clinics closed and 1 clinic was a solo GP who became unwell and ceased seeing patients), 23 clinics between 1 and 1.5 years, and 29 clinics between 1.5 and 2 years. The average duration of the intervention period was similar between groups (1.5 years for groups A and C; 1.6 years for groups B and D).
Fig 1

Flow chart.

Baseline characteristics at the patient and cluster level were similar between pairs of intervention groups (for factorial analysis) (Table 1). There were some minor differences between the 4 trial groups, with clinics in group C (incentives and audit/feedback removed) and group D (incentives and audit/feedback retained) being more likely to be in disadvantaged areas. The results from the model adjusted for minimisation variables only and the results from the fully adjusted model (adjusted for minimisation variables and patient age and sex and socio-economic status of the clinic) were similar (Table 2), but given the loss of 4 clinics, we report only the results from the fully adjusted models in the text. For analyses reporting on each intervention group (‘intervention group analysis’), we also report the fully adjusted analyses.
Table 1

Baseline characteristics of clinics and patients.

Characteristic | Total sample | Incentives removed (A + C)a | Incentives retained (B + D)a | Audit/feedback removed (B + C)a | Audit/feedback retained (A + D)a | Removal of incentives only (A)b | Removal of audit/feedback only (B)b | Removal of both (C)b | Control—both retained (D)b

Clinic-level characteristics
Number of clinics | 55 | 28 | 27 | 27 | 28 | 15 | 14 | 13 | 13
Socio-economic status of the clinic location, n (%)c
    Q1 (most disadvantaged) | 12 (21.8) | 5 (17.9) | 7 (25.9) | 7 (25.9) | 5 (17.9) | 0 (0.0) | 2 (14.3) | 5 (38.5) | 5 (38.5)
    Q2 | 35 (63.6) | 19 (67.9) | 16 (59.3) | 17 (63.0) | 18 (64.3) | 12 (80.0) | 10 (71.4) | 7 (53.8) | 6 (46.1)
    Q3 | 4 (7.3) | 2 (7.1) | 2 (7.4) | 1 (3.7) | 3 (10.7) | 1 (6.7) | 0 (0.0) | 1 (7.7) | 2 (15.4)
    Q4 | 3 (5.5) | 2 (7.1) | 1 (3.7) | 1 (3.7) | 2 (7.1) | 2 (13.3) | 1 (7.1) | 0 (0.0) | 0 (0.0)
    Q5 (least disadvantaged) | 1 (1.8) | 0 (0.0) | 1 (3.7) | 1 (3.7) | 0 (0.0) | 0 (0.0) | 1 (7.1) | 0 (0.0) | 0 (0.0)
Number of GPs, n (median [IQR] per clinic) | 383 (6 [3–9]) | 195 (6 [2–8]) | 188 (6 [3–9]) | 185 (6 [2–10]) | 198 (6 [3–9]) | 103 (6 [4–7]) | 93 (5 [3–9]) | 92 (6 [2–10]) | 95 (6 [3–7])

Patient-level characteristics
Number of patients in the 12 months prior to randomisationd | 49,525 | 22,762 | 26,763 | 23,510 | 26,015 | 11,196 | 11,944 | 11,566 | 14,819
Patient age group, n (%)
    16–19 years | 15,205 (30.7) | 6,985 (30.7) | 8,220 (30.7) | 7,202 (30.6) | 8,003 (30.8) | 3,525 (31.5) | 3,742 (31.3) | 3,460 (29.9) | 4,478 (30.2)
    20–24 years | 17,564 (35.5) | 8,047 (35.3) | 9,517 (35.6) | 8,285 (35.2) | 9,279 (35.7) | 3,906 (34.9) | 4,144 (34.7) | 4,141 (35.8) | 5,373 (36.3)
    25–29 years | 16,756 (33.8) | 7,730 (34.0) | 9,026 (33.7) | 8,023 (34.1) | 8,733 (33.6) | 3,765 (33.6) | 4,058 (34.0) | 3,965 (34.3) | 4,968 (33.5)
Patient sex, n (%)d
    Male | 20,589 (41.6) | 9,623 (42.3) | 10,966 (41.0) | 10,093 (42.9) | 10,496 (40.3) | 4,726 (42.2) | 5,196 (43.5) | 4,897 (42.3) | 5,770 (38.9)
    Female | 28,936 (58.4) | 13,139 (57.7) | 15,797 (59.0) | 13,417 (57.1) | 15,519 (59.7) | 6,470 (57.8) | 6,748 (56.5) | 6,669 (57.7) | 9,049 (61.1)
Chlamydia testing rate in the 12 months prior to randomisation, n/N (%) | 10,109/49,525 (20.4) | 4,592/22,762 (20.2) | 5,517/26,763 (20.6) | 4,935/23,510 (21.0) | 5,147/26,015 (19.9) | 2,124/11,196 (19.0) | 2,467/11,944 (20.6) | 2,468/11,566 (21.3) | 3,050/14,819 (20.6)

n = number tested aged 16 to 29 years; N = number of individuals aged 16 to 29 years attending the clinic.

aFor factorial analysis.

bFor intervention group analysis.

cSocio-economic status is based on quintiles (Q) of the Socio-Economic Indexes for Areas (SEIFA) Index of Relative Socio-economic Disadvantage (IRSD) of the Australian Bureau of Statistics 2011 census for the postcodes of each clinic location.

dNumber of patients aged 16 to 29 years attending participating clinics in the 12-month period prior to randomisation.

GP, general practitioner; IQR, interquartile range.

Table 2

Primary outcome chlamydia testing—factorial analysis.

Impact of removal of incentive payments

Time point or outcome | Incentive payments removed (A + C) (intervention): n/N, testing rate % (95% CI) | Incentive payments retained (B + D) (control): n/N, testing rate % (95% CI) | Treatment effecta: OR (95% CI), p-value | Adjusted treatment effectb: OR (95% CI), p-value
Baselinec | 4,592/22,762, 20.2 (18.2 to 22.1) | 5,517/26,763, 20.6 (18.2 to 23.0) | 0.9 (0.8 to 1.1), p = 0.4567 | 1.0 (0.9 to 1.1), p = 0.4729
Year 1c | 3,032/21,284, 14.2 (12.6 to 15.9) | 4,292/26,752, 16.0 (12.9 to 19.2) | 0.8 (0.7 to 1.0), p = 0.0755 | 0.9 (0.7 to 1.0), p = 0.1017
Year 2c | 1,720/14,651d, 11.7 (9.9 to 13.6) | 3,009/21,076d, 14.3 (10.3 to 18.2) | 0.8 (0.6 to 1.1), p = 0.1039 | 0.8 (0.6 to 1.1), p = 0.1774
Year 2 versus baseline (95% CI)a | Diff: −8.8 (−10.5 to −7.0), OR: 0.5 (0.4 to 0.6) | Diff: −7.1 (−9.6 to −4.7), OR: 0.6 (0.5 to 0.8) | |
Treatment effect (removal − retention) (95% CI) | | | Diff: −1.6 (−4.6 to 1.3), OR: 0.8 (0.6 to 1.1), p = 0.1964 | Diff: −0.9 (−3.5 to 1.7), OR: 0.8 (0.6 to 1.1), p = 0.2267

Impact of removal of audit/feedback

Time point or outcome | Audit/feedback removed (B + C) (intervention): n/N, testing rate % (95% CI) | Audit/feedback retained (A + D) (control): n/N, testing rate % (95% CI) | Treatment effecta: OR (95% CI), p-value | Adjusted treatment effectb: OR (95% CI), p-value
Baselinec | 4,935/23,510, 21.0 (18.8 to 23.2) | 5,147/26,015, 19.9 (17.6 to 22.1) | 1.0 (0.9 to 1.2), p = 0.6674 | 1.0 (0.9 to 1.2), p = 0.7514
Year 1c | 3,329/22,738, 14.6 (12.7 to 16.5) | 3,995/25,298, 15.8 (12.6 to 19.0) | 0.9 (0.7 to 1.1), p = 0.2010 | 0.9 (0.7 to 1.0), p = 0.1293
Year 2c | 1,809/15,643d, 11.6 (9.4 to 13.8) | 2,920/20,084d, 14.5 (10.6 to 18.5) | 0.7 (0.5 to 1.0), p = 0.0882 | 0.7 (0.5 to 0.9), p = 0.0191
Year 2 versus baseline (95% CI)a | Diff: −9.5 (−11.7 to −7.4), OR: 0.5 (0.4 to 0.6) | Diff: −6.4 (−8.6 to −4.2), OR: 0.6 (0.5 to 0.8) | |
Treatment effect (removal − retention) (95% CI) | | | Diff: −3.1 (−6.2 to −0.1), OR: 0.7 (0.5 to 1.0), p = 0.0374 | Diff: −2.6 (−5.4 to −0.2), OR: 0.7 (0.5 to 1.0), p = 0.0336

n = number tested aged 16 to 29 years; N = number of individuals aged 16 to 29 years attending the clinic.

aModels account for minimisation variables including annual chlamydia testing rate among 16- to 29-year-olds and number of 16- to 29-year-olds attending the clinic each year.

bThe fully adjusted model contains patient sex, patient age group, and socio-economic status of the clinic (continuous) in addition to the minimisation variables.

cBaseline is the 12-month period prior to randomisation. Year 1 is 1–12 months after randomisation. Year 2 is 13–24 months after randomisation.

dNumerator and denominator less than for baseline and year 1 because not all clinics contributed 12 months of data to year 2.

Diff, absolute difference; OR, odds ratio.

Chlamydia testing rates decreased from baseline in all groups (Figs 2–4), and for groups A, B, and C, testing rates fell to levels similar to those observed in the first 12 months of ACCEPt, the parent trial (S1 Fig).
Fig 2

Proportion of patients tested for chlamydia per year by time since randomisation: Factorial analysis—removal of financial incentives versus retention of financial incentives.

Error bars correspond to 95% confidence intervals. FI, financial incentives.

Fig 3

Proportion of patients tested for chlamydia per year by time since randomisation: Factorial analysis—removal of audit/feedback versus retention of audit/feedback.

Error bars correspond to 95% confidence intervals. AF, audit/feedback.

Fig 4

Proportion of patients tested for chlamydia per year by time since randomisation: Intervention group analysis.

Error bars correspond to 95% confidence intervals. AF, audit/feedback; FI, financial incentives.

There was no statistical evidence of an interaction between removal of incentives and removal of audit/feedback on our primary outcome of chlamydia testing (interaction effect = 3.2%; 95% CI −2.4% to 8.8%; p = 0.2642). The ICC for testing was 0.015.
In our factorial analysis, the annual chlamydia testing rate decreased from 20.2% to 11.7% over the 2 years (difference −8.8%; 95% CI −10.5% to −7.0%) where incentives were removed and from 20.6% to 14.3% (difference −7.1%; 95% CI −9.6% to −4.7%) where incentives were retained. The adjusted absolute difference in treatment effect between groups was −0.9% (95% CI −3.5% to 1.7%; p = 0.2267), and the adjusted OR was 0.8 (95% CI 0.6 to 1.1; p = 0.2267) (Table 2). In subgroup analyses, the differences in treatment effect between clinics where incentives were removed and clinics where incentives were retained were not statistically significant when stratified by patient sex or age (S1 Table).
Annual testing decreased from 21.0% to 11.6% over the 2 years (difference −9.5%; 95% CI −11.7% to −7.4%) where audit/feedback was removed and from 19.9% to 14.5% (difference −6.4%; 95% CI −8.6% to −4.2%) where audit/feedback was retained. The decline was greater where audit/feedback was removed than where it was retained (adjusted absolute difference −2.6%; 95% CI −5.4% to −0.2%; p = 0.0336; adjusted OR 0.7, 95% CI 0.5 to 1.0; p = 0.0336) (Table 2). In subgroup analyses, evidence of a difference was observed when stratified by patient sex and by age group, except among those aged 25 to 29 years (S1 Table). The absolute difference in treatment effect did not vary between age groups.
Our intervention group analysis showed that testing decreased in all 4 groups, but the decrease was smallest in the group that retained both incentives and audit/feedback.
The adjusted absolute treatment effects were −1.8% (95% CI −4.9% to 1.3%; p = 0.0660) for removal of incentives only, −3.4% (95% CI −7.8% to 1.0%; p = 0.0247) for removal of audit/feedback only, and −3.4% (95% CI −6.5% to −0.2%; p = 0.0356) for removal of both incentives and audit/feedback (S2 Table).

Cost and consequences

There was an estimated cost saving of AU$2.31 per 16- to 29-year-old patient per year associated with removing incentives. As removal of incentives had no significant impact on testing, discontinuing incentives dominates retaining them (Table 3). There was an estimated cost saving of AU$5.88 per 16- to 29-year-old patient per year associated with removing audit/feedback. The incremental cost of continuing audit/feedback activities was an estimated AU$189.64 (range: AU$94.82 to AU$5,117.49) per additional patient in the target population tested (Table 3). Most costs for audit/feedback were travel-related (79%). Sensitivity analysis showed that if travel costs were reduced to reflect the costs for research officers to visit metropolitan clinics, the cost of audit/feedback would decrease to an average of AU$3.02 per patient per year (Table 3).
Table 3

Cost and consequences evaluation.

Columns: variable; average number of hours used per clinic per quarter (range); hourly cost (AUD); average total cost (AUD) per clinic per quarter (range), unless otherwise indicated. Two sets of costs are shown: ACCEPt-able costs (rural clinics) and estimated costs for metropolitan clinics (sensitivity analysis); metropolitan figures are listed only where they differ from the rural figures.

Incentive payments

Program-related activities and resources: labour and technical support to collate data for dispensing incentive payments to clinics
Analysis of medical record data (includes extraction, parsing, and cleaning of data to generate report of incentive payments due): $250.00 ($225.00, $312.50)
Staff administration of incentive payments: 0.25 h (0.13, 0.5) at $57.50/h [a]: $14.38 ($7.48, $57.50)

Total and incremental costs
Total costs per clinic per quarter to authorise payments: $264.38 ($232.48, $370.00)
Total costs for 28 clinics per year to authorise payments: $29,610
Total incentive payments for 28 clinics at 20.2% testing rate per year [b]: $23,006
Average incentive payments per clinic per year: $822
Total costs for 28 clinics per year to provide incentives: $52,616
Total reduction in costs for removal of incentives: −$52,616
Total reduction in costs per clinic per year for removal of incentives: −$1,879
Number of people in the target population in the 28 clinics where incentives were removed [b]: 22,762
Average saving per patient per year in the target population for removal of incentives: −$2.31
Incremental change in proportion of target patients tested through removal of incentives (95% CI) [c]: rural −1.6% (−4.6%, 1.3%); metropolitan −2.1% (−5.6%, 1.4%)
Incremental cost of incentive payments per additional patient per year in the target population tested (range): Dominant

Audit and feedback

Program-related activities and resources: labour and technical support to collate data and generate audit report
Preparation of medical record reports (includes extraction, parsing, and cleaning of data and generating each report): $250.00 ($225.00, $312.50)
Staff quality checking reports: 0.25 h (rural range 0.13, 0.5; metropolitan range 0.167, 0.5) at $57.50/h [a]: $14.38 ($7.48, $57.50)

Provision of reports to each clinic: labour and travel costs
Staff labour costs involved in visiting and providing reports to clinic: 3.5 h (1, 7) at $62.50/h [d]: $218.75 ($62.50, $437.50)
Staff travel time to visit clinic (labour): rural 6 h (3, 17) at $62.50/h [d]: $375.00 ($187.50, $1,062.50); metropolitan 2 h (0.5, 4) at $62.50/h [d]: $125.00 ($31.25, $250.00)
Flights, vehicle/parking expenses, and accommodation expenses to visit clinic: rural $421.50 ($259.00, $502.00); metropolitan $50.00 [e]

Total and incremental costs
Total costs per clinic per quarter to provide audit and feedback: rural $1,279.63; metropolitan $658.13
Total costs for 27 clinics per year to provide audit and feedback: rural $138,200; metropolitan $71,078
Total reduction in costs for removal of audit and feedback: rural −$138,200; metropolitan −$71,078
Total reduction in costs per clinic per year for removal of audit and feedback: rural −$5,118.52; metropolitan −$2,632.52
Number of patients in the target population in the 27 clinics where audit and feedback activities were removed [b]: 23,510 [f]
Average saving per patient per year in the target population through removal of audit and feedback activities: rural −$5.88; metropolitan −$3.02 [f]
Incremental change in proportion of target patients tested through removal of audit and feedback activities (95% CI) [g]: −3.1% (−6.2%, −0.1%) [f]
Incremental cost of audit and feedback activities per additional patient per year in the target population tested (range): rural $189.64 ($94.82, $5,117.49); metropolitan $97.42 ($48.71, $2,628.3) [f]

[a] Hourly rate is based on the hourly salary (AU$46) of a junior academic researcher plus 25% on-costs.
[b] See Table 2 for data.
[c] See Table 2 for results. The p-value for the incremental change in proportion tested was 0.1852, so the incremental cost was not calculated.
[d] Hourly rate is based on the hourly salary (AU$50) of a postdoctoral academic researcher plus 25% on-costs.
[e] Visiting metropolitan clinics would incur vehicle/parking costs of $50 per trip.
[f] Applying data from ACCEPt-able clinics to a metropolitan setting.
[g] See Table 2 for results. The p-value for the incremental change in the proportion tested was 0.0270.
AUD, Australian dollars.
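The incentive-payment roll-ups in the table can be cross-checked the same way (a sketch from the rounded per-clinic figures; the published totals were computed from unrounded inputs, so the totals match only to within about a dollar):

```python
# Roll-up of the incentive-payment costs in Table 3 from per-clinic figures.
CLINICS_INC = 28                    # clinics where incentives were removed
QUARTERS_PER_YEAR = 4
AUTHORISE_PER_CLINIC_QTR = 264.38   # AU$ to authorise payments, per clinic per quarter
INCENTIVE_PAYMENTS_YEAR = 23_006    # AU$ paid out per year across 28 clinics at a 20.2% testing rate
TARGET_POPULATION = 22_762          # 16- to 29-year-olds attending those clinics

authorise_year = AUTHORISE_PER_CLINIC_QTR * CLINICS_INC * QUARTERS_PER_YEAR  # ~AU$29,610
total_cost_year = authorise_year + INCENTIVE_PAYMENTS_YEAR                   # ~AU$52,616
per_clinic = total_cost_year / CLINICS_INC                                   # ~AU$1,879
per_patient = total_cost_year / TARGET_POPULATION                            # ~AU$2.31

print(round(authorise_year), round(total_cost_year), round(per_clinic), round(per_patient, 2))
```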

Discussion

In a 2 × 2 factorial cluster RCT set in Australian general practice, the removal of financial incentives of AU$5 to AU$8 paid to GPs for each chlamydia test conducted had little additional impact on reducing testing rates among 16- to 29-year-olds attending the clinic. Our payments were consistent with other incentives at the time [24], suggesting that in the Australian general practice setting, incentives at this level do not have an important impact on preventive activities like chlamydia testing. We found that the removal of audit/feedback reduced testing, with a relative reduction of 30% (absolute difference = −2.6%) that could translate to about 160,000 fewer 16- to 29-year-olds tested each year in Australia [33]. The provision of audit/feedback was costlier, but most of the cost lay in the clinic visit, which could be substantially reduced by, for example, online conferencing; fully automating the audit and feedback reports on digital platforms would reduce costs further. We also found that chlamydia testing rates declined in all groups, regardless of whether incentives and/or audit and feedback were removed, emphasising the challenge of sustaining preventive healthcare activities in general practice over time.

There are several explanations for why we did not see an impact of removing incentives. Incentives may not have been critical in driving test uptake in ACCEPt, such that their removal in ACCEPt-able did not substantially affect testing. At the beginning of ACCEPt-able, clinics received an average total payment of AU$822 per year for chlamydia testing, which, at the time, was consistent with the total of approximately AU$2,400 that clinics received across 3 activities (asthma and diabetes cycles of care and cervical screening in under-screened women) as part of the Practice Incentives Program [34].
The introduction of these incentives in 2001 did not significantly increase uptake of these activities, suggesting that incentivisation at this level is unlikely to translate into substantial changes in Australian general practice [4]. This is supported by qualitative research in which Australian GPs report that incentives do not fundamentally influence patient management [4,35]. This may be because Australian general practices are largely funded through a fee-for-service reimbursement model; the few incentives available represent less than 10% of their funding [4]. Chance cannot be excluded, because we did not expect a reduction in testing in clinics that retained incentives, which reduced our effective sample size, and our observed treatment effect of 0.9% was considerably smaller than our hypothesised 5%. Our audit/feedback intervention included a written report and a visit by a research officer; we could not determine whether removing the report or the visit alone would have had the same effect. However, a previous systematic review compared an educational visit plus audit/feedback with audit/feedback alone and found the 2-pronged approach more effective [36]. Unexpectedly, we observed that testing also decreased in the group that retained both incentives and audit/feedback. This suggests that chlamydia testing had not become normalised in work practices, with clinics returning to their pre-intervention ways of working even though the interventions remained in place [25]. Alternatively, staff turnover may have led to a loss of ‘corporate memory’ [37] about chlamydia, contributing to reduced testing.
We provided clinics with the same level of support during ACCEPt-able as during ACCEPt, but we did not monitor changes in the clinics’ use of other strategies to facilitate testing, such as computer alerts, and while new GPs received our chlamydia educational package, we provided no further educational support to already-participating GPs. The lack of ongoing ‘calibration’ of the intervention and its support may have contributed to declining testing rates across all groups [38]. In addition, our intervention targeted GPs with negligible patient involvement, yet patient involvement is considered necessary for sustaining change over time [39]. Nonetheless, ACCEPt-able highlights the challenges of sustaining GP behaviour change; further research is needed on how to sustain such change.

Several studies have reported on the removal of incentives in primary care, but all present observational data only, with conflicting results. Two studies examined incentive removal from the UK Quality and Outcomes Framework [5,14]. Similar to our findings, Kontopantelis et al. found that incentive removal had minimal effect on activities related to treatment and monitoring (e.g., cholesterol) [14]. In contrast, Minchin et al. found immediate reductions following incentive removal [5]. However, reductions were greatest where the GP was required to record advice provided to the patient (e.g., contraception advice) and smaller for activities related to measurement (e.g., cholesterol) [5,14]. Similar findings were observed in a study of 35 Kaiser Permanente facilities in the US, where small decreases in screening for diabetic retinopathy and in cervical screening were observed when incentives were removed [12]. A cluster RCT of an intervention that included incentives to reduce high-risk prescribing in 34 primary care clinics in Scotland [13] found no change in high-risk prescribing during a 4-year observational post-intervention period after incentives were removed.
We are unaware of any RCT evidence on the impact of removing audit/feedback on provider activity, but observational data collected at the end of RCTs of audit/feedback interventions show results similar to ours. An RCT of an intervention that included an educational session and audit/feedback found a 50% reduction in inappropriate antibiotic prescribing in 18 community-based paediatric clinics in the US, but once the intervention was terminated at trial end, inappropriate prescribing increased immediately and returned to pre-trial levels within 18 months [40]. Similar findings were reported at the conclusion of another US trial of audit/feedback to reduce inappropriate prescribing [41].

Our trial has several limitations. First, our sample size assumed an absolute reduction in testing of 5% when incentives and/or audit/feedback were removed and no change where they were retained; we did not anticipate a decrease in all groups. However, the factorial design and a smaller ICC than estimated (0.015 versus 0.02) maximised our statistical power. Second, when designing the trial, we assumed no interaction between removal of incentives and removal of audit/feedback and were not powered to detect an interaction. However, our post hoc analysis of each intervention group separately showed results similar to our primary analysis, supporting the factorial analysis findings. Third, 4 clinics did not provide testing data and were excluded from the analysis after randomisation. However, their removal had little impact on the distribution of minimisation and socio-economic variables across the intervention groups, and these variables were adjusted for in our analysis, minimising any bias (S3 Table). Fourth, ACCEPt-able was undertaken in rural areas, so the results might not be generalisable to urban areas. However, our analysis accounted for cluster-level socio-economic factors, which had little impact on results.
Fifth, we assessed the impact of the intervention on chlamydia testing in year 2 compared with baseline, and not all clinics remained in the trial until the end of year 2. However, it was reassuring that the average duration of the intervention period was similar between groups. Sixth, we evaluated the impact of the removal of incentives and audit/feedback on chlamydia testing, so our results may not be generalisable to other preventive health activities in general practice. Finally, this trial was set in Australia, where general practice is mainly remunerated on a fee-for-service basis; our results may be less transferable to settings where incentives represent a larger proportion of income.

Conclusions

In this cluster RCT, we found that the financial incentives offered had little impact on chlamydia testing in Australian general practice. The total amount of incentive payments received per year in our trial was consistent with other incentive payments GPs received at the same time in Australia. The removal of financial incentives might have a greater impact where incentive payments make up a larger proportion of GP income, such as in the UK; RCT evidence is needed to investigate this question. The removal of audit and feedback delivered with a face-to-face visit resulted in a relative reduction in testing activity of 30% overall. A reduction of this size could have a considerable public health impact at the population level, with fewer chlamydia tests conducted and more infections going undetected. Our results suggest that, in Australia at least, audit and feedback is an important intervention for influencing GP behaviour in preventive health activities like chlamydia testing. The use of digital platforms that include automated reports and online communication could reduce the costs associated with audit and feedback. Our finding that chlamydia testing also decreased in clinics that retained both incentives and audit and feedback highlights that simply retaining these interventions over time is not enough; further studies should investigate how to sustain clinician behaviour change over time.

Supporting information

Annual chlamydia testing rates for ACCEPt and ACCEPt-able. (PDF)

The primary outcome, chlamydia testing, by sex and age group: Factorial analysis. (DOCX)

The primary outcome, chlamydia testing: Intervention group analysis. (DOCX)

Distribution of minimisation and socio-economic status variables across clinics by intervention group. (DOCX)
(DOCX) Click here for additional data file. 9 Jun 2021 Dear Dr Hocking, Thank you for submitting your manuscript entitled "The impact of removing financial incentives and/or audit and feedback on preventive care activities in general practice: A cluster randomised controlled trial (ACCEPt-able)" for consideration by PLOS Medicine. Your manuscript has now been evaluated by the PLOS Medicine editorial staff and I am writing to let you know that we would like to send your submission out for external peer review. However, before we can send your manuscript to reviewers, we need you to complete your submission by providing the metadata that is required for full assessment. To this end, please login to Editorial Manager where you will find the paper in the 'Submissions Needing Revisions' folder on your homepage. Please click 'Revise Submission' from the Action Links and complete all additional questions in the submission questionnaire. Please re-submit your manuscript within two working days, i.e. by Jun 11 2021 11:59PM. Login to Editorial Manager here: https://www.editorialmanager.com/pmedicine Once your full submission is complete, your paper will undergo a series of checks in preparation for peer review. Once your manuscript has passed all checks it will be sent out for review. Feel free to email us at plosmedicine@plos.org if you have any queries relating to your submission. Kind regards, Beryne Odeny Associate Editor PLOS Medicine 11 Aug 2021 Dear Dr. Hocking, Thank you very much for submitting your manuscript "The impact of removing financial incentives and/or audit and feedback on preventive care activities in general practice: A cluster randomised controlled trial (ACCEPt-able)" (PMEDICINE-D-21-02501R1) for consideration at PLOS Medicine. Your paper was discussed among the editors and sent to independent reviewers, including a statistical reviewer. 
The reviews are appended at the bottom of this email and any accompanying reviewer attachments can be seen via the link below: [LINK] In light of these reviews, we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to invite you to submit a revised version that addresses the reviewers' and editors' comments fully. You will appreciate that we cannot make a decision about publication until we have seen the revised manuscript and your response, and we expect to seek re-review by one or more of the reviewers. In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (eg: page and line number) where each change can be found. Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked up manuscript. In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300dpi resolution at resubmission; please read our figure guidelines for more information on our requirements: http://journals.plos.org/plosmedicine/s/figures. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. 
If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org. We hope to receive your revised manuscript by Aug 31 2021 11:59PM. Please email us (plosmedicine@plos.org) if you have any questions or concerns. ***Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.*** We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. You can see our competing interests policy here: http://journals.plos.org/plosmedicine/s/competing-interests. Please use the following link to submit the revised manuscript: https://www.editorialmanager.com/pmedicine/ Your article can be found in the "Submissions Needing Revision" folder. To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. 
Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols Please ensure that the paper adheres to the PLOS Data Availability Policy (see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it. Please let me know if you have any questions, and we look forward to receiving your revised manuscript. Sincerely, Richard Turner PhD, for Beryne Odeny Senior editor, PLOS Medicine rturner@plos.org ----------------------------------------------------------- Requests from the editors: Noting PLOS' data policy, https://journals.plos.org/plosmedicine/s/data-availability, please state whether the study's ethics approval would permit the study data to be shared with other researchers under conditions of confidentiality. In the abstract and throughout the paper, please quote p values alongside 95% CI, where available. Please add a new final sentence to the "Methods and findings" subsection of your abstract, which should begin "Study limitations include ..." or similar and should quote 2-3 of the study's main limitations. Please relocate the Author Summary after the Abstract. Throughout the text, please remove spaces from within the square brackets for reference call-outs (e.g., "... on others [4,9,10]."). Please remove the information on competing interests and funding from the end of the main text. In the event of publication, this will appear in the article metadata, via entries in the submission form. 
Please abbreviate journal names consistently in your reference list. Please add a completed checklist for the appropriate CONSORT extension as a supplementary document, labelled "S1_CONSORT_Checklist" or similar and referred to as such in the Methods section. In the checklist, please refer to individual items by section (e.g., "Methods") and paragraph number, not by line or page numbers as these generally change in the event of publication Comments from the reviewers: *** Reviewer #1: Hocking and colleagues submitted an interesting and well-written manuscript on the impact of removing financial incentives and/or audit/feedback on chlamydia testing rates in Australian GP clinics. They conducted a cluster randomized controlled trial to address this question, which to date has only been addressed by a few observational studies. They find declining testing rates over time for all groups, but only removal of audit/feedback resulted in a reduction of these rates beyond the existing trends. The manuscript contributes meaningfully to filling an important knowledge gap in this field. The authors appear to have done a very good job in designing and executing their study, and in reporting their results in this manuscript. I have read the manuscript carefully, and only have a few minor comments, which are detailed below: 1. The authors mention in the discussion that incentives (and perhaps audit/feedback as well) may not have been critical in driving test uptake. If that is the case, why would you then expect a decrease in preventive activities if these were removed (see hypothesis in p. 4, l. 118). It would be helpful if this authors would motivate their hypothesis more based on what is known from previous work in Australia and elsewhere. 2. In the discussion the authors mention that the incentives comprised less than 10 percent of GP practices' funding. It would be helpful if this would be mentioned earlier, e.g. in the section 'study design and participants'. 
In addition, what is the maximum incentive size relative to GP's maximum income (or GP practices revenues)? That would tell the reader something about the actual magnitude of the incentive. 3. A general point is that the authors could be a bit more consistent in the use of the terms 'GP' and 'Clinic', e.g. on page 5. It is not always clear if the authors talk about individual GPs or about the clinics that they work in. 4. p. 6, l 157: why the choice for the 67th percentile? 5. p. 6, l 170-171: I wonder why the authors used this as the denominator instead of all patients in this age range. Perhaps this has something to do with the fact that GPs in Australia are not paid on a capitation basis (?) and only on a FFS basis, which would imply that they only 'observe' their patients when they present themselves with a health issue? 6. p. 10, l 283: given that the focus was on Chlamydia screening rates, I don't think the authors can say something about 'preventive activities' in general. I think this should be changed in the title of the manuscript as well. 7. p. 10, l 305-307: I found this part a bit difficult to follow. *** Reviewer #2: This is a well designed and conducted 2X2 factorial cluster-RCT on the impact of removing financial incentives and/or audit and feedback on preventive care activities in general practice. The study design, outcomes, sample size, randomisation, trial registration, protocol, statistical methods and analyses, and presentation and interpretation of the results are mostly adequate. Especially, testing the assumption that there was no interaction effect between the two interventions was well done as the factorial design is only valid if there is no interaction between the interventions. However, there are still a few major issues needing attention. 
1) In the statistical analysis section on page 7, it says "our analysis was a modified intention-to-treat as clinics that were unable to provide outcome data at trial end were excluded from the primary analysis". However, it's either ITT or not ITT, the wording 'modified ITT' is vague and widely criticised so should be avoided. This is essentially a complete case analysis as 4 clusters/clinics were dropped and excluded in the primary analysis. 2) Analyses. As 4 clusters/clinics were dropped and excluded in the analyses, the randomisation were broken and interupted at both cluster and patients levels, therefore the primary analyses only adjusted for two minimisation factors are not sufficient and inadequate as clearly we can see the imbalance at the cluster level in socioeconomic status. Instead, fully-adjusted analyses should be used for primary and all analyses throughout the paper to avoid potential bias due to the loss of 4 clusters. 3) Missing data. Normally, a complete case analysis will go alongside with a sensitivity analysis with missing data imputation. However, the missing data issue was not dealt with at all in this study. While, it may not be feasible in this study for missing data imputation as 4 clinics were dropped, but it would be very useful to compare the characteristics of dropped clusters with that of remaining clusters in the same arm to see whether it's missing at random or not at random so that make sure we are able to address the potential bias and impact of the missing data on the trial results. *** Reviewer #3: Summary The original ACCEPt Trial was of a multifaceted, clinic-based intervention using computerized reminders, an education package, financial incentives, and feedback on testing rates. 
The ACCEPt trial resulted in a significant increase in Chlamydia testing rates of eligible patients between control and intervention practices (13% versus 20%), but there was no difference in the primary outcome, the prevalence of chlamydia among patients aged 16-29 who attended the clinics "at the end of the intervention period" (2.5 to 4 years later). In the current manuscript, "The impact of removing financial incentives and/or audit and feedback on preventive care activities in general practice: a cluster randomised controlled trial (ACCEPt-able)," Hocking and colleagues report on an RCT of the removal of either the financial incentive, audit and feedback, both, or neither on the rates of Chlamydia testing in primary care practices that were part of the original ACCEPt intervention group. Of 59 randomized practices, 55 contributed data. General Comments This appears to be a revision. I was not one of the initial reviewers. This is a potentially interesting analysis about a randomized deimplementation of financial incentives and audit and feedback following their successful implementation (in regards to increasing testing rates). I have several major concerns with the manuscript and analysis. First, the analytic method, the authors' own definition of clinical significance, and the marginal nature of the results call the conclusions into question. The protocol said that "analyses will be adjusted for the chlamydia testing rate at each general practice immediately prior to commencing ACCEPt-able" and "account for cluster…GP and patient variability." In the actual analysis, the investigators adjusted for clinic clustering, annual chlamydia testing rates, and the number of 16 to 29-year-olds attending the clinic each year. A more fully-adjusted model included patient sex, age-group, and socio-economic status of the clinic. 
Only the less-adjusted model examining the impact of the removal of audit and feedback was of marginal statistical significance, and even here, the odds ratio - which the authors identify as the planned primary analysis (page 7, line 202) - includes 1.0. Further adjustment yielded a non-significant result. In the Methods, the investigators cited a 5% absolute reduction as a "clinically relevant result." (I have actually used this exact clinically significant difference in some of our own analyses of quality or health services research.) The absolute reduction was only a 3.1% absolute decline. In all, this does not seem strong enough to hang the conclusion that "financial incentives don't work, but audit and feedback does so primary care practices should invest in audit and feedback." (See "Third" below regarding overgeneralizing the result.) Second, crucial details of GPs workflow around chlamydia testing are not described that might help the reader understand the lack of effectiveness of either the ACCEPt intervention or the persistence of financial incentives or audit and feedback. The reader is left to wonder what it is about ordering Chalmydia testing that is so difficult that an intensive, multifaceted intervention only led to a 7% increase in testing rates and those rates reverted almost back to their pre-ACCEPt level. For behavioral interventions, details of workflow and interventions are extremely important (Fox et al. BMJ. 2020;370:m3256), perhaps especially when they are not successful. It is overly simplistic to say "financial incentives and audit and feedback don't work." Regarding chlamydia testing, how, by whom, and when was it done? Regarding the financial incentives, in what context were they delivered (i.e., was it an unrecognizably small part of some larger, quarterly payment or was it identifiable as a "chlamydia testing incentive"). 
Regarding the feedback, all we know is that it was "given to GPs during a [quarterly] visit with a research officer," but how was it delivered? Was it provided with a descriptive norm or injunctive norm? Lack of these details makes it hard for readers to learn anything from the ACCEPt and ACCEPt-able experience. I am also curious about the details of how financial incentives and audit and feedback were deimplemented. Practices were invited and had to consent to participate in this deimplementation RCT. As such, presumably they were told they were going to have a prior intervention removed ("A research officer informed clinics of their allocation"), but how was this done? Thus, the invitation, consent, and enrollment process must have included the implicit message that "Chlamydia testing is less important than it was before" or "we are not going to be monitoring Chlamydia testing as closely." Just as the Hawthorne effect in an intervention trial is often the most impactful part of the interventions, as part of deimplementation, these implicit messages could have had as big an effect as the actual removal of the interventions. Indeed, most of the decrease in Chlamydia testing occurred between the Baseline and Year 1. Also, what happened to the computerized alerts? Did they persist? With apologies for 1 paragraph of editorializing, given the findings of ACCEPt and ACCEPt-able, I would guess the clinic environment and the mechanism of ordering was reliant on busy GPs to remember to bring up, discuss, and order Chlamydia testing that was not integrated into regular work-flow. Financial incentives and audit and feedback have their place in nudging clinicians regarding complex decision-requiring behaviors. But for something as simple as Chlamydia testing, a much better solution is to use practice facilitation to systematize or routinize the activity and remove it from the need for the GP to remember and act. 
In my own health system, we achieve rates around 90% when we have our check-in or rooming staff systematically perform tasks and use standing orders, as for influenza vaccination, fall screening, depression screening, and tobacco screening.

Third, related to the marginal nature of the results and the details around practice and interventions, it is an overgeneralization to say that these interventions did or did not affect "preventive care in general practice." Because the details of workflow are crucial, all that has been shown is that "the removal of financial incentives or audit and feedback did not affect chlamydia testing in rural Australian general practices." While that might be overly narrow, the details around individual preventive services, like chlamydia testing, cannot be generalized to ALL preventive services (e.g., counseling, cancer screening, immunizations, etc.).

Fourth, given that the interventions were delivered on a quarterly basis, it is unclear why the investigators chose to analyze their data on an annual basis and, in the analysis, only compare Year 2 to the baseline year. This effectively ignores all the data from Year 1 (when more practices were participating) and treats all of the data collected during Year 2 as equivalent (i.e., a visit in Month 13 is the same as a visit in Month 14). It also ignores the possibility that changes in chlamydia screening among the groups could have had different trajectories, and it could have overweighted the contribution of clinics that participated in the trial longer (the analysis was adjusted for clustering by practice, but not by practice volume).

Specific Comments

Title: As noted above, including "preventive care activities" in the title, for an intervention that had to do with chlamydia screening, is an overgeneralization.

Page 3, Line 66: Here, and throughout the manuscript, the investigators never state the number of GPs or the number of GPs clustered within practices.
I see there were 305 GPs in the 63 clinics randomized to the interventions in the original ACCEPt trial.

Page 3, Line 71: More a problem of presentation: here and elsewhere, it is confusing that the authors present the RCT as organized in 4 groups but then analyze the data in only 2 groups. This can be improved by mentioning the analytic plan earlier on and, in Lines 77 through 84, introducing the actual comparison groups much earlier in the sentences. As written, one only discovers which group is being discussed in the middle of the sentences, after some data about those groups have been presented.

Page 3, Abstract General Comment: The timeframe of the assessment of the primary outcome is not stated (in Year 2 after randomization).

Page 5, Line 123: The authors say there were no changes to the trial methods, but later report a protocol deviation (modified intention-to-treat analysis).

Page 5, Line 136: The authors mention, in addition to financial incentives and feedback, computer alerts, but what about the educational package? That was part of the initial multicomponent ACCEPt intervention. Was it only delivered at the beginning of ACCEPt and never repeated?

Page 5, Line 143: If a patient reported risk factors or genital symptoms and required further testing, how were repeat patients counted? How were they handled in the analysis?

Page 6, Line 168: The authors state that the primary outcome was the "annual chlamydia testing rate," but do not say when this was assessed. Not until the reader gets to Table 2 does it become clear that the primary analysis is between Baseline and Year 2.

Page 8, Line 243: The authors need to define the interpretation and direction of the interaction effect. Was the 3.2% (NS) the marginal increase in screening in practices randomized to retain both interventions relative to practices retaining neither? How is this conceptually different from the analysis presented on page 9, line 260?
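The 4-groups-versus-2-comparisons confusion noted above can be made concrete: in a 2 × 2 factorial trial, each intervention's effect is estimated by pooling the four arms into two marginal groups. A minimal Python sketch follows (group labels and patient counts taken from the trial abstract; illustrative only, not the authors' analysis code):

```python
# Minimal sketch of how the four ACCEPt-able arms collapse into the two
# marginal comparisons of the 2 x 2 factorial analysis. Group labels and
# patient counts come from the trial abstract; this is purely illustrative
# and is NOT the authors' analysis code.

groups = {
    # group: (incentives_removed, audit_feedback_removed, n_patients)
    "A": (True,  False, 11196),  # incentives removed, audit/feedback retained
    "B": (False, True,  11944),  # audit/feedback removed, incentives retained
    "C": (True,  True,  11566),  # both removed
    "D": (False, False, 14819),  # both retained
}

def marginal(selector):
    """Pool the arms picked out by `selector` into one marginal group."""
    return sorted(g for g, v in groups.items() if selector(v))

def pooled_n(arms):
    """Total patients across the pooled arms."""
    return sum(groups[g][2] for g in arms)

# Contrast 1: incentives removed (A + C) versus retained (B + D)
incentives_removed  = marginal(lambda v: v[0])       # ['A', 'C']
incentives_retained = marginal(lambda v: not v[0])   # ['B', 'D']

# Contrast 2: audit/feedback removed (B + C) versus retained (A + D)
audit_removed  = marginal(lambda v: v[1])            # ['B', 'C']
audit_retained = marginal(lambda v: not v[1])        # ['A', 'D']

print(incentives_removed, pooled_n(incentives_removed))  # ['A', 'C'] 22762
```

Because every arm contributes to both contrasts, the factorial analysis uses all of the data for each comparison, which is why trialists report the two marginal effects rather than four pairwise ones.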
Page 9, Line 274: Given that all of the clinics participating in ACCEPt and ACCEPt-able were rural, it is not clear how information about metropolitan clinics is included.

Page 18, Table 1: The table should include information about GPs within practices (central tendency and variation).

Page 20, Table 2: In the footnote and elsewhere, the authors say the denominator is "N=number of individuals attending the clinic," but I think it should be the number of 16-29-year-olds.

Page 21, Table 3: The rows about "authorising payments" are unclear. What do these represent?

*** Any attachments provided with reviews can be seen via the following link: [LINK]

9 Sep 2021
Submitted filename: Response to reviewers-FINAL.docx

26 Oct 2021

Dear Dr. Hocking,

Thank you very much for re-submitting your manuscript "The impact of removing financial incentives and/or audit and feedback on chlamydia testing in general practice: A cluster randomised controlled trial (ACCEPt-able)" (PMEDICINE-D-21-02501R2) for review by PLOS Medicine. I have discussed the paper with my colleagues and the academic editor, and it was also seen again by two reviewers. I am pleased to say that, provided the remaining editorial and production issues are dealt with, we are planning to accept the paper for publication in the journal.

The remaining issues that need to be addressed are listed at the end of this email. Any accompanying reviewer attachments can be seen via the link below. Please take these into account before resubmitting your manuscript: [LINK]

***Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments.
If eligible, we will contact you to opt in or out.***

In revising the manuscript for further consideration here, please ensure you address the specific points made by each reviewer and the editors. In your rebuttal letter, you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file. A version with changes marked must also be uploaded as a marked-up manuscript file. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper.

If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make the findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract.

We expect to receive your revised manuscript within 1 week. Please email us (plosmedicine@plos.org) if you have any questions or concerns.

We ask every co-author listed on the manuscript to fill in a contributing author statement. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion, this could hold up the re-review process. Should there be a problem getting one of your co-authors to fill in a statement, we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT.

Please ensure that the paper adheres to the PLOS Data Availability Policy (see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information.
For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

Please note that when your manuscript is accepted, an uncorrected proof of your manuscript will be published online ahead of the final version, unless you've already opted out via the online submission form. If, for any reason, you do not want an earlier version of your manuscript published online, or are unsure if you have already indicated as such, please let the journal staff know immediately at plosmedicine@plos.org.

If you have any questions in the meantime, please contact me or the journal staff at plosmedicine@plos.org. We look forward to receiving the revised manuscript by Nov 02 2021 11:59PM.

Sincerely,
Beryne Odeny
PLOS Medicine
plosmedicine.org

------------------------------------------------------------

Requests from Editors:

1) The Data Availability Statement (DAS) requires revision.
If part of the data is not freely available, please include an appropriate contact (web or email address) for inquiries (this cannot be a study author/co-author).

2) Please place the Author Summary after the Abstract.

3) Abstract - In the last sentence of the Abstract Methods and Findings section, please describe the main limitation(s) of the study's methodology.

4) The terms gender and sex are not interchangeable (as discussed in http://www.who.int/gender/whatisgender/en/); please use the appropriate term.

5) Please indicate in the figure captions the meaning of the bars and whiskers in the figures.

6) Please remove the information on funding acquisition from the end of the main text. In the event of publication, this will appear in the article metadata, via entries in the submission form.

7) References: a) Please ensure there is no space between in-text reference call-outs. For example, "…community [2,8,9]." b) Please ensure that journal name abbreviations consistently match those found in the National Center for Biotechnology Information (NCBI) databases. https://journals.plos.org/plosmedicine/s/submission-guidelines#loc-references

8) To help us extend the reach of your research, please provide any Twitter handle(s) that would be appropriate to tag, including your own, your coauthors', your institution, funder, or lab.

Comments from Reviewers:

Reviewer #2: Many thanks to the authors for their great effort to improve the manuscript. I am mostly satisfied with the response and revision. However, one minor issue still remains. In response to my comments on missing data, the authors said, "We have added in two additional supplementary tables...(Supplementary Tables 3A and 3B)". However, these two tables have neither been mentioned in the final clean version nor appeared anywhere in the supplementary information. Could the authors please add and link these two supplementary tables in the submission?
Also, it seems a previous version rather than a clean version was presented in the resubmission. Can the authors make sure all the changes are included and appear in the final clean version?

Reviewer #3: This is a revised version. The "clean" version that was uploaded appears to be the same as the original (R1) revision. I am reviewing the response letter and the "marked" version. Not having the clean version has made this review more challenging. The authors have addressed most, but not all, of the prior critiques. In particular, the addition of details about the clinic environment and the nature of the intervention greatly improves the usefulness of the manuscript. It is good that the authors have made the language more specific about their intervention being limited to chlamydia testing. The authors were non-responsive to my first comment about using multiple models and the lack of statistical significance of the results as shown in Table 2 (the confidence intervals for the ORs of all treatment effects include 1.0). With apologies, my fourth point, about the analysis only comparing Year 2 in aggregate to the baseline year, included a typo: I meant to write that "a visit in Month 13 is the same as a visit in Month 24" (not "Month 14"). The authors were non-responsive on this point. At a minimum, they need to address this as a limitation.

Any attachments provided with reviews can be seen via the following link: [LINK]

28 Oct 2021
Submitted filename: Response to reviewers - R2.docx

2 Nov 2021

Dear Dr Hocking,

On behalf of my colleagues and the Academic Editor, Dr. David Peiris, I am pleased to inform you that we have agreed to publish your manuscript "The impact of removing financial incentives and/or audit and feedback on chlamydia testing in general practice: A cluster randomised controlled trial (ACCEPt-able)" (PMEDICINE-D-21-02501R3) in PLOS Medicine.
Before your manuscript can be formally accepted, you will need to complete some formatting changes, which you will receive in a follow-up email. Please be aware that it may take several days for you to receive this email; during this time no action is required by you. Once you have received these formatting requests, please note that your manuscript will not be scheduled for publication until you have made the required changes.

In the meantime, please log into Editorial Manager at http://www.editorialmanager.com/pmedicine/, click the "Update My Information" link at the top of the page, and update your user information to ensure an efficient production process.

PUBLICATION SCHEDULE

Given our busy publication schedule for the remainder of 2021, we are planning to publish your paper in early January 2022 (the exact date will be communicated to you once confirmed).

PRESS

We frequently collaborate with press offices. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximise its impact. If the press office is planning to promote your findings, we would be grateful if they could coordinate with medicinepress@plos.org. If you have not yet opted out of the early version process, we ask that you notify us immediately of any press plans so that we may do so on your behalf.

We also ask that you take this opportunity to read our Embargo Policy regarding the discussion, promotion, and media coverage of work that is yet to be published by PLOS. As your manuscript is not yet published, it is bound by the conditions of our Embargo Policy. Please be aware that this policy is in place both to ensure that any press coverage of your article is fully substantiated and to provide a direct link between such coverage and the published work. For full details of our Embargo Policy, please visit http://www.plos.org/about/media-inquiries/embargo-policy/.
To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Thank you again for submitting to PLOS Medicine. We look forward to publishing your paper.

Sincerely,
Beryne Odeny
PLOS Medicine