Literature DB >> 28859674

Identifying research priorities for effective retention strategies in clinical trials.

Anna Kearney1, Anne Daykin2, Alison R G Shaw2, Athene J Lane2, Jane M Blazeby2, Mike Clarke3, Paula Williamson4, Carrol Gamble5.   

Abstract

BACKGROUND: The failure to retain patients or to collect primary-outcome data is a common challenge for trials; it reduces statistical power and can introduce bias into the analysis. Identifying strategies to minimise missing data was the second highest methodological research priority in a Delphi survey of the Directors of UK Clinical Trial Units (CTUs) and is important to minimise waste in research. Our aim was to assess current retention practices within the UK and priorities for future research to evaluate the effectiveness of strategies to reduce attrition.
METHODS: Seventy-five chief investigators of NIHR Health Technology Assessment (HTA)-funded trials starting between 2009 and 2012 were surveyed to elicit their awareness about causes of missing data within their trial and recommended practices for improving retention. Forty-seven CTUs registered within the UKCRC network were surveyed separately to identify approaches and strategies being used to mitigate missing data across trials. Responses from the current practice surveys were used to inform a subsequent two-round Delphi survey with registered CTUs. A consensus list of retention research strategies was produced and ranked by priority.
RESULTS: Fifty out of seventy-five (67%) chief investigators and 33/47 (70%) registered CTUs completed the current practice surveys. Seventy-eight percent of trialists were aware of retention challenges and implemented strategies at trial design. Patient-initiated withdrawal was the most common cause of missing data. Registered CTUs routinely used newsletters, timeline of participant visits, and telephone reminders to mitigate missing data. Whilst 36 out of 59 strategies presented had been formally or informally evaluated, some frequently used strategies, such as site initiation training, have had no research to inform practice. Thirty-five registered CTUs (74%) participated in the Delphi survey. Research into the effectiveness of site initiation training, frequency of patient contact during a trial, the use of routinely collected data, the frequency and timing of reminders, triggered site training and the time needed to complete questionnaires was deemed critical. Research into the effectiveness of Christmas cards for site staff was not of critical importance.
CONCLUSION: The surveys of current practices demonstrate that a variety of strategies are being used to mitigate missing data, but with little evidence to support their use. Six retention strategies were deemed critically important within the Delphi survey and should be a primary focus of future retention research.

Keywords:  Attrition; Clinical trials; Missing data; Missing data strategies; Retention; Study design

Year:  2017        PMID: 28859674      PMCID: PMC5580283          DOI: 10.1186/s13063-017-2132-z

Source DB:  PubMed          Journal:  Trials        ISSN: 1745-6215            Impact factor:   2.279


Background

The challenges of recruiting and retaining participants in clinical trials are well documented [1-4] and addressing these is of critical importance. A recent Delphi survey with directors of UKCRC Clinical Trial Units (CTUs) established that identifying methods to improve recruitment was a top methodological research priority, with methods to minimise attrition and the development of core outcome sets as joint second [5]. These priorities are in line with moves to minimise waste in research, ensuring that trials are as robust and cost-effective as possible [6-10]. One of the key ways to achieve this is to maximise the retention of all recruited patients in the study and the collection, analysis and reporting of a complete set of outcomes for them. Whilst there are a number of projects addressing recruitment challenges, it is important to ensure there is an equal focus on retention. Recruiting and randomising patients who are not subsequently retained for the measurement of the primary outcome may be worse for the analysis of the trial than not randomising them at all. Missing data arises from patients being lost to follow-up or withdrawing before data collection time points, difficulties in measuring and recording outcomes for patients who are retained, incomplete or missing patient-reported outcomes, or the exclusion of data from randomised patients from the analysis population. Missing primary-outcome data in clinical trials is a common problem which has the potential to reduce the power of the trial and can introduce bias if the reasons for, or amounts of, missing data differ across arms. Both the Food and Drug Administration (FDA) and European Medicines Agency (EMA) guidelines [11-14] strongly emphasise the need to mitigate missing data within trial design and conduct, as statistical methods ‘cannot robustly recover the estimates from a complete data set’.
Our aim was to understand the current practices employed by UK trialists to mitigate missing data in clinical trials. To achieve this we conducted a survey of chief investigators of National Institute for Health Research (NIHR) Health Technology Assessment (HTA)-funded trials and of CTUs registered within the UK Clinical Research Collaboration (UKCRC) network. A subsequent Delphi survey across registered CTUs was undertaken to establish retention research priorities to minimise missing data in clinical trials.

Methods

Surveys of current practices for mitigating missing data

Two different surveys of HTA chief investigators and registered CTUs were used to capture current retention practices. HTA chief investigators were surveyed to provide a clinical perspective on retention strategies used within specific trials representing a range of disease areas. In contrast, registered CTUs were surveyed to identify non-clinical expertise and insights pertinent across a wide range of trial designs. Whilst it is possible some HTA-funded trials engaged the expertise of registered CTUs, the two surveys were designed to elicit responses from these different perspectives. A cohort of 76 NIHR Health Technology Assessment Programme (HTA)-funded randomised trials was identified through the programme’s website portfolio [15]. The NIHR HTA portfolio was chosen as it represents the largest public funder of current and ongoing trials, and the start dates ensured at least 1 year’s recruitment following any pilot work or delays obtaining governance approvals. Two-arm, parallel trials were included. Exclusion criteria were pilot studies, patient preference trials, phase 1 and 2 trials, and studies evaluating longer-term follow-up of previous trials and substudies. This cohort was identified as part of a wider project on retention; parallel trials with more than two arms were also excluded for consistency across linked projects. Chief investigators were surveyed to identify whether retention concerns were known at the trial outset and which missing data strategies were implemented either during the trial design or subsequently during trial conduct (Additional file 1). Survey questions covered the causes of missing data within their current trials and whether observed levels of missing data were higher, lower or as expected compared to their estimations at trial outset. Respondents were also asked to recommend the three most effective retention practices based on their experience of clinical trials.
A second survey was emailed to 47 registered CTUs identified from the UK Clinical Research Collaboration (UKCRC) website in August 2014 (Additional file 2). Surveys were initially sent to registered CTU directors to complete or delegate as appropriate. If there was no response, attempts were made to identify contact details for other senior staff such as deputy directors, operational directors, senior trial managers or senior statisticians. Fifty-nine strategies to reduce missing data were collated from the Cochrane review and the published results of previous surveys [16, 17]. Registered CTUs were asked to identify which of these strategies they had used within their trials either during the initial design or later in response to specific challenges. Registered CTUs also had the opportunity to suggest strategies that they felt were missing. For each strategy selected, registered CTUs were asked whether it was used routinely or occasionally and whether they had evaluated its effectiveness. Evaluations were classified as either a formal nested study or an informal evaluation comparing retention rates before and after implementation. Questions also explored how frequently sample size calculations were adjusted for missing data, what percentage of missing data was used and the justification for this. Both surveys were created in MS Word and distributed via email to allow collaboration within the relevant trial teams and registered CTUs. One author (AK) categorised free-text fields and analysed binary responses from the surveys using SPSS 22.

Delphi survey

A Delphi survey was used to gain consensus amongst the registered CTUs on which missing data strategies should be prioritised for future research to evaluate their effectiveness. Registered CTUs were invited to take part in the Delphi survey because of their knowledge across clinical trials, so that the established priorities would be pertinent to the majority of trials undertaken. Forty-seven CTU directors, or their proxy from the previous survey, were invited by email to take part in a two-round online Delphi survey. During registration participants were given a unique ID number to allow the survey to be completed across a number of sessions and to link responses between rounds. A list of 67 missing data strategies was compiled for use in round 1, based on the results of the surveys, which were shared with CTUs and HTA chief investigators. Registered CTU responses directed changes to the 59 strategies initially used: three strategies were amended for clarity; four strategies were separated to allow delineation between strategies aimed at sites and participants; and similar strategies were grouped together to minimise Delphi survey burden (Additional file 3: Table S1). Two topics were removed as they were not used by registered CTUs (Additional file 3: Table S2). Twelve new strategies were added, 10 of which were influenced by responses from the chief investigators’ survey. These predominantly focussed on the timing and frequency of strategies, e.g. newsletters, patient contact, site contact and questionnaires, and were added in response to comments in the chief investigators’ survey about ‘regular’ contact. Participants were invited to score each strategy on a scale of 1–9 based on the GRADE guidelines [18]. Scores of 1–3 indicated that research into the effectiveness of the retention strategy was not important, 4–6 that it was important but not critical, and 7–9 that research was of critical importance.
All survey participants could suggest additional retention strategies during round 1 and could abstain from scoring any strategy. All participants who completed round 1 were invited to participate in round 2 where they were shown a summary of the group’s scores from the previous round and given the option of changing their individual scores or keeping them the same (Fig. 1). All strategies from round 1 were used in the subsequent round. Additional strategies suggested by participants were reviewed by CG and AK for inclusion in round 2.
Fig. 1

An example of the scoring software used in round 2 of the Delphi survey. Individual participant scores were highlighted in yellow, with percentage of all respondent scores listed beneath each radio button

Incentive prize draws of £75 and £25 high-street gift vouchers were offered for responses to round 2 within 7 and 12 days, respectively. The aim of the incentive was to improve response rates and minimise the need for reminders. Participants were notified of the incentive within the email invitation to round 2. Consensus was predefined using criteria from a similar methodological priority-setting exercise and core outcome development [5, 19]. Consensus that research was of critical importance was reached if > 70% of scores were 7–9 and < 15% of scores were 1–3. Consensus that research was not important was reached if > 70% of scores were 1–3 and < 15% of scores were 7–9. All research topics from round 2 were ranked according to the percentage of participants scoring a topic as critically important (scores 7–9). Where strategies achieved the same percentage they were ranked in order of the percentage of scores of 4–6.
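The consensus and ranking rules above are mechanical, so they can be expressed compactly. The following is a minimal illustrative sketch of those rules; the function names and example scores are invented for illustration and are not from the study:

```python
# Sketch of the pre-defined Delphi consensus criteria and ranking rule:
# GRADE-style scores 1-9; consensus "critical" if >70% score 7-9 and
# <15% score 1-3; consensus "not important" if >70% score 1-3 and
# <15% score 7-9; ranking by % scoring 7-9, ties broken by % scoring 4-6.

def classify(scores):
    """Classify one strategy's list of 1-9 scores against the criteria."""
    n = len(scores)
    low = sum(1 <= s <= 3 for s in scores) / n    # proportion scoring 1-3
    mid = sum(4 <= s <= 6 for s in scores) / n    # proportion scoring 4-6
    high = sum(7 <= s <= 9 for s in scores) / n   # proportion scoring 7-9
    if high > 0.70 and low < 0.15:
        verdict = "critically important"
    elif low > 0.70 and high < 0.15:
        verdict = "not important"
    else:
        verdict = "no consensus"
    return verdict, high, mid

def rank(strategy_scores):
    """Rank strategies by % critical (7-9), ties broken by % 4-6."""
    results = {name: classify(s) for name, s in strategy_scores.items()}
    ordered = sorted(results, key=lambda n: (results[n][1], results[n][2]),
                     reverse=True)
    return [(name, results[name][0]) for name in ordered]

# Hypothetical example: 10 respondents scoring two strategies
example = {
    "Site initiation training": [8, 9, 7, 8, 7, 9, 8, 7, 5, 8],
    "Christmas cards for site staff": [1, 2, 1, 3, 2, 1, 2, 5, 1, 2],
}
```

Strategies failing both consensus rules simply carry forward to the next round unclassified, which is why most items in Table 4 are ranked but have no consensus footnote.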

Results

Seventy-six chief investigators were approached to complete the survey, but one responded to confirm that their trial did not proceed following the initial pilot, and it was excluded from the cohort. Fifty of the remaining 75 (67%) surveys were returned (Fig. 2). A breakdown of responder roles can be found in Additional file 3: Table S3. Trials represented a broad range of health conditions and intervention types (Additional file 3: Table S4). Over half the trials had closed to recruitment, and 38 (76%) of the trials were aiming to recruit no more than 1000 participants.
Fig. 2

Cohort of National Institute for Health Research (NIHR) Health Technology Assessment Programme (HTA)-funded trials. A search of the NIHR HTA portfolio website was conducted on 23 September 2014 using the keyword ‘randomised’, limited to primary research

Respondents from 39 trials (78%) were aware of retention issues at the outset of the trial, with 19 (49%) citing challenges associated with the patient population, such as high mortality or a mobile population, and 16 (41%) concerned about patients not returning data (Additional file 3: Table S5). Only seven of the 50 trials (14%) stated that current missing data levels were higher than expected at trial design. Forty-one (84%) trials experienced missing data caused by patient-initiated withdrawal, 30 (61%) through losing contact with patients and 24 (49%) through patients not returning data. However, 14 trials (29%) also reported that data was missing because patients did not attend visits, 12 (25%) due to clinical staff failing to take measurements and 10 (20%) reported that data was not provided by clinical staff (Table 1).
Table 1

Current causes of missing data within the cohort of Health Technology Assessment Programme (HTA)-funded trials

Causes of missing data | Number of trials (%), n = 49^a
Patients withdrawing | 41 (84%)
Losing contact with patients | 30 (61%)
Patients not returning questionnaire | 24 (49%)
Patient deaths | 23 (47%)
Clinicians withdrawing patients | 17 (35%)
Patients not attending a visit/clinic | 14 (29%)
Missed measurement by clinical staff | 12 (25%)
Patient outcomes other than death preventing measurement, e.g. coma, too ill to complete measures | 10 (20%)
Data not provided by clinical staff | 10 (20%)
Other | 6 (12%)
Technology problems | 4 (8%)
Laboratory problems | 2 (4%)

Survey respondents chose all causes of missing data observed in their trial. ^a One person did not complete the question

Based on the respondents’ experiences of clinical trials, the most effective practices for mitigating missing data were robust monitoring and working closely with research sites. Twenty-five (50%) trials recommended monitoring practices to identify, track and rigorously follow up missing data, 15 (30%) recommended maintaining good relationships with trial sites and ensuring regular contact, 11 (22%) highlighted the importance of site training and 10 (20%) recommended offering multiple approaches to collect data, such as home visits or telephone interviews (Table 2).
Table 2

Top five practices to mitigate missing data recommended by chief investigators

Retention strategy | Number of respondents (%), n = 50
Monitoring (procedures, methods and systems for monitoring data return and following up outstanding data) | 25 (50%)
Good site relationship/regular contact with sites to ensure buy in | 15 (30%)
Site training (initiation training and triggered training) | 11 (22%)
Multiple methods of data collection | 10 (20%)
Well-chosen measures and outcomes | 6 (12%)

See Additional file 3: Table S7 for the complete list of recommended practices


Survey of registered CTUs

Thirty-three out of forty-seven (70%) registered CTUs responded to the survey of current practices. Twenty-nine (90%) registered CTUs routinely adjusted their sample size to account for missing data; of these, 19 (66%) used evidence from other trials to inform the level of adjustment, nine (31%) used their own past experience, four (14%) used pilot data, two (7%) used estimated figures from the chief investigator, two (7%) used a standard 20% dropout rate and one (3%) used a best guess (Additional file 3: Table S9). Newsletters were the most routinely used missing data strategy, reported by 23 registered CTUs (70%) (Table 3). A total of 30 registered CTUs reported using newsletters at some point to mitigate missing data, 20 of which used them to communicate with sites, one used them to communicate with patients and nine used them with both audiences.
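The survey did not record which formula units used for this adjustment, but the conventional approach inflates the target sample size by the anticipated dropout proportion so that roughly the required number of analysable participants remains after attrition. A minimal sketch under that assumption (the function name is illustrative):

```python
import math

def inflate_sample_size(n_required, dropout_rate):
    """Inflate a per-arm sample size so that, after the anticipated
    dropout proportion is lost, roughly n_required analysable
    participants remain. Conventional adjustment: n / (1 - d)."""
    if not 0 <= dropout_rate < 1:
        raise ValueError("dropout_rate must be in [0, 1)")
    return math.ceil(n_required / (1 - dropout_rate))

# e.g. 200 analysable patients needed per arm with a standard 20%
# anticipated dropout rate: inflate_sample_size(200, 0.20) -> 250
```

Note that this inflation preserves power only if the retained patients are representative; it does nothing to address bias from differential dropout, which is why the guidelines cited above emphasise prevention over statistical correction.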
Table 3

Missing data interventions and reported evaluations of their effectiveness (registered Clinical Trial Unit (CTU) survey, Questions 6 and 7)

Missing data intervention | Respondents who have used the intervention (% of all survey respondents, n = 33) | Used routinely (% of all survey respondents, n = 33) | Used occasionally (% of all survey respondents, n = 33) | No response for frequency of use (% of respondents who used intervention) | Before/after evaluation (informal) | Nested RCT evaluation (formal) | Total evaluations
Newsletters^a | 30 (91%) | 23 (70%) | 5 (15%) | 2 (7%) | 2 | 2 | 4
A timeline of participant visits for sites | 24 (73%) | 19 (58%) | 4 (12%) | 1 (4%) | 0 | 0 | 0
Inclusion of prepaid envelope (questionnaires)^b | 28 (85%)^c | 19 (58%) | 6 (18%) | 1 (4%) | 0 | 1 | 1
Telephone reminders | 28 (85%) | 18 (55%) | 9 (27%) | 1 (4%) | 2 | 1 | 3
Data collection scheduled with routine care | 25 (76%) | 18 (55%) | 6 (18%) | 1 (4%) | 1 | 1 | 2
Site initiation training on missing data | 19 (58%) | 18 (55%) | 0 (0%) | 1 (5%) | 0 | 0 | 0
Investigator meetings face to face | 22 (67%) | 17 (52%) | 5 (15%) | 0 (0%) | 0 | 1 | 1
Routine site visits by CTU staff | 23 (70%) | 15 (45%) | 8 (24%) | 0 (0%) | 1 | 0 | 1
Targeted recruitment of sites/GPs | 21 (64%) | 15 (45%) | 6 (18%) | 0 (0%) | 0 | 2 | 2
Flexibility in appointment times | 21 (64%) | 15 (45%) | 4 (12%) | 2 (10%) | 1 | 1 | 2
Communication of trial results | 20 (61%) | 15 (45%) | 3 (9%) | 2 (10%) | 1 | 0 | 1
Investigator teleconferences | 22 (67%)^d | 15 (45%) | 8 (24%) | 0 (0%) | 0 | 1 | 1
Questionnaires completed in clinic^b | 22 (67%) | 15 (45%) | 5 (15%) | 2 (9%) | 1 | 1 | 2
Minimising frequency of questionnaires^b | 21 (64%) | 15 (45%) | 3 (9%) | 3 (14%) | 0 | 0 | 0
Short questionnaire^b | 24 (73%) | 14 (42%) | 9 (27%) | 1 (4%) | 0 | 1 | 1
Collecting multiple contact details | 22 (67%) | 13 (39%) | 8 (24%) | 1 (5%) | 2 | 1 | 3
Email reminders | 21 (64%) | 13 (39%) | 6 (18%) | 2 (10%) | 4 | 0 | 4
Postal reminders | 23 (70%) | 12 (36%) | 10 (30%) | 1 (4%) | 2 | 2 | 4
Total design method for questionnaires^b | 15 (46%) | 12 (36%) | 3 (9%) | 0 (0%) | 1 | 1 | 2
Re-imbursement of participant expenses | 24 (73%)^e | 11 (33%) | 12 (36%) | 0 (0%) | 1 | 1 | 2
Triggered site training on missing data | 23 (70%) | 11 (33%) | 12 (36%) | 0 (0%) | 0 | 0 | 0
Use of routinely collected data | 29 (88%) | 10 (30%) | 17 (52%) | 2 (7%) | 2 | 1 | 3
Contact GPs for missing data/trace patients | 27 (82%) | 10 (30%) | 14 (42%) | 3 (11%) | 0 | 1 | 1
Patient diaries | 26 (79%) | 10 (30%) | 13 (39%) | 3 (12%) | 1 | 0 | 1
Enhanced cover letter (questionnaires)^b | 14 (43%) | 9 (27%) | 5 (15%) | 0 (0%) | 1 | 2 | 3
Staggered per patient payments to sites | 20 (61%) | 8 (24%) | 12 (36%) | 0 (0%) | 0 | 0 | 0
Patient data entry | 18 (55%) | 8 (24%) | 8 (24%) | 2 (11%) | 0 | 1 | 1
Trial identity cards | 14 (43%) | 8 (24%) | 5 (15%) | 1 (7%) | 0 | 0 | 0
Telephone questionnaires | 20 (61%) | 7 (21%) | 11 (33%) | 2 (10%) | 2 | 1 | 3
Trial website | 18 (55%) | 7 (21%) | 10 (30%) | 1 (6%) | 0 | 0 | 0
Taking contact details for a friend/family | 13 (40%) | 7 (21%) | 6 (18%) | 0 (0%) | 0 | 1 | 1
Long but clear questionnaire^b | 10 (31%) | 7 (21%) | 2 (6%) | 1 (10%) | 0 | 0 | 0
Only collecting the primary outcome for patients with missing data | 17 (52%) | 6 (18%) | 10 (30%) | 1 (6%) | 2 | 1 | 3
Gift | 18 (55%) | 6 (18%) | 10 (30%) | 2 (11%) | 1 | 1 | 2
ONS flagging | 16 (49%)^e | 6 (18%) | 7 (21%) | 2 (13%) | 0 | 0 | 0
Flexibility in appointment locations | 14 (43%) | 6 (18%) | 6 (18%) | 2 (14%) | 1 | 0 | 1
Money/gift voucher given on completion of a milestone | 17 (52%) | 5 (15%) | 12 (36%) | 0 (0%) | 1 | 0 | 1
Contacting patients between visits | 13 (40%) | 4 (12%) | 9 (27%) | 0 (0%) | 0 | 0 | 0
Christmas and birthday cards | 13 (40%) | 4 (12%) | 9 (27%) | 0 (0%) | 0 | 1 | 1
Freephone number for updating contact | 7 (22%) | 4 (12%) | 3 (9%) | 0 (0%) | 0 | 0 | 0
Follow-up through patient notes only | 19 (58%)^e | 3 (9%) | 15 (45%) | 0 (0%) | 0 | 0 | 0
Transport to and from appointments | 8 (25%) | 3 (9%) | 4 (12%) | 1 (13%) | 0 | 0 | 0
SMS text reminders | 16 (49%)^e | 2 (6%) | 13 (39%) | 0 (0%) | 3 | 1 | 4
Trial certificate | 8 (25%) | 2 (6%) | 6 (18%) | 0 (0%) | 0 | 0 | 0
Medical questions first in questionnaire^b | 2 (7%) | 2 (6%) | 0 (0%) | 0 (0%) | 0 | 0 | 0
Generic questions first in questionnaire^b | 3 (10%) | 2 (6%) | 0 (0%) | 1 (33%) | 0 | 0 | 0
Questionnaires sent less than 3 weeks after a visit^b | 2 (7%) | 2 (6%) | 0 (0%) | 0 (0%) | 0 | 0 | 0
Other (questionnaires)^b | 2 (7%) | 2 (6%) | 0 (0%) | 0 (0%) | 0 | 0 | 0
Money/gift voucher given regardless | 8 (25%) | 1 (3%) | 7 (21%) | 0 (0%) | 1 | 2 | 3
Case management | 3 (10%) | 1 (3%) | 2 (6%) | 0 (0%) | 0 | 0 | 0
Other | 2 (7%) | 1 (3%) | 1 (3%) | 0 (0%) | 0 | 0 | 0
Personal touch (questionnaires)^b | 9 (28%) | 1 (3%) | 8 (24%) | 0 (0%) | 0 | 0 | 0
Questions about health issue first in questionnaire^b | 1 (4%) | 1 (3%) | 0 (0%) | 0 (0%) | 1 | 0 | 1
Questionnaires sent before clinic visit^b | 5 (16%) | 1 (3%) | 4 (12%) | 0 (0%) | 0 | 1 | 1
Prize draw limited to trial participants | 6 (19%) | 0 (0%) | 6 (18%) | 0 (0%) | 0 | 0 | 0
Social media | 5 (16%)^e | 0 (0%) | 3 (9%) | 1 (20%) | 0 | 0 | 0
Priority or recorded post (questionnaires)^b | 4 (13%) | 0 (0%) | 3 (9%) | 1 (25%) | 0 | 0 | 0
Crèche service | 0 (0%) | 0 (0%) | 0 (0%) | 0 (0%) | 0 | 0 | 0
Behavioural motivation | 0 (0%) | 0 (0%) | 0 (0%) | 0 (0%) | 0 | 0 | 0
Charity donation | 0 (0%) | 0 (0%) | 0 (0%) | 0 (0%) | 0 | 0 | 0
National lottery ticket or similar public draw | 0 (0%) | 0 (0%) | 0 (0%) | 0 (0%) | 0 | 0 | 0
Total |  |  |  |  | 35 | 31 | 66
Number of strategies that have been evaluated (% of all strategies, n = 59) |  |  |  |  | 23 (39%) | 26 (44%) | 36 (61%)

^a One case where the newsletters were used with patients only, nine cases where newsletters were used with both patients and research sites, and 20 cases where they were only used with research sites. ^b Strategies used to enhance questionnaire response rates, from Question 7 of the registered CTU survey. ^c Two respondents stated that they would not use this intervention again. ^d One person reported both occasional and routine use. ^e One respondent stated that they would not use this intervention again

CTU Clinical Trial Unit, GP general practitioner, ONS Office for National Statistics, RCT randomised controlled trial, SMS short message service

Nineteen registered CTUs (58%) reported routinely using a timeline of participant visits and the inclusion of a prepaid envelope for questionnaires, and 18 (55%) regularly used telephone reminders, data collection scheduled with routine care and site initiation training on missing data. Seventeen registered CTUs (52%) reported occasionally using routinely collected data, 15 (45%) occasionally used follow-up through patient notes, 14 (42%) contacting GPs for missing data/to trace patients, and 13 (39%) patient diaries, SMS (text) reminders and re-imbursement of participant expenses. No-one reported using a crèche service, behavioural motivation, charity donations or public draws such as national lottery tickets. Thirty-one planned, ongoing or complete nested randomised controlled trial evaluations were reported for 26 of the 59 (44%) different missing data strategies. Thirty-five informal evaluations of 23 interventions assessed effectiveness by comparing retention before and after the implementation of a retention strategy. In total, 36 of the 59 listed strategies had had some form of evaluation (Table 3).
However, no assessments of site initiation training or a timeline of participant visits had been undertaken despite their frequent use by registered CTUs. Thirty-five (74%) registered CTUs responded to round 1 of the Delphi survey, with 34 (97%) completing round 2. Two people independently responded on behalf of one registered CTU; both sets of answers were included and we report the analysis for 36 responses in round 1 and 35 responses in round 2. In round 1 consensus was reached on two strategies: the use of routinely collected data (72%) and site initiation training on missing data (75%) (Table 4).
Table 4

Delphi survey research priorities for assessing the effectiveness of missing data interventions

Missing data intervention | Round 1: % 1–3 | % 4–6 | % 7–9 | Round 2: % 1–3 | % 4–6 | % 7–9 | Ranking
Site initiation training on missing data^a | 6% | 19% | 75% | 6% | 11% | 83% | 1
Frequency of patient contact during the trial^a | 3% | 31% | 66% | 0% | 21% | 79% | 2
Use of routinely collected data^a | 6% | 22% | 72% | 6% | 17% | 77% | 3
Frequency and timing of reminders^a | 0% | 34% | 66% | 0% | 24% | 76% | 4
Triggered site training on missing data^a | 6% | 31% | 64% | 3% | 23% | 74% | 5
Length/time needed to complete the questionnaire^a | 11% | 28% | 61% | 9% | 17% | 74% | 6
Frequency of contact between central trial staff and investigators | 11% | 31% | 58% | 6% | 26% | 69% | 7
Impact of site recruitment rates on data collection | 9% | 31% | 60% | 9% | 23% | 69% | 8
Postal or online questionnaires | 3% | 39% | 58% | 3% | 31% | 66% | 9
Frequency of questionnaires | 6% | 36% | 58% | 6% | 29% | 66% | 10
Data collection scheduled with routine care | 8% | 31% | 61% | 9% | 26% | 66% | 11
ONS flagging of patients | 15% | 27% | 58% | 16% | 19% | 66% | 12
Impact of local site researcher/clinical staff continuity | 6% | 34% | 60% | 3% | 32% | 65% | 13
Telephone reminders | 9% | 34% | 57% | 9% | 26% | 65% | 14
Contacting GPs for missing data or to trace patients | 17% | 31% | 51% | 9% | 26% | 65% | 14
Only collecting the primary outcome for patients with missing primary and secondary data | 6% | 32% | 62% | 6% | 30% | 64% | 16
Email reminders | 0% | 45% | 55% | 3% | 34% | 63% | 17
Patient data entry, e.g. use of mobile phone applications (apps), online data or other systems | 9% | 34% | 57% | 6% | 32% | 62% | 18
Staggered per patient payments based on patient progress and data collection | 9% | 29% | 63% | 9% | 29% | 62% | 19
A timeline reminder of participant visits for sites | 11% | 37% | 51% | 9% | 32% | 59% | 20
Flexibility in appointment times, e.g. data collection window | 6% | 42% | 53% | 6% | 37% | 57% | 21
Site selection strategies | 11% | 37% | 51% | 11% | 31% | 57% | 22
Questionnaires completed in the presence of researchers/clinical staff | 11% | 39% | 50% | 11% | 31% | 57% | 22
Case management, e.g. arranging appointments and helping patients access health care | 19% | 35% | 45% | 20% | 23% | 57% | 24
Re-imbursement of participant expenses | 11% | 37% | 51% | 6% | 38% | 56% | 25
Postal reminders | 11% | 40% | 49% | 9% | 35% | 56% | 26
Questionnaires returned to local sites vs central office, e.g. is monitoring of response rates and follow-up of missing questionnaires best performed by local sites or central trial offices | 9% | 37% | 54% | 9% | 35% | 56% | 26
Data collected by phoning the patient | 11% | 37% | 51% | 9% | 37% | 54% | 28
Inclusion of prepaid envelope | 22% | 36% | 42% | 23% | 26% | 51% | 29
SMS text reminders | 12% | 45% | 42% | 13% | 38% | 50% | 30
Teleconference meetings with investigators | 11% | 42% | 47% | 9% | 43% | 49% | 31
Clinician/researcher-collected outcomes versus PROMS (patient-reported outcome measures) | NA | NA | NA | 13% | 41% | 47% | 32
Location where questionnaires are completed, e.g. home or clinic | 8% | 42% | 50% | 9% | 46% | 46% | 33
Retention and withdrawal information within the Patient Information Sheets | 14% | 42% | 44% | 14% | 40% | 46% | 34
Follow-up through patient notes only | 14% | 43% | 43% | 9% | 47% | 44% | 35
Research nurse teleconferences or face-to-face meetings | NA | NA | NA | 3% | 54% | 43% | 36
Use of social media to contact participants | 13% | 47% | 41% | 10% | 48% | 42% | 37
Flexibility in appointment locations, e.g. home or clinic | 14% | 39% | 47% | 14% | 46% | 40% | 38
Site newsletters | 11% | 44% | 44% | 14% | 49% | 37% | 39
Availability of blinded outcome assessors to ensure data availability and quality | NA | NA | NA | 29% | 35% | 35% | 40
Routine site visits by CTU staff | 8% | 53% | 39% | 3% | 63% | 34% | 41
Timing of sending questionnaires, e.g. before or shortly after a visit | 11% | 51% | 37% | 9% | 57% | 34% | 42
Collecting multiple contact details for participants | 24% | 38% | 38% | 24% | 42% | 33% | 43
Face-to-face meetings with investigators | 6% | 58% | 36% | 6% | 63% | 31% | 44
Patient diaries to collect data | 11% | 50% | 39% | 14% | 54% | 31% | 45
Total Design Method (Dillman [31]), a specific approach to maximise questionnaire response rates that utilises cover letters, reminders and resending questionnaires | 16% | 44% | 41% | 16% | 53% | 31% | 46
Behavioural motivation strategies, e.g. workshop for patients to help facilitate completion of intervention and follow-up | 24% | 45% | 30% | 30% | 42% | 27% | 47
Format of newsletters and mode of delivery | NA | NA | NA | 12% | 62% | 26% | 48
Offer of trial results for participants | 11% | 58% | 31% | 11% | 63% | 26% | 49
Frequency of newsletters | 8% | 64% | 28% | 6% | 74% | 20% | 50
Question order, e.g. health-related, generic or medical questions first | 11% | 61% | 28% | 6% | 74% | 20% | 50
Patient newsletters | 16% | 59% | 25% | 13% | 68% | 19% | 52
Open trial design | 41% | 44% | 15% | 42% | 45% | 13% | 53
Timing of monetary/gift voucher for participants, e.g. given conditionally on completion of assessment or unconditionally at the beginning or end of trial | 29% | 47% | 24% | 27% | 61% | 12% | 54
Monetary incentives or gift voucher incentives for participants | 29% | 44% | 26% | 30% | 58% | 12% | 55
Transport to and from appointments | 14% | 63% | 23% | 15% | 76% | 9% | 56
Taking contact details for friends/family of participants | 31% | 50% | 19% | 32% | 61% | 6% | 57
Gift for participant | 42% | 45% | 13% | 42% | 52% | 6% | 58
Prize draw limited to trial participants | 41% | 44% | 16% | 34% | 59% | 6% | 59
Enhanced cover letter | 24% | 55% | 21% | 21% | 73% | 6% | 60
Trial certificate | 44% | 50% | 6% | 50% | 44% | 6% | 61
Trial website | 31% | 53% | 17% | 26% | 69% | 6% | 62
The use of a Freephone number for updating participant’s contact details | 35% | 58% | 6% | 40% | 57% | 3% | 63
Trial identity cards | 39% | 52% | 10% | 42% | 55% | 3% | 64
Use of social media to contact site staff | 33% | 55% | 12% | 38% | 59% | 3% | 65
Gift for site staff | 45% | 52% | 3% | 36% | 61% | 3% | 66
Christmas and/or birthday cards for participants | 48% | 45% | 6% | 64% | 33% | 3% | 67
Type of post used, e.g. priority, standard or recorded post | 34% | 54% | 11% | 29% | 68% | 3% | 68
Personal touch, e.g. handwritten letter or addition of post it notes | 26% | 71% | 3% | 27% | 73% | 0% | 69
Offer of a crèche service | 50% | 46% | 4% | 57% | 43% | 0% | 70
Christmas cards for site staff^b | 67% | 33% | 0% | 82% | 18% | 0% | 71

^a Consensus was achieved that the future research was of critical importance. ^b Consensus was achieved that future research was not important

CTU Clinical Trial Unit, GP general practitioner, NA not applicable, ONS Office for National Statistics, SMS short message service

Participants suggested four new strategies during round 1 which were included in round 2: research nurse teleconferences; the use of patient-reported outcome measures (PROMs) versus clinician-collected outcomes; format of newsletters and mode of delivery; and availability of blinded outcome assessors to ensure data availability and quality. Nineteen (54%) of those completing round 2 of the survey responded by the first prize draw deadline, a further nine (26%) by the second prize draw deadline and three (9%) completed the survey after the incentives ended but before the survey closed. A further four (11%) completed the survey after it had officially closed, following email or telephone communication, but were also included. In round 2 consensus was reached for seven interventions (Table 4). Consensus criteria for critically important research were met for: site initiation training on missing data; frequency of patient contact during the trial; use of routinely collected data; frequency and timing of reminders; triggered site training on missing data; and the length of time needed to complete questionnaires. Consensus was also reached that research into Christmas cards for site staff was not important, with 82% of respondents scoring it 1–3.

Discussion

Surveys of current practices and a Delphi survey have highlighted routinely used approaches within the UK, the lack of evidence informing practice, and future methodological research priorities for retention. A strong focus has been placed on improving recruitment in clinical trials, which is routinely monitored and accepted as a key performance indicator for sites, affecting the funding they receive [20]. However, attention needs to be extended to the retention of randomised participants, with sites monitored and rewarded accordingly. Whilst HTA chief investigators are aware of retention issues, and registered CTUs regularly revise sample sizes and proactively implement a broad range of strategies to maintain contact with patients, improve questionnaire and data return, minimise patient burden and incentivise patients, retention is currently not used as a key performance indicator within UK research networks and is seldom linked to per-patient payments for research costs.

Whilst the survey formats differed, the perspectives of chief investigators and registered CTUs were similar. Chief investigators' recommendation of good monitoring processes to identify and address problems with data collection was congruent with registered CTUs' routine use of strategies that might facilitate this, such as telephone reminders and routine site visits. Both registered CTUs and chief investigators also placed a strong emphasis on training and working with local research site staff to minimise missing data, with six of the top 10 strategies routinely used by registered CTUs focussed in this area. However, many strategies continue to lack evidence of effectiveness. Whilst 61% (36/59) of strategies had some evaluation reported by registered CTUs, the existing evidence was typically low level: 21 strategies had only one nested RCT, and only five had two evaluations that could be combined in a meta-analysis, assuming sufficient homogeneity.
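The closing point above, that a strategy with two evaluations could be combined in a meta-analysis given sufficient homogeneity, can be illustrated with a minimal fixed-effect inverse-variance pooling of risk differences. The two studies below are hypothetical, not drawn from the trials surveyed.

```python
import math

def pooled_risk_difference(studies):
    """Fixed-effect inverse-variance pooling of risk differences.

    `studies` is a list of (events_trt, n_trt, events_ctl, n_ctl)
    tuples, e.g. participants retained per arm in nested retention
    RCTs of the same strategy. Returns the pooled risk difference
    and a 95% confidence interval. Data here are hypothetical.
    """
    weights, estimates = [], []
    for e1, n1, e0, n0 in studies:
        p1, p0 = e1 / n1, e0 / n0
        rd = p1 - p0                                   # per-study risk difference
        var = p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0  # its sampling variance
        weights.append(1 / var)                        # inverse-variance weight
        estimates.append(rd)
    w_sum = sum(weights)
    pooled = sum(w * rd for w, rd in zip(weights, estimates)) / w_sum
    se = math.sqrt(1 / w_sum)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Two hypothetical nested RCTs of the same retention strategy
rd, ci = pooled_risk_difference([(180, 200, 165, 200),
                                 (90, 100, 84, 100)])
print(f"pooled RD = {rd:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

A fixed-effect model is only defensible under the homogeneity assumption the text flags; with heterogeneous trials a random-effects model would be the usual choice.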
It is important to replicate findings in multiple trials to ensure applicability and generalisability, and because the results of nested RCTs may be more convincing when they span multiple trials. Many of the research strategies prioritised within the Delphi survey relate to methods routinely used by registered CTUs. All of these methods have implications for resource use, yet despite their frequent use in current practice there was an absence of evidence to support them. Their prioritisation within the Delphi survey likely reflects the desire of registered CTUs to have their practices supported by evidence, so that their limited resources are directed to effective methods. More embedded trials (SWATs: Studies Within A Trial) [21] are needed to further the evidence base and avoid wasting resources on unproven and potentially ineffective retention strategies. The MRC-funded START project [22] focusses on SWATs targeting improved recruitment; a similar initiative could be beneficial for retention. The results of the Delphi survey provide a list of priorities for future SWATs. Site initiation training was routinely used within registered CTUs and a top practice recommended by chief investigators, yet it was one of the strategies with no reported evaluations. It is, therefore, unsurprising that it was identified as the top priority for future methodological research in the Delphi survey. Triggered site training also reached consensus criteria in round 2, reiterating the need to examine how multicentre trials address attrition when patient contact and outcome measurement are often delegated to sites. Research into site training is supported by the results of recent workshops reviewing recruitment and retention strategies [16, 23]. Anecdotal evidence suggests that coordinated approaches with local sites and strong clinical buy-in can help to retain patient populations perceived to be at risk of attrition [23].
However, analyses of the methods and content of site training have largely focussed on their impact on informed consent and recruitment [24-28], rather than on the retention of patients and the collection of outcome data, where there is a paucity of academic literature. Lienard conducted a nested RCT investigating the effect of training and subsequent monitoring visits on data return and quality, but found no difference, although the study was terminated early [29]. As the Delphi survey topic titles were broad, further work is needed to explore how site training might affect retention. Chief investigators commented on both the content and the methods for delivering training in their survey responses. They described using training to communicate data-collection priorities and processes, including the opportunity to develop bespoke processes for individual sites. Suggested training methods included interactive presentations, newsletters and refresher training. Existing trial processes (e.g. routine use of newsletters, regular contact with sites) may be able to support such site training as well as enhancing or maintaining the buy-in of research site staff. Berger supports this, describing how training was important in maintaining the continuity of research staff and equipping them to monitor and address withdrawal reasons, negotiate complaints and reiterate the importance of data collection with patients to improve retention [30]. The choice of outcomes for future SWATs is an important consideration. A Cochrane review identified 38 nested randomised studies of retention interventions, of which 34 aimed to improve questionnaire return [17]. In our survey of chief investigators, patient withdrawal was the most widely reported cause of missing data, and yet there is little published evidence for effective strategies.
The four studies measuring patient retention assessed behavioural motivation, case management, and a factorial design of a trial certificate and gift. No effect was found, and our research shows that these strategies are, at best, infrequently used by registered CTUs.

Future research

The ranked list of research priorities from the Delphi survey provides a 'roadmap' to address uncertainties and systematically evaluate key strategies. Six topics were deemed critically important and should be a primary focus of future retention research. We strongly recommend that future studies take account of the range of causes of missing data reported by chief investigators and in previous studies. Retention strategies applicable to a range of trial designs, such as site training, frequency of patient contact, and the frequency and timing of reminders, should assess impact on patient withdrawal, patients lost to follow-up and clinical staff failing to record primary-outcome measurements, as well as on questionnaire response rates. Many Delphi survey participants commented on the challenge of scoring the importance of retention strategies when these are often chosen to address problems within specific trial designs or populations. Given the number of strategies discussed, it was not feasible to explore this, and participants were encouraged to score strategies based on the trials most frequently delivered within their CTU. Consequently, the results reflect a ranked list of priorities that will have the broadest impact across the range of trials currently undertaken in the UK. Future research is needed to explore the effectiveness of different strategies for specific trial designs, settings and patient populations.
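Where future SWATs measure the retention outcomes listed above as proportions (e.g. participants retained versus withdrawn per randomised arm), the primary comparison can be sketched as a two-proportion z-test. The arm sizes and retention counts below are hypothetical.

```python
import math

def two_proportion_ztest(retained_a, n_a, retained_b, n_b):
    """Compare retention rates between two SWAT arms.

    Returns the z statistic and two-sided p-value for the null
    hypothesis that both arms retain participants at the same rate.
    Uses the pooled-proportion standard error; all counts here are
    hypothetical illustrations.
    """
    p_a, p_b = retained_a / n_a, retained_b / n_b
    p_pool = (retained_a + retained_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical SWAT: enhanced site training vs usual practice
z, p = two_proportion_ztest(retained_a=430, n_a=500,
                            retained_b=400, n_b=500)
print(f"z = {z:.2f}, p = {p:.3f}")
```

In practice a SWAT analysis would also report the absolute risk difference with its confidence interval, since the size of any retention gain, not just its statistical significance, determines whether a strategy is worth its cost.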

Strengths and limitations

To our knowledge, this is the first study to develop a research agenda for evaluating the effectiveness of retention strategies, and it therefore provides a valuable reference point for future methodological work. A systematic review and surveys of registered CTUs and HTA chief investigators were used to compile the initial list of retention strategies for the Delphi survey. The two surveys sought to capture both the practical experience of chief investigators within specific trials and the broader experience of registered CTUs, which work across multiple trials and trial designs. Registered CTUs were invited to complete the Delphi survey in order to create a priority list applicable to the broad range of trials represented within the UK. Whilst the study was researcher-led and did not include public or patient perspectives, all three surveys benefited from strong response rates, and the Delphi survey retained the majority of participants. Only one person failed to complete round 2, suggesting that the results are unlikely to be affected by attrition bias. The high response rate is attributed to engaging an active network of registered CTUs that were invested in the topic, having previously agreed that it was of critical importance [5]. Analysis of the number of retention strategies evaluated by registered CTUs should be treated with some caution, as missing responses could not be identified. However, this information did not influence the Delphi process or the ranking of retention strategies for future evaluation.

Conclusion

Whilst trials are proactively implementing a range of strategies to address retention challenges, further evidence is needed to inform practice and minimise waste on ineffective methods. The Delphi survey results provide methodological researchers and trialists with a research agenda and a means of effectively channelling resources towards the assessment of retention strategies that impact UK trials. Research should focus on site training, frequency of patient contact, the use of routinely collected data, the frequency and timing of reminders, and the length of time needed to complete questionnaires. Research outcomes should reflect the wide range of missing-data causes and, in particular, should include an assessment of impact on patient withdrawal, which was reported to be the most common cause of missing data.

Additional files

HTA chief investigator survey of current practices. (PDF 742 kb)
Registered CTU survey of current practices. (PDF 875 kb)
Table S1 Compilation of Delphi survey list for round 1.
Table S2 Retention strategies within the initial CTU survey but not included in the subsequent Delphi survey.
Table S3 Respondents to the survey of HTA-funded trials.
Table S4 Contextual data extracted from protocols and survey responses.
Table S5 Anticipated causes of missing data at trial design (Question 5).
Table S6 Differences between the observed and anticipated levels of missing data (Question 8).
Table S7 Effective practices for mitigating missing data recommended by trialists (Question 13).
Table S8 Frequency of CTUs usually adjusting their sample size for missing data (Question 2).
Table S9 What informs the level of attrition adjustment in the sample size calculation? (Question 2). (DOCX 40 kb)
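Supplementary Tables S8 and S9 concern how CTUs adjust sample sizes for anticipated missing data. The conventional adjustment inflates the recruitment target so that the analysable sample still meets the required size after the expected dropout; the dropout fraction below is hypothetical.

```python
import math

def inflate_for_attrition(n_required, expected_dropout):
    """Inflate a required analysable sample size for expected attrition.

    Standard adjustment n / (1 - d), rounded up, so that after losing
    a fraction `expected_dropout` of participants the analysable
    sample still reaches `n_required`. The dropout rate used in any
    given trial is a design assumption.
    """
    if not 0 <= expected_dropout < 1:
        raise ValueError("dropout fraction must be in [0, 1)")
    return math.ceil(n_required / (1 - expected_dropout))

# Hypothetical trial: 400 analysable patients needed, 15% dropout expected
print(inflate_for_attrition(400, 0.15))  # 471
```

Note that this adjustment preserves statistical power only if dropout is unrelated to outcome; it does not protect against the bias from informative missingness that the paper highlights.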
References (10 of 23 shown)

1. Gordon H Guyatt, Andrew D Oxman, Regina Kunz, David Atkins, Jan Brozek, Gunn Vist, Philip Alderson, Paul Glasziou, Yngve Falck-Ytter, Holger J Schünemann. GRADE guidelines: 2. Framing the question and deciding on important outcomes. J Clin Epidemiol. 2010.
2. Raisa B Gul, Parveen A Ali. Clinical trials: the challenge of recruitment and retention of participants. J Clin Nurs. 2010.
3. Lenora Marcellus. Are we missing anything? Pursuing research on attrition. Can J Nurs Res. 2004.
4. R J Little, M L Cohen, K Dickersin, S S Emerson, J T Farrar, J D Neaton, W Shih, J P Siegel, H Stern. The design and conduct of clinical trials to limit missing data. Stat Med. 2012.
5. Merran Toerien, Sara T Brookes, Chris Metcalfe, Isabel de Salis, Zelda Tomlin, Tim J Peters, Jonathan Sterne, Jenny L Donovan. A review of reporting of participant recruitment and retention in RCTs in six major journals. Trials. 2009.
6. Paul Glasziou, Douglas G Altman, Patrick Bossuyt, Isabelle Boutron, Mike Clarke, Steven Julious, Susan Michie, David Moher, Elizabeth Wager. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014.
7. Iain Chalmers, Michael B Bracken, Ben Djulbegovic, Silvio Garattini, Jonathan Grant, A Metin Gülmezoglu, David W Howells, John P A Ioannidis, Sandy Oliver. How to increase value and reduce waste when research priorities are set. Lancet. 2014.
8. Peter Bower, Valerie Brueton, Carrol Gamble, Shaun Treweek, Catrin Tudur Smith, Bridget Young, Paula Williamson. Interventions to improve recruitment and retention in clinical trials: a survey and workshop to assess current practice and future priorities. Trials. 2014.
9. Daisy Townsend, Nicola Mills, Jelena Savović, Jenny L Donovan. A systematic review of training programmes for recruiters to randomised controlled trials. Trials. 2015.
10. Nicola L Harman, Iain A Bruce, Peter Callery, Stephanie Tierney, Mohammad Owaise Sharif, Kevin O'Brien, Paula R Williamson. MOMENT--Management of Otitis Media with Effusion in Cleft Palate: protocol for a systematic review of the literature and identification of a core outcome set using a Delphi survey. Trials. 2013.
