
Quality of reporting of pilot and feasibility cluster randomised trials: a systematic review.

Claire L Chan1, Clémence Leyrat2, Sandra M Eldridge1.   

Abstract

OBJECTIVES: To systematically review the quality of reporting of pilot and feasibility cluster randomised trials (CRTs). In particular, to assess (1) the number of pilot CRTs conducted between 1 January 2011 and 31 December 2014, (2) whether objectives and methods are appropriate and (3) reporting quality.
METHODS: We searched PubMed (2011-2014) for CRTs with 'pilot' or 'feasibility' in the title or abstract that assessed some element of feasibility and showed evidence the study was in preparation for a main effectiveness/efficacy trial. Quality assessment criteria were based on the Consolidated Standards of Reporting Trials (CONSORT) extensions for pilot trials and CRTs.
RESULTS: Eighteen pilot CRTs were identified. Forty-four per cent did not have feasibility as their primary objective, and many (50%) performed formal hypothesis testing for effectiveness/efficacy despite being underpowered. Most (83%) included 'pilot' or 'feasibility' in the title, and discussed implications for progression from the pilot to the future definitive trial (89%), but fewer reported reasons for the randomised pilot trial (39%), sample size rationale (44%) or progression criteria (17%). Most defined the cluster (100%), and number of clusters randomised (94%), but few reported how the cluster design affected sample size (17%), whether consent was sought from clusters (11%), or who enrolled clusters (17%).
CONCLUSIONS: The fact that only 18 pilot CRTs were identified points to a need for greater awareness of the importance of conducting and publishing pilot CRTs, and for improved reporting. Pilot CRTs should primarily assess feasibility, avoid formal hypothesis testing for effectiveness/efficacy, and report reasons for the pilot, the sample size rationale and progression criteria, as well as the enrolment of clusters and how the cluster design affects design aspects. We recommend adherence to the CONSORT extensions for pilot trials and CRTs. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.


Keywords:  primary care


Year:  2017        PMID: 29122791      PMCID: PMC5695336          DOI: 10.1136/bmjopen-2017-016970

Source DB:  PubMed          Journal:  BMJ Open        ISSN: 2044-6055            Impact factor:   2.692


We used a robust search and data extraction procedure, including validation of the screening/sifting process and double data extraction. We may have missed some studies, since our criteria excluded studies not including ‘pilot’ or ‘feasibility’ in the title or abstract, and those not clearly in preparation for a main trial.

Background

In a cluster randomised trial (CRT), clusters, rather than individuals, are the units of randomisation. A cluster is a group (usually predefined) of one or more individuals. For example, clusters could be hospitals and the individuals, the patients within those hospitals. CRTs are often chosen for logistical reasons, for prevention of contamination across individuals or because the intervention is targeted at the cluster level. CRTs are useful for evaluating complex interventions. However, they have added complexity in terms of design, implementation and analysis, and so it is important to ensure that carrying out a CRT is feasible before conducting the future definitive trial.1 A feasibility study conducted in advance of a future definitive trial is a study designed to answer the question of whether the study can be done and whether one should proceed with it. A pilot study answers the same question, but in such a study part or all of the future trial is carried out on a smaller scale.2 Thus, all pilot studies are also feasibility studies. Pilot studies can be randomised or non-randomised; for brevity we use the term pilot CRT throughout this paper to refer to a randomised study with a clustered design that is in preparation for a future definitive trial assessing effectiveness/efficacy.3 4 The focus of pilot trials is on investigating areas of uncertainty about the future definitive trial to see whether it is feasible to carry out, so the data, methods and analysis differ from those of an effectiveness/efficacy trial. In particular, more data might be collected on items such as recruitment and retention to assess feasibility, methods may include specifying criteria to judge whether to proceed with the future definitive trial, and analysis is likely to be based on descriptive statistics since the study is not powered for formal hypothesis testing for effectiveness/efficacy.
Arnold et al highlight the importance of pilot studies being of high quality.5 Good reporting is essential to show how the pilot has informed the future definitive trial, as well as to allow readers to use the results when preparing similar future trials. The number of pilot and feasibility studies in the literature is increasing. However, Arain et al indicate that reporting of pilot studies is poor.6 There are no previous reviews of the reporting quality of pilot CRTs, despite the extra complications arising from the clustered structure. The aim of this review is to assess the quality of reporting of pilot CRTs published between 1 January 2011 and 31 December 2014. We extracted information to describe the sample of pilot CRTs and to assess quality, with quality criteria based on the Consolidated Standards of Reporting Trials (CONSORT) extension for CRTs,7 and a CONSORT extension for pilot trials for which SE and CC were involved in the final stages of development during this review.3 4 We present recommendations for improving the conduct, analysis and reporting of these studies and expect this to improve the quality, usefulness and interpretation of pilot CRTs in the future. We know current reporting of CRTs is suboptimal,8–11 and thus we expected the reporting of pilot CRTs to be even poorer. The questions addressed by this review are: How many pilot CRTs were conducted between 1 January 2011 and 31 December 2014? Are pilot CRTs using appropriate objectives and methods? To what extent is the quality of reporting of pilot CRTs sufficient?

Methods

Inclusion and exclusion criteria

We included papers published in English with a publication date (print or electronic) between 1 January 2011 and 31 December 2014. We chose the start date to be after the updated CONSORT 2010 was published.12 We estimated a search covering 4 years would give us a reasonable number of papers to perform our quality assessment, and that later papers would be similar in terms of quality of reporting since the CONSORT for pilot trials was not published until the end of 2016. The study had to be a CRT, have the word ‘pilot’ or ‘feasibility’ in the title or abstract, assess some element of feasibility and show evidence that the study was in preparation for a specific trial assessing effectiveness/efficacy that is planned to go ahead if the pilot trial suggests it is feasible (ie, not just a general assessment of feasibility issues to help researchers in general, although pilot trials may do this as an addition). Regardless of how authors described a study, we did not consider it to be a pilot trial if it was only looking at effectiveness/efficacy, because we wanted to exclude those studies that claim to be a pilot/feasibility trial simply as justification for a small sample size.13 The paper had to report results (ie, not be a protocol or statistical analysis plan) and had to be the first published paper reporting pilot outcomes (ie, not an extension/follow-up study for a pilot study already reported, and not a second paper reporting further pilot outcomes). Interim analyses, analyses before the study was complete and internal pilots were excluded; the CONSORT extension for pilot trials on which we based the quality assessment does not apply to internal pilots.3 4 No studies were excluded on the basis of quality since the aim was to assess the quality of reporting.

Data sources and search methods

We searched PubMed for relevant papers in September 2015. We searched for the words ‘pilot’ or ‘feasibility’ in the title or abstract, a search strategy similar to that used by Lancaster et al.14 We combined this with a search strategy to identify CRTs; this was similar to the strategy used by Diaz-Ordaz et al.8 The full electronic search strategy is given in online supplementary appendix 1.

Sifting and validation

The titles and abstracts of all papers identified by the electronic search were screened by CC for possible inclusion. Full texts were obtained for papers identified as definitely or possibly satisfying the inclusion criteria and were sifted by CC for inclusion. As validation, CL carried out the same screening and sifting process independently on a 10% random sample of electronically identified papers. Full texts whose inclusion remained uncertain were referred to SE for a final decision.

Refining the inclusion process

We refined the screening and sifting process following piloting. In particular, we rejected a more restrictive PubMed search that required ‘pilot’ or ‘feasibility’ in the title rather than allowing these words to occur in the title or abstract, because this missed relevant papers; we altered the order of the exclusion criteria to make the process more streamlined; and we relaxed one inclusion criterion, requiring evidence that the pilot trial was in preparation for a future definitive trial rather than an explicit statement that authors were planning a future definitive trial. The protocol was updated, and is available from the corresponding author.

Data extraction

CC and CL independently extracted data from all papers selected for inclusion in the review, and followed rules on what to extract (see ’Further information' column of online supplementary appendix 2). Extracted data were recorded in an Excel spreadsheet. Discrepancies were resolved by discussion between CC and CL, and where agreement could not be reached a final decision was made by SE. For each pilot CRT included in the review, we extracted information to describe the trials, including publication date (print date unless there was an earlier electronic date), country in which the trial was set, number of clusters randomised, method of cluster randomisation and, following the CONSORT extension for pilot trials’ recommendation to focus on objectives rather than outcomes, the primary objective. We defined the primary objective using a method similar to that used by Diaz-Ordaz et al8 for primary outcomes: that is, the objective specified by the author, else the objective used in the sample size justification, or else the first objective mentioned in the abstract or, failing that, in the main text. To assess whether the pilot trials were using appropriate objectives and methods, we collected information on whether the primary objective was about feasibility, the method used to address the main feasibility objective, the rationale for numbers in the pilot trial and whether there was formal hypothesis testing for, or statements about, effectiveness/efficacy without a caveat about the small sample size. To assess reporting quality, we created a list of quality assessment items based on the CONSORT extension for pilot trials.3 4 We also looked at the CONSORT extension for CRTs,7 and incorporated any cluster-specific items into our quality assessment items. Where a CRT item became less relevant in the context of a pilot trial, we did not extract it (eg, whether variation in cluster sizes was formally considered in the sample size calculation).
In addition, where there was a substantial difference between the item in the CONSORT extension for CRTs and that in the pilot trial extension and the items were not compatible, we used the latter (eg, focusing on objectives rather than outcomes). We recognised the need to balance comprehensiveness and feasibility.11 Therefore, where items referred to objectives or methods, we extracted this for the primary objective only. We also did not extract whether papers reported a structured summary of trial design, methods, results and conclusions. The final version of the full list of data extracted, and further information on each item extracted, is included in online supplementary appendix 2.

Refining data extraction

Initially, CC extracted data on a random 10% sample of papers. However, some items were difficult to extract in a clear, standardised way, as similarly noted by Ivers et al,11 so these items were removed: in particular, whether the objectives, intervention or allocation concealment were at the individual level, cluster level or both; and other analyses performed or other unintended consequences (it was difficult to decipher from papers whether something classified as an ‘other’). Furthermore, some items were deemed easier to extract if split into two, for example, ‘reported why the pilot trial ended/stopped’, which we subsequently split into ‘reported the pilot trial ended/stopped’ and ‘if so, what was the reason’.

Analysis

Data were analysed using Excel V.2013. We describe the characteristics of the pilot CRTs using descriptive statistics. Where we extracted text, we established categories during analysis by grouping similar data, for example, grouping the different primary objectives. To assess adherence to the CONSORT checklists, we present the number and percentage reporting each item. This report adheres, where appropriate, to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.15

Patient involvement

No patients were involved in the development of the research question, design or conduct of the study, interpretation or reporting. No patients were recruited for this study. There are no plans to disseminate results of the research to study participants.

Results

The electronic PubMed search identified 257 published papers. We rejected 108 during screening (29 not reporting results; 32 not about a single randomised trial; 46 not cluster randomised; 1 interim analysis). The remaining 149 full-text articles were assessed for eligibility, and 131 more papers were rejected (1 not reporting results; 13 not about a single randomised trial; 25 not cluster randomised; 8 analyses before study complete/internal pilot; 32 not assessing feasibility; 50 not in preparation for a future definitive effectiveness/efficacy trial; 2 not the first published paper reporting pilot outcomes). This left 18 studies to be included in the analysis [A1–A18]. The full list of studies is included in table 1, with citations in online supplementary file 2. Figure 1 shows the flow diagram of the identification process for the sample of 18 pilot CRTs.
Table 1

Pilot cluster randomised trials included in this review

Author | Year* | Journal | Title | Cluster
Begh [A1] | 2011 | Trials | Promoting smoking cessation in Pakistani and Bangladeshi men in the UK: pilot cluster randomised controlled trial of trained community outreach workers. | Census lower layer super output areas
Jones [A2] | 2011 | Paediatric Exercise Science | Promoting fundamental movement skill development and physical activity in early childhood settings: a cluster randomised controlled trial. | Childcare centres
Légaré [A3] | 2010 | Health Expectations | Training family physicians in shared decision making for the use of antibiotics for acute respiratory infections: a pilot clustered randomised controlled trial. | Family medicine groups
Hopkins [A4] | 2012 | Health Education Research | Implementing organisational physical activity and healthy eating strategies on paid time: process evaluation of the UCLA WORKING pilot study. | Worksites—health and human service organisations
Jago [A5] | 2012 | International Journal of Behavioral Nutrition and Physical Activity | Bristol girls dance project feasibility trial: outcome and process evaluation results. | Secondary schools
Taylor [A6] | 2011 | Clinical Rehabilitation | A pilot cluster randomised controlled trial of structured goal-setting following stroke. | Rehabilitation services
Drahota [A7] | 2013 | Age and Ageing | Pilot cluster randomised controlled trial of flooring to reduce injuries from falls in wards for older people. | Study areas—bays within hospitals
Frenn [A8] | 2013 | Journal for Specialists in Pediatric Nursing | Authoritative feeding behaviours to reduce child BMI through online interventions. | Classrooms
Gifford [A9] | 2012 | Worldviews on Evidence-Based Nursing | Developing leadership capacity for guideline use: a pilot cluster randomised control trial. | Service delivery centres with nursing care for diabetic foot ulcers
Jones [A10] | 2013 | Journal of Medical Internet Research | Recruitment to online therapies for depression: pilot cluster randomised controlled trial. | Postcode areas
Moore [A11] | 2013 | Substance Abuse Treatment, Prevention, and Policy | An exploratory cluster randomised trial of a university halls of residence-based social norms marketing campaign to reduce alcohol consumption among first year students. | Residence halls
Pai [A12] | 2013 | Implementation Science | Strategies to enhance venous thromboprophylaxis in hospitalised medical patients (SENTRY): a pilot cluster randomised trial. | Hospitals
Reeves [A13] | 2013 | BMC Health Services Research | Facilitated patient experience feedback can improve nursing care: a pilot study for a phase III cluster randomised controlled trial. | Wards
Teut [A14] | 2013 | Clinical Interventions in Aging | Effects and feasibility of an Integrative Medicine programme for geriatric patients: a cluster randomised pilot study. | Shared apartments
Jago [A15] | 2014 | International Journal of Behavioral Nutrition and Physical Activity | Randomised feasibility trial of a teaching assistant-led extracurricular physical activity intervention for those aged 9–11 years: action 3:30. | Primary schools
Michie [A16] | 2014 | Contraception | Pharmacy-based interventions for initiating effective contraception following the use of emergency contraception: a pilot study. | Pharmacies
Mytton [A17] | 2014 | Health Technology Assessment | The feasibility of using a parenting programme for the prevention of unintentional home injuries in the under-fives: a cluster randomised controlled trial. | Children’s centres
Thomas [A18] | 2014 | Trials | Identifying continence options after stroke (ICONS): a cluster randomised controlled feasibility trial. | Stroke services

*We extracted the earlier of the print and electronic publication year.

Figure 1

Flow diagram of the identification process for the sample of 18 pilot cluster randomised trials included in this review.

There was 96% agreement between CC and CL for the 10% random sample used for the screening and sifting validation (based on 26 papers), with a kappa coefficient of 0.84.
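The agreement figures above (96% raw agreement, kappa coefficient 0.84) follow the standard Cohen's kappa calculation, which corrects raw agreement for the agreement two raters would reach by chance. A minimal sketch is below; the include/exclude decision vectors are made up for illustration and are not the review's actual screening data.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters classifying the same items:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    # Observed agreement: fraction of items with identical decisions.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    categories = set(rater_a) | set(rater_b)
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical include (1) / exclude (0) decisions for 10 screened papers.
cc = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]
cl = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]
print(cohens_kappa(cc, cl))  # 9/10 raw agreement, kappa = 0.8 here
```

Note that kappa is always at or below raw agreement: in this toy example raw agreement is 90% but kappa is 0.8, mirroring the 96% vs 0.84 gap reported above.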

Trial characteristics

The number of pilot CRTs identified generally increased with publication year, peaking in 2013 (table 2). Of the 18 included studies, the majority (56%) were set in the UK. All other countries were represented only once except for Canada (three trials) and the USA (two trials). Of those reporting the method of randomisation, the majority (69%) used stratified randomisation with blocking. The median number of clusters randomised was 8 (IQR 4–16), with a range from 2 to 50.
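The descriptive statistics here and in table 2 (medians with IQRs, and average cluster size defined in footnote § as individuals randomised divided by clusters randomised) can be sketched as below. The trial counts are invented for illustration, not the data from the 18 reviewed trials, and quartile conventions differ slightly between statistical packages.

```python
import statistics

# Hypothetical numbers of clusters randomised across a set of pilot CRTs.
clusters_randomised = [2, 4, 4, 6, 8, 8, 10, 12, 16, 16, 20, 50]

median_clusters = statistics.median(clusters_randomised)      # 9.0
q1, q2, q3 = statistics.quantiles(clusters_randomised, n=4)   # quartiles; exact method varies

# Average cluster size for a single trial, per table 2 footnote §:
individuals_randomised = 96
clusters_in_trial = 8
average_cluster_size = individuals_randomised / clusters_in_trial  # 12.0
```

The IQR reported in the paper is the interval (q1, q3); the range is simply the minimum and maximum of the list.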
Table 2

Characteristics of pilot cluster randomised trials included in this review

Characteristic | Number of trials (%)

Publication year (earlier of the print and electronic publication date)
 2010* | 1 (6)
 2011 | 3 (17)
 2012 | 3 (17)
 2013 | 7 (39)
 2014 | 4 (22)

Country
 UK | 10 (56)
 Canada | 3 (17)
 USA | 2 (11)
 Germany | 1 (6)
 New Zealand | 1 (6)
 Australia | 1 (6)

Method of cluster randomisation†
 Simple | 1 (8)
 Stratified with blocks | 9 (69)
 Blocked only | 2 (15)
 Biased coin method | 1 (8)

Number of clusters randomised‡
 Median (IQR) | 8 (4 to 16)
 Range | 2 to 50

Average cluster size§
 Median (IQR) | 32 (14 to 82)
 Range | 7 to 588

*One paper has an extracted publication year outside the 2011–2014 range: the print publication date for this paper was 2011 but the online publication date was 2010, so the paper satisfies the inclusion criterion that the publication date, print or electronic, must be between 2011 and 2014, while we extract the earlier of the print and electronic dates.

†13 of the 18 trials reported their method of randomisation. Percentages are given as a percentage of these 13 trials.

‡Not reported for one trial.

§Defined as number of individuals randomised divided by number of clusters randomised, based on 12 trials that reported information on both.


Pilot trial objectives and methods

Ten (56%) of the 18 included pilot trials had feasibility as their primary objective, for example, assessing the feasibility of implementing the intervention (6 trials), of recruitment and retention (3 trials) and of the cluster design (1 trial) (table 3). All 10 trials reported a corresponding measure to assess the feasibility objective; most (90%) used descriptive statistics and/or qualitative methods to address the objective. In one trial, a statistical test was used to address the primary feasibility objective although the study had not been designed to be adequately powered for this.
Table 3

Pilot trial objectives and methods

Characteristic | Number of trials (%)

Primary objective is feasibility* | 10 (56)

Main feasibility objective given
 Where feasibility is primary objective
  Implementing intervention | 6/10 (60)
  Recruitment and retention | 3/10 (30)
  Feasibility of cluster design | 1/10 (10)
 Where feasibility is not primary objective†
  Implementing intervention | 3/8 (38)
  Recruitment | 2/8 (25)
  Cluster design | 1/8 (13)
  Feasibility of trial being able to answer the effectiveness question (and what study design would enable this) | 1/8 (13)
  Feasibility of larger study | 1/8 (13)

Method used to address main feasibility objective given
 Where feasibility is primary objective
  Descriptive statistics and/or qualitative | 9/10 (90)
  Statistical test | 1/10 (10)
 Where feasibility is not primary objective
  Descriptive statistics/qualitative | 3/8 (38)
  None given/reported elsewhere | 5/8 (63)

Rationale for numbers in pilot trial based on formal power calculation for effectiveness/efficacy‡ | 0/8 (0)
Performing any formal hypothesis testing for effectiveness/efficacy | 9/18 (50)
Making any statements about effectiveness/efficacy without a caveat | 4/18 (22)

*Where the primary objective was not feasibility, the primary objective was effectiveness/potential effectiveness and was addressed using statistical tests.

†One of the inclusion criteria was that studies were assessing feasibility, but it did not have to be the primary objective.

‡Based on eight trials that reported a rationale for the sample size of the pilot trial.

The remaining eight trials had an effectiveness/efficacy primary objective, and used statistical tests to address this. Nevertheless, these eight trials all had feasibility as one of their other objectives (this was an inclusion criterion). The feasibility objectives were similar to those where feasibility was primary, but were expressed more generally in two trials, for example, looking at the feasibility of the future definitive trial,[A16] and looking at whether the future definitive trial could answer the effectiveness question and which study design would enable this.[A10] In only three trials was a measure to assess the feasibility objective reported, using either quantitative or qualitative measures. Eight trials reported a rationale for the numbers in the pilot trial, all of them following best practice in not basing the rationale on a formal sample size calculation for effectiveness/efficacy. Nine (50%) trials performed formal hypothesis testing for effectiveness/efficacy, whether for the primary or a secondary objective. In four of these nine trials, conclusions about effectiveness/efficacy were made without any caveats about the imprecision of estimates or the possible lack of representativeness due to the small samples.

Quality of reporting—by items

The pilot CRTs in our review were published after CONSORT 2010 for RCTs but before the CONSORT extension for pilot trials. Therefore, to present data on quality of reporting, we took our list of quality assessment items based on the CONSORT extension for pilot trials and grouped reporting items into three categories (table 4): (1) items in the CONSORT extension for pilot trials that are new compared with CONSORT 2010 for RCTs, (2) items in the CONSORT extension for pilot trials that are substantially adapted from CONSORT 2010 for RCTs and (3) items in the CONSORT extension for pilot trials that are the same as or have only minor differences from CONSORT 2010 for RCTs, plus items in the CONSORT extension for CRTs.3 4 7 12
Table 4

Number (%) of reports adhering to pilot cluster randomised trial quality criteria

Item | Criterion | n (%)

Title and abstract
 1a | Term ‘pilot’ or ‘feasibility’ included in the title | 15 (83)
 1a | Identification as a pilot or feasibility randomised trial in the title | 12 (67)
 1a | Term ‘cluster’ included in the title | 12 (67)
 1a | Identification as a cluster randomised trial in the title | 12 (67)

Introduction
 2a [S] | Scientific background and explanation of rationale for future definitive trial reported | 18 (100)
 2a [S] | Reasons for randomised pilot trial reported | 7 (39)
 2a | Rationale given for using cluster design | 6 (33)

Methods—trial design
 3a | Description of pilot trial design | 18 (100)
 3a | Definition of cluster | 18 (100)
 3b | Reported any changes to methods after pilot trial commencement | 5 (28)
 3b | If yes, reported reasons | 5/5 (100)

Methods—participants
 4a | Reported eligibility criteria for participants | 13 (72)
 4a | Reported eligibility criteria for clusters | 9 (50)
 4b | Reported settings and locations where the data were collected | 18 (100)
 4c [N] | Reported how participants were identified | 9 (50)
 4c [N] | Reported how clusters were identified | 6 (33)
 4c [N] | Reported how participants were consented* | 13/17 (76)
 4c [N] | Reported how clusters were consented | 2 (11)

Methods—interventions
 5 | Described the interventions for each group | 13 (72)

Methods—outcomes
 6b | Reported any changes to pilot trial assessments or measurements after pilot trial commencement | 1 (6)
 6b | If yes, reported reasons | 1/1 (100)
 6c [N] | Reported criteria used to judge whether, or how, to proceed with the future definitive trial | 3 (17)

Methods—sample size
 7a [S] | Reported a rationale for the sample size of the pilot trial | 8 (44)
 7a | Cluster design considered during the description of the rationale for numbers in the pilot trial | 3 (17)
 7b | Reported stopping guidelines | 0 (0)

Methods—randomisation
 8a | Reported method used to generate the random allocation sequence | 9 (50)
 8b | Reported randomisation method | 13 (72)
 9 | Reported mechanism used to implement the random allocation sequence | 4 (22)
 9 | Reported allocation concealment | 7 (39)
 10/10a | Reported who generated the random allocation sequence | 8 (44)
 10/10a | Reported who enrolled clusters | 3 (17)
 10/10a | Reported who assigned clusters to interventions | 4 (22)
 10c | Reported from whom consent was sought | 2 (11)
 10c | Reported whether consent was sought from participants | 17 (94)
 10c | Reported whether consent was sought from clusters | 2 (11)
 10c | Reported whether participant consent was sought before or after randomisation* | 8/17 (47)

Methods—blinding
 11a | Reported on whether there was blinding | 10 (56)
 11a | Reported who was blinded† | 6/14 (43)
 11a | Reported how they were blinded† | 1/14 (7)

Methods—analytical methods
 12a | Reports clustering accounted for in any of the methods used to address pilot trial objectives/research questions‡ | 13/17 (76)

Results—participant flow
 13§ | Reports a diagram with flow of individuals through the trial | 12 (67)
 13§ | Reports a diagram with flow of clusters through the trial | 10 (56)
 13a/13a [S] | Reported number of individuals approached and/or assessed for eligibility¶ | 8/17 (47)
 13a/13a [S] | Reported number of clusters approached and/or assessed for eligibility | 10/18 (56)
 13a/13a [S] | Reported number of individuals randomly assigned¶ | 13/17 (76)
 13a/13a [S] | Reported number of clusters randomly assigned | 17/18 (94)
 13a/13a [S] | Reported number of individuals that received intended treatment¶ | 8/17 (47)
 13a/13a [S] | Reported number of clusters that received intended treatment¶ | 5/17 (29)
 13a/13a [S] | Reported number of individuals assessed for primary objective¶ | 16/17 (94)
 13a/13a [S] | Reported number of clusters assessed for primary objective¶ | 14/17 (82)
 13b/13b | Reported losses of individuals after randomisation** | 11/16 (69)
 13b/13b | Reported losses of clusters after randomisation¶ | 6/17 (35)
 13b/13b | Reported exclusions of individuals after randomisation¶ | 1/17 (6)
 13b/13b | Reported exclusions of clusters after randomisation¶ | 3/17 (18)
 14a | Reported on dates defining the periods of recruitment | 8 (44)
 14a | Reported on dates defining the periods of follow-up | 11 (61)
 14b | Reported the pilot trial ended/stopped | 0 (0)

Results—baseline data
 15 | Reported a table showing baseline characteristics at the individual level | 12 (67)
 15 | If yes, by group | 11/12 (92)
 15 | Reported a table showing baseline characteristics at the cluster level | 2 (11)
 15 | If yes, by group | 2/2 (100)

Results—outcomes and estimation
 17a | Reported results for main feasibility objective (quantitative or qualitative)†† | 13/17 (76)

Results—harms
 19 | Reported on harms or unintended effects | 4 (22)
 19a [N] | Reported other unintended consequences | 0 (0)

Discussion
 20 [S] | Reported limitations of pilot trial | 17 (94)
 20 [S] | Reported sources of potential bias | 10 (56)
 20 [S] | Reported remaining uncertainty | 10 (56)
 21 [S] | Reported generalisability of pilot trial methods/findings to future definitive trial or other studies | 16 (89)
 22 | Interpretation of feasibility consistent with main feasibility objectives and findings†† | 12/17 (71)
 22A [N] | Reported implications for progression from the pilot to the future definitive trial | 16 (89)

Other information
 23 | Reported registration number for pilot trial | 11 (61)
 23 | Reported name of registry for pilot trial | 11 (61)
 24 [S] | Reported where the pilot trial protocol can be accessed | 7 (39)
 25 | Reported source of funding | 18 (100)
 26 [N] | Reported ethical approval/research review committee approval | 17 (94)
 26 [N] | If yes, reported reference number | 8/17 (47)

Item numbers in normal font refer to the item in the CONSORT extension for pilot trials that the quality assessment item is based on.

Item numbers in bold italics refer to the item in the CONSORT extension for CRTs that the quality assessment item is based on.

[N] represents new items in the CONSORT extension for pilot trials compared with the CONSORT 2010 for RCTs.

[S] represents items in the CONSORT extension for pilot trials that are substantially adapted from the CONSORT 2010 for RCTs.

*Item not relevant for one trial [A12] because they said that the Ethics Board determined it could be conducted without informed consent from patients or surrogates.

†Item not relevant for four trials [A7, A10, A12, A18] because they reported that blinding was not used.

‡Item not relevant for one trial because no CIs/p values were given, [A17] so clustering did not need to be accounted for in any of their methods because effect estimates are not biased by cluster randomisation, only CIs/p values.

§The CONSORT statements do not include an item 13 but there is a participant flow subheading which strongly recommends a diagram. We therefore reference this subheading as ‘item 13’ here.

¶Not relevant for one trial due to the design of the study.[A10] (This paper was different from the others such that it was not relevant to extract these items. The clusters were postcode areas and they were assessing two online recruitment interventions and comparing the success of the recruitment interventions. As such, participants were those who completed the online questions, and each arm of the study had a ‘total population ranging from 1.6 to 2 million people clustered in four postcode areas’.)

**Not relevant for two trials due to the design of these studies.[A10, A12] (See reason above for A10. For A12, data were collected from medical patient charts so these items were not relevant to extract.)

††One paper reports the feasibility results in a separate paper so is not included.[A3]

Number (%) of reports adhering to pilot cluster randomised trial quality criteria
In the tables, denominators for proportions are based on papers for which the item is relevant. Not all items are relevant for all trials, due to their design, so we highlight where this applies in the table footnotes. The footnote of table 4 also explains where the quality assessment items come from, with different font differentiating items based on the CONSORT extension for pilot trials and the CONSORT extension for CRTs, and a key to highlight which of the three categories above the item falls under.

New items

Five new items were added to the CONSORT extension for pilot trials on the identification and consent process, progression criteria, other unintended consequences, implications for progression and ethical approval.3 4 See items with [N] in column 2 of table 4. In our review, how participants were identified and consented was reported by 50% and 76% of the pilot CRTs, respectively, but how clusters were identified and consented was reported by just 33% and 11%, respectively. Only three trials (17%) reported criteria used to judge whether or how to proceed with the future definitive trial, with two giving numbers that must be exceeded, such as recruitment, retention, attendance and data collection percentages,[A17, A2] and one giving categories of ‘definitely feasible’, ‘possibly feasible’ and ‘not feasible’.[A12] The item on other unintended consequences was reported by none of the pilot CRTs, although it is unclear whether this is due to poor reporting or because no unintended consequences occurred. Implications for progression from the pilot to the future definitive trial were reported by 16 trials (89%): 9 recommended proceeding or proceeding with changes, 5 concluded that further research or piloting was needed first and 2 recommended not going ahead with the future definitive trial. Ninety-four per cent reported ethical approval/research review committee approval, but only 47% of them also reported the corresponding reference number.
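The "numbers that must be exceeded" style of progression criteria can be sketched as a simple check of observed percentages against prestated thresholds. This is a hypothetical illustration, not the authors' code; the criterion names, thresholds and observed values are invented.

```python
# Hypothetical sketch of threshold-based progression criteria. All names and
# numbers below are invented for illustration.
def assess_progression(criteria):
    """criteria: dict mapping criterion name -> (threshold_pct, observed_pct).
    Returns a decision plus the list of criteria that fell short."""
    failed = [name for name, (threshold, observed) in criteria.items()
              if observed < threshold]
    return ("proceed", []) if not failed else ("proceed with changes", failed)

criteria = {
    "recruitment_pct": (70, 82),        # 82% recruited vs a 70% threshold
    "retention_pct": (80, 78),          # falls short of the 80% threshold
    "data_completeness_pct": (90, 95),
}
decision, failed = assess_progression(criteria)
print(decision, failed)  # → proceed with changes ['retention_pct']
```

Prestating such thresholds in the protocol, as only three reviewed trials did, makes the progression decision auditable rather than post hoc.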

Substantially adapted items

Six items in the CONSORT extension for pilot trials were substantially adapted from CONSORT 2010 for RCTs, regarding reasons for the randomised pilot trial, sample size rationale for the pilot trial, numbers approached and/or assessed for eligibility, remaining uncertainty about feasibility, generalisability of pilot trial methods and findings and where the pilot trial protocol can be accessed.3 4 See items with [S] in column 2 of table 4. Reasons for the randomised pilot trial were reported by 39% of the pilot CRTs. Eight trials (44%) gave a rationale for the sample size of the pilot trial. Pilot trials should always report a rationale for their sample size; this can be qualitative or quantitative, but should not be based on a formal sample size calculation for effectiveness/efficacy. In this review, the rationales were based on logistics,[A15] resources,[A14] time,[A16] a balance of practicalities and need for reasonable precision,[A18] a general statement that it was considered sufficient to address the objectives of the pilot trial,[A17] formal [A6] and non-formal [A7] calculation to enable estimation of parameters in the future definitive trial, and a formal calculation based on the primary feasibility outcome.[A12] Of these rationales, good examples include ‘The decision to include eight apartment-sharing communities was based on practical feasibility that seemed appropriate according to funding and the personal resources available’,[A14] as well as ‘The sample size was chosen in order to have two clusters per randomised treatment and the number of participants per cluster was based on the number of degrees of freedom (df) needed within each cluster to have reasonable precision to estimate a variance’.[A6] The number of individuals approached and/or assessed for eligibility was reported by 47%, and the number of clusters by 56%. Remaining uncertainty was reported by 56% of the pilot CRTs. 
Generalisability of pilot trial methods/findings to the future definitive trial or other studies was reported by 89%, but clarity was lacking: ambiguous phrases such as ‘in a future trial’ made it difficult to distinguish between references to the future definitive trial and references to other future studies. Only 39% reported where the pilot trial protocol could be accessed.
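The sample size rationale quoted from [A6], based on the degrees of freedom (df) needed to estimate a variance with reasonable precision, can be made concrete. This is our own minimal sketch, assuming approximately normal outcome data, where the relative standard error of a sample variance on df degrees of freedom is sqrt(2/df).

```python
import math

# Minimal sketch (ours, not from the paper): for a sample variance s^2 based on
# df degrees of freedom with normal data, Var(s^2) = 2*sigma^4/df, so the
# relative standard error of s^2 is sqrt(2/df).
def relative_se_of_variance(df):
    """Relative standard error of a sample variance with df degrees of freedom."""
    return math.sqrt(2.0 / df)

for df in (5, 10, 20, 40):
    print(df, round(relative_se_of_variance(df), 2))
# Going from 5 to 20 df halves the relative SE, which is one quantitative way
# to judge how many observations per cluster give 'reasonable precision'.
```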

Items essentially taken from CONSORT 2010 for RCTs or the CONSORT extension for CRTs

For the remaining items, reporting quality was variable. Some were reported by fewer than 20% of the pilot CRTs, for example, considering the cluster design in the sample size rationale for the pilot trial (17%) (item 7a), whether consent was sought from clusters (11%) and who enrolled them (17%) (items 10c and 10a), how people were blinded (7% of applicable trials) (item 11a), the number of excluded individuals (6% of applicable trials) and clusters (18% of applicable trials) after randomisation (item 13b) and a table showing baseline cluster characteristics (11%) (item 15). The best reported items, by >80% of the pilot CRTs, included ‘pilot’ or ‘feasibility’ in the title (83%) (item 1a), scientific background and explanation of rationale for the future definitive trial (100%) (item 2a), pilot trial design (100%) (item 3a), nature of the cluster (100%) (item 3a), settings and locations where the data were collected (100%) (item 4b), whether consent was sought from participants (94%) (item 10c), number of clusters randomised (94%) and assessed for the primary objective (82% of applicable trials) (item 13a), number of individuals assessed for the primary objective (94% of applicable trials) (item 13a), limitations of the pilot trial (94%) (item 20) and source of funding (100%) (item 25).

Quality of reporting—by study

Finally, in table 5 we present the number (percentage) of quality assessment items reported by each study. We provide an overall score, as well as a score by categories of CONSORT. The quality of reporting varies across studies, with five of the pilot CRTs reporting over 65% of the quality assessment items and two of the pilot CRTs reporting under 30%. There does not appear to be a trend of reporting quality with time. Five of the studies report 90% or more of the quality assessment items in the ‘discussion and other information’ category, and only two studies report <50%. Two of the studies report 100% of the items in the ‘title and abstract and introduction’ category, and five studies report <50%. The highest percentage of items reported by a study in the ‘methods’ category is 66% and the lowest is 14%. Similarly, the highest percentage of items reported by a study in the ‘results’ category is 78% and the lowest is 18%. Within studies, the category that is best reported tends to be the ‘discussion and other information’ category (had the highest percentage for 10 of the 18 pilot CRTs).
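The per-study scoring described above can be sketched as follows. This is our own illustration of the principle, not the authors' scoring code: the denominator counts only the items relevant to a given trial, so it differs between studies, as noted in the footnote of table 5. The item identifiers are illustrative.

```python
# Sketch (ours) of per-study adherence scoring with study-specific denominators.
def adherence(items):
    """items: dict mapping item id -> True (reported), False (not reported)
    or None (not relevant to this trial, excluded from the denominator)."""
    relevant = {k: v for k, v in items.items() if v is not None}
    return sum(relevant.values()), len(relevant)

# Item "11a" (blinding) is not relevant here, so the denominator is 3, not 4.
reported, denom = adherence({"1a": True, "7a": False, "11a": None, "20": True})
print(f"{reported}/{denom} ({100 * reported / denom:.0f}%)")  # → 2/3 (67%)
```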
Table 5

Number (%) of quality assessment criteria reported by each pilot cluster randomised trial in this review

Study | Overall n (%)* | Title and abstract and introduction n (%) | Methods n (%) | Results n (%) | Discussion and other information n (%)
Drahota [A7] | 50 (70) | 6 (86) | 17 (59) | 18 (78) | 9 (75)
Pai [A12] | 48 (69) | 5 (71) | 17 (61) | 18 (78) | 8 (67)
Mytton [A17] | 50 (68) | 4 (57) | 21 (66) | 13 (57) | 12 (100)
Thomas [A18] | 46 (67) | 5 (71) | 17 (59) | 15 (65) | 9 (90)
Teut [A14] | 49 (66) | 6 (86) | 20 (63) | 14 (61) | 9 (75)
Taylor [A6] | 47 (64) | 7 (100) | 16 (52) | 13 (57) | 11 (92)
Légaré [A3] | 42 (58) | 3 (43) | 18 (56) | 14 (61) | 7 (64)
Begh [A1] | 41 (56) | 5 (71) | 16 (52) | 11 (48) | 9 (75)
Jago [A15] | 39 (55) | 4 (57) | 11 (38) | 13 (57) | 11 (92)
Jones [A10] | 32 (52) | 7 (100) | 10 (33) | 6 (50) | 9 (75)
Moore [A11] | 37 (52) | 5 (71) | 13 (45) | 8 (35) | 11 (92)
Michie [A16] | 36 (51) | 3 (43) | 15 (52) | 8 (36) | 10 (83)
Jones [A2] | 37 (51) | 3 (43) | 15 (48) | 10 (45) | 9 (75)
Jago [A5] | 33 (46) | 4 (57) | 13 (45) | 10 (43) | 6 (50)
Gifford [A9] | 33 (45) | 6 (86) | 12 (39) | 8 (35) | 7 (58)
Reeves [A13] | 29 (41) | 6 (86) | 11 (38) | 7 (32) | 5 (42)
Frenn [A8] | 18 (26) | 1 (14) | 5 (17) | 7 (32) | 5 (42)
Hopkins [A4] | 16 (23) | 2 (29) | 4 (14) | 4 (18) | 6 (50)

*This is the overall number (percentage) of the quality assessment items in table 4 that are reported by each study. The other columns look at this within categories. Note that the denominator varies between studies because not all quality assessment items are relevant for all studies (see footnote of table 4) and not applicable for some items if a related item is not reported (see items 3b, 6b, 15, 26 in table 4).


Discussion

Main findings

This is the first study to assess the reporting quality of pilot CRTs using the recently developed CONSORT checklist for pilot trials.3 4 Our search strategy and inclusion criteria identified 18 pilot CRTs published between 2011 and 2014. Most studies were published in the UK, perhaps driven by the availability of funding or the large number of CRTs and interest in complex interventions in the UK. With respect to the pilot CRT objectives and methods, a considerable proportion of papers did not have feasibility as their primary objective. Of the trials reporting a sample size rationale for the pilot, all followed best practice in not carrying out a formal sample size calculation for effectiveness/efficacy, yet a substantial proportion performed formal hypothesis testing for effectiveness/efficacy. This could indicate an inappropriate attachment to hypothesis testing, although many did explain that it was an indication of potential effectiveness or that the study was underpowered. Investigators wanting to assess effectiveness/efficacy and use statistical tests to do so should perform a properly powered definitive trial; otherwise there is the potential for misleading conclusions affecting clinical decisions, as well as misinformed decisions about the future definitive trial.16 One may, however, look at potential effectiveness, for example, using an interim or surrogate outcome, with a caveat about the lack of power.3 4 Moreover, one may include a progression criterion based on potential effect. If so, Eldridge and Kerry recommend that any interpretation of potential effect be based on the limits of the CI,13 and one should also pay attention to features of the pilot that might have biased the result (eg, convenience sampling of clusters).
A positive effect finding excluding the null value would still justify the future definitive trial to estimate the effect with greater certainty, but a negative effect finding excluding the null value (ie, strongly suggesting harm), or even a finding where the clinically important difference is excluded, might suggest not proceeding. It is good practice to prestate such progression criteria. Finally, one may use estimates from outcome data, for example, as inputs for the sample size calculation for the future definitive trial. In particular, for pilot CRTs we may be interested in estimating the intracluster correlation coefficient (ICC), although the ICC estimate from a pilot CRT should not be the only source for the future definitive trial sample size, because of the large amount of imprecision in a pilot trial.17 Reporting quality of pilot CRTs was variable. Items reported well included the term ‘pilot’ or ‘feasibility’ in the title, generalisability of pilot trial methods/findings to the future definitive trial or other studies, and implications for progression from the pilot to the future definitive trial, although clarity could be improved when referring to the future definitive trial rather than other future studies in general. Items least well reported included reasons for the randomised pilot trial, the sample size rationale for the pilot trial, the criteria used to judge whether or how to proceed with the future definitive trial and where the pilot trial protocol can be accessed. These items are important: they let readers see whether the uncertainty they face about a future trial has already been addressed in a pilot, help researchers ensure they have enough patients to achieve the pilot trial objectives, make the criteria for progression explicit and protect against selective reporting.
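The CI-limit interpretation above can be expressed as a simple classification. This is a hedged sketch of the idea, not code from Eldridge and Kerry; the null value and minimal clinically important difference (MCID) are assumptions for illustration, and higher values are taken to favour the intervention.

```python
# Hedged sketch of classifying a pilot effect estimate by its CI limits,
# relative to an assumed null value and MCID (both illustrative).
def interpret_ci(lower, upper, null=0.0, mcid=0.5):
    if upper < null:
        return "harm suggested: consider not proceeding"
    if upper < mcid:
        return "clinically important benefit excluded: may suggest not proceeding"
    if lower > null:
        return "null excluded in the beneficial direction: definitive trial justified"
    return "inconclusive, as expected in an underpowered pilot"

print(interpret_ci(0.1, 0.9))    # null excluded, beneficial direction
print(interpret_ci(-0.4, 0.3))   # CI lies entirely below the MCID
print(interpret_ci(-0.4, 0.7))   # wide CI spanning null and MCID
```

Note that the last, inconclusive case is the expected outcome of a well-conducted pilot and does not argue against proceeding.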
For items related to the cluster aspect of pilot CRTs, most pilot CRTs reported the nature of the cluster, and the number of clusters randomised and assessed for the primary objective. The items reported least well included considering the cluster design during the sample size rationale for the pilot trial, reporting who enrolled clusters and how they were consented, number of exclusions for clusters after randomisation and a table showing baseline cluster characteristics. Although the number of clusters in a pilot trial is usually small, it is still important to, for example, describe the cluster-level characteristics using a baseline table as it may give helpful information important for planning the future definitive trial. Moreover, while nearly all trial reports described whether consent was sought from individuals or not, seeking agreement from clusters was only described in a small minority. The items on agreement from and enrolment of clusters, baseline cluster characteristics and number of excluded clusters are particularly important to report, since they may affect assessment of feasibility. If we consider why some items may have been well adhered to and others not, it is interesting to observe that new items added to the CONSORT extension for pilot trials and items substantially adapted from CONSORT 2010 for RCTs were in general not well adhered to. This could perhaps be because of somewhat newer ideas that may not have been considered during design such as specifying progression criteria and considering a rationale for numbers in the pilot. 
Alternatively, perhaps there were aspects that were sometimes done but not reported owing to a lack of reporting guidance to remind authors; for example, the new items on how clusters were identified and consented, other unintended consequences and the ethical approval/research review committee approval reference number, and the substantially adapted items on reasons for the pilot trial, the number of individuals approached and/or assessed for eligibility and where the pilot trial protocol can be accessed. With the item on unintended consequences, we recognise that investigators are free to choose what they interpret and report as an unintended consequence. We recommend careful thought to ensure that all unintended consequences that may affect the future definitive trial are reported. It is also interesting to observe that many of the most poorly reported items concerned methods/design (progression criteria; enrolment and consent of clusters), and in particular, justification of design aspects (reasons for the randomised pilot trial; sample size rationale for the pilot trial, including consideration of the cluster design). Within studies, the worst reported category is the methods, despite it being crucial in allowing the reader to judge the quality of the trial.

Comparison with other studies

There has not been a previous review of pilot trials using the new CONSORT extension for pilot trials.3 4 However, the review by Arain et al of pilot and feasibility studies reported that 81% were performing hypothesis testing with sample sizes known to be insufficient,6 compared with 50% of pilot CRTs in our review. Arain et al also reported that 36% of studies performed sample size calculations for the pilot. In our review, 17% performed calculations (all based on feasibility objectives), but if we include those that correctly reported a rationale for the numbers in the pilot without any calculation, this rises to 44%. The general message that reporting of CRTs is suboptimal still holds.8–11 The review by Diaz-Ordaz et al8 of definitive CRTs reported that 37% presented a table showing baseline cluster characteristics, compared with 11% of pilot CRTs in our review. Diaz-Ordaz et al also reported that 27% accounted for clustering in sample size calculations,8 and a recent review by Fiero et al reported 53%.10 However, just 17% of pilot CRTs in our review considered the cluster design in the sample size rationale for the pilot trial. Both these reviews examined effectiveness/efficacy CRTs, for which the need to take account of clustering in sample sizes is generally better understood than for pilot trials. In pilot trials, the rationale for considering the clustered design when deciding on numbers may be different, for example, considering the number of df needed within each cluster to estimate a variance.[A6] Including a number of clusters with different characteristics may also be important to get an idea of how an intervention would be implemented across different clusters.
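For the effectiveness/efficacy CRTs discussed above, accounting for clustering in the sample size is usually done via the design effect. This is an illustrative sketch of that standard calculation, not code from any cited review; the ICC, cluster size and individually randomised sample size are invented inputs.

```python
import math

# Illustrative sketch of the standard design-effect inflation for CRTs:
# design effect = 1 + (m - 1) * ICC, where m is the cluster size.
def design_effect(cluster_size, icc):
    return 1 + (cluster_size - 1) * icc

def clustered_n(n_individual, cluster_size, icc):
    """Total sample size after inflating for clustering (rounded up)."""
    return math.ceil(n_individual * design_effect(cluster_size, icc))

# Invented inputs: clusters of 11 and an ICC of 0.1 double the required n.
print(design_effect(11, 0.1))     # → 2.0
print(clustered_n(200, 11, 0.1))  # → 400
```

Even a small ICC inflates the sample size substantially when clusters are large, which is why ignoring clustering in a sample size rationale, as 83% of the pilot CRTs in this review did, matters.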

Strengths and limitations

We used a robust search and data extraction procedure, including validation of the screening/sifting process and double data extraction. However, the use of only one database, PubMed, which is comprehensive but not exhaustive, may have missed eligible papers, and the use of conditions #3, #5 and #6 (see online supplementary appendix 1) may have been restrictive. Our aim was to get a general idea of reporting issues in the area rather than to conduct a completely comprehensive search. Our inclusion criteria stipulated that papers must have the word ‘pilot’ or ‘feasibility’ in the title or abstract, so we may have missed some pilot CRTs and thus may have overestimated the percentage reporting ‘pilot’ or ‘feasibility’ in the title. This strategy may also have resulted in a skewed sample of papers with a greater tendency to adhere to CONSORT guidelines. However, our review suggests that reporting of pilot CRTs needs improving, so our conclusion would remain the same. We required authors to report that the trial was in preparation for a future definitive trial, so we expect that items related to the future definitive trial (eg, progression criteria, generalisability, implications) may be better reported than they would be across all publications of pilot CRTs, which might include papers that did not state clearly enough that they were in preparation for a future definitive trial to meet our inclusion criteria. During sifting, we identified 32 trials that had ‘pilot’ or ‘feasibility’ in the title/abstract but were not assessing feasibility. A third of these referred to ‘pilot’ or ‘feasibility’ at some point in the abstract without reference to the current trial (eg, stating that feasibility had already been shown), but the other two-thirds were labelled as pilot or feasibility trials yet showed no evidence of assessing feasibility and were only assessing effectiveness.
This is an important point as our review may appear to overestimate reporting quality by not including these studies. That there are underpowered main trials being published as pilot or feasibility studies is something that the academic community should look to prevent. During sifting, we also identified 50 trials that were assessing feasibility but did not show evidence of being in preparation for a future definitive trial. Most were assessing the feasibility of implementing an intervention targeted at members of the public, or discussing feasibility of the intervention with the aim of providing information to help researchers wanting to implement a similar intervention in similar settings or to raise questions for future research, rather than being in preparation for a trial assessing effectiveness/efficacy. Some of these 50 trials also appeared to be small effectiveness studies labelled as a pilot, usually only mentioning feasibility once or twice throughout the paper, with one trial explicitly stating that “Because of organisational changes… we had to stop the inclusion after 46 participants, and the study is consequently defined as a pilot study”.18 For the few trials that were potentially pilot CRTs not reported clearly enough, the authors only spoke of future studies in general rather than clearly specifying the study was in preparation for a specific future definitive trial. Related to this, it is of interest to know the proportion of our 18 pilot CRTs that are actually followed by a future definitive trial, and we plan to investigate this in future.

Conclusion

We may have overestimated the reporting quality of pilot CRTs; nevertheless, our review demonstrates that reporting of pilot CRTs needs improving. The identification of just 18 pilot CRTs between 2011 and 2014, mainly from the UK, highlights the need for increased awareness of the importance of carrying out and publishing pilot CRTs, and for good reporting so that these studies can be identified. Pilot CRTs should primarily be assessing feasibility and should avoid formal hypothesis testing for effectiveness/efficacy. Improvement is needed in reporting reasons for the pilot, the rationale for the pilot trial sample size and progression criteria, as well as the enrolment stage of clusters and how the cluster design affects aspects of design such as numbers of participants. We recommend adherence to the new CONSORT extension for pilot trials, in conjunction with the CONSORT extension for CRTs.3 4 7 We encourage journals to endorse the CONSORT statement, including extensions.

1. Lessons for cluster randomized trials in the twenty-first century: a systematic review of trials in primary care. Sandra M Eldridge; Deborah Ashby; Gene S Feder; Alicja R Rudnicka; Obioha C Ukoumunne. Clin Trials, 2004-02.

2. The design and interpretation of pilot trials in clinical research in critical care. Donald M Arnold; Karen E A Burns; Neill K J Adhikari; Michelle E Kho; Maureen O Meade; Deborah J Cook. Crit Care Med, 2009-01.

3. Early, intensified home-based exercise after total hip replacement--a pilot study. Lone R Mikkelsen; Søren S Mikkelsen; Finn B Christensen. Physiother Res Int, 2012-03-26.

4. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. Kenneth F Schulz; Douglas G Altman; David Moher. BMJ, 2010-03-23.

5. What is a pilot or feasibility study? A review of current practice and editorial policy. Mubashir Arain; Michael J Campbell; Cindy L Cooper; Gillian A Lancaster. BMC Med Res Methodol, 2010-07-16.

6. Pilot Studies: A Critical but Potentially Misused Component of Interventional Research. Caroline Kistin; Michael Silverstein. JAMA, 2015-10-20.

7. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Sandra M Eldridge; Claire L Chan; Michael J Campbell; Christine M Bond; Sally Hopewell; Lehana Thabane; Gillian A Lancaster. BMJ, 2016-10-24.

8. A systematic review of cluster randomised trials in residential facilities for older people suggests how to improve quality. Karla Diaz-Ordaz; Robert Froud; Bart Sheehan; Sandra Eldridge. BMC Med Res Methodol, 2013-10-22.

9. Defining Feasibility and Pilot Studies in Preparation for Randomised Controlled Trials: Development of a Conceptual Framework. Sandra M Eldridge; Gillian A Lancaster; Michael J Campbell; Lehana Thabane; Sally Hopewell; Claire L Coleman; Christine M Bond. PLoS One, 2016-03-15.

10. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Sandra M Eldridge; Claire L Chan; Michael J Campbell; Christine M Bond; Sally Hopewell; Lehana Thabane; Gillian A Lancaster. Pilot Feasibility Stud, 2016-10-21.
