Literature DB >> 28786333

Where have all the pilot studies gone? A follow-up on 30 years of pilot studies in Clinical Rehabilitation.

Navaldeep Kaur1,2,3, Sabrina Figueiredo1,2,3, Vanessa Bouchard1,2,3, Carolina Moriello1,2, Nancy Mayo1,2,3.   

Abstract

INTRODUCTION: Pilot studies are meritorious for determining the feasibility of a definitive clinical trial in terms of conduct and potential for efficacy, but their possible applications for planning a future trial are not always fully realized. The purpose of this review was to estimate the extent to which pilot/feasibility studies: (i) addressed needed objectives; (ii) led to definitive trials; and (iii) whether the subsequent undertaking of a definitive trial was influenced by the strength of the evidence of outcome improvement.
METHODS: Trials published in the journal Clinical Rehabilitation, since its inception, were eligible if the word 'pilot' or 'feasibility' was specified somewhere in the article. A total of 191 studies were reviewed, results were summarized descriptively, and between-group effect sizes were computed.
RESULTS: The specific purposes of piloting were stated in only 58% (n = 110) of the studies. The most frequent purpose was to estimate the potential for efficacy (85%), followed by testing the feasibility of the intervention (60%). Only 12% of the studies were followed by a definitive trial; <4% of studies had a main study underway or a published study protocol. There was no relationship between observed effect size and follow-up of pilot studies, although the confidence intervals were very wide owing to the small number of trials that followed on.
DISCUSSION: Labelling and reporting of pilot studies need to be improved to be concordant with the recently issued CONSORT guidelines. Feasibility needs to be fully tested and demonstrated prior to committing considerable human and monetary resources.


Keywords:  Rehabilitation; controlled clinical trial; randomized controlled trial


Year:  2017        PMID: 28786333      PMCID: PMC5557106          DOI: 10.1177/0269215517692129

Source DB:  PubMed          Journal:  Clin Rehabil        ISSN: 0269-2155            Impact factor:   3.477


Background

Clinical trials answer questions about deliberate interventions that are often innovations in treatment, and the results are meant to inform clinical practice.[1,2] Because of the innovation, many of the details of a definitive trial are unknown before starting, and should be investigated systematically before committing to a larger study. Trials of rehabilitation interventions are particularly challenging as they often involve testing interventions with different active ingredients.[3,4] How the multiple elements of the trial work together needs to be tested, including the best way to identify participants, whether randomization is accepted, the processes around delivering the intervention, and the optimal control for the experimental intervention, to name but a few of the questions to be answered. Many trials suffer from recruitment challenges and high attrition rates because of tedious or intolerable study demands,[2] making it essential to identify these potential bumps in the trial road beforehand. In the past, some authors have weighed in on definitions and purposes of pilot and/or feasibility studies.[5-7] The first consensus on the definitions of pilot and feasibility studies was published in 2016 and provided a conceptual framework to unify the disparate concepts that are grouped under feasibility or pilot studies.[8] Many pieces of research may need to be done before a main study can be justified, essentially to probe the feasibility of various aspects of a study protocol. Important parameters, including recruitment rates, completion rates, adherence rates, and resources needed, that are crucial for designing a definitive trial, can be estimated through these preparatory studies.[7] However, if no evidence is provided to show that the intervention may produce some change in the outcome, with or without a control group, it is hard to judge the efficacy potential of an intervention.
Both the processes planned for the trial and the potential for efficacy are necessary for a full trial to be feasible, but neither alone is sufficient. The recently achieved consensus[8] is that ‘pilot’ studies fall under the dimension of ‘feasibility’ studies. Up to now, the terms ‘feasibility’ and ‘pilot’ have been used and misused in the medical literature, particularly when these terms have been used ‘post hoc’ to disguise underpowered main studies,[7,9] studies with methodological limitations, or studies not completed because of inadequate funding.[10] Feasibility of processes and outcome potential are essential elements for funding of a clinical trial, but the vast majority of feasibility studies go unpublished.[4] Those that are published tend to be the ones with data reporting efficacy potential. Some reviews on the methodological quality of pilot (feasibility) studies have been undertaken[9,11-13] and their conclusions are summarized in Table 1. Overall, it has been observed that pilot studies are frequently employed for hypothesis testing, while the feasibility of processes is rarely considered. Although follow-up is recommended, only a meagre proportion of preliminary studies have been pursued further in a confirmatory study.
Table 1.

Summary of previous reviews on the practices associated with pilot studies.

Lancaster et al.[11]
 Articles included: 90; four general clinical journals: British Medical Journal (BMJ), Lancet, Journal of the American Medical Association (JAMA), and New England Journal of Medicine (NEJM); and three subject-specific journals: the British Journal of Cancer (BJC), British Journal of Obstetrics and Gynaecology (BJOG), and the British Journal of Surgery (BJS)
 Period reviewed: 2000–2001
 Findings: Only four articles stated that the pilot study was in preparation for a clinical trial; >50% of the articles concluded that a further study was required.

Arain et al.[9]
 Articles included: 54; follow-up to the previous review,[11] covering the same journals as above, with the addition of 12 reports from the UK Clinical Research Network Portfolio Database
 Period reviewed: 2007–2008
 Findings: Of the studies, 48% were identified as ‘pilot’ and the rest were ‘feasibility’ studies; 81% focused on hypothesis testing and 81% highlighted the need for further study. Only eight of the 90 articles (9%) identified by the previous review[11] were followed up in a subsequent larger study.

Shanyinde et al.[12]
 Articles included: 50; pilot and feasibility randomized controlled trials from the EMBASE and MEDLINE databases
 Period reviewed: 2000–2009
 Findings: Methodological issues were discussed in adequate detail in only 56% of studies (95% confidence interval 41% to 70%); 18% (95% confidence interval 9% to 30%) mentioned future trials in the discussion section; and only 12% (95% confidence interval 5% to 24%) of investigators were actually undertaking a subsequent trial.

Kannan and Gowri[13]
 Articles included: 93; Indian journals of allopathic medicine, dentistry, and complementary and alternative systems of medicine
 Period reviewed: January to December 2013
 Findings: None of the studies presented the reason for piloting; none discussed feasibility; two-thirds of the articles did hypothesis testing and inferred the significance of differences between the groups, and none mentioned power for these contrasts.
Over the last few years, a major emphasis has been placed on informing researchers about appropriate objectives[7,11,14] and methodological features of pilot studies.[7,14] Seven evidence-based objectives of conducting such studies have been identified: to evaluate the integrity of a study protocol for a larger study; to acquire preliminary estimates for sample size computation; to test data collection questionnaires; to test the randomization technique(s); to estimate the recruitment and consent rates; to test the acceptability of the intervention; and to choose the most suitable outcome measure(s).[11] The extent to which pilot studies in the rehabilitation literature live up to these expectations is unknown and is the topic of this review, using the pilot/feasibility studies published in Clinical Rehabilitation over the past three decades as examples. The specific objectives are to estimate the extent to which the identified pilot/feasibility studies: (i) address needed objectives; (ii) lead to definitive trials; and (iii) whether the subsequent undertaking of a definitive trial is influenced by the strength of the evidence of outcome improvement.

Methods

Eligibility criteria

Trials published in the journal Clinical Rehabilitation, since its inception, were eligible if the word ‘pilot’ was specified anywhere in the article by the authors. Also eligible were research publications labelled as ‘feasibility studies’, ‘preliminary studies’, and ‘proof-of-concept studies’ as these terms are often used interchangeably with ‘pilot’ studies.[9,14-16] For the purpose of this review, the delineation between pilot and feasibility studies was not made, although the consensus definition[8] considers all preparatory studies as ‘feasibility’. The consensus reserves the term ‘pilot’ for those small-scale studies that have a specific design feature (either randomized or not) that test some or all aspects of a future trial. We use the term ‘pilot’ study to refer to the studies reviewed here and the term ‘feasibility’ if the authors of the chosen articles used this term rather than ‘pilot’.

Search strategy

This study was embedded within a larger study reviewing the methodological features of trials published over the past 30 years in the journal Clinical Rehabilitation.[3] The search strategy has been described previously[3] and yielded 581 clinical trials that had a control group, randomized or not. Of these trials, 191 articles were identified to be pilot or feasibility studies by one of the 23 reviewers conducting the comprehensive data abstraction for the larger study.

Data abstraction

For the review of full clinical trials,[3] a data abstraction form was devised, and many of the data elements related to both full and pilot trials. A separate data extraction form was created specifically for pilot studies to include additional fields for: (i) location of declaration of the ‘pilot’ nature of the study; (ii) whether the authors stated the purpose of the piloting; (iii) the reason(s) for the piloting; (iv) inferring the reason(s) if not clearly mentioned; (v) whether the study was followed up in a definitive clinical trial; (vi) whether the sample size calculation was made for the future definitive study; and (vii) data for calculating effect size for the between-group comparisons. Both data abstraction forms included information on sample size at randomization and at the end of intervention and, hence, this information was abstracted by two reviewers. For the other fields not duplicated for full and pilot trials, the data extraction was conducted by one reviewer (NK); areas that lacked clarity were discussed with a senior reviewer (NM), and a decision was made as to the data to be abstracted. The information on the follow-up of a pilot study was obtained by examining its citations in SCOPUS. Additionally, the corresponding author of each article was sent an email inquiry to verify whether a definitive study was undertaken. If an article did not provide an email address for correspondence, the follow-up status was decided based upon SCOPUS entries.

Data analysis

Frequency distributions, means, and standard deviations were used to describe the features of the pilot studies. Between-group effect size was computed for each pilot study using the standardized mean difference. Effect size was not calculated for studies that provided only median and interquartile range values. Based on the distribution of data, effect size was classified into six categories (≤0.1; >0.1 and ≤0.2; >0.2 and ≤0.5; >0.5 and ≤0.8; >0.8 and ≤2.0; and >2.0). Logistic regression was used to estimate the association between the observed effect size and follow-up of pilot studies, with the effect size category ≤0.1 as the referent. Odds ratios (OR) and associated 95% confidence intervals (CI) were calculated. The pilot studies were divided into three eras based upon the year of publication (before 1999; 1999 to 2009; and 2010 to 2015). Chi-square analysis was employed to estimate the influence of era on the association between the strength of effect and follow-up. All analyses were conducted in SAS 9.3.
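For a single binary predictor, the odds ratio estimated by logistic regression reduces to the crude odds ratio of the corresponding 2 × 2 table, so the core of this analysis can be sketched without SAS. The snippet below is an illustration, not the authors' code: it computes a crude odds ratio with a Woolf (log-based) confidence interval, using as input the counts reported for one effect-size band in Table 5 (6 of 21 studies followed up) against the referent band (2 of 18).

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table with a Woolf (log-based) 95% CI.

    Table layout:
        a = exposed, event       b = exposed, no event
        c = unexposed, event     d = unexposed, no event
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Counts read from Table 5: 6 of 21 studies in the >0.1 to <=0.2 band
# were followed up, versus 2 of 18 in the referent band (d <= 0.1).
or_, lo, hi = odds_ratio_ci(6, 21 - 6, 2, 18 - 2)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

This reproduces the point estimate of 3.20 shown in Table 5; the very wide interval illustrates why, with so few trials followed up, none of the associations could reach significance.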

Results

A total of 191 pilot studies were identified for the time period 1987 to March 2015: seven (4%) before 1999; 71 (37%) between 1999 and 2009; and 113 (59%) after 2009 and up to 2015. Table 2 shows the key characteristics of the chosen studies. Pilot status was declared most often in the title (87%) and abstract (68%). The objective indicated pilot status in only 54% of studies. The purpose of piloting was specified in only 58% (n = 110) of the pilot studies. Among these 110 studies, the most frequent purpose was to estimate the potential for efficacy (85%), followed by testing the feasibility of the intervention (60%). Feasibility of outcome measures, safety of intervention, sample size computation, and feasibility of recruitment rates were reported as one of the main objectives of piloting in <11% of studies. Other feasibility reasons that were identified included estimating retention rates[17-20] and testing the acceptability of the intervention.[20-23]
Table 2.

Characteristics of the 191 pilot studies included in the review.

Characteristic                                        N        %
Location of declaring ‘pilot’
  Title                                               168      87
  Abstract                                            131      68
  Title or abstract                                   184      96
  Introduction/objective                              104      54
  Study design                                        78       41
  Results                                             7        4
  Discussion                                          105      55
  Conclusion                                          49       26
Purpose of ‘piloting’ declared
  Yes                                                 110      58
  No                                                  81       42
Purpose of ‘piloting’ declared as[a]
  Feasibility of recruitment rates                    9/110    8
  Compliance or adherence rates                       4/110    4
  Timing of effect of an outcome                      0/110    0
  Feasibility of intervention                         66/110   60
  Feasibility of outcome measures                     12/110   11
  Estimation of efficacy potential                    94/110   85
  Safety of intervention                              11/110   10
  Computation of sample size                          10/110   9
Purpose inferred when not declared as[a]
  Feasibility of recruitment rates                    5/81     6
  Compliance or adherence rates                       1/81     1
  Timing of effect of an outcome                      5/81     6
  Feasibility of intervention                         12/81    15
  Feasibility of outcome measures                     0/81     0
  Estimation of efficacy potential                    72/81    89
  Safety of intervention                              2/81     2
  Computation of sample size                          1/81     1
Studies labelled as ‘feasibility’                     12       6
Power calculation made for a definitive clinical trial 64      34

[a] Studies could have more than one reason for ‘piloting’.

For the 42% of studies (n = 81) that did not declare a clear purpose for piloting, the purpose was inferred from the information provided in the manuscripts. Estimation of efficacy potential (89%) and feasibility of intervention (15%) were the most common reasons, followed by feasibility of recruitment rates and timing of the intervention effect on outcomes. Almost half of the studies used the terms ‘pilot’ and ‘feasibility’ interchangeably, and only 6% of the 191 studies were uniquely labelled as ‘feasibility’ studies. Only 34% of the studies had done a power calculation for a future definitive clinical trial. Table 3 indicates the sample sizes and drop-out proportions. The average sample size was 31, with a large range from 7 to 120. The proportion of drop-outs averaged 3%, with a range from 0% to 31%.
Table 3.

Distribution of sample size in the included 191 studies.

                  Total sample size    Drop-out proportion (%)
Mean (SD)         31 (18.3)            3 (4.6)
25th percentile   20                   0
50th percentile   28                   2
75th percentile   40                   5
Range             7–120                0–31
Table 4 shows that, of the 191 pilot studies, 12% (n = 23) were followed by a definitive clinical trial; an additional small percentage, ~3.5% (n = 7), had a main study underway or a published study protocol. The remaining 85% (n = 162) did not appear to have had any further follow-up. There was no effect of era on follow-up status (data not shown). Also demonstrated in Table 4 is the follow-up process. A total of 173 emails were sent enquiring about follow-up status. Of these, 44% (n = 76) were not delivered, 15% (n = 26) were unanswered, and the remaining 41% (n = 71) were answered. Only 17 of the corresponding authors provided a reason for the non-pursuance of their pilot work. The most commonly encountered reason for no follow-up was lack of funding.
Table 4.

Follow-up status of the 191 pilot studies.

                                                      N        %
Follow-up of pilot studies
  Completed                                           23       12
  Trial underway or completed but not yet published   4        2
  Published protocol available                        3        1.5
  None                                                162      85
  Email contact available                             173      90
Outcome of email contact
  Undelivered                                         76/173   44
  Unanswered                                          26/173   15
  Answered                                            71/173   41
Reasons presented for no follow-up
  Lack of funding                                     9/17     53
  Results confirmed by another team                   2/17     12
  Pilot work conducted as part of a student’s thesis  2/17     12
  Principal investigator no longer doing research     2/17     12
  Product to be evaluated not made available          1/17     6
  Power analysis indicated recruitment not feasible   1/17     6
Table 5 presents the association between effect size and follow-up for the 144 studies that presented sufficient data to allow computation of between-group effect size. Logistic regression analyses demonstrated that, in comparison with the lowest effect size category (Cohen’s d ≤0.1), there was a tendency for lower odds of follow-up among studies with effect sizes between 0.8 and 2.0 (OR 0.69, CI 0.08 to 5.46). Studies with effect sizes greater than 2.0 had essentially the same odds of follow-up as those with very small effect sizes (OR 0.94, CI 0.11 to 7.49). However, none of the associations were statistically significant, as the CIs included the null value of 1.0.
Table 5.

Association between effect size and follow-up of 144 studies.[a]

Effect size (Cohen’s d)   Total studies, N (%)   With follow-up, N (%)   Odds ratio   95% confidence interval
No data                   47 (25)                8 (17)
≤0.1                      18 (13)                2 (11)                  Referent
>0.1 to ≤0.2              21 (15)                6 (29)                  3.20         0.05 to 18.38
>0.2 to ≤0.5              33 (23)                5 (15)                  1.42         0.24 to 8.23
>0.5 to ≤0.8              28 (19)                5 (18)                  1.73         0.29 to 10.1
>0.8 to ≤2.0              25 (17)                2 (8)                   0.69         0.08 to 5.46
>2.0                      19 (13)                2 (11)                  0.94         0.11 to 7.49

[a] Studies with sufficient data to estimate between-group effect size.


Discussion

This article reviewed the state of pilot/feasibility studies published in Clinical Rehabilitation since its inception (1987). During the 30-year time period, 191 pilot studies were published, while the corresponding number of full trials was 390, indicating the importance of pilot trials in the rehabilitation literature. The specific purposes of piloting were not always stated: only 58% (n = 110) of the studies clearly declared what was being piloted. The most frequent purpose was to estimate the potential for efficacy (85%), followed by testing the feasibility of the intervention (60%). The terms ‘pilot’ and ‘feasibility’ were often used interchangeably to describe studies designed to inform future trials. However, only 12% of these studies were followed by a definitive clinical trial, and <4% of studies had a main study underway or a published study protocol. This review identified, in one journal alone and over 30 years, that one-third of clinical trials were pilot trials. An Ovid MEDLINE search for the term ‘pilot study’ in the titles of research articles published over a single year (2014) yielded 6002 records, indicating the widespread use of the term. However, the previous corpus of reviews on the state of pilot studies (see Table 1) indicates that their conduct and follow-up have often fallen short of expectations.

What should be the focus for the next 30 years of pilot studies?

Label pilot studies correctly

The recently devised conceptual framework for the definitions of preparatory studies[8] and the CONSORT reporting guidelines for pilot and feasibility studies[24] should be followed. As proposed by the conceptual framework, when there is uncertainty about the feasibility of a future randomized controlled trial, a ‘feasibility’ study should be carried out.[8] Not addressed by this framework[8] are internal pilot studies, which are fundamentally part of the definitive trial but are mainly used to revise the sample size estimates upwards based on initial effect size estimates.[11] In rehabilitation, these internal pilot studies would be rare. As per the framework, ‘feasibility’ is an umbrella term that encompasses three types of preparatory studies.[8] Randomized pilot studies are those in which a future definitive clinical trial involving randomized study groups, or its components, is investigated on a miniature scale. Non-randomized pilot studies are similar to randomized pilot studies except that they do not include randomization of study participants. The third category comprises feasibility studies that are not pilot studies. These endeavour to test whether some component of a future trial can be executed, and may address the development of an intervention in some manner; however, they do not involve implementing an intervention or other components associated with processes that may need to be carried out in a future main study.[8] This review found that only 6% of the included studies were labelled as ‘feasibility’. Mostly, the terms ‘pilot’ and ‘feasibility’ were employed without any distinction between the two.
Consistency in the terms used to label preparatory studies should improve with the advent of the consensus definitions[8] and the CONSORT reporting guidelines for pilot and feasibility studies.[24] Declaring pilot/feasibility status in the title or abstract is also deemed useful for indexing purposes and for easy identification in electronic database searches,[7,8] and is recommended by the CONSORT guidelines[24] as well. Most of the reviewed pilot studies complied with this recommendation, whereas a minor proportion designated the status only in the conclusion section. Better labelling of pilot studies can improve their impact and visibility.

Distinguish pilot studies from small clinical trials

It has been recognized that authors designate small clinical trials as ‘pilot’ studies ‘post hoc’ when it is clear that a definitive answer cannot be reached from the data accrued; some reviewers or journal editors also insist on this labelling, even though the trial was not developed as a ‘pilot’.[12] Consequently, small trials primarily estimating efficacy end up being labelled as ‘pilot’ without objectives compatible with pilot/feasibility status. This practice has also been adopted by Clinical Rehabilitation. As suggested by Sackett and Cook over two decades ago,[25] methodologically sound small clinical trials can yield vital lessons. They have the potential to challenge traditional therapeutic judgements that have not previously been put to investigation. Therefore, they should be labelled as such and not disguised as ‘pilot’. Many prominent journals, including Clinical Rehabilitation, do not yet have a policy for the conduct and reporting of pilot studies.[9] Changing journal policy about the reporting of pilot studies would help improve the situation. Recently, a new open-access journal called Pilot and Feasibility Studies has been created to ensure that the foundational work conducted for large-scale studies can be brought to light.[26]

Focus on the required objectives of pilot/feasibility studies

Based on the pilot studies published in Clinical Rehabilitation, it was found that most were undertaken with the specific objectives of estimating the potential for efficacy and testing the feasibility of the intervention. Indicators of feasibility did not receive much attention in the reviewed studies, in line with previous reviews.[12] One aspect of feasibility that is unique to rehabilitation studies is the feasibility of the measurement strategy. Unlike pharmacological trials, where outcomes are mostly directly measured, rehabilitation studies tend to have multiple outcomes that are a mix of directly measured and patient-reported, and can also be measures of complex theoretical constructs, such as health-related quality of life.[4] Therefore, it is crucial to test the feasibility of the outcome measurement strategy to avoid missing data arising from a measurement approach that is too burdensome for respondents. As outlined by one of the most comprehensive guides on pilot studies,[7] objectives should address process (e.g. recruitment, refusal, retention, and adherence rates), resources (e.g. adequacy of equipment), and management (e.g. data handling) issues. Our review found scant emphasis on these objectives (see Table 2). Including all this information in the pilot phase not only facilitates the conduct of a full-strength trial, but also leads to a more competitive proposal for funding purposes.[10,27]

Justify the rationale for chosen sample size

In terms of sample size, there was an average of 31 participants in the pilot studies reviewed in this manuscript, with a wide range from 7 to 120 participants. A group of researchers conducted an audit of sample sizes in pilot studies carried out in the UK and reported a comparable range, from 8 to 114 participants; feasibility trials in that audit ranged from a minimum of 10 to a maximum of 300 participants.[28] Several authors have made recommendations for the sample size of pilot studies. For example, one general rule of thumb is to recruit at least 30 participants for parameter estimation,[29] whereas another researcher[30] suggests recruiting at least 12 participants per study group. To minimize the imprecision surrounding the estimation of the standard deviation, a total sample size of 70 participants has been deemed necessary in a study with two treatment arms.[31] Although a rationale for the chosen sample size should be reported, there is a view that a formal calculation may not be appropriate.[28] For example, if the intention is to support the feasibility of the main trial using adherence or completion rates from a pilot, the confidence one can have in these pilot estimates is a function of the pilot sample size. If a pilot study of 30 people observed a completion rate of 80%, the 95% confidence interval around this proportion is 63% to 90%. This means that a definitive trial is more likely to have a lower completion rate than a higher one; even increasing the pilot size to 50 participants would not yield much greater estimation confidence (95% CI 67% to 90%).[32]
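The arithmetic behind those interval widths can be checked with a score-based confidence interval for a proportion. The sketch below uses the Wilson score interval; the article does not state which method was used, but Wilson closely reproduces the quoted 63% to 90% for 24 completers out of 30.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    centre = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    denom = 1 + z**2 / n
    return (centre - margin) / denom, (centre + margin) / denom

# 24 of 30 pilot participants completing (80%):
lo, hi = wilson_ci(24, 30)
print(f"{lo:.0%} to {hi:.0%}")   # prints "63% to 90%"

# Increasing the pilot to 50 participants (40 of 50 completing):
lo50, hi50 = wilson_ci(40, 50)
```

Even increasing the pilot from 30 to 50 participants narrows the interval only modestly, which is precisely the point made above about the limited precision obtainable from pilot-sized samples.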

Use the correct analytic approach

It is essential to underscore that pilot studies are not intended for testing hypotheses;[16] however, they can certainly indicate the potential for efficacy,[24] which would support pursuing a definitive trial. Between-group significance tests should not be performed, as the study, by design, is not powered for this contrast.[16] Ideally, the authors should report descriptive statistics, point estimates, and CIs for the effect observed.[11] As recommended by Lee and colleagues,[33] CIs should be interpreted in relation to a priori-determined minimally important differences (MID).[16,34] Effect size is the parameter that best indicates the potential for efficacy.[24] Pilot studies with effect sizes in the small or trivial range could be considered to provide weak evidence of efficacy potential. It is vital to mention that only 144 studies could be included in the main analysis, as the remainder did not provide data for the computation of the effect size; most presented only median and interquartile range values or included no data from which the effect size could be estimated. In the future, researchers need to ensure that ample data are given in the manuscript. Approaches to statistical analysis that go beyond simply reporting mean changes, such as defining responder status, are also recommended.[3] This approach involves dichotomizing a continuous primary outcome measure into ‘responders’ and ‘non-responders’ based on a magnitude of effect deemed to be important. This information can enhance the interpretability of the data collected and can provide preliminary estimates of the number needed to treat.[10]
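For studies reporting group means and standard deviations, the between-group effect size used in this review (the standardized mean difference) can be computed directly from summary statistics. A minimal sketch, with hypothetical pilot-arm numbers:

```python
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Between-group standardized mean difference (Cohen's d) from
    summary statistics, using the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical pilot arms: intervention mean 10 (SD 2), control mean 8
# (SD 2), 15 participants per arm.
d = cohens_d(10, 2, 15, 8, 2, 15)   # d = 1.0
```

A d of 1.0 would fall in the >0.8 to ≤2.0 band of Table 5, whereas values near or below 0.2 would fall in the small or trivial range described above as weak evidence of efficacy potential.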

Improve on the reporting of pilot studies

The recently generated reporting guidelines for pilot and feasibility studies, produced as an extension of the CONSORT statement,[24] should be implemented; however, additional emphasis is likely needed for rehabilitation studies. Most rehabilitation interventions require pilot testing, as these interventions tend to be complex owing to the need for tailoring to the individual, their multi-modal nature, the number of other active ingredients[4] (such as people, setting, and attention) that need to be balanced by the control situation,[3] and the difficulty in masking research personnel and participants[2] when assessing outcomes. In these complex situations, pilot studies serve the crucial role of ensuring a robust methodological approach in a subsequent definitive trial. Some guidance is available on how to approach pilot and feasibility studies of occupational therapy interventions systematically.[4] To improve reporting standards in rehabilitation in general, the authors of pilot studies should clearly incorporate information on the success of randomization, suitability of the control condition, optimal recruitment strategy, drop-out rates, intervention integrity (i.e. whether the intervention is delivered as per the original plan to each participant),[35] adverse events, and the power calculation for a full-strength trial, in addition to the other aspects of feasibility. Appropriate objectives should be stated explicitly. Authors should acknowledge that the validity of findings could be dubious if they employ inferential statistics in a pilot study with a small sample size.[16] The data collected from pilots thus serve two purposes: informing the need for a future trial and determining the sample size required to confirm the hypothesis.

Justify the need for a further trial, and do it

One of the main objectives of this review was to estimate the extent to which pilot study outcomes influenced the undertaking of a subsequent definitive trial. Only a small proportion of pilot studies were eventually followed up, and these findings concur with previous reports.[9] Contrary to the usual expectation, the strength of effect observed in pilot studies was not associated with follow-up status. The thinking here was that if the effect was nil or close to nil, a main trial may not be justified, as there was no evidence of efficacy potential. On the other hand, pilot studies with very large effect sizes may also not progress to the definitive stage, as the authors and funders may think it is no longer ethical to offer the control condition. Cronbach counselled against assuming that the results of pilot studies will be replicated in a larger study, describing a ‘superrealization bias’: in small-scale studies, researchers are able to attain a high quality of implementation that could never be achieved on a larger scale.[36] Only one pilot study was identified as addressing this issue, indicating that the effect size found in a small pilot of a walking intervention for cancer fatigue was ‘over optimistic’.[37] Funding agencies often require that a pilot study be truly encouraging before agreeing that a full-strength clinical trial should be carried out.[10,38] Although the reasons for not being able to conduct a subsequent trial were not probed in this review, two of the corresponding authors pointed out that lack of funding hindered them from pursuing their work. Among the other reasons was a change-over in personnel, particularly where the pilot was undertaken by a trainee.

Be vigilant when calculating the sample size needed for the next trial

Over 60% of the reviewed studies did not provide sample size estimates for a subsequent full-scale trial using the data accrued from the pilot. Although sample size computation for a future trial is considered one of the fundamental objectives of pilot studies, small datasets tend to yield imprecise effect size estimates, so these sample size estimates should be interpreted with caution.[28,38,39] In an observational study by Salbach and colleagues,[40] the responsiveness of gait speed over time had an effect size of 1.22 (CI 0.93 to 1.50). With a total of 50 participants, the CI is quite wide, and choosing the mid-point rather than the lower bound would greatly affect the sample size projections. Sample size estimates should not be based solely on the size needed to reject the null hypothesis (no effect), but rather on the size needed to reject a trivial alternative. This requires estimating the sample size for a desired CI, one that excludes an effect smaller than, say, 0.2 of a standard deviation (Cohen's effect size of 0.2).
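The difference between planning on the CI mid-point and planning on the lower bound can be illustrated with the standard normal-approximation formula for a two-sided, two-sample comparison of means, n = 2(z_{α/2} + z_β)² / d² per group. This is a rough sketch for illustration only; real trial planning would use exact t-based methods or dedicated software.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison of
    means, via the normal approximation n = 2*(z_{a/2} + z_b)^2 / d^2."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return math.ceil(2 * (za + zb) ** 2 / d ** 2)

# Effect size CI from the gait-speed example: 1.22 (0.93 to 1.50)
n_mid = n_per_group(1.22)   # plan on the point estimate
n_low = n_per_group(0.93)   # plan on the lower confidence bound
```

Planning on the lower bound (0.93) rather than the point estimate (1.22) nearly doubles the per-group sample size, and designing to exclude a trivial effect of 0.2 SD increases it far more again.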

Limitations

The studies incorporated in this review were acquired from a companion review of clinical trials.[3] An independent, systematic search of pilot/feasibility studies published in the journal was carried out subsequently and identified no additional studies. There is a possibility of reviewer bias, as not all elements of data abstraction were validated by a second reviewer. It is also possible that information on the follow-up of pilot studies was not captured correctly, as citation entries may not be up to date in SCOPUS. However, email enquiries were sent to the corresponding authors for information on follow-up, and studies with no citations in SCOPUS were also searched in PubMed. Although feasibility and pilot studies build a rich groundwork for definitive clinical trials, the practice surrounding their conduct and reporting needs to improve. For clarity and uniformity of definitions, the recently devised framework[8] for preparatory studies should be implemented. The tradition of presenting small clinical trials as pilot studies on account of their meagre sample size or other flaws should be avoided. Researchers need to ensure that methodologically sound pilot studies are undertaken and that multiple aspects of the workability of the study protocol are investigated, so that potential bumps in the road can be dealt with before embarking upon a full-strength clinical trial.

Key messages

- As there is likely considerable uncertainty when planning a full-scale trial of a rehabilitation intervention, feasibility needs to be tested and demonstrated prior to committing considerable human and monetary resources.
- Feasibility is the overarching term encompassing: (i) pilot trials (randomized or not) testing both the intervention and other aspects of the trial process; and (ii) other feasibility studies that mainly test a process or may address development of an intervention.
- The recently issued reporting guidelines for pilot and feasibility studies need to be followed by researchers, reviewers, and journals alike.
- Small studies or studies that go wrong should not be labelled 'pilot' after the fact.
- Pilot studies should fully describe the distribution of the sample on all outcomes at all time points, and provide point estimates of change with CIs rather than only a p-value.
- Effect sizes estimated from pilot studies should be interpreted as potentially over-optimistic, and power calculations should be adjusted accordingly.
References (33 in total)

1.  Trials in primary care: statistical issues in the design, conduct and evaluation of complex interventions.

Authors:  G A Lancaster; M J Campbell; S Eldridge; A Farrin; M Marchant; S Muller; R Perera; T J Peters; A T Prevost; G Rait
Journal:  Stat Methods Med Res       Date:  2010-05-04       Impact factor: 3.021

2.  The role and interpretation of pilot studies in clinical research.

Authors:  Andrew C Leon; Lori L Davis; Helena C Kraemer
Journal:  J Psychiatr Res       Date:  2010-10-28       Impact factor: 4.791

3.  Value of a pilot study.

Authors:  Karen H Morin
Journal:  J Nurs Educ       Date:  2013-10       Impact factor: 1.726

4.  Two-sided confidence intervals for the single proportion: comparison of seven methods.

Authors:  R G Newcombe
Journal:  Stat Med       Date:  1998-04-30       Impact factor: 2.373

5.  Responsiveness and predictability of gait speed and other disability measures in acute stroke.

Authors:  N M Salbach; N E Mayo; J Higgins; S Ahmed; L E Finch; C L Richards
Journal:  Arch Phys Med Rehabil       Date:  2001-09       Impact factor: 3.966

6.  Video game play (Dance Dance Revolution) as a potential exercise therapy in Huntington's disease: a controlled clinical trial.

Authors:  Anne D Kloos; Nora E Fritz; Sandra K Kostyk; Gregory S Young; Deb A Kegelmeyer
Journal:  Clin Rehabil       Date:  2013-06-20       Impact factor: 3.477

7.  Sample size requirements to estimate key design parameters from external pilot randomised controlled trials: a simulation study.

Authors:  M Dawn Teare; Munyaradzi Dimairo; Neil Shephard; Alex Hayman; Amy Whitehead; Stephen J Walters
Journal:  Trials       Date:  2014-07-03       Impact factor: 2.279

8.  A tutorial on pilot studies: the what, why and how.

Authors:  Lehana Thabane; Jinhui Ma; Rong Chu; Ji Cheng; Afisi Ismaila; Lorena P Rios; Reid Robson; Marroon Thabane; Lora Giangregorio; Charles H Goldsmith
Journal:  BMC Med Res Methodol       Date:  2010-01-06       Impact factor: 4.615

9.  Defining Feasibility and Pilot Studies in Preparation for Randomised Controlled Trials: Development of a Conceptual Framework.

Authors:  Sandra M Eldridge; Gillian A Lancaster; Michael J Campbell; Lehana Thabane; Sally Hopewell; Claire L Coleman; Christine M Bond
Journal:  PLoS One       Date:  2016-03-15       Impact factor: 3.240

10.  CONSORT 2010 statement: extension to randomised pilot and feasibility trials.

Authors:  Sandra M Eldridge; Claire L Chan; Michael J Campbell; Christine M Bond; Sally Hopewell; Lehana Thabane; Gillian A Lancaster
Journal:  Pilot Feasibility Stud       Date:  2016-10-21