Molly Byrne, Jenny McSharry, Oonagh Meade, Kim L Lavoie, Simon L Bacon.
Abstract
BACKGROUND: Non-communicable chronic diseases are linked to behavioral risk factors (including smoking, poor diet and physical inactivity), so effective behavior change interventions are needed to improve population health. However, the uptake and impact of these interventions are limited by methodological challenges. We aimed to identify and achieve consensus on priorities for methodological research in behavioral trials in health research among an international behavioral science community.
Keywords: Behavior change interventions; Delphi study; Methodological research; Randomized controlled trials; Research prioritization
Year: 2020 PMID: 32293510 PMCID: PMC7092577 DOI: 10.1186/s13063-020-04235-z
Source DB: PubMed Journal: Trials ISSN: 1745-6215 Impact factor: 2.279
Fig. 1 Flow chart illustrating the stages of the Delphi process
The “long-list” of items for methodological research in trials of behavioral interventions agreed in Phase 1
| Categories | Item |
|---|---|
| 1. Using theory in behavioral intervention development | |
| 2. Use of systematic approaches to move from evidence to intervention components | |
| 3. Specifying intervention components | |
| 4. Exploring impact of mode of intervention delivery | |
| 5. Tailoring interventions to specific populations and contexts | |
| 6. Selection of suitable comparison group(s) within trials | |
| 7. Contamination between study arms (intervention and comparison) within trials | |
| 8. Blinding of researchers and participants to study-arm allocation | |
| 9. Impact on intervention delivery of characteristics (such as qualifications and training) of those delivering interventions | |
| 10. Strategies to optimize intervention fidelity | |
| 11. Methods to assess intervention fidelity | |
| 12. Strategies to maximize trial participant recruitment and retention | |
| 13. Establishing criteria for progressing from trial piloting phases to full randomized controlled trial (RCT) | |
| 14. Sample-size calculations for pilot trials | |
| 15. Novel approaches and designs for piloting behavioral interventions | |
| 16. Standardizing methods for reporting behavioral trials | |
| 17. Reporting intervention and comparison group(s) intervention content | |
| 18. Standardized methods for reporting and registering behavioral trials’ protocols | |
| 19. Development of novel research designs to test behavioral interventions as alternatives to, or to complement, standard RCTs | |
| 20. Strategies for handling missing data within behavioral trials | |
| 21. Developing novel statistical techniques to enhance behavioral trials | |
| 22. Determining clinically significant changes in outcomes within trials | |
| 23. Selecting appropriate behavioral outcomes for trials | |
| 24. Relationship between behavioral outcomes and clinical/other outcomes | |
| 25. Determining ideal timing of outcome measurement within trials | |
| 26. Measurement of process(es) of change or mechanisms of action within interventions | |
| 27. Methods for cost-effectiveness analyses for behavioral trials | |
| 28. Methods for ensuring that behavioral interventions are implementable into practice and policy | |
| 29. How to disseminate behavioral trial research findings to increase implementation | |
| 30. How to optimize stakeholder engagement in behavioral trial research | |
| 31. Incorporating stakeholder input in intervention development and delivery | |
| 32. Testing the impact of stakeholder engagement in behavioral trial research | |
| 33. Trials’ research to test and develop behavioral theories |
Professional background and demographic data for survey completers
| | Survey round 1 | Survey round 2 |
|---|---|---|
| Gender | ||
| Male | 23 (29.9%) | 17 (29.8%) |
| Female | 53 (68.8%) | 39 (68.4%) |
| Other | 1 (1.3%) | 1 (1.8%) |
| Professional position | ||
| University student (undergraduate/postgraduate) | 17 (22.1%) | 12 (21.1%) |
| Academic staff (e.g., researchers, lecturers, professors) | 49 (63.6%) | 38 (66.7%) |
| Health care practitioner | 2 (2.6%) | 1 (1.8%) |
| Health policy-maker or planner | 2 (2.6%) | 1 (1.8%) |
| Other | 7 (9.1%) | 5 (8.8%) |
| Country of residence (in alphabetical order) | ||
| Australia | 2 (2.6%) | 1 (1.8%) |
| Brazil | 1 (1.3%) | 1 (1.8%) |
| Canada | 33 (42.9%) | 25 (43.9%) |
| China | 1 (1.3%) | 1 (1.8%) |
| Colombia | 1 (1.3%) | 1 (1.8%) |
| France | 4 (5.2%) | 2 (3.5%) |
| Ireland | 12 (15.6%) | 8 (14.0%) |
| Israel | 1 (1.3%) | 1 (1.8%) |
| Netherlands | 1 (1.3%) | 1 (1.8%) |
| Portugal | 1 (1.3%) | 1 (1.8%) |
| Sweden | 1 (1.3%) | 1 (1.8%) |
| UK | 7 (9.1%) | 5 (8.8%) |
| USA | 12 (15.6%) | 9 (15.8%) |
| Age group | ||
| 18–30 years | 18 (23.4%) | 14 (24.6%) |
| 31–40 years | 28 (36.4%) | 19 (33.3%) |
| 41–50 years | 17 (22.1%) | 11 (19.3%) |
| 51 + years | 14 (18.2%) | 13 (22.8%) |
| Years of experience in trials of behavioral interventions | ||
| Less than 1 year | 12 (15.6%) | 8 (14.0%) |
| 1–5 years | 27 (35.1%) | 19 (33.3%) |
| 6–10 years | 18 (23.4%) | 14 (24.6%) |
| 10 + years | 20 (26.0%) | 16 (28.1%) |
Mean importance ratings for individual items in surveys 1 and 2, ordered by survey 2 importance ratings (possible score range 1–9: 1 = lowest importance, 9 = highest importance)
| Research items | Survey 1 Mean | Survey 1 SD | Survey 1 Rank | Survey 2 Mean | Survey 2 SD | Survey 2 Rank |
|---|---|---|---|---|---|---|
| 1. Specifying intervention components | 7.81 | 1.31 | 3 | 8.33 | 0.81 | 1 |
| 2. How to disseminate behavioral trial research findings to increase implementation | 7.83 | 1.45 | 2 | 8.30 | 0.93 | 2 |
| 3. Methods for ensuring that behavioral interventions are implementable into practice and policy | 7.75 | 1.52 | 4 | 8.21 | .90 | 3 |
| 4. Use of systematic approaches to move from evidence to intervention components | 7.90 | 1.19 | 1 | 8.11 | 0.98 | 4 |
| 5. Selecting appropriate behavioral outcomes for trials | 7.66 | 1.23 | 6 | 8.04 | 0.68 | 5 |
| 6. Tailoring interventions to specific populations and contexts | 7.69 | 1.66 | 5 | 7.96 | 1.20 | 6 |
| 7. Reporting intervention and comparison group(s) intervention content | 7.64 | 1.26 | 7 | 7.93 | 0.92 | 7 |
| 8. Selection of suitable comparison group(s) within trials | 7.55 | 1.15 | 8 | 7.86 | 0.85 | 8 |
| 9. Development of novel research designs to test behavioral interventions as alternatives to, or to complement, standard randomized controlled trials (RCTs) | 7.42 | 1.46 | 11 | 7.60 | 1.25 | 9 |
| 10. Measurement of process(es) of change or mechanisms of action within interventions | 7.44 | 1.53 | 10 | 7.56 | 1.38 | 10 |
| 11. Strategies to optimize intervention fidelity (including adherence) | 7.29 | 1.43 | 13 | 7.53 | 1.09 | 11 |
| 12. Using theory in behavioral intervention development | 7.04 | 1.80 | 18 | 7.49 | 1.35 | 12 |
| 13. Standardizing methods for reporting behavioral trials | 7.25 | 1.49 | 15 | 7.47 | 1.31 | 13 |
| 14. Determining clinically significant changes in outcomes within trials | 7.27 | 1.38 | 14 | 7.44 | 1.21 | 14 |
| 15. Relationship between behavioral outcomes and clinical/other outcomes | 7.45 | 1.42 | 9 | 7.40 | 1.07 | 15 |
| 16. Standardized methods for reporting and registering behavioral trials’ protocols | 7.16 | 1.58 | 16 | 7.35 | 1.38 | 16 |
| 17. Methods to assess intervention fidelity | 7.27 | 1.64 | 14 | 7.30 | 1.38 | 17 |
| 18. Exploring impact of mode of intervention delivery | 7.36 | 1.33 | 12 | 7.19 | 1.08 | 18 |
| 19. Strategies to maximize trial participant recruitment and retention | 7.05 | 1.75 | 17 | 7.11 | 1.13 | 19 |
| 20. Investigating the impact of intervention intensity on outcomes (New item in survey 2) | N/A | N/A | N/A | 7.11 | 1.18 | 19 |
| 21. Incorporating stakeholder input in intervention development and delivery | 6.91 | 1.48 | 19 | 7.09 | 1.15 | 20 |
| 22. How to optimize stakeholder engagement in behavioral trial research | 6.86 | 1.41 | 21 | 7.07 | 0.94 | 21 |
| 23. Determining ideal timing of outcome measurement within trials | 6.88 | 1.49 | 20 | 6.95 | 1.06 | 22 |
| 24. Novel approaches and designs for piloting behavioral interventions | 6.86 | 1.88 | 21 | 6.91 | 1.47 | 23 |
| 25. Establishing criteria for progressing from trial piloting phases to full RCT | 6.91 | 1.61 | 19 | 6.89 | 1.35 | 24 |
| 26. Contamination between study arms (intervention and comparison) within trials | 6.84 | 1.57 | 22 | 6.79 | 1.24 | 25 |
| 27. Testing the impact of stakeholder engagement in behavioral trial research | 6.70 | 1.74 | 23 | 6.70 | 1.36 | 26 |
| 28. Impact on intervention delivery of characteristics (such as qualifications and training) of those delivering interventions | 6.56 | 1.57 | 24 | 6.67 | 1.26 | 27 |
| 29. Engaging stakeholders in the selection of outcomes (New item in survey 2) | N/A | N/A | N/A | 6.63 | 1.54 | 28 |
| 30. Methods for cost-effectiveness analyses for behavioral trials | 6.43 | 1.56 | 26 | 6.30 | 1.22 | 29 |
| 31. Trials’ research to test and develop behavioral theories | 6.45 | 1.70 | 25 | 6.18 | 1.34 | 30 |
| 32. Strategies for handling missing data within behavioral trials | 6.26 | 1.80 | 27 | 6.12 | 1.44 | 31 |
| 33. Developing novel statistical techniques to enhance behavioral trials | 6.03 | 1.69 | 29 | 6.04 | 1.15 | 32 |
| 34. Blinding of researchers and participants to study-arm allocation | 6.10 | 1.90 | 28 | 5.93 | 1.50 | 33 |
| 35. Sample-size calculations for pilot trials | 5.84 | 2.30 | 30 | 5.49 | 1.96 | 34 |
Number and percentage of participants who ranked each item as their top priority in surveys 1 and 2, listed in order of the items that were most often selected as the top priority in survey 2
| Research items | Survey 1 n | Survey 1 % | Survey 2 n | Survey 2 % |
|---|---|---|---|---|
| Tailoring interventions to specific populations and contexts | 7 | 9.1 | 13 | 22.8 |
| Methods for ensuring that behavioral interventions are implementable into practice and policy | 6 | 7.8 | 8 | 14.0 |
| Development of novel research designs to test behavioral interventions as alternatives to, or to complement, standard RCTs | 6 | 7.8 | 6 | 10.5 |
| Use of systematic approaches to move from evidence to intervention components | 6 | 7.8 | 5 | 8.8 |
| Determining clinically significant changes in outcomes within trials | 2 | 2.6 | 5 | 8.8 |
| Using theory in behavioral intervention development | 10 | 13.0 | 4 | 7.0 |
| How to disseminate behavioral trial research findings to increase implementation | 4 | 5.2 | 2 | 3.5 |
| Selection of suitable comparison group(s) within trials | 5 | 6.5 | 2 | 3.5 |
| Standardizing methods for reporting behavioral trials | 0 | 0 | 2 | 3.5 |
| Standardized methods for reporting and registering behavioral trials’ protocols | 3 | 3.9 | 2 | 3.5 |
| Specifying intervention components | 7 | 9.1 | 2 | 3.5 |
| Reporting intervention and comparison group(s) intervention content | 1 | 1.3 | 1 | 1.8 |
| Measurement of process(es) of change or mechanisms of action within interventions | 2 | 2.6 | 1 | 1.8 |
| Relationship between behavioral outcomes and clinical/other outcomes | 1 | 1.3 | 1 | 1.8 |
| Methods to assess intervention fidelity | 1 | 1.3 | 1 | 1.8 |
| Strategies to maximize trial participant recruitment and retention | 3 | 3.9 | 1 | 1.8 |
| Engaging stakeholders in the selection of outcomes (New item in survey 2) | N/A | N/A | 1 | 1.8 |
| Strategies to optimize intervention fidelity (including adherence) | 1 | 1.3 | 0 | 0 |
| Exploring impact of mode of intervention delivery | 1 | 1.3 | 0 | 0 |
| Investigating the impact of intervention intensity on outcomes (New item in survey 2) | N/A | N/A | 0 | 0 |
| Incorporating stakeholder input in intervention development and delivery | 0 | 0 | 0 | 0 |
| How to optimize stakeholder engagement in behavioral trial research | 1 | 1.3 | 0 | 0 |
| Determining ideal timing of outcome measurement within trials | 0 | 0 | 0 | 0 |
| Novel approaches and designs for piloting behavioral interventions | 2 | 2.6 | 0 | 0 |
| Establishing criteria for progressing from trial piloting phases to full RCT | 2 | 2.6 | 0 | 0 |
| Contamination between study arms (intervention and comparison) within trials | 0 | 0 | 0 | 0 |
| Testing the impact of stakeholder engagement in behavioral trial research | 0 | 0 | 0 | 0 |
| Impact on intervention delivery of characteristics (such as qualifications and training) of those delivering interventions | 2 | 2.6 | 0 | 0 |
| Methods for cost-effectiveness analyses for behavioral trials | 0 | 0 | 0 | 0 |
| Trials’ research to test and develop behavioral theories | 0 | 0 | 0 | 0 |
| Strategies for handling missing data within behavioral trials | 0 | 0 | 0 | 0 |
| Developing novel statistical techniques to enhance behavioral trials | 0 | 0 | 0 | 0 |
| Blinding of researchers and participants to study-arm allocation | 0 | 0 | 0 | 0 |
| Sample-size calculations for pilot trials | 0 | 0 | 0 | 0 |
| Selecting appropriate behavioral outcomes for trials | 4 | 5.2 | 0 | 0 |
Weighted ranking of participant responses to the “top-five” priorities question, ordered by the most highly ranked item in survey 2
| Item name | Weighted ranking score (survey 1) | Overall rank (survey 1) | Weighted ranking score (survey 2) | Overall rank (survey 2) |
|---|---|---|---|---|
| Tailoring interventions to specific populations and contexts | 94 | 1 | 109 | 1 |
| Methods for ensuring that behavioral interventions are implementable into practice and policy | 72 | 5 | 97 | 2 |
| Specifying intervention components | 80 | 2 | 75 | 3 |
| Use of systematic approaches to move from evidence to intervention components | 72 | 5 | 73 | 4 |
| Development of novel research designs to test behavioral interventions as alternatives to, or to complement, standard randomized controlled trials (RCTs) | 74 | 4 | 67 | 5 |
| How to disseminate behavioral trial research findings to increase implementation | 75 | 3 | 63 | 6 |
| Using theory in behavioral intervention development | 70 | 6 | 57 | 7 |
| Measurement of process(es) of change or mechanisms of action within interventions | 59 | 7 | 57 | 7 |
| Determining clinically significant changes in outcomes within trials | 48 | 9 | 43 | 8 |
| Selection of suitable comparison group(s) within trials | 49 | 8 | 33 | 9 |
| Reporting intervention and comparison group(s) intervention content | 37 | 13 | 25 | 10 |
| Selecting appropriate behavioral outcomes for trials | 44 | 12 | 20 | 11 |
| Standardizing methods for reporting behavioral trials | 11 | 23 | 18 | 12 |
| Methods to assess intervention fidelity | 32 | 14 | 17 | 13 |
| Strategies to maximize trial participant recruitment and retention | 47 | 10 | 16 | 14 |
| Strategies to optimize intervention fidelity (including adherence) | 22 | 18 | 13 | 15 |
| Engaging stakeholders in the selection of outcomes (New item in survey 2) | N/A | N/A | 13 | 15 |
| Relationship between behavioral outcomes and clinical/other outcomes | 32 | 14 | 12 | 16 |
| Standardized methods for reporting and registering behavioral trials’ protocols | 29 | 15 | 10 | 17 |
| Novel approaches and designs for piloting behavioral interventions | 45 | 11 | 9 | 18 |
| Establishing criteria for progressing from trial piloting phases to full RCT | 27 | 16 | 8 | 19 |
| Incorporating stakeholder input in intervention development and delivery | 8 | 24 | 7 | 20 |
| How to optimize stakeholder engagement in behavioral trial research | 12 | 22 | 4 | 21 |
| Impact on intervention delivery of characteristics (such as qualifications and training) of those delivering interventions | 20 | 19 | 4 | 21 |
| Methods for cost-effectiveness analyses for behavioral trials | 18 | 20 | 3 | 22 |
| Strategies for handling missing data within behavioral trials | 3 | 27 | 2 | 23 |
| Sample-size calculations for pilot trials | 7 | 25 | 0 | 24 |
| Developing novel statistical techniques to enhance behavioral trials | 3 | 27 | 0 | 24 |
| Determining ideal timing of outcome measurement within trials | 8 | 24 | 0 | 24 |
| Testing the impact of stakeholder engagement in behavioral trial research | 6 | 26 | 0 | 24 |
| Trials’ research to test and develop behavioral theories | 14 | 21 | 0 | 24 |
| Investigating the impact of intervention intensity on outcomes (New item in survey 2) | N/A | N/A | 0 | 24 |
| Exploring impact of mode of intervention delivery | 23 | 17 | 0 | 24 |
| Contamination between study arms (intervention and comparison) within trials | 14 | 21 | 0 | 24 |
| Blinding of researchers and participants to study-arm allocation | 0 | 28 | 0 | 24 |
NB. Weights were calculated as follows: first priority = 5; second priority = 4; third priority = 3; fourth priority = 2; fifth priority = 1
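The weighting scheme above (1st = 5 points down to 5th = 1 point, summed across participants) can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' analysis code; the item names in the example data are hypothetical shorthand, not the study items.

```python
from collections import Counter

# Points awarded per rank position, per the note above: 1st = 5, ..., 5th = 1
WEIGHTS = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}

def weighted_ranking(top_five_lists):
    """Sum weighted points for each item across participants' ordered top-five lists.

    top_five_lists: iterable of lists; each inner list is one participant's
    top-five items in priority order. Returns (item, score) pairs sorted by
    descending total score.
    """
    scores = Counter()
    for ranking in top_five_lists:
        for position, item in enumerate(ranking[:5], start=1):
            scores[item] += WEIGHTS[position]
    return scores.most_common()

# Hypothetical example: three participants' top-five selections
responses = [
    ["Tailoring", "Implementability", "Specifying components", "Dissemination", "Theory"],
    ["Implementability", "Tailoring", "Dissemination", "Theory", "Specifying components"],
    ["Tailoring", "Dissemination", "Implementability", "Specifying components", "Theory"],
]
print(weighted_ranking(responses))
```

With this toy data, "Tailoring" tops the list (5 + 4 + 5 = 14 points), mirroring how the scores in the table above were accumulated.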