
Response to "A false dichotomy: RCTs and their contributions to evidence-based public health".

James D Shelton

Abstract


Year:  2015        PMID: 25745129      PMCID: PMC4356284          DOI: 10.9745/GHSP-D-15-00045

Source DB:  PubMed          Journal:  Glob Health Sci Pract        ISSN: 2169-575X


I appreciate the thoughtful response by Hatt et al. to my editorial on evidence-based public health, and I am happy we agree on several points:

- There is some definite value to randomized trials.
- Because public health operates in complex program environments, it is generally necessary to lay out a valid "theory of change," or causal pathway, to understand and evaluate the intended and actual program effects.
- Mixed-method approaches are essential to understand what is going on in such complex arenas.
- "Does it work?" is always affected by context.

The paradigm Hatt et al. put forward asserts that one strength of randomized trials is to answer definitively, "Does it work?" But for the kinds of complex programs public health must muster, there is generally no absolute answer to that question, or to its companion question, "How well does it work?" Rather, the answer depends greatly on how and in what situation "it" is done. That is often true even for fairly consistent biologic phenomena, such as the wide variation in polio vaccine efficacy that I cited in my editorial. Perhaps the most recent example is the use of antenatal corticosteroids to prevent newborn mortality, which is effective in developed countries with sophisticated health resources but was found to actually increase mortality in certain resource-constrained environments.

Yes, randomized trials and other research methods can answer, "Can it work?" and are often fairly generalizable for discrete biologic questions. But for programming at scale, that takes us only part of the way. How it might be made to work practicably at scale is the key question for public health. A good example is the paper by Curry et al. in this issue of GHSP, describing the many programmatic elements implemented to provide contraception very successfully, and at fairly large scale, in crisis-affected situations.
Had the program failed to assure an effective supply chain, provide competency-based training, ensure good supervision, and mobilize communities, would the results have been so successful? Highly doubtful. And the richness of their evidence is enhanced by their description of the effect of variation in country context; for example, policy differences on the availability of contraceptive implants and the impact of poor compensation of health workers in certain places made big differences.

Ironically, some of the randomized studies Hatt et al. cite neither completely answer the question "Does it work?" nor provide enough understanding, through mixed methods, of what is going on. The study on pay for performance in Rwanda found an increase in such outcomes as institutional deliveries and children's preventive health visits but no increase in completion of 4 prenatal care visits or of full immunization schedules. And other than some "anecdotal evidence," we are left wondering about the crucial question of how the incentive system actually may have influenced the behavior of providers and clients.

The deworming study in Kenya found a decrease in worms and an increase in school participation and attendance. But there was no discernible impact on anemia (a commonly hypothesized mechanism through which deworming might improve school attendance), and there was no impact on actual school achievement. Also, the study was carried out in a situation with high worm infestation. Might the results be different in areas with less infestation? And we have no direct information from families themselves on what may have influenced school attendance. Community deworming is actually a prime example of the very kind of variability that undermines the generalizability of randomized trials.
Notably, a 2012 Cochrane review of numerous studies on community deworming concluded:

"For haemoglobin and cognition, community deworming seems to have little or no effect, and the evidence in relation to school attendance and school performance is generally poor, with no obvious or consistent effect. Our interpretation of these data is that it is probably [emphasis added] misleading to justify contemporary deworming programmes based on evidence of consistent benefit on nutrition, haemoglobin, school attendance or school performance as there is simply insufficient reliable information to know whether this is so."

Most studies failed to show impact, inescapably because situations varied. So actually, the issue of whether an intervention "is effective on a larger scale" has no single answer. And the recent decision in India to extend mass deworming to large populations has been justifiably roundly criticized.

While I really do appreciate randomized studies, perhaps my biggest concern is the "hierarchy" whereby some colleagues place controlled trials at the top of a pyramid as manifestly the best evidence. For understanding public health programming, I see that as quite misguided. Randomized studies help us to understand some things, but they are only one piece of the picture in "triangulating" evidence for programming. And evidence from real-world programming is especially key.

Building on the core randomized component by adding other methodologies to the conduct of those randomized trials makes them much more useful. The large Mexico PROGRESA conditional cash transfer program focusing on health, education, and nutrition, cited by Hatt et al., is to some extent an example of such a mixed-method approach and was conducted at large scale. My fear, however, is that the desire to control the research environment, and the resulting narrow focus and often artificiality of trials, limits understanding of the potential programmatic practicability.
It also may limit the ability to use additional methodologies to help answer whether the intervention might work practicably at scale. Thus, to answer well the questions of what, how, and why an intervention may have worked, we need many methodologies. Ultimately, some of the "best evidence," or gold standard, comes from programs already operating successfully at scale, as illustrated by the paper by Curry et al. For public health programming, there is no absolute methodological hierarchy. We need to respect and use all legitimate methodologies.
References (8 in total)

1.  Effect on maternal and child health services in Rwanda of payment to primary health-care providers for performance: an impact evaluation.

Authors:  Paulin Basinga; Paul J Gertler; Agnes Binagwaho; Agnes L B Soucat; Jennifer Sturdy; Christel M J Vermeersch
Journal:  Lancet       Date:  2011-04-23

2.  Deworming programme in India sparks controversy.

Authors:  Ganapati Mudur
Journal:  BMJ       Date:  2015-02-11

3.  Do Conditional Cash Transfers Improve Child Health? Evidence from PROGRESA’s Control Randomized Experiment.

Authors:  Paul Gertler
Journal:  Am Econ Rev       Date:  2004

4.  Deworming drugs for soil-transmitted intestinal worms in children: effects on nutritional indicators, haemoglobin and school performance (Review).

Authors:  David C Taylor-Robinson; Nicola Maayan; Karla Soares-Weiser; Sarah Donegan; Paul Garner
Journal:  Cochrane Database Syst Rev       Date:  2012-11-14

5.  Delivering high-quality family planning services in crisis-affected settings I: program implementation.

Authors:  Dora Ward Curry; Jesse Rattan; Jean Jose Nzau; Kamlesh Giri
Journal:  Glob Health Sci Pract       Date:  2015-02-04

6.  Evidence-based public health: not only whether it works, but how it can be made to work practicably at scale.

Authors:  James D Shelton
Journal:  Glob Health Sci Pract       Date:  2014-08-31

7.  Use of antenatal corticosteroids and tocolytic drugs in preterm births in 29 countries: an analysis of the WHO Multicountry Survey on Maternal and Newborn Health.

Authors:  Joshua P Vogel; João Paulo Souza; A Metin Gülmezoglu; Rintaro Mori; Pisake Lumbiganon; Zahida Qureshi; Guillermo Carroli; Malinee Laopaiboon; Bukola Fawole; Togoobaatar Ganchimeg; Jun Zhang; Maria Regina Torloni; Meghan Bohren; Marleen Temmerman
Journal:  Lancet       Date:  2014-08-12

8.  A false dichotomy: RCTs and their contributions to evidence-based public health.

Authors:  Laurel E Hatt; Minki Chatterji; Leslie Miles; Alison B Comfort; Benjamin W Bellows; Francis O Okello
Journal:  Glob Health Sci Pract       Date:  2015-03-05
