Rebecca C. H. Brown, Mícheál de Barra, Brian D. Earp
Abstract
This paper argues that there exists a collective epistemic state of 'Broad Medical Uncertainty' (BMU) regarding the effectiveness of many medical interventions. We outline the features of BMU, and describe some of the main contributing factors. These include flaws in medical research methodologies, bias in publication practices, financial and other conflicts of interest, and features of how evidence is translated into practice. These result in a significant degree of uncertainty regarding the effectiveness of many medical treatments and unduly optimistic beliefs about the benefit/harm profiles of such treatments. We argue for an ethical presumption in favour of openness regarding BMU as part of a 'Corrective Response'. We then consider some objections to this position (the 'Anti-Corrective Response'), including concerns that public honesty about flaws in medical research could undermine trust in healthcare institutions. We suggest that, as it stands, the Anti-Corrective Response is unconvincing.
Keywords: Ethics; Evidence based medicine; Medicine; Science communication; Trust
Year: 2022 PMID: 35431349 PMCID: PMC8994926 DOI: 10.1007/s11229-022-03666-2
Source DB: PubMed Journal: Synthese ISSN: 0039-7857 Impact factor: 2.908
Box 1. Description of some of the methodological factors that contribute to the overestimation of the effectiveness of medical interventions

| Factor | Description |
| --- | --- |
| Selection bias/enrichment strategies | Selection bias results from salient differences between the control and experimental groups in a trial other than the intervention itself, and may arise from 'enrichment strategies'—the intentional inclusion or exclusion of participants in order to influence the results. Randomisation is intended to mitigate selection bias, but is not always used appropriately (Pildal et al.) |
| Surrogate end points | Surrogate end points may be used as proxies to estimate how effective an intervention is (e.g. the use of HbA1c as a measure of diabetic control; tumour size as a surrogate for cancer survival). Although improvement in surrogate outcomes is often the sole basis for treatment approval and implementation, these surrogates often fail to reliably track the outcomes that we ultimately care about, like survival (Kemp & Prasad) |
| Poorly designed instruments | Measures have been developed to assess the effects of interventions on things that we care about—e.g. to see what effect antidepressants have on people with depression. But these measures may distort the picture of an intervention's effect. Stegenga illustrates this with the Hamilton Depression Rating Scale, which scores people on the severity of their depression: according to this scale, an intervention that reduces insomnia but has no effect on the intensity of the depression someone is feeling may still be recognised as an effective treatment for depression (Stegenga) |
| P-hacking | Flexibility in the choice of statistical analyses, participant inclusion criteria, etc. can be exploited to generate statistically significant findings. If a dataset can be plausibly analysed in numerous different ways, researchers sometimes select the specific analysis that spuriously generates a statistically significant result |
| Outcome switching | Researchers specify in a trial protocol that they will measure a particular outcome to judge effectiveness, but after results have been collected and the trial is written up for publication, they report a different outcome instead or in addition—typically one that makes the intervention appear more effective than the originally specified outcome does (Altman et al.) |
| Passive harm detection | Most data on the harms of interventions come from passive surveillance and observational studies (contrast this with the careful design of trials to detect even a small benefit of an intervention). This means that many harms go un(der)reported and are sometimes ignored |
| Publication bias | The results of about half of all trials have never been reported (Song et al., 2010; Ross et al., 2009). 33% of trials on the EU clinical trials register and 29% of trials on ClinicalTrials.gov contravene requirements to report results within a year (Goldacre et al., 2018; DeVito et al., 2020). Positive results (those showing an intervention to be effective) are more likely to be published than 'negative' results (Song et al.) |
| Samples are not representative | Trials are often performed on a set of people who differ from the people who end up receiving the treatment. One major difference is the prevalence of multi-morbidity (Fortin) |