Till Bruckner, Susanne Wieschowski, Miriam Heider, Susanne Deutsch, Natascha Drude, Ulf Tölch, André Bleich, René Tolba, Daniel Strech.
Abstract
BACKGROUND: Existing evidence indicates that a significant amount of biomedical research involving animals remains unpublished. At the same time, we lack standards for measuring the extent of results reporting in animal research. Publication rates may vary significantly depending on the level of measurement, such as an entire animal study, individual experiments within a study, or the number of animals used.
Year: 2022 PMID: 35960759 PMCID: PMC9374215 DOI: 10.1371/journal.pone.0271976
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.752
Incomplete publication of results: Three levels of analysis.
| Level | Domain | Unit | Incomplete publication example |
|---|---|---|---|
| Study | CT | Entire clinical trial | Trial outcomes not reported at all |
| Experiment | CT | Intervention arm or sub-group | Certain arms and/or sub-groups not reported |
| Subjects | CT | Individual patient | Non-reporting of dropouts, outliers, etc. |
Note: CT = human clinical trial, AS = biomedical animal study.
Selected quotes from interviewees.
| Theme | R | Q | Quote |
|---|---|---|---|
| MEASUREMENT OF INCOMPLETE REPORTING | | | |
| Funding not secured | R07 | Q1 | For example with [German public funder] DFG you have to submit a [pre-] approved protocol […] |
| Due to staffing | R12 | Q2 | Especially in areas where clinicians do animal studies, they have a one year sabbatical or time window in which they have to do their research. They often do not manage that, or they manage to do the experiments but then they are back in the clinic and it doesn’t get published. |
| For scientific reasons | R04 | Q3 | The preparatory work–cell cultures, in vitro, whatever can be done without animals–is done in the time while you write the […] |
| Maintain flexibility | R09 | Q7 | You don’t know what exactly you need at the point in time at which you write the […] |
| Experiments not performed | R09 | Q10 | It can end up with me writing a group for day 2, day 3, day 4 into my […] |
| Amendment applications [Änderungsanträge] | R08 | Q13 | You would have to go through every […] |
| Experiment terminated early | R13 | Q14 | You also always want to work in a way that is protective of animal welfare. (…) Just because I have approval for 100 animals does not mean that, if I notice after several experiments that it is futile, that I continue just because the protocol says so. |
| Reductions not documented | R04 | Q12 | Even after […] |
| Crossover of publications | R15 | Q19 | In a clinical trial you probably expect one trial per publication. (…) Within animal research you could have ten different experiments in the same publication. And then trying to track down which methods actually correspond to which experiments and which result is really, really hard. |
| Saved for future publication | R03 | Q21 | If I had published this result in a low impact factor journal straight after my doctorate, just to also publish the negative result, as is desirable, I wouldn’t have had the opportunity two years later to include that as a control and thereby upgrade the [new] study so that I can publish it better [in a more highly ranked journal]. |
| Hard to publish high impact | R02 | Q26 | If it’s a single, negative result, and you have another 20 papers to publish with positive results, then it is very likely which you will tackle first, because they promise a high impact and so on. (…) Do I aim at one big publication, to become visible, or do I–in inverted commas–“waste” time for the publication of negative results that possibly benefit other people in decades down the line? You can’t burden young ECRs with that. But at the end of the day, it’s them who have to generate the data and do the groundwork. |
| Fixation on p-values | R17 | Q23 | Because there is no asterisk next to it, it doesn’t get published. But the observed effect can nonetheless be relevant. |
| Hard to publish replications | R05 | Q28 | In immunology and vaccine development (…) it’s a race against time. (…) But when I’ve found something similar or identical, it gets difficult to publish it again because it is no longer new. Or you can publish it again, but no longer in a highly ranked or equally ranked journal. |
| When can null results be published ‘well’ | R02 | Q29 | Regarding the publication of negative data, it depends on how confrontational or spectacular a result is. If for example a certain mechanism was postulated for decades and now it is shown in an animal study that that is not the case, then you can surely publish such negative data very prominently. |
| Selective reporting | R08 | Q33 | I think that’s the most frequent kind of fault, that animals are excluded. (…) I don’t stand behind every person using a pipette. But I believe that of course things like that happen. People are under pressure, they need a job, else they don’t have anything to do–there’s no need to deceive ourselves. And I believe that that can also lead to wrong […] |
| Depends on context | R07 | Q52 | There are things that fall between the cracks where I would really absolutely say: It’s like that, I see no problem at all. And with other things: It gives me stomach aches […] |
| Data ruined | R16 | Q35 | There can be unforeseen events and those are the things where I then say, I don’t want them published as negative data. During the study the air conditioning failed… so there were unintentional influences that the best planning and competence could not prevent. (…) So, erroneous […] |
| Dropout pre-measurement | R11 | Q37 | You only realise it afterwards, once you’ve done the experiment, euthanised and autopsied them, looked at the bio-distribution, and then see: “What we applied didn’t reach its intended destination.” (…) So basically we have no result, because the [research] question had not been addressed. |
| Experiment terminated early after tiny pilot group | R13 | Q39 | If we have a […] |
| Commercial influence | R14 | Q41 | If you work completely within contract research [outside a university] it’s different. There it’s: “I bought that and the study didn’t show that result, so it gets put on ice and possibly the study gets done again at a different site.” (…) Then the study is done three, four, five times with a different CRO until you get the result that you want. (…) And when it isn’t expedient, then it gets swept beneath the carpet rather than somehow being brought into connection with a product. |
| May suggest study was flawed | R01 | Q43 | Was that a thought-through hypothesis, or was that from the outset a hypothesis that cannot work at all? |
| Drop-out rates can reflect on competence | R15 | Q45 | People don’t want to say that their surgery is only effective 50 percent of the time (…) They want to give this impression that everything works all the time, but science is messy. |
| Stigma and external pressure | R06 | Q49 | I think we need to get away from this, ‘I am being controlled or even punished’. Sometimes it’s scary. If you try to follow the rules, you actually are more afraid that someone could point the finger at you, because there is suddenly something to control. But if I don’t document anything, I run less danger. |
Note: R = respondent number (interview partner); Q = number of quote as cited in the paper.