
Make researchers revisit past publications to improve reproducibility.

Authors:  Clare Fiala; Eleftherios P Diamandis

Abstract

Scientific irreproducibility is a major issue that has recently attracted increased attention from publishers, authors, funders and other players in the scientific arena.  Published literature suggests that 50-80% of all science performed is irreproducible.  While various solutions to this problem have been proposed, none of them is quick and/or cheap.  Here, we propose one way of reducing scientific irreproducibility: asking authors to revisit their previous publications and provide a commentary after five years. We believe that this measure will alert authors not to oversell their results and will help with better planning and execution of their experiments.  We invite scientific journals to adopt this proposal immediately as a prerequisite for publishing.


Keywords:  Scientific irreproducibility; accountability; bias; improve scientific reproducibility; inflated research; reflection on past publications; revisit past publications

Year:  2017        PMID: 29167737      PMCID: PMC5676189          DOI: 10.12688/f1000research.12715.1

Source DB:  PubMed          Journal:  F1000Res        ISSN: 2046-1402


Introduction

Hardly a day goes by without a screed against perverse incentives in research. It goes like this: scientists get better rewards for announcing breakthroughs than for producing solid work. The achievements needed to win grants, jobs, and publications - combined with researchers' (often noble) ambitions - encourage them to build castles in the air. After that comes a plea for large-scale change. One recent proposal would require scientists to complete rigid, time-consuming confirmation studies before publishing a single paper [1]. We propose something that is quicker, cheaper, and simpler: require researchers to write post-publication reflections five years after their papers appear. In these self-reviews, researchers would assess how their claims held up. They should describe whether an invention or discovery was translated or commercialized, and how (or whether!) others could build on their work. The practice would provide a straightforward, non-stigmatized way to identify errors, misinterpretations, and other roadblocks. For many, these self-reviews would be a welcome opportunity for clarification, celebration, and even self-promotion. But the main advantage is that self-reviews would encourage scientists to think in advance about how they might be wrong.

Causes of irreproducibility

How might this work? Let's consider the sources of irreproducibility. We put it down to a half-dozen causes, and often several occur together in the same paper. Fraud captures the most attention, but it is rare. Self-deception, or bias, occurs aplenty. It is easier to attribute an observation to a hoped-for reason than to imagine trivial causes. Who wants to believe that a test result depends on the brand of test tube or the day of the week rather than being the earliest detectable sign of disease? Then there are unrecognized technical deficiencies: researchers who know how to operate a machine but lack enough experience to recognize artifacts and infelicities. They enter the wrong parameters or use the wrong pipette tips without realizing that they have rendered their data meaningless. Similarly, big data and data crunchers readily produce false interpretations. In 2006, one crystallographer had to retract five prominent papers after discovering a small computer glitch [2]. All of these problems are exacerbated by fragmented science. Projects are now executed in pieces in various laboratories and the results knitted together without anyone knowing exactly what happened at each site, so no one is able to bring sufficient scrutiny to bear. In each of these cases, the problems are clear with hindsight. If post-publication self-review were commonplace, some of these problems would become clear as experiments were being planned and conducted. In our own lab, we have made a habit of reflecting on our papers (though not necessarily on a strict five-year timeframe). Though several papers led to work taken up by biotech companies and other scientists, others proved much less valuable than we had hoped. Bias and technical deficiencies are the most prominent reasons behind our papers that did not ‘succeed.’ That realization has made one of us a better mentor and supervisor over time. It has also led to several publications pointing out flaws in common reagents and lab practices.
Work by the psychologists Philip Tetlock and Jennifer Lerner suggests that simple steps meant to hold people accountable for their judgment calls actually improve their judgment [3]. People become more accurate in their thinking and more objective when they evaluate evidence. Accountability in science is ad hoc. Researchers get credit for a publication well before enough time has passed for the scientific community to really know whether the paper has made a valuable contribution. No wonder that researchers bent on submitting a paper are obsessed with making the best possible case for its acceptance rather than illustrating its limitations. If researchers are forced to consider how well their paper will stand up five years hence, they will be more careful when doing the work and more critical in their analysis. About ten years ago, one of us came up with the idea of a new journal, tentatively titled Reflections in Medicine, in which authors of prominent papers could publish their post-publication thoughts. He contacted about 20 prospective authors, all of whom ignored or refused the request; we believe some did not want to revisit problematic results. With the advent of electronic publishing, it is now possible for journals (or funders or other platforms, such as PubMed) to create a space for these five-year reflections and to connect them with the original paper. Self-evaluation, based on strict criteria and instructions, can be revealing even if the authors try to inflate the impact of old work. For example, the boldest claims in a scientific paper should be annotated and addressed directly in the authors' reflections. Researchers could also be asked a series of straightforward yes/no questions about whether the results of a paper have changed clinical or scientific practice. Journals, funders, or research institutions could oblige scientists to write self-reflections; failing to do so would be a red flag.
One can imagine a system in which publications in reference lists or literature databases could be annotated as lacking self-review, and so taken less seriously. With luck, care, and enthusiasm, this simple, inexpensive step would counter perverse incentives. Instead of being stigmatized for correcting a paper, researchers would be stigmatized for failing to do so. Junior scientists would learn by example how to read papers critically and design more-rigorous experiments. The public would learn that a paper is not a definitive statement, but a single contributor to a gradually emerging picture of how nature works. In short, self-reflections could demote scientific papers to their rightful place and turn a vicious cycle into a virtuous one.

Peer review reports

This review article brings forward a recommendation for mitigating the current issues of reproducibility in academic research by encouraging authors to perform post-publication self-review of their studies after 5 years. The authors of the review article suggest that the implementation of self-review will allow academic research to be undertaken with better judgment, as accountability in their work is emphasized. Post-publication self-review provides, in principle, an excellent platform for authors to reflect on the impact and influence of their research while also providing insight into what led to possible reproducibility concerns. The review article appropriately includes limitations of the proposal, such as the adoption rate by the scientific community and implementation into research programs. I believe this is an important point that should be emphasized, as the value of self-review is highly dependent on the acceptance of the community. The authors insightfully recommend that self-review be included as a term of publication by "journals, funders, and research institutions" to promote this process.
In summary, this is a well-written review article introducing self-reflection as a means for the authors of academic research to identify sources of irreproducibility. By encouraging self-review, peers will be able to place academic findings into the appropriate place in their knowledge base. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

SYNOPSIS: This overview deals with a growing key issue related to the reproducibility of published data in journals of all stripes, ranging from ‘high’ to ‘low’ impact quality. A ‘checkpoint’ process is proposed requiring authors to write a 5-year post-publication ‘reflection’ on the previously published data. Failure to do so would place a ‘red flag’ beside references to that author’s work in subsequent databases and presumably also in subsequent manuscript reference sections quoting that work. The authors suggest a ‘simple’ yes/no checklist to determine whether the results of a paper have changed clinical or scientific practice.

CRITIQUE: This contribution is a well-written, thoughtful and concise overview of a deepening thorn in the side of published peer-reviewed data in the literature. The strength of the commentary is that it raises the issue of long-term ‘accountability’ for those who publish the data, and a ‘checkpoint’ solution is well described. The weakness of the overview is that the challenge of implementing the ‘simple questionnaire’, and the alternatives when compliance fails, are not dealt with in the text. For instance, one of the authors would have needed to reflect on over 170 publications over the past 5 years.
To determine an accurate answer about the subsequent reproducibility of the 170 findings, or about the impact of the findings on clinical or basic science practice, would represent a considerable challenge; the request would be an invitation to side-step accuracy in favour of expediency to get the annoying fly off the desk. This issue is not dealt with in the text. Further, although author accountability is of prime importance, the suggested ‘solution’ would not surface the frustration of those (e.g. other laboratories, or even new trainees in the authors’ laboratories) who have difficulties repeating the published data. Those ‘negative’ findings, which are of immense importance, almost never see the light of day. The article does not deal with that key issue, which merits attention.

SUMMARY AND RECOMMENDATION: This brief, well-written overview surfaces a key issue plaguing the scientific literature and offers a valuable process to enhance author accountability via a 5-year post-publication ‘reflection’ mechanism. That said, the comments above raise two issues that the authors may wish to deal with as they finalize their text. First, the possible outcomes of their approach (i.e. expected lack of compliance or, even worse, continued misrepresentation of the data) could be dealt with, and alternatives to poor outcomes for their process could be suggested. Further, a process that would enable the ‘reporting’ of lack of reproducibility by others in the field, with an opportunity for the authors to respond with ‘further reflections’, would enhance the process of accountability. As an example, this reviewer has had the opportunity to follow an enzyme isolation procedure exactly as described in a JBC manuscript, and the enzyme activity of the product was essentially as expected. That said, biochemical sequencing of the enzyme product (NOT done in the previous JBC manuscript) showed that the enzyme was not at all the one claimed to be yielded by the published procedure.
How would the ‘reflection’ process deal with that issue? One solution would be for the journal to be alerted to the lack of reproducibility, and for the journal to insist on accountability from EACH AND EVERY ONE of those who have their names on the originally published article. The authors are encouraged to suggest an expanded ‘accountability’ process that would enhance the ‘reflection’ mechanism suggested and surface the lack of reproducibility by others of findings of published data. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, I have significant reservations, as outlined above.

The long-term relevance of our published works is a point of both secret pride and disappointment for many continuing in careers with active engagement in research. This opinion article suggests that compelling researchers to reflectively evaluate their past publications and to produce a published commentary on these works (after 5 years) will improve the reliability of published work. In my opinion, this is an idea that may have reached its time and in some ways is long overdue. While it remains to be seen whether this strategy improves the quality of published work (if implemented), the user-friendly and low-cost proposal made by these authors at least represents a feasible approach toward improving the quality of publications by causing researchers to think twice about what they submit for publication, knowing that publication involves a long-term commitment and accountability to the ideas presented, to be revisited by reflective review 5 years later. Certainly, this is a well-written and interesting concept that in my opinion merits consideration by journal editors and publishers, and the support of researchers committed to research excellence. I have read this submission.
I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Scientific irreproducibility is indeed a serious problem nowadays. In the current opinion article, the authors state the reasons that govern irreproducibility and, for the first time, provide a potential method to address the problem. Importantly, their suggested method is based on self-evaluation by the authors of a published article after a 5-year period. Indeed, this “self-review” process is simple, quick and incurs no additional cost. The article is very well written for a broad audience. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
References:  3 in total

1.  No publication without confirmation.

Authors:  Jeffrey S Mogil; Malcolm R Macleod
Journal:  Nature       Date:  2017-02-22       Impact factor: 49.962

2.  Scientific publishing. A scientist's nightmare: software problem leads to five retractions.

Authors:  Greg Miller
Journal:  Science       Date:  2006-12-22       Impact factor: 47.728

3.  Accounting for the effects of accountability.

Authors:  J S Lerner; P E Tetlock
Journal:  Psychol Bull       Date:  1999-03       Impact factor: 17.737

  2 in total

Review 1.  The democratization of scientific publishing.

Authors:  Clare Fiala; Eleftherios P Diamandis
Journal:  BMC Med       Date:  2019-01-18       Impact factor: 8.775

Review 2.  Uncovering the Depths of the Human Proteome: Antibody-based Technologies for Ultrasensitive Multiplexed Protein Detection and Quantification.

Authors:  Annie H Ren; Eleftherios P Diamandis; Vathany Kulasingam
Journal:  Mol Cell Proteomics       Date:  2021-09-28       Impact factor: 7.381

