Mariska M G Leeflang, Epidemiology and Data Science, Amsterdam Public Health, Amsterdam University Medical Centers, University of Amsterdam. Electronic address: m.m.leeflang@amsterdamumc.nl.
Trust in science is crucial, and concerns about the credibility of scientists may undermine evidence-based policy making. During the COVID-19 pandemic, scientific credibility was challenged by a lack of scientific evidence to back up policy decisions at the start of the pandemic, followed by a huge and overwhelming increase in scientific publications, often of poor quality [1]. However, concerns about the trust that the general public has in science are not new. In 1999, researchers stated that the scientific community had a credibility problem because of scientific involvement in the genetic modification of crops [2]. More recently, similar discussions have arisen after pre-pandemic declines in vaccination rates and after the so-called reproducibility crisis was called out [3, 4, 5].

Trust should be earned. If we scientists worry about trust from society, then we must realize that we are responsible for the research we do, the claims we make and the reports we publish. We should take responsibility for asking relevant scientific questions, applying appropriate designs and methods, reporting in a usable and unbiased way, and presenting the question, methods and results in accessible manuscripts [6].

What constitutes a relevant research question may be a topic for debate. Where some researchers argue that research should be curiosity driven, societal stakeholders and funders would rather see research questions driven by their potential for societal impact. Clinical research involving human beings carries an ethical obligation not to burden research participants only because the researcher is curious. Relevant clinical research questions are therefore preferably derived in collaboration with patients and stakeholders [7,8].
The expertise of a researcher should lie in his or her ability to rephrase questions from stakeholders into answerable and researchable scientific questions, for example by using the Participant-Intervention-Comparison-Outcome (PICO) framework [9].

Research questions drive the methodology used, but they also drive the interpretation of the results. For example, a research question may aim to establish a causal link between an exposure or intervention and an outcome. In that case, one would ideally use an experimental design, such as a randomized controlled trial. If that is not possible, an observational study would require attention to potential confounders and mediators [10]. The statistical method used would probably be a multivariable model, built to adjust for confounding. However, a similar multivariable model may be used to accurately predict a certain outcome, which requires a different interpretation [11].

Reporting the study in an unbiased and usable way can be achieved through the use of reporting guidelines. More than 500 reporting guidelines can be found through the website of the Equator network (https://www.equator-network.org/). A reporting guideline exists for almost every specific research design, but some principles hold for each and every design. For example, the reporting of methods and results should be complete, and not limited to the positive results only. Also, all details of each step in the research design should be reported. That means that the origin of the study subjects (humans, mice, cells, study reports, etc.) should be reported, including selection criteria. All interventions and measurements done on these subjects should be reported in such a way that a colleague will be able to replicate them. All statistical analyses, including the software packages and the models or tests used, should be reported in such a way that a colleague can replicate them.
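The distinction between a causal question and a prediction question can be made concrete with a small simulation. The sketch below (a hypothetical illustration using numpy and simulated data, not an analysis from any of the cited studies) fits the same kind of multivariable linear model twice: without the confounder, the exposure coefficient is biased as an estimate of the causal effect; with the confounder included, the true effect is recovered, and the very same fitted model can then also be used purely to predict the outcome.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Simulated data: the confounder (e.g. age) affects both exposure and outcome.
confounder = rng.normal(size=n)
exposure = confounder + rng.normal(size=n)
outcome = 2.0 * exposure + 3.0 * confounder + rng.normal(size=n)  # true effect = 2.0

# Unadjusted model: outcome ~ exposure. Biased for the causal effect.
X_unadj = np.column_stack([np.ones(n), exposure])
beta_unadj, *_ = np.linalg.lstsq(X_unadj, outcome, rcond=None)

# Multivariable model: outcome ~ exposure + confounder. Adjusts for confounding.
X_adj = np.column_stack([np.ones(n), exposure, confounder])
beta_adj, *_ = np.linalg.lstsq(X_adj, outcome, rcond=None)

print(f"unadjusted exposure coefficient: {beta_unadj[1]:.2f}")  # approx. 3.5 (biased)
print(f"adjusted exposure coefficient:   {beta_adj[1]:.2f}")    # approx. 2.0 (true effect)

# The same adjusted model, used for prediction rather than causal inference:
new_subject = np.array([1.0, 0.5, 1.2])  # intercept, exposure, confounder values
print(f"predicted outcome: {new_subject @ beta_adj:.2f}")
```

Whether the coefficients are read as adjusted effect estimates or merely as prediction weights depends entirely on the research question, which is why the interpretation differs even when the model looks identical.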
Finally, scientific reports should be free of ‘spin’: overoptimistic reporting of the results and conclusions, or making inferences that cannot be made.

These reports may be published in an open access journal, but we should realize that being open and transparent is more than just publishing in open access journals. Studies can be registered before they start, and their protocols can be published, including all planned analyses and outcomes. Most scientific journals require clinical trials to be registered before they start, which enables a comparison between what was planned and what was actually done. Although some authors claim that preregistration would not be desirable for exploratory research, most observational studies can be planned beforehand. Another way to make research publicly available is the publication of so-called preprints: versions of a scientific manuscript made public before it is sent to a journal. Preprints have usually not been peer-reviewed, although the idea behind most preprint servers is that readers can comment on, and thus review, the manuscripts before publication.

Scientific credibility requires responsible research. This involves research integrity, transparency and reproducibility. It also requires using the right methods for the right questions. In this theme issue, we have invited three author teams with specific expertise in methodology to address a specific research type in the area of clinical microbiology and infectious diseases. They address state-of-the-art methodology for designing primary studies on antimicrobial resistance, for designing systematic reviews of prognostic models, and for developing evidence-based guidelines on diagnostic questions [12, 13, 14].

First, Van Leth and Schulz explain the pitfalls and advantages of population-based surveys for antimicrobial resistance [12]. These surveys are used to determine the prevalence of antimicrobial resistance in a country or region.
They provide a more realistic and clinically relevant estimate of antimicrobial resistance than laboratory-based studies. Although population and environment surveys may be more challenging than laboratory-based studies, they do allow for a One Health approach, combining veterinary, human and environmental data.

Second, Damen and colleagues provide guidance for conducting a systematic review of prognostic modelling studies, including guidance for data extraction, quality assessment and data analysis [13]. They start by explaining that ‘prognosis studies’ may imply a variety of designs and outcomes, and then focus on prognostic model studies, which combine “multiple prognostic factors in one multivariable prognostic model aimed at making predictions for occurrence of a certain outcome”. The authors also explain how the implications and usefulness of these reviews depend on complete reporting of the primary studies.

Third, El Mikati and colleagues explain how the process of developing trustworthy guidelines should be systematic and transparent, and should be supported by all stakeholders [14]. They present a case example from four diagnostic COVID-19 guidelines for the Infectious Diseases Society of America, developed at a time when evidence was scarce. For these guidelines, a rapid and living systematic review methodology was adopted, and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach was followed to ensure transparency and structure.

Enjoy reading these narrative reviews and use them as guidance whenever relevant. As editors, we try to ensure transparent and unbiased reporting, and we therefore encourage authors to follow the reporting guidelines as well. Together we are the scientific community. So let us take our responsibility and restore trust in science: by asking relevant questions, applying appropriate methods, and reporting in an unbiased and transparent way.
Conflict of interest
Dr. Leeflang is a methodologist, involved both in Cochrane and in the GRADE working group and has previously collaborated with the authors of the narrative reviews.