| Literature DB >> 35690782 |
Robert Schulz, Adrian Barnett, René Bernard, Nicholas J L Brown, Jennifer A Byrne, Peter Eckmann, Małgorzata A Gazda, Halil Kilicoglu, Eric M Prager, Maia Salholz-Hillel, Gerben Ter Riet, Timothy Vines, Colby J Vorland, Han Zhuang, Anita Bandrowski, Tracey L Weissgerber.
Abstract
The rising rate of preprints and publications, combined with persistent inadequate reporting practices and problems with study design and execution, has strained the traditional peer review system. Automated screening tools could potentially enhance peer review by helping authors, journal editors, and reviewers to identify beneficial practices and common problems in preprints or submitted manuscripts. Tools can screen many papers quickly, and may be particularly helpful in assessing compliance with journal policies and with straightforward items in reporting guidelines. However, existing tools cannot understand or interpret a paper in the context of the scientific literature. Tools cannot yet determine whether the methods used are suitable to answer the research question, or whether the data support the authors' conclusions. Editors and peer reviewers are essential for assessing journal fit and the overall quality of a paper, including the experimental design, the soundness of the study's conclusions, and its potential impact and innovation. Automated screening tools cannot replace peer review, but may aid authors, reviewers, and editors in improving scientific papers. Strategies for responsible use of automated tools in peer review may include setting performance criteria for tools, transparently reporting tool performance and use, and training users to interpret reports.
Keywords: Automated screening; Peer review; Reproducibility; Rigor; Transparency
Year: 2022 PMID: 35690782 PMCID: PMC9188010 DOI: 10.1186/s13104-022-06080-6
Source DB: PubMed Journal: BMC Res Notes ISSN: 1756-0500
Examples of automated tools used to screen preprints, submitted papers or publications
| Tool | Screening topics and rationale |
|---|---|
| Sciscore | Many factors, including: RRIDs (unique persistent identifiers that allow readers to determine exactly what resource, e.g., cell line, antibody, model organism, or software, was used); ethics and consent statements (required for legal compliance); blinding and randomization (failure to blind or randomize experiments is associated with overestimated effect sizes); sample size calculations (indicate whether the study was designed and powered to detect an effect of an expected size); sex/gender (effects may differ according to sex or gender) |
| ODDPub | Open data, open code: Open data and open code make it easier to reproduce analyses, identify potential errors, and re-use data |
| Limitation-recognizer | Author-acknowledged limitations: Every study has limitations. Acknowledging limitations provides essential context that allows readers to interpret the study results |
| Barzooka | Bar graphs of continuous data: Many datasets can lead to the same bar graph, and the actual data may suggest different conclusions than the summary statistics alone. These graphs should be replaced with dot plots, box plots, or violin plots |
| Jetfighter | Rainbow color maps: Rainbow color maps are not colorblind accessible, and create visual artifacts for readers with normal color vision |
| Trial registration number screener | Clinical trial registration numbers: Clinical trials must be registered in an International Clinical Trials Registry Platform registry, and this number must be reported in publications. This makes it easier to detect practices like outcome switching |
| Statcheck | Misreported p-values: p-values that do not match the reported test statistic and degrees of freedom are common and can sometimes alter study conclusions |
| Scite reference check | Citation of retracted papers, or papers with corrections or errata: Checking cited papers for editorial notices can help to identify potentially problematic citations |
| Seek and blastn (semi-automated) | Confirms that nucleotide sequences were correctly identified: Incorrect identification or use of nucleotide sequences makes it difficult to interpret or reproduce study results. Results from this tool require confirmation by an expert reviewer |
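Several of the tools above work by pattern-matching manuscript text against known identifier formats. The following is a minimal illustrative sketch of that approach, not the implementation of any tool listed: it checks for a ClinicalTrials.gov registration number (NCT followed by eight digits) and an RRID citation. The function name `screen` and both regular expressions are assumptions for this example; production screeners use far richer methods.

```python
import re

# Hypothetical patterns illustrating identifier-based screening.
# ClinicalTrials.gov IDs look like "NCT" followed by 8 digits;
# RRIDs are reported as "RRID:" plus a registry-specific identifier.
TRIAL_ID = re.compile(r"\bNCT\d{8}\b")
RRID = re.compile(r"\bRRID:\s?[A-Za-z]+[_:][A-Za-z0-9_:-]+")

def screen(text: str) -> dict:
    """Return which reporting items were detected in the manuscript text."""
    return {
        "trial_registration": bool(TRIAL_ID.search(text)),
        "rrid": bool(RRID.search(text)),
    }

report = screen(
    "The trial was registered as NCT01234567. "
    "Antibodies were validated (RRID:AB_2313773)."
)
print(report)  # {'trial_registration': True, 'rrid': True}
```

A check like this only verifies that an identifier is present and well-formed; confirming that the identifier resolves to the correct registry entry or resource still requires a lookup, and judging whether it was used appropriately remains a task for human reviewers.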