| Literature DB >> 31120888 |
Dean A Fergusson1,2, Marc T Avey1,2, Carly C Barron3,4, Mathew Bocock2, Kristen E Biefer5, Sylvain Boet1,6,7, Stephane L Bourque5, Isidora Conic1, Kai Chen8, Yuan Yi Dong2, Grace M Fox1, Ronald B George9, Neil M Goldenberg10, Ferrante S Gragasin5, Prathiba Harsha3, Patrick J Hong2, Tyler E James4, Sarah M Larrigan1,2,6, Jenna L MacNeil1, Courtney A Manuel11, Sarah Maximos12, David Mazer10, Rohan Mittal5, Ryan McGinn2, Long H Nguyen2, Abhilasha Patel2, Philippe Richebé12, Tarit K Saha8, Benjamin E Steinberg10, Sonja D Sampson11, Duncan J Stewart13,14, Summer Syed3, Kimberly Vella9, Neil L Wesch1, Manoj M Lalu1,2,6,13.
Abstract
Poor reporting quality may contribute to irreproducibility of results and failed 'bench-to-bedside' translation. Consequently, guidelines have been developed to improve the complete and transparent reporting of in vivo preclinical studies. To examine the impact of such guidelines on core methodological and analytical reporting items in the preclinical anesthesiology literature, we sampled a cohort of studies. Preclinical in vivo studies published in Anesthesiology, Anesthesia & Analgesia, Anaesthesia, and the British Journal of Anaesthesia (2008-2009, 2014-2016) were identified. Data were extracted independently and in duplicate. Reporting completeness was assessed using the National Institutes of Health Principles and Guidelines for Reporting Preclinical Research. Risk ratios were used for comparative analyses. Of 7615 screened articles, 604 met our inclusion criteria and included experiments reporting on 52 490 animals. The most common topic of investigation was pain and analgesia (30%), rodents were most frequently used (77%), and studies were most commonly conducted in the United States (36%). Use of preclinical reporting guidelines was listed in 10% of applicable articles. A minority of studies fully reported on replicates (0.3%), randomization (10%), blinding (12%), sample-size estimation (3%), and inclusion/exclusion criteria (5%). Statistics were well reported (81%). Comparative analysis demonstrated few differences in reporting rigor between journals, including those that endorsed reporting guidelines. Principal items of study design were infrequently reported, with few differences between journals. Methods to improve implementation of, and adherence to, community-based reporting guidelines may be necessary to increase transparent and consistent reporting in the preclinical anesthesiology literature.
Year: 2019 PMID: 31120888 PMCID: PMC6532843 DOI: 10.1371/journal.pone.0215221
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1Constructing our reporting checklist.
The National Institutes of Health preclinical reporting guidelines (NIH-PRG) consist of seven domains, each containing a multi-faceted recommendation. The recommendation for the blinding domain was deconstructed, and two unidimensional items were identified.
Fig 2Preferred reporting items for systematic reviews and meta-analyses (PRISMA [24]) study selection diagram.
Fig 3Distribution of publications.
World map depicting the number of articles published per country based on the corresponding author’s residency at the time of publication (image created using Tableau Software; Seattle, Washington, United States).
Fig 4Reporting assessment results.
Completeness of reporting across all included studies (N = 604) against the deconstructed NIH-PRG. Data are displayed by item in each domain as a frequency (n) and as a percentage (n/N), where black and white correspond to an item being reported or not reported, respectively.