Transparency assessment of COVID-19 models.

Mohammad S Jalali, Catherine DiGennaro, Devi Sridhar.

Abstract

Year:  2020        PMID: 33125915      PMCID: PMC7833180          DOI: 10.1016/S2214-109X(20)30447-2

Source DB:  PubMed          Journal:  Lancet Glob Health        ISSN: 2214-109X            Impact factor:   26.763


The COVID-19 pandemic has strained societal structures and created a global crisis. Scientific models have a crucial role in mitigating harm from the pandemic, by estimating the spread of outbreaks of the virus and analysing the effects of public health policies. The context-sensitive and time-sensitive estimates provided by COVID-19 models have real population health impacts and are of great importance. However, these models must be fully transparent before their insights are translated into policy. Transparency is a cornerstone of scientific methodology, and efforts to improve the transparency and reproducibility of research have been increasing over the past decade. Researchers have called for complete transparency of COVID-19 models. An absence of transparency in the design, development, and analysis of these models reduces trust in their timely messages and limits their reproducibility, impeding scientists from verifying the findings and improving a model's performance. Many modellers have already shared the details of their models openly; however, the overall transparency of COVID-19 models remains unknown.

We assessed whether COVID-19 modellers adhere to best practices in reporting and documentation; we did not evaluate whether a model's projections are correct. To systematically evaluate the transparency of COVID-19 models, we reviewed a sample of models that have earned global attention and been referenced in governmental public health efforts. We first collected models that included a methods report from the US Centers for Disease Control and Prevention's compilation, then identified the most-referenced models through Google Scholar and the PlumX News Mentions metric. This search took place on June 13, 2020, and identified 29 models. Because of the urgency created by the pandemic, preprints and project websites made available in advance of research publication have had an essential role during the crisis. We therefore included models from these sources (n=14), in addition to models in peer-reviewed publications (n=15). We assessed these models against 27 binary criteria to evaluate the transparency of their reports. The criteria were adapted from several transparency checklists [5, 6, 7] and cover two main themes: first, model-specific items, including but not limited to discussion of assumptions, parameterisation, code, and sensitivity analyses; and second, general research items, such as disclosure of research limitations, funding, and potential conflicts of interest. Two researchers reviewed the full text and appendix of each modelling report, and a third reviewer helped resolve any discrepancies between the first two reviews.

Each of the 27 criteria was satisfied by an average of 22 (76%) of the 29 models in our sample. Eight criteria were satisfied by more than 90% of the models, but most criteria were satisfied by a much smaller share (appendix p 1). For example, seven (24%) of 29 models did not report the equations used, nine (31%) did not report their estimated parameters, 13 (45%) did not share all of their longitudinal data, and 15 (52%) did not report their code (appendix p 2). Only four articles (14%) satisfied more than 90% of our transparency checklist items.

This evaluation shows that models that are not fully transparent can nonetheless produce analytical insights and inform policy. Rather than presenting recommendations at face value, modellers must ensure that their claims are independently verifiable. The scientific and modelling communities need to make transparency the norm, rather than the exception; otherwise, they risk losing the faith of policy makers and the public. Such consequences were observed for a model released on March 30, 2020, which was often cited by government agencies, including the White House. Amid concerns about the model's projections, the scientific community was frustrated by the absence of information about its code and specific model details. Another high-profile preprint model, published on March 16, 2020, faced similar scrutiny after predicting 510 000 deaths in a scenario with no interventions, prompting researchers to attempt replication. When the code was released about 6 weeks later, several bugs and issues with the modelling assumptions were unearthed. Issues like these suggest that governments should not rely on a small number of models to inform policy. Instead, policy makers can mitigate potential harm by aggregating the available models and synthesising their results to help inform action.

A crucial element of model transparency is providing code. One concern among researchers is that their code is disorganised and cannot be evaluated or run by others hoping to replicate their efforts. However, even messy code can provide a framework for replication and generate useful dialogue, as seen on platforms such as GitHub; still, well documented code is preferable. Of the 14 (48%) of 29 articles that reported their code, 13 provided helpful, detailed documentation, either directly in the file or in supplementary material. We encourage modellers who hope to influence perceptions and policy to follow transparent research practices and release their code promptly for public evaluation.

Many journals ask for transparency statements and encourage scientists to report details in supplementary documents. Although journals should continue to strengthen their transparency requirements, they cannot control the full transparency of publications. Additionally, peer review cannot be fully relied on for the most technical aspects of a model, especially when papers provide no documentation. During crises such as COVID-19, preprints provide fast information delivery, so journals' transparency policies have minimal effect: models still in preprint or on project websites satisfied an average of 70% of our transparency criteria, compared with 80% for peer-reviewed articles. The responsibility to provide transparency therefore remains largely with the modellers, even though the peer-review process can help address these omissions.

Reporting a fully documented and transparent model can be difficult, but this effort has both tangible and intangible benefits for modellers and policy makers. Amid the urgency created by a global pandemic, modellers might justify prioritising speed of reporting over transparency. However, poor transparency in models that directly affect public health policies (and therefore human lives) can be catastrophic. Model transparency does not necessarily equate to model quality; however, models with little documentation cannot be assessed for quality at all. Hence, all models must be fully transparent, for both scientific and ethical purposes.

This online publication has been corrected. The corrected version first appeared at thelancet.com/lancetgh on November 18, 2020.
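The scoring scheme described in the abstract (each model checked against binary criteria, then summarised as per-criterion satisfaction rates and per-model checklist coverage) can be sketched as a small calculation. The model names, criteria count, and scores below are hypothetical placeholders for illustration, not the study's actual data:

```python
# Hypothetical sketch of binary transparency scoring, as described above.
# Rows = models, columns = criteria; True means the criterion was satisfied
# (e.g. "reported equations", "shared code").
scores = {
    "model_A": [True, True, False, True],
    "model_B": [True, False, False, True],
    "model_C": [True, True, True, True],
}

n_models = len(scores)
n_criteria = len(next(iter(scores.values())))

# Share of models satisfying each criterion (the study reports these as
# percentages, e.g. "15 of 29 did not report their code").
criterion_rate = [
    sum(row[j] for row in scores.values()) / n_models
    for j in range(n_criteria)
]

# Share of criteria satisfied by each model; the study flags models
# satisfying more than 90% of checklist items.
model_coverage = {name: sum(row) / n_criteria for name, row in scores.items()}
highly_transparent = [m for m, c in model_coverage.items() if c > 0.9]

print(criterion_rate)
print(highly_transparent)  # ['model_C']
```

Averaging `criterion_rate` over all criteria would reproduce the kind of summary statistic reported in the study (an average of 76% of models satisfying each criterion).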
References: 6 in total

1.  Journals unite for reproducibility.

Authors:  Marcia McNutt
Journal:  Science       Date:  2014-11-07       Impact factor: 47.728

Review 2.  Guidelines for Accurate and Transparent Health Estimates Reporting: the GATHER statement.

Authors:  Gretchen A Stevens; Leontine Alkema; Robert E Black; J Ties Boerma; Gary S Collins; Majid Ezzati; John T Grove; Daniel R Hogan; Margaret C Hogan; Richard Horton; Joy E Lawn; Ana Marušić; Colin D Mathers; Christopher J L Murray; Igor Rudan; Joshua A Salomon; Paul J Simpson; Theo Vos; Vivian Welch
Journal:  Lancet       Date:  2016-06-28       Impact factor: 79.321

3.  An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014-2017).

Authors:  Tom E Hardwicke; Joshua D Wallach; Mallory C Kidwell; Theiss Bendixen; Sophia Crüwell; John P A Ioannidis
Journal:  R Soc Open Sci       Date:  2020-02-19       Impact factor: 2.963

4.  Call for transparency of COVID-19 models.

Authors:  C Michael Barton; Marina Alberti; Daniel Ames; Jo-An Atkinson; Jerad Bales; Edmund Burke; Min Chen; Saikou Y Diallo; David J D Earn; Brian Fath; Zhilan Feng; Christopher Gibbons; Ross Hammond; Jane Heffernan; Heather Houser; Peter S Hovmand; Birgit Kopainsky; Patricia L Mabry; Christina Mair; Petra Meier; Rebecca Niles; Brian Nosek; Nathaniel Osgood; Suzanne Pierce; J Gareth Polhill; Lisa Prosser; Erin Robinson; Cynthia Rosenzweig; Shankar Sankaran; Kurt Stange; Gregory Tucker
Journal:  Science       Date:  2020-05-01       Impact factor: 47.728

5.  Reproducible research practices, transparency, and open access data in the biomedical literature, 2015-2017.

Authors:  Joshua D Wallach; Kevin W Boyack; John P A Ioannidis
Journal:  PLoS Biol       Date:  2018-11-20       Impact factor: 8.029

6.  Early in the epidemic: impact of preprints on global discourse about COVID-19 transmissibility.

Authors:  Maimuna S Majumder; Kenneth D Mandl
Journal:  Lancet Glob Health       Date:  2020-03-24       Impact factor: 26.763

Cited by: 10 in total

1.  A Systematic Review of Simulation Models to Track and Address the Opioid Crisis.

Authors:  Magdalena Cerdá; Mohammad S Jalali; Ava D Hamilton; Catherine DiGennaro; Ayaz Hyder; Julian Santaella-Tenorio; Navdep Kaur; Christina Wang; Katherine M Keyes
Journal:  Epidemiol Rev       Date:  2022-01-14       Impact factor: 6.222

2.  Accuracy comparison between statistical and computational classifiers applied for predicting student performance in online higher education.

Authors:  Rosa Leonor Ulloa Cazarez
Journal:  Educ Inf Technol (Dordr)       Date:  2022-05-17

3.  Applications of Complex Systems Models to Improve Retail Food Environments for Population Health: A Scoping Review.

Authors:  Megan R Winkler; Yeeli Mui; Shanda L Hunt; Melissa N Laska; Joel Gittelsohn; Melissa Tracy
Journal:  Adv Nutr       Date:  2022-08-01       Impact factor: 11.567

4.  Explaining the Varying Patterns of COVID-19 Deaths Across the United States: 2-Stage Time Series Clustering Framework.

Authors:  Fadel M Megahed; L Allison Jones-Farmer; Yinjiao Ma; Steven E Rigdon
Journal:  JMIR Public Health Surveill       Date:  2022-07-19

5.  Revisiting the standard for modeling the spread of infectious diseases.

Authors:  Michael Nikolaou
Journal:  Sci Rep       Date:  2022-04-30       Impact factor: 4.996

6.  When Do We Need Massive Computations to Perform Detailed COVID-19 Simulations?

Authors:  Christopher B Lutz; Philippe J Giabbanelli
Journal:  Adv Theory Simul       Date:  2021-11-23

Review 7.  COVID-19 collaborative modelling for policy response in the Philippines, Malaysia and Vietnam.

Authors:  Angus Hughes; Romain Ragonnet; Pavithra Jayasundara; Hoang-Anh Ngo; Elvira de Lara-Tuprio; Maria Regina Justina Estuar; Timothy Robin Teng; Law Kian Boon; Kalaiarasu M Peariasamy; Zhuo-Lin Chong; Izzuna Mudla M Ghazali; Greg J Fox; Thu-Anh Nguyen; Linh-Vi Le; Milinda Abayawardana; David Shipman; Emma S McBryde; Michael T Meehan; Jamie M Caldwell; James M Trauer
Journal:  Lancet Reg Health West Pac       Date:  2022-08-11

8.  A meta-epidemiological assessment of transparency indicators of infectious disease models.

Authors:  Emmanuel A Zavalis; John P A Ioannidis
Journal:  PLoS One       Date:  2022-10-07       Impact factor: 3.752

Review 9.  Evolution and Reproducibility of Simulation Modeling in Epidemiology and Health Policy Over Half a Century.

Authors:  Mohammad S Jalali; Catherine DiGennaro; Abby Guitar; Karen Lew; Hazhir Rahmandad
Journal:  Epidemiol Rev       Date:  2022-01-14       Impact factor: 6.222

10.  Early warning signal reliability varies with COVID-19 waves.

Authors:  Duncan A O'Brien; Christopher F Clements
Journal:  Biol Lett       Date:  2021-12-08       Impact factor: 3.703


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.