
A meta-epidemiological assessment of transparency indicators of infectious disease models.

Emmanuel A Zavalis, John P A Ioannidis.

Abstract

Mathematical models have become very influential, especially during the COVID-19 pandemic. Data and code sharing are indispensable for reproducing them, protocol registration may sometimes be useful, and declarations of conflicts of interest (COIs) and of funding are quintessential for transparency. Here, we evaluated these features in publications of infectious disease-related models and assessed whether there were differences before and during the COVID-19 pandemic and between COVID-19 models and models for other diseases. We analysed all PubMed Central open access publications of infectious disease models published in 2019 and 2021 using previously validated text mining algorithms of transparency indicators. We evaluated 1338 articles: 216 from 2019 and 1122 from 2021 (of which 818 were on COVID-19), an almost six-fold increase in publications within the field. 511 (39.2%) were compartmental models, 337 (25.2%) were time series, 279 (20.9%) were spatiotemporal, 186 (13.9%) were agent-based and 25 (1.9%) contained multiple model types. 288 (21.5%) articles shared code, 332 (24.8%) shared data, 6 (0.4%) were registered, and 1197 (89.5%) and 1109 (82.9%) contained COI and funding statements, respectively. There were no major changes in transparency indicators between 2019 and 2021. COVID-19 articles were less likely to have funding statements and more likely to share code. Further validation was performed by manual assessment of 10% of the articles identified by text mining as fulfilling transparency indicators and of 10% of the articles lacking them. Correcting estimates for validation performance, 26.0% of papers shared code and 41.1% shared data. On manual assessment, 5/6 articles identified as registered had indeed been registered. Of articles containing COI and funding statements, 95.8% disclosed no conflict and 11.7% reported no funding. Transparency in infectious disease modelling is relatively low, especially for data and code sharing. This is concerning, considering the nature of this research and the heightened influence it has acquired.


Year:  2022        PMID: 36206207      PMCID: PMC9543956          DOI: 10.1371/journal.pone.0275380

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

A large number of infectious disease-related models are published in the scientific literature, and their production and influence have rapidly increased during the COVID-19 pandemic. Such models can inform and shape policy, and they have also been the subject of much debate [1-4] surrounding a range of issues, including their questionable predictive accuracy and their transparency [5-7]. Sharing of data and code is indispensable for these models to be properly evaluated, used, reused, updated, integrated, or compared with other efforts. Without the ability to rerun a model, it remains a black box whose function and credibility must be taken on blind trust. Moreover, other features of transparency, such as declarations of funding and of potential conflicts of interest (COI), are also important, since many of these models may strongly influence policy decisions with major repercussions.

Another feature of transparency that may aid reproducibility and trust in these models is the registration of their protocols, ideally in advance of their conduct. Registration is a concept that receives increasing attention in many scientific fields [8-10] as a safeguard of trust. Registration may not be easy or relevant for many mathematical models, especially those that are exploratory and iterative [5]. However, it may be feasible and desirable to register protocols about models in some circumstances [5].

There have previously been empirical evaluations of research practices, including documentation and transparency, in subfields of mathematical modeling [11-13]. These have shown that data and code/algorithm sharing has improved somewhat over time but still remains suboptimal. Yet, to our knowledge, there has been no comprehensive, large-scale analysis of such transparency and reproducibility indicators in the field of infectious disease modelling. It would be of interest to explore the state of transparency in this highly popular field, especially in the context of the rapid and massive adoption of mathematical models during the COVID-19 pandemic. Therefore, we evaluated infectious disease modeling studies using large-scale algorithmic extraction of information on several transparency and reproducibility indicators (code sharing, data sharing, registration, funding, conflicts of interest). We compared these features in articles published before and during the pandemic (in 2019 and 2021, respectively) and in articles on COVID-19-related models versus models of other infectious diseases.

Materials and methods

This study is a meta-epidemiological survey of transparency indicators in four common types of infectious disease models (compartmental, spatiotemporal, agent-based/individual-based, and time series) indexed in the PubMed Central Open Access (PMC OA) subset of PubMed. The study is reported using the STROBE guidelines [14]. Analyses were conducted in R [15] and Python [16].

Search and screening

We developed a search strategy to identify papers published in 2019 and 2021 in English in the PMC OA subset that included models of infectious diseases: (model*[tiab] OR forecast*[tiab] OR predict*[tiab]) AND (SIR-models[tiab] OR SIR[tiab] OR SIRS[tiab] OR SEIR[tiab] OR SEIR-model[tiab] OR SIRS-model[tiab] OR agent-based[tiab] OR spatiotemporal[tiab] OR nowcast[tiab] OR backprojection[tiab] OR "traveling waves"[tiab] OR (time series[tiab] OR time-series[tiab])) NOT (rat model*[ti] OR murine model*[ti] OR animal model*[ti] OR mouse model*[ti] OR primate model*[ti]) AND (infect* OR transmi* OR epidem*).

The included model types were compartmental models, spatiotemporal models, agent-based/individual-based models, and time series models, defined as follows. Compartmental models assign subsets of the population to different classes according to their infection status (e.g., susceptible, exposed, recovered) and model the population parameters of the disease according to assumed transmission rates between these subsets [17]. Spatiotemporal models explore and predict the temporal and geographical spread of infectious diseases (usually using geographic time series data). Agent-based/individual-based models are computer simulations of the interaction of agents with unique attributes regarding spatial location, physiological traits, and/or social behavior [18, 19]. Finally, time series models other than spatiotemporal ones were also included; these use trends in the number of infected, deaths, or any other parameter of interest to predict future trends and spread [20].

We excluded clinical predictive, prognostic, and diagnostic models and included only models of infectious agents that can infect humans (i.e., both zoonotic diseases and diseases exclusive to humans). All screening and analysis were conducted by EAZ in two eligibility assessment rounds. In the first round, eligibility was assessed based on the title and abstract; in the second, where the model type and disease type were extracted, eligibility was also assessed by perusing the article in more depth. After this round, unclear cases were discussed between EAZ and JPAI and settled by consensus.
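For illustration, a search of this kind could be run programmatically against the NCBI E-utilities. The following Python sketch uses Biopython's Entrez module with an abridged version of the query; the open-access filter and date-range syntax are assumptions, and this is not the exact procedure used in the study.

```python
# Illustrative sketch only: querying PMC via NCBI E-utilities (Biopython).
# The query below is abridged; "open access"[filter] and the date range are
# assumptions about how the PMC OA restriction might be expressed.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

query = (
    '(model*[tiab] OR forecast*[tiab] OR predict*[tiab]) '
    'AND (SIR[tiab] OR SEIR[tiab] OR agent-based[tiab] '
    'OR spatiotemporal[tiab] OR "time series"[tiab]) '
    'NOT (rat model*[ti] OR murine model*[ti] OR animal model*[ti]) '
    'AND (infect* OR transmi* OR epidem*) '
    'AND "open access"[filter] AND 2019:2021[pdat]'
)

handle = Entrez.esearch(db="pmc", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "records; first IDs:", record["IdList"][:5])
```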

Data extraction

For each eligible study, we manually extracted information on the model type and disease type. For model type, whenever cases came up that were not clear-cut, EAZ and JPAI conferred on the sensible category. Some phylogenetic models were included and classified as spatiotemporal if they had spatiotemporal aspects. When a single paper contained multiple model types, it was classified as ‘Multiple’. For disease, we used categories defined by the infectious agent studied. The “Unspecified” category included studies not mentioning a specific infectious agent but a clinical syndrome (e.g., urinary tract infection or pneumonia); the “General (theoretical models)” category included studies that did not model a specific disease (e.g., a theoretical pandemic). Where multiple diseases were modelled, papers were categorised as ‘Multiple different agents’ (e.g., HIV and tuberculosis). Where vectors of diseases, such as mosquitos, were modelled to predict the spread of multiple diseases, we classified the disease as ‘Vector’.

For each eligible article, we used PubMed to extract metadata (PMID, PMCID, publication year, journal name) and the R package rtransparent [21] to extract the following transparency indicators: (i) code sharing, (ii) data sharing, (iii) (pre-)registration, (iv) COI statements, and (v) funding statements. rtransparent searches the full text of papers for specific words or phrases that strongly suggest the presence of these transparency indicators, using regular expressions to accommodate variations in phrasing. For example, to identify code sharing, rtransparent looks for “code” and “available” as well as the repository “GitHub” and its variations; in a paper selected [22] from our dataset it finds the following: “the model and code for reproducing all figures in this manuscript from model output are publicly available online (https://github.com/bdi-pathogens/openabm-covid19-model-paper)”. The approach has been previously validated in Serghiou et al. [21] across the entire biomedical literature, with a positive predictive value (PPV) of 88.2% (81.7%-93.8%) and negative predictive value (NPV) of 98.6% (96.2%-99.9%) for code sharing; 93.0% (88.0%-97.0%) and 94.4% (89.1%-97.0%) for data sharing; 92.1% (88.3%-98.6%) and 99.8% (99.7%-99.9%) for registration; 99.9% (99.7%-100.0%) and 96.8% (94.4%-99.1%) for COI disclosures; and 99.7% (99.3%-99.9%) and 98.1% (96.2%-99.5%) for funding disclosures.

To further validate the performance of the algorithms in detecting code and data sharing, a random sample of 10% of publications that the algorithm identified as sharing code, and 10% of those identified as sharing data, were manually assessed to determine whether the statements indeed represented true sharing. All papers that the algorithm identified as registered were assessed manually to verify that registration had been performed. After a suggestion by a reviewer, we also manually examined random samples of 10% of the publications found by the algorithm not to satisfy each indicator.
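To make the regular-expression approach concrete, here is a minimal Python sketch of indicator detection. The pattern below is illustrative only and far simpler than the validated patterns in the rtransparent package [21]; it is tested against the code-sharing statement quoted above.

```python
# Minimal sketch of regex-based code-sharing detection (illustrative only;
# NOT rtransparent's actual, validated patterns).
import re

CODE_SHARING = re.compile(
    # "code ... available/shared/provided" within a few words, or a link
    # to a common code repository.
    r"(code|scripts?)\W+(?:\w+\W+){0,15}?(available|shared|provided)"
    r"|github\.com|gitlab\.com|bitbucket\.org",
    re.IGNORECASE,
)

sentence = (
    "the model and code for reproducing all figures in this manuscript "
    "from model output are publicly available online "
    "(https://github.com/bdi-pathogens/openabm-covid19-model-paper)"
)
print(bool(CODE_SHARING.search(sentence)))  # True: flagged as code sharing
```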
The corrected proportion C(i) of publications satisfying an indicator i was obtained as C(i) = U(i) × TP + (1 − U(i)) × FN, where U(i) is the uncorrected proportion detected by the automated algorithm, TP is the proportion of true positives (the proportion manually verified to satisfy the indicator among those identified by the algorithm as satisfying it), and FN is the proportion of false negatives (the proportion manually found to satisfy the indicator among those categorized by the algorithm as not satisfying it). Moreover, a random sample of 10% of papers found to contain a COI statement, and 10% of those found to include a funding statement, were assessed manually to see not only whether such statements were indeed present, but also how many contained actual disclosures of specific conflicts or funding sources, respectively, rather than just a statement that there are no COIs or no funding (e.g., ‘There is no conflict of interest’, ‘No funding was received’ or ‘Funding disclosure is not applicable’). Finally, a random sample of 10% of the negatives for COI and funding was also manually assessed.
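The correction formula is straightforward to apply; the following sketch reproduces the corrected code-sharing and data-sharing estimates reported in the Results, using the paper's own validation numbers.

```python
# Corrected-proportion formula from the Methods, applied to the paper's
# reported validation figures for code sharing and data sharing.
def corrected_proportion(u: float, tp: float, fn: float) -> float:
    """C(i) = U(i) * TP + (1 - U(i)) * FN, as defined above."""
    return u * tp + (1 - u) * fn

# Code sharing: U = 0.215, TP = 24/29 = 0.828, FN = 11/106 = 0.104
print(round(corrected_proportion(0.215, 0.828, 0.104), 3))  # 0.26  (26.0%)
# Data sharing: U = 0.248, TP = 29/33 = 0.879, FN = 26/101 = 0.257
print(round(corrected_proportion(0.248, 0.879, 0.257), 3))  # 0.411 (41.1%)
```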

Statistical analysis

The primary outcome studied was the percentage of papers that include each of the transparency indicators. We considered three primary comparisons, conducted using Fisher’s exact tests (one such comparison is illustrated in the sketch below):

1. All publications in 2019 versus all in 2021 (to assess whether there is improvement over time).
2. Non-COVID-19 publications in 2019 versus non-COVID-19 publications in 2021 (to assess improvement over time for non-COVID-19 publications).
3. 2021 COVID-19 publications versus 2021 non-COVID-19 publications (to assess whether COVID-19 papers differ in transparency indicators).

Subsequently, we also explored whether other factors correlated with the transparency indicators, using Fisher’s exact tests to test for statistically significant associations (significance level set at 0.005 [23]) when comparing model types, year, disease modelled, and journal separately. We had pre-specified that whenever statistically significant results were found, we would also conduct multivariable logistic regressions with the transparency indicators as the dependent variables, to see whether any of the covariates were independently associated with the outcome variables; covariates were restricted to the larger groups so that the regressions could converge. The covariates were therefore year and disease combined (2019 (baseline), 2021 non-COVID-19, 2021 COVID-19), journal (PLoS One, Scientific Reports, International Journal of Environmental Research and Public Health, Other (baseline)), and the type of model (with compartmental models as the baseline). Statistical significance was claimed for p<0.005, and p-values between 0.005 and 0.05 were considered suggestive, as previously proposed [23].
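As an illustration of the primary comparisons, the following Python sketch reproduces the 2021 COVID-19 versus non-COVID-19 code-sharing comparison using the counts from Table 2. The use of SciPy here is an assumption for illustration; the paper does not specify which packages performed its tests.

```python
# Fisher's exact test on code sharing in 2021: COVID-19 papers (207/818
# shared code) vs non-COVID-19 papers (43/304 shared code), per Table 2.
from scipy.stats import fisher_exact

table = [[207, 818 - 207],   # COVID-19: shared code, did not share
         [43, 304 - 43]]     # non-COVID-19: shared code, did not share
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.1e}")  # p ~ 5e-5, as in Table 2
```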

Deviations from the protocol

We deviated from the protocol by conducting these analyses with Fisher’s exact tests instead of the planned chi-square tests, because low counts in some variables made the chi-square approximation unreliable. The 10% manual assessment of a random sample of articles with COI and funding statements was added post hoc, when we realized that many articles could have such statements that simply state there was no COI and/or no funding.

Results

Study sample

We screened the titles and abstracts of 2903 records according to the eligibility criteria. 1340 papers were excluded as ineligible in the primary survey, leaving 1563 records for further scrutiny. Of these, 58 were excluded during the second round of screening (i.e., during retrieval of information on model type and disease) and 167 were excluded for not being part of the PMC OA subset (Fig 1).
Fig 1

Flow chart for study selection.

Characteristics of eligible papers

Of the 1338 eligible papers (Table 1), 216 had been published in 2019 and 1122 in 2021. 818 (61.1%) were COVID-19 papers; the second largest group, General (theoretical) models, contained 130 (9.7%) publications. More than 70 different diseases had been modelled altogether in the eligible publications. The model types were more evenly distributed, the most common being compartmental models (N = 511, 39.2%), followed by time series models (N = 337, 25.2%).
Table 1

Characteristics of eligible studies.

|                                   | 2019       | 2021 non-COVID-19 | 2021 COVID-19 | All publications |
| Articles, N                       | 216        | 304               | 818           | 1338             |
| Type of model, N (%)              |            |                   |               |                  |
|   Compartmental                   | 26 (12.0)  | 91 (29.9)         | 394 (48.0)    | 511 (39.2)       |
|   Time series                     | 80 (37.0)  | 82 (27.0)         | 175 (21.4)    | 337 (25.2)       |
|   Spatiotemporal                  | 78 (36.1)  | 90 (29.6)         | 111 (13.6)    | 279 (20.9)       |
|   Agent-based                     | 31 (14.4)  | 37 (12.2)         | 118 (14.4)    | 186 (13.9)       |
|   Multiple                        | 1 (0.5)    | 4 (1.3)           | 20 (2.4)      | 25 (1.9)         |
| Type of disease, N (%)            |            |                   |               |                  |
|   COVID-19                        | 0 (0)      | 0 (0)             | 818 (100)     | 818 (61.1)       |
|   General                         | 33 (15.3)  | 97 (31.9)         | 0 (0)         | 130 (9.7)        |
|   Influenza illnesses             | 20 (9.3)   | 20 (6.6)          | 0 (0)         | 40 (3.0)         |
|   Malaria                         | 15 (6.9)   | 22 (7.2)          | 0 (0)         | 37 (2.8)         |
|   Dengue                          | 15 (6.9)   | 20 (6.6)          | 0 (0)         | 35 (2.6)         |
|   Others                          | 133 (61.6) | 145 (48.0)        | 0 (0)         | 278 (20.8)       |
| Journal, N (%)                    |            |                   |               |                  |
|   PLoS One                        | 26 (12.0)  | 27 (8.9)          | 62 (7.6)      | 115 (8.6)        |
|   Sci Rep                         | 20 (9.3)   | 19 (6.3)          | 52 (6.4)      | 91 (6.8)         |
|   Int J Environ Res Public Health | 15 (6.9)   | 21 (6.9)          | 27 (3.3)      | 63 (4.7)         |
|   BMC Infect Dis                  | 16 (7.4)   | 12 (3.9)          | 10 (1.2)      | 38 (2.8)         |
|   PLoS Negl Trop Dis              | 11 (5.1)   | 22 (7.2)          | 0 (0)         | 33 (2.5)         |
|   PLoS Comput Biol                | 10 (4.6)   | 10 (3.3)          | 9 (1.1)       | 29 (2.2)         |
|   BMC Public Health               | 6 (2.8)    | 9 (3.0)           | 13 (1.6)      | 28 (2.1)         |
|   Chaos Solitons Fractals         | 0 (0)      | 5 (1.6)           | 20 (2.4)      | 25 (1.9)         |
|   Others                          | 112 (52.0) | 179 (58.9)        | 625 (76.4)    | 916 (68.5)       |

Transparency indicators

Table 2 shows the transparency indicators overall and in the three main categories based on year and COVID-19 focus. Based on the text mining algorithms, 288 (21.5%) articles shared code, 332 (24.8%) shared data, 6 (0.4%) used registration, and 1197 (89.5%) and 1109 (82.9%) contained a COI and a funding statement, respectively. 919 (68.7%) publications shared neither data nor code, while 199 (14.9%) shared both.
Table 2

Key transparency indicators overall and per year/COVID-19 focus.

| N = 1338                        | Code sharing | Data sharing | Registration | COI         | Funding     |
| Overall                         | 288 (21.5)   | 332 (24.8)   | 6 (0.4)      | 1197 (89.5) | 1109 (82.9) |
| 2019                            | 38 (17.6)    | 59 (27.3)    | 3 (1.4)      | 197 (91.2)  | 202 (93.5)  |
| 2021                            | 250 (22.3)   | 273 (24.3)   | 3 (0.3)      | 1000 (89.2) | 907 (80.8)  |
|   COVID-19                      | 207 (25.3)   | 199 (24.3)   | 0 (0)        | 730 (89.2)  | 635 (77.6)  |
|   non-COVID-19                  | 43 (14.1)    | 74 (24.3)    | 3 (1.0)      | 270 (88.8)  | 272 (89.5)  |

Fisher’s exact test (p-values):

| 2019 vs 2021                    | 0.15         | 0.35         | 0.06         | 0.45        | 1.0 × 10−6  |
| 2019 vs 2021 non-COVID-19       | 0.33         | 0.48         | 0.70         | 0.46        | 0.12        |
| 2021 non-COVID-19 vs COVID-19   | 5.1 × 10−5   | 1            | 0.02         | 0.83        | 3.5 × 10−5  |

COI: conflicts of interest

We found no differences between years, or between COVID-19 and non-COVID-19 papers, in the probability of sharing data, registration, or mentioning of COIs. COVID-19 papers were more likely to share their code openly than non-COVID-19 publications from the same year (25.3% vs. 14.1%, p = 5.1 × 10−5), and they were less likely to report on funding than non-COVID-19 papers in the same year (p = 3.5 × 10−5). This led to an overall lower percentage of papers reporting on funding in 2021 compared with 2019 (p = 1.0 × 10−6).

Other correlates of transparency indicators

As shown in Table 3, data sharing varied significantly across journals, e.g. it was 54.8% in PLoS One but 12.7% in International Journal of Environmental Research and Public Health. Code sharing varied significantly across diseases, e.g. it was most common for dengue and least common for malaria (34.3% vs. 5.4%), and it varied significantly among types of models (highest in agent-based models, with 33.9% of publications sharing code). Registration was uncommon in all subgroups. COI disclosures were most common in dengue models and least common in general models, and they also varied by type of model (least common in compartmental models). Funding information was most commonly disclosed in dengue models and least commonly in general models; it also varied by type of model (being lowest in compartmental models) and by journal.
Table 3

Key transparency indicators per disease type, model type, and journal.

|                                      | Code sharing | Data sharing | Registration | COI        | Funding    |
| Disease modelled (p, Fisher’s exact) | 7.4 × 10−6   | 0.47         | 0.001        | 0.01       | 2.8 × 10−10 |
|   COVID-19                           | 207 (25.3)   | 199 (24.3)   | 0 (0)        | 730 (89.2) | 635 (77.6) |
|   General (theoretical model)        | 31 (23.8)    | 34 (26.2)    | 0 (0)        | 94 (72.3)  | 108 (83.1) |
|   Influenza illnesses                | 6 (15.0)     | 10 (25.0)    | 0 (0)        | 38 (95.0)  | 39 (97.5)  |
|   Malaria                            | 2 (5.4)      | 7 (18.9)     | 2 (5.4)      | 37 (100)   | 35 (94.6)  |
|   Dengue                             | 12 (34.3)    | 13 (37.1)    | 0 (0)        | 35 (100)   | 35 (100)   |
|   Other diseases                     | 30 (10.8)    | 69 (24.8)    | 4 (1.4)      | 263 (94.6) | 257 (92.4) |
| Type of model (p, Fisher’s exact)    | 0.001        | 0.006        | 0.15         | <1 × 10−7  | 0.008      |
|   Compartmental                      | 104 (20.4)   | 103 (20.2)   | 0 (0)        | 419 (82.0) | 405 (79.3) |
|   Time series                        | 65 (19.3)    | 81 (24.0)    | 2 (0.6)      | 319 (94.7) | 276 (81.9) |
|   Spatiotemporal                     | 52 (18.6)    | 84 (30.1)    | 3 (1.1)      | 263 (94.3) | 247 (88.5) |
|   Agent-based                        | 63 (33.9)    | 58 (31.2)    | 1 (0.5)      | 173 (93.0) | 161 (86.6) |
|   Multiple                           | 4 (16.0)     | 6 (24.0)     | 0 (0)        | 23 (92.0)  | 20 (80.0)  |
| Journal (p, Fisher’s exact)          | 0.15         | 1.7 × 10−12  | 0.11         | 2.5 × 10−12 | 3.4 × 10−14 |
|   PLoS One                           | 30 (26.1)    | 63 (54.8)    | 1 (0.9)      | 115 (100)  | 115 (100)  |
|   Sci Rep                            | 23 (25.3)    | 21 (23.1)    | 1 (1.1)      | 91 (100)   | 70 (76.9)  |
|   Int J Environ Res Public Health    | 8 (12.7)     | 7 (11.1)     | 1 (1.6)      | 63 (100)   | 63 (100)   |
|   Other journals                     | 227 (21.2)   | 241 (22.5)   | 3 (0.3)      | 928 (86.8) | 861 (80.5) |

COI: conflicts of interest

Multivariable regressions (not shown) showed similar results. Code sharing was more common in COVID-19 models (OR 1.69 (1.13, 2.55) compared with the 2019 baseline) and in agent-based models (OR 2.15 (1.47, 3.14) with compartmental models as the baseline). Data sharing was more common in spatiotemporal (OR 1.90 (1.33, 2.73)) and agent-based models (OR 1.80 (1.21, 2.66)) compared with the baseline, and also depended substantially on the journal (with PLoS One having OR 4.22 (2.84, 6.32) compared with the baseline of all journals other than the top 3). We did not perform multivariable regressions for the presence of COI and funding statements, since these depended almost entirely on the journal (several journals had a 100% frequency of a placeholder for such statements). Registration was too uncommon for multivariable analysis.
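Since the regressions themselves are not shown in the paper, the following Python sketch illustrates one plausible specification matching the covariates described in the Methods. The dataset file and column names (code_shared, group, journal, model_type) are hypothetical, and statsmodels is an assumed tool.

```python
# Sketch of a multivariable logistic regression matching the Methods
# description (hypothetical column names; code_shared is 0/1).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("articles.csv")  # hypothetical per-article extraction output

model = smf.logit(
    "code_shared ~ C(group, Treatment('2019')) "       # 2019 as baseline
    "+ C(journal, Treatment('Other')) "                # 'Other' journals as baseline
    "+ C(model_type, Treatment('Compartmental'))",     # compartmental as baseline
    data=df,
).fit()
print(model.summary())
print(model.params.apply(lambda b: round(pd.np.exp(b), 2)
      if hasattr(pd, "np") else b))  # or: import numpy; numpy.exp(model.params)
```

A simpler way to read the coefficients as odds ratios is `numpy.exp(model.params)`, which should recover values comparable to the ORs reported above.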

Manual validation

We also checked a random sample of 29 (10%) of the papers found to share code, 33 (10%) of those found to share data, and all 6 flagged as registered. Of these, 24/29 (82.8%) actually shared code, 29/33 (87.9%) actually shared data, and 5/6 (83.3%) were indeed registered. The registered papers were two malaria models [24, 25], one vector model [26] (which focused on malaria vectors), one polio (Sabin 2 virus [27]) model, and one rotavirus model [28]. The majority were from 2021 [24, 26, 27], most concerned malaria (two malaria models and one vector model that was essentially about malaria [24-26]), and most were classified as spatiotemporal [24-26].

We also checked a random sample of 10% of the negatives, i.e., the papers classified as non-transparent, and found that 133/133 (100%) were indeed not registered, 95/106 (89.6%) did not share code, and 75/101 (74.3%) did not share data. Therefore, the corrected estimates of the proportions of publications sharing code and sharing data were (0.215 × 0.828) + (0.785 × 0.104) = 26.0% and (0.248 × 0.879) + (0.752 × 0.257) = 41.1%, respectively. The modest number of false negatives for data sharing mostly reflected situations where it was mentioned that the data could be downloaded through a link, where the reference was in a figure, or where the phrasing was intertwined and difficult for the text mining algorithm to separate effectively.

Finally, of the 120 articles (10%) in which text mining found a COI statement, a placeholder for this statement was indeed present in all, but the vast majority of the statements (115, 95.8%) disclosed no conflict at all. Of the 111 articles (10%) in which text mining found a funding statement, all indeed had such a statement, but 13 (11.7%) stated that they had no funding. Examining a random sample of 10% of the negatives for COI and funding disclosures, we found that 19/23 (82.6%) of the funding negatives and 14/14 (100%) of the COI negatives were true negatives.

Discussion

Analysing 1338 recent articles from the field of infectious disease modelling, we found that, based on previously validated text mining algorithms, less than a quarter of these publications shared code or data, and only 14% shared both. Further validation through manual evaluation suggested that data sharing may be modestly more common, but the majority of these publications still did not share their data. This is concerning, since it does not allow other scientists to check the models in any depth and it limits their further uses. Moreover, registration was almost nonexistent. On a positive note, the large majority of models did provide some information on funding and COIs; however, the vast majority of COI statements simply said that there was no conflict. Furthermore, we saw no major differences between 2019 and 2021. COVID-19 and non-COVID-19 papers showed largely similar patterns for these transparency indicators, although the former were modestly more likely to share code and modestly less likely to report on funding. There were some differences for some of the transparency indicators across journals, model types, and diseases.

Jalali et al. [11] analysed 29 articles on COVID-19 models in 2020 and found that 48% shared code and 60% shared data, while 80% contained funding and COI disclosures, respectively. Our findings show much lower rates of code and data sharing. The Jalali et al. sample was apparently highly selective, as it focused on the most referenced models among a compilation of models by the US Centers for Disease Control [29]. In another empirical assessment of the reproducibility of 100 papers in simulation modelling in public health and health policy, published over half a century (until 2016) and covering all applications (not just infectious diseases), code was available for only 2% of publications [30]. Finally, in an empirical evaluation in decision modelling by Emerson et al. [13], when the team tried to get authors to share their code, only 7.3% of simulation modelling researchers responded and in the end only 1.6% agreed to share their code. This suggests that infectious disease models are not doing worse than other mathematical models, and may even be doing substantially better, but there is still plenty of room for improvement in sharing practices.

There have been many initiatives to improve code sharing and documentation in the modelling community [31-34], as well as repositories for COVID-19 models [35, 36]. The modelling community, including COVID-19 modelling [37], has issued multiple calls for transparency, and the reproducibility debate has been ongoing for decades [38-40]. Several journals have taken steps to enhance reproducibility. For example, Science changed its policy for code and data sharing to make both essentially mandatory [41]. However, Stodden et al. [42] found no clear improvement after such interventions. Models are published in a vast array of journals, and sharing rates as well as reporting and documentation requirements tend to be highly journal specific. The frequency of code and data sharing in our sample was higher than documented for the general biomedical literature assessed in Serghiou et al. [21] using the same algorithm; COI and funding disclosures were almost equally common.
On the other hand, we observed a ten-fold lower registration rate in our sample compared with the overall biomedical literature, which may reflect the difficulty of registering models and insufficient sensitization of the field to this possibility [5]. We found that essentially 5 of our studies were registered (after validating the initial 6 identified). While registration may be difficult and even impossible for a large portion of models (exploratory models, for instance) [5], it would still be advisable to register confirmatory studies of models destined to inform policy, in order to reduce the “vibration of effects” (the range of possible results obtained with multiple analytical choices) [43, 44]. Otherwise, promising output or excellent fit may in reality be due to bias alone. When the stakes are high and wrong decisions may have grave implications, more rigor is needed.

The rates of COI and funding disclosures are satisfactory on face value, considering that both are above 80% in our sample and across other empirical assessments [11, 21, 45]. This may also be because both these types of disclosures have been introduced into many journals’ routinely published items, with a standard placeholder for them; typically, journals mandate a COI and a funding statement. However, the fidelity and completeness of these statements is difficult to probe, and we cannot exclude that undisclosed COIs may exist. Our random sample validation found that the COI disclosures almost never mentioned any conflict. Given the policy implications of many models, especially in the COVID-19 era, this pattern may represent under-reporting of conflicts. Funding disclosures were more informative, with only 12% stating no funding, but even then unstated sources of funding cannot be excluded.

Limitations

There are limitations in our evaluation. First, our sample focused on the PubMed Central Open Access subset and not all PubMed-indexed papers. It is unclear whether non-open access papers may be less likely to adopt sharing practices; if so, the proportion of sharing in the total infectious disease modeling literature may be over-estimated. Moreover, much of the COVID-19 literature was not published in the indexed peer-reviewed literature and may therefore have evaded our evaluation (even though some preprints are indexed in PubMed). If anything, this evading literature may have even less transparency.

Second, we used a text-mining approach that has been extensively validated across the entire biomedical literature, but the algorithms may perform differently in the infectious disease modeling field specifically. Nevertheless, in-depth evaluation of random samples of papers suggests that identification of these indicators is quite accurate; false positives are uncommon and, for code sharing, are well balanced by an almost equal number of false negatives. For data sharing, the manual validation found a modest number of publications that had shared data but were not picked up as sharing by the algorithm. Therefore, data sharing may occur modestly more frequently than suggested by the automated algorithm, but even then the majority of publications in this field do not share their data.

Third, the presence of a data or code sharing statement does not guarantee full functionality and the ability to fully reuse the data and code; this can only be determined after spending substantial effort to bring a paper to life based on the shared information. For COI and funding statements, we likewise only established their existence and did not appraise in depth the content of these statements, let alone their veracity. Evaluations in other fields suggest that many COIs are undisclosed and funding information is often incomplete [46-48].

Fourth, using only one main reviewer for eligibility screening may have introduced some errors in the selection of specific studies. However, identification of eligible studies is quite straightforward given our eligibility criteria, and any ambiguous cases were also discussed with the second author. A few studies did not fit squarely into our pre-determined categories, but their number is too small to affect the overall results.

Finally, we only assessed a sample drawn from two calendar years that are not very far apart, so major changes might not have been anticipated, at least for non-COVID-19 models. Nevertheless, 2021 was a unique pandemic year, which affected the field not merely through inflation of publications [49] but also through specific funder and governmental initiatives and incentives. Therefore, only time will tell whether the COVID-19 impact on the scientific literature will be long-lasting and whether it will also affect the landscape of mathematical modeling in general after the pandemic phases out.

Conclusions

We found that in the highly influential field of infectious disease modeling, which relies heavily on its assumptions and on its underlying code and data, transparency and reproducibility have large potential for improvement. There is a growing literature of recommendations and tutorials for researchers and other stakeholders [50-53], plus the EPIFORGE guidelines [54] for the reporting of epidemic forecasting and prediction research. They all explicitly urge code sharing, data sharing, and transparency in general. The current lack of transparency may cause problems in the use, reuse, interpretation, and adoption of these models for scientific or policy activities. It also hinders evidence synthesis and attempts to build on previous research to facilitate progress within the field. Improved transparency and reproducibility may help reinforce the legacy of this important field. It can be argued that a mathematical model should not be taken seriously, especially for influential inferences and decisions, without the underlying code and data sources being made public. This includes models published in academic journals as well as unpublished ones that are nevertheless used to guide health policies or other decisions. One might even suggest banning the publication of models that do not share their data and code. Pre-registration is also highly desirable, when pertinent, and for some targeted uses of models, e.g. making claims about future predictions, it should become a normal expectation.

References (48 in total):

1.  Evaluating Industry Payments Among Dermatology Clinical Practice Guidelines Authors.

Authors:  Jake X Checketts; Matthew Thomas Sims; Matt Vassar
Journal:  JAMA Dermatol       Date:  2017-12-01       Impact factor: 10.282

2.  Toward Standardizing a Lexicon of Infectious Disease Modeling Terms.

Authors:  Rachael Milwid; Andreea Steriu; Julien Arino; Jane Heffernan; Ayaz Hyder; Dena Schanzer; Emma Gardner; Margaret Haworth-Brockman; Harpa Isfeld-Kiely; Joanne M Langley; Seyed M Moghadas
Journal:  Front Public Health       Date:  2016-09-28

3.  Tools and techniques for computational reproducibility.

Authors:  Stephen R Piccolo; Michael B Frampton
Journal:  Gigascience       Date:  2016-07-11       Impact factor: 6.524

4.  Model Registration: A Call to Action.

Authors:  Christopher James Sampson; Tim Wrightson
Journal:  Pharmacoecon Open       Date:  2017-06

5.  Effect estimates of COVID-19 non-pharmaceutical interventions are non-robust and highly model-dependent.

Authors:  Vincent Chin; John P A Ioannidis; Martin A Tanner; Sally Cripps
Journal:  J Clin Epidemiol       Date:  2021-03-26       Impact factor: 6.437

6.  Spatio-temporal analysis and prediction of malaria cases using remote sensing meteorological data in Diébougou health district, Burkina Faso, 2016-2017.

Authors:  Cédric S Bationo; Jean Gaudart; Sokhna Dieng; Mady Cissoko; Paul Taconet; Boukary Ouedraogo; Anthony Somé; Issaka Zongo; Dieudonné D Soma; Gauthier Tougri; Roch K Dabiré; Alphonsine Koffi; Cédric Pennetier; Nicolas Moiroux
Journal:  Sci Rep       Date:  2021-10-08       Impact factor: 4.379

7.  Call for transparency of COVID-19 models.

Authors:  C Michael Barton; Marina Alberti; Daniel Ames; Jo-An Atkinson; Jerad Bales; Edmund Burke; Min Chen; Saikou Y Diallo; David J D Earn; Brian Fath; Zhilan Feng; Christopher Gibbons; Ross Hammond; Jane Heffernan; Heather Houser; Peter S Hovmand; Birgit Kopainsky; Patricia L Mabry; Christina Mair; Petra Meier; Rebecca Niles; Brian Nosek; Nathaniel Osgood; Suzanne Pierce; J Gareth Polhill; Lisa Prosser; Erin Robinson; Cynthia Rosenzweig; Shankar Sankaran; Kurt Stange; Gregory Tucker
Journal:  Science       Date:  2020-05-01       Impact factor: 47.728

8.  Registered Reports: Time to Radically Rethink Peer Review in Health Economics.

Authors:  Philip Clarke; John Buckell; Adrian Barnett
Journal:  Pharmacoecon Open       Date:  2020-03

9.  Transparency assessment of COVID-19 models.

Authors:  Mohammad S Jalali; Catherine DiGennaro; Devi Sridhar
Journal:  Lancet Glob Health       Date:  2020-10-27       Impact factor: 26.763

10.  The rapid, massive growth of COVID-19 authors in the scientific literature.

Authors:  John P A Ioannidis; Maia Salholz-Hillel; Kevin W Boyack; Jeroen Baas
Journal:  R Soc Open Sci       Date:  2021-09-07       Impact factor: 2.963

Cited by (1 in total):

1.  COVID-19 models and expectations - Learning from the pandemic.

Authors:  John P A Ioannidis; Stephen H Powis
Journal:  Adv Biol Regul       Date:  2022-10-08
