Paweł Jemioło ¹, Dawid Storman ², Patryk Orzechowski ¹,³.
Abstract
The COVID-19 pandemic has sparked a barrage of primary research and reviews. We investigated the publishing process, time and resource wasting, and assessed the methodological quality of the reviews on artificial intelligence techniques to diagnose COVID-19 in medical images. We searched nine databases from inception until 1 September 2020. Two independent reviewers did all steps of identification, extraction, and methodological credibility assessment of records. Out of 725 records, 22 reviews analysing 165 primary studies met the inclusion criteria. This review covers 174,277 participants in total, including 19,170 diagnosed with COVID-19. The methodological credibility of all eligible studies was rated as critically low: 95% of papers had significant flaws in reporting quality. On average, 7.24 (range: 0-45) new papers were included in each subsequent review, and 14% of studies did not include any new paper into consideration. Almost three-quarters of the studies included less than 10% of available studies. More than half of the reviews did not comment on the previously published reviews at all. Much wasting time and resources could be avoided if referring to previous reviews and following methodological guidelines. Such information chaos is alarming. It is high time to draw conclusions from what we experienced and prepare for future pandemics.Entities:
Keywords: COVID-19; artificial intelligence; diagnosis; medical imaging; methodological credibility; systematic umbrella review
Year: 2022 PMID: 35407664 PMCID: PMC9000039 DOI: 10.3390/jcm11072054
Source DB: PubMed Journal: J Clin Med ISSN: 2077-0383 Impact factor: 4.241
Figure 1. PRISMA flow chart.
Detailed characteristics of included reviews.
| Variable | Number (Percentage) | Mean (Range) |
|---|---|---|
| Number of reviews with the authors from a specific country | ||
| United States of America | 8 (18%) | NA |
| Australia | 4 (9%) | NA |
| China | 4 (9%) | NA |
| India | 4 (9%) | NA |
| United Kingdom | 3 (7%) | NA |
| Other | 22 (49%) | NA |
| Total number of authors of the reviews | 171 | 8 (1-43) |
| Type of publication | ||
| Journal article (mean impact factor: 4.14; range: 0–30.31) | 13 (59%) | NA |
| | 2 (9%) | NA |
| | 2 (9%) | NA |
| | 2 (9%) | NA |
| | 1 (5%) | NA |
| | 1 (5%) | NA |
| | 1 (5%) | NA |
| | 1 (5%) | NA |
| | 1 (5%) | NA |
| | 1 (5%) | NA |
| | 1 (5%) | NA |
| Preprint article | 8 (36%) | NA |
| Conference article | 1 (5%) | NA |
| Was the review specified as systematic by the authors? | ||
| No | 20 (91%) | NA |
| Yes | 2 (9%) | NA |
| Number of reviews that searched a given data source | 50 | 5 (3-7) |
| arXiv | 8 (36%) | NA |
| medRxiv | 6 (27%) | NA |
| PubMed/Medline | 6 (27%) | NA |
| Google Scholar | 6 (27%) | NA |
| bioRxiv | 5 (23%) | NA |
| IEEE Xplore | 3 (14%) | NA |
| ScienceDirect | 3 (14%) | NA |
| ACM Digital Library | 2 (9%) | NA |
| Springer | 2 (9%) | NA |
| MICCAI conference | 1 (5%) | NA |
| IPMI conference | 1 (5%) | NA |
| Embase | 1 (5%) | NA |
| Web of Science | 1 (5%) | NA |
| Elsevier | 1 (5%) | NA |
| Nature | 1 (5%) | NA |
| Number of studies | ||
| Reported by review authors as included | 358 | 51 (20–107) |
| Applicable for this review question (total) | 451 | 21 (1–106) |
| Applicable for this review question (unique only) | 165 | 7.5 (0–11) |
Figure 2. Quality graph: our judgements on each AMSTAR 2 item, presented as the percentage of all included studies; * denotes critical domains.
Figure 3. Quality of reporting graph: our judgements on each PRISMA-DTA item, presented as averages (with 95% confidence intervals shown as black lines) across all included studies. Different shades of blue are used purely to improve the chart’s clarity.
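The per-item averages and 95% confidence intervals in Figure 3 amount to a simple computation over the extracted compliance scores. Below is a minimal sketch with hypothetical scores for a single PRISMA-DTA item; the 1.0/0.5/0.0 scoring scale and the normal-approximation interval are assumptions for illustration, not the authors' documented method.

```python
import numpy as np

# Hypothetical PRISMA-DTA compliance scores for ONE item across the
# 22 reviews (1.0 = reported, 0.5 = partially reported, 0.0 = not
# reported); the real values would come from the extraction sheet.
item_scores = np.array([1.0, 0.5, 0.0, 0.5, 1.0, 0.0, 0.5, 0.5,
                        0.0, 1.0, 0.5, 0.0, 0.0, 0.5, 1.0, 0.5,
                        0.0, 0.5, 0.5, 0.0, 1.0, 0.5])

mean = item_scores.mean()
# 95% CI via the normal approximation: mean +/- 1.96 * standard error.
se = item_scores.std(ddof=1) / np.sqrt(item_scores.size)
print(f"item average: {mean:.2f} "
      f"(95% CI {mean - 1.96 * se:.2f} to {mean + 1.96 * se:.2f})")
```

Repeating this over all items yields the bar heights and black interval lines that the figure plots.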
Figure 4. Cumulative chart of included, available (by date), and newly introduced primary papers across the discussed reviews.
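The statistics behind Figure 4 and the abstract (a mean of 7.24 newly introduced papers per review; 14% adding none) reduce to a running set union over chronologically ordered reviews. A minimal sketch follows, with invented study IDs and dates standing in for the real extraction data:

```python
from datetime import date

# Hypothetical chronologically ordered reviews, each mapped to the set
# of primary-study IDs it included; all IDs and dates are invented.
reviews = [
    (date(2020, 4, 1),  {"s01", "s02", "s03"}),
    (date(2020, 5, 15), {"s02", "s03", "s04", "s05", "s06"}),
    (date(2020, 7, 1),  {"s03", "s05"}),   # adds nothing new
]

covered = set()    # cumulative pool of primary studies seen so far
new_counts = []
for published, included in reviews:
    newly_introduced = included - covered
    new_counts.append(len(newly_introduced))
    covered |= included

print("new papers per review:", new_counts)   # [3, 3, 0]
print("mean newly introduced:", sum(new_counts) / len(new_counts))
print("share adding no new paper:",
      new_counts.count(0) / len(new_counts))
```

Applied to the 22 extracted study sets, the same running union would reproduce the per-review "new paper" counts that the figure accumulates.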