
Accuracy of novel antigen rapid diagnostics for SARS-CoV-2: A living systematic review and meta-analysis.

Lukas E Brümmer1, Stephan Katzenschlager2, Mary Gaeddert1, Christian Erdmann3, Stephani Schmitz1, Marc Bota4, Maurizio Grilli5, Jan Larmann2, Markus A Weigand2, Nira R Pollock6, Aurélien Macé7, Sergio Carmona7, Stefano Ongarello7, Jilian A Sacks7, Claudia M Denkinger1,8.   

Abstract

BACKGROUND: SARS-CoV-2 antigen rapid diagnostic tests (Ag-RDTs) are increasingly being integrated into testing strategies around the world. Studies of Ag-RDTs have shown variable performance. In this systematic review and meta-analysis, we assessed the clinical accuracy (sensitivity and specificity) of commercially available Ag-RDTs.
METHODS AND FINDINGS: We registered the review on PROSPERO (registration number: CRD42020225140). We systematically searched multiple databases (PubMed, Web of Science Core Collection, medRxiv, bioRxiv, and FIND) for publications evaluating the accuracy of Ag-RDTs for SARS-CoV-2 up until 30 April 2021. Descriptive analyses of all studies were performed, and when more than 4 studies were available, a random-effects meta-analysis was used to estimate pooled sensitivity and specificity in comparison to reverse transcription polymerase chain reaction (RT-PCR) testing. We assessed heterogeneity by subgroup analyses, and rated study quality and risk of bias using the QUADAS-2 assessment tool. From a total of 14,254 articles, we included 133 analytical and clinical studies resulting in 214 clinical accuracy datasets with 112,323 samples. Across all meta-analyzed samples, the pooled Ag-RDT sensitivity and specificity were 71.2% (95% CI 68.2% to 74.0%) and 98.9% (95% CI 98.6% to 99.1%), respectively. Sensitivity increased to 76.3% (95% CI 73.1% to 79.2%) if analysis was restricted to studies that followed the Ag-RDT manufacturers' instructions. LumiraDx showed the highest sensitivity, with 88.2% (95% CI 59.0% to 97.5%). Of instrument-free Ag-RDTs, Standard Q nasal performed best, with 80.2% sensitivity (95% CI 70.3% to 87.4%). Across all Ag-RDTs, sensitivity was markedly better on samples with lower RT-PCR cycle threshold (Ct) values, i.e., <20 (96.5%, 95% CI 92.6% to 98.4%) and <25 (95.8%, 95% CI 92.3% to 97.8%), in comparison to those with Ct ≥ 25 (50.7%, 95% CI 35.6% to 65.8%) and ≥30 (20.9%, 95% CI 12.5% to 32.8%). Testing in the first week from symptom onset resulted in substantially higher sensitivity (83.8%, 95% CI 76.3% to 89.2%) compared to testing after 1 week (61.5%, 95% CI 52.2% to 70.0%).
The best Ag-RDT sensitivity was found with anterior nasal sampling (75.5%, 95% CI 70.4% to 79.9%), in comparison to other sample types (e.g., nasopharyngeal, 71.6%, 95% CI 68.1% to 74.9%), although CIs were overlapping. Concerns of bias were raised across all datasets, and financial support from the manufacturer was reported in 24.1% of datasets. Our analysis was limited by the included studies' heterogeneity in design and reporting.
CONCLUSIONS: In this study we found that Ag-RDTs detect the vast majority of SARS-CoV-2-infected persons within the first week of symptom onset and those with high viral load. Thus, they can have high utility for diagnostic purposes in the early phase of disease, making them a valuable tool to fight the spread of SARS-CoV-2. Standardization in conduct and reporting of clinical accuracy studies would improve comparability and use of data.


Year:  2021        PMID: 34383750      PMCID: PMC8389849          DOI: 10.1371/journal.pmed.1003735

Source DB:  PubMed          Journal:  PLoS Med        ISSN: 1549-1277            Impact factor:   11.069


Introduction

As the COVID-19 pandemic continues around the globe, antigen rapid diagnostic tests (Ag-RDTs) for SARS-CoV-2 are seen as an important diagnostic tool to fight the virus’s spread [1,2]. The number of Ag-RDTs on the market is increasing constantly [3]. Initial data from independent evaluations suggest that the performance of SARS-CoV-2 Ag-RDTs may be lower than what is reported by the manufacturers. In addition, Ag-RDT accuracy seems to vary substantially between tests [4-6]. With the increased availability of Ag-RDTs, an increasing number of independent validations have been published. Such evaluations differ widely in their quality, methods, and results, making it difficult to assess the true performance of the respective tests [7]. To inform decision makers on the best choice of individual tests, an aggregated, widely available, and frequently updated assessment of the quality, performance, and independence of the data is urgently needed. While other systematic reviews have been published, they include data only up until November 2020 [8-11], exclude preprints [12], or were industry sponsored [13]. In addition, only 1 assessed the quality of studies in detail, with data up until November 2020 [7,11]. With our systematic review and meta-analysis, we aim to close this gap in the literature and link to a website (https://www.diagnosticsglobalhealth.org) that is regularly updated.

Methods

We developed a study protocol following standard guidelines for systematic reviews [14,15], which is available in S1 Text. We also completed the PRISMA checklist (S1 PRISMA Checklist). Furthermore, we registered the review on PROSPERO (registration number: CRD42020225140).

Search strategy

We performed a search of the databases PubMed, Web of Science, medRxiv, and bioRxiv using search terms that were developed with an experienced medical librarian (M. Grilli) using combinations of subject headings (when applicable) and text-words for the concepts of the search question. The main search terms were “Severe Acute Respiratory Syndrome Corona-virus 2,” “COVID-19,” “Betacoronavirus,” “Coronavirus,” and “Point of Care Testing.” The full list of search terms is available in S2 Text. We also searched the Foundation for Innovative New Diagnostics (FIND) website (https://www.finddx.org/sarscov2-eval-antigen/) for relevant studies manually. We performed the search up until 30 April 2021. No language restrictions were applied.

Inclusion criteria

We included studies evaluating the accuracy of commercially available Ag-RDTs to establish a diagnosis of SARS-CoV-2 infection, against reverse transcription polymerase chain reaction (RT-PCR) or cell culture as reference standard. We included all study populations irrespective of age, presence of symptoms, or study location. We considered cohort studies, nested cohort studies, case–control or cross-sectional studies, and randomized studies. We included both peer-reviewed publications and preprints. We excluded studies in which patients were tested for the purpose of monitoring or ending quarantine. Also, publications with a population size smaller than 10 were excluded. Although the size threshold of 10 is arbitrary, such small studies are more likely to give unreliable estimates of sensitivity and specificity.

Index tests

Ag-RDTs for SARS-CoV-2 aim to detect infection by recognizing viral proteins. Most Ag-RDTs use specific labeled antibodies attached to a nitrocellulose matrix strip, to capture the virus antigen. Successful binding of the antibodies to the antigen either is detected visually (through the appearance of a line on the matrix strip [lateral flow assay]) or requires a specific reader for fluorescence detection. Microfluidic enzyme-linked immunosorbent assays have also been developed. Ag-RDTs typically provide results within 10 to 30 minutes [6].

Reference standard

Viral culture detects viable virus that is relevant for transmission but is available in research settings only. Since RT-PCR tests are more widely available and SARS-CoV-2 RNA (as reflected by RT-PCR cycle threshold [Ct] value) highly correlates with SARS-CoV-2 antigen quantities, we considered it an acceptable reference standard for the purposes of this systematic review [16]. It is of note that there is currently no international standard for the classification of viral load available.
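Each clinical accuracy dataset ultimately reduces to a 2 × 2 table of Ag-RDT results against the RT-PCR reference, from which sensitivity and specificity follow directly. As a minimal illustration only (the review itself derives pooled estimates from meta-analytic models, and individual studies may use other CI methods), the sketch below computes both measures with Wilson score intervals on hypothetical counts:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion (illustrative choice of CI method)."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (centre - half, centre + half)

def accuracy_vs_reference(tp, fp, fn, tn):
    """Sensitivity and specificity of an Ag-RDT against the RT-PCR reference.
    tp/fn: Ag-RDT positive/negative among RT-PCR positives;
    tn/fp: Ag-RDT negative/positive among RT-PCR negatives."""
    return {
        "sensitivity": tp / (tp + fn), "sens_ci": wilson_ci(tp, tp + fn),
        "specificity": tn / (tn + fp), "spec_ci": wilson_ci(tn, tn + fp),
    }
```

For example, `accuracy_vs_reference(80, 2, 20, 198)` (hypothetical counts) yields 80.0% sensitivity and 99.0% specificity with their Wilson intervals.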

Study selection and data extraction

Two reviewers (LEB and CE, LEB and SS, or LEB and MB) reviewed the titles and abstracts of all publications identified by the search algorithm independently, followed by a full-text review for those eligible, to select the articles for inclusion in the systematic review. Any disputes were solved by discussion or by a third reviewer (CMD). A full list of the parameters extracted is included in S1 Table, and the data extraction file is available at https://zenodo.org/record/4924035#.YOlzWS223RZ. Studies that assessed multiple Ag-RDTs or presented results based on differing parameters (e.g., various sample types) were considered as individual datasets. At first, 4 authors (SK, CE, SS, and MB) extracted 5 randomly selected papers in parallel to align data extraction methods. Afterwards, data extraction and the assessment of methodological quality and independence from test manufacturers (see below) was performed by 1 author per paper (SK, CE, SS, or MB) and controlled by a second (LEB, SK, SS, or MB). Any differences were resolved by discussion or by consulting a third author (CMD).

Study types

We differentiated between clinical accuracy studies (performed on clinical samples) and analytical accuracy studies (performed on spiked samples with a known quantity of virus). Analytical accuracy studies can differ widely in methodology, impeding an aggregation of their results. Thus, while we extracted the data for both kinds of studies, we only considered data from clinical accuracy studies as eligible for the meta-analysis. Separately, we summarized the results of analytical studies and compared them with the results of the meta-analysis for individual tests.

Assessment of methodological quality

The quality of the clinical accuracy studies was assessed by applying the QUADAS-2 tool [17]. The tool evaluates 4 domains: patient selection, index test, reference standard, and flow and timing. For each domain, the risk of bias is analyzed using different signaling questions. Beyond the risk of bias, the tool also evaluates the applicability of each included study to the research question for every domain. The QUADAS-2 tool was adjusted to the needs of this review and can be found in S3 Text.

Assessment of independence from manufacturers

We examined whether a study received financial support from a test manufacturer (including the free provision of Ag-RDTs), whether any study author was affiliated with a test manufacturer, and whether a respective conflict of interest was declared. Studies were judged not to be independent from the test manufacturer if at least 1 of these aspects was present; otherwise, they were considered to be independent.
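The independence judgment is a simple disjunction over the three recorded aspects. A minimal sketch (the parameter names are ours, not taken from the review's extraction form):

```python
def is_independent(manufacturer_funding: bool,
                   author_affiliation: bool,
                   declared_conflict: bool) -> bool:
    """A study is judged NOT independent if at least one aspect is present.
    Free provision of Ag-RDTs counts as manufacturer funding."""
    return not (manufacturer_funding or author_affiliation or declared_conflict)
```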

Statistical analysis and data synthesis

We extracted raw data from the studies and recalculated performance estimates where possible based on the extracted data. The raw data can be found in S2 Table. We prepared forest plots for the sensitivity and specificity of each test and visually evaluated the heterogeneity between studies. If 4 or more datasets were available with at least 20 positive RT-PCR samples per dataset for a predefined analysis, a meta-analysis was performed. We report point estimates of sensitivity and specificity for SARS-CoV-2 detection compared to the reference standard along with 95% confidence intervals (CIs) using a bivariate model (implemented with the “reitsma” command from the R package “mada,” version 0.5.10). When there were fewer than 4 studies for an index test, only a descriptive analysis was performed, and accuracy ranges are reported. In subgroup analyses where papers presented data only on sensitivity, a univariate random-effects inverse variance meta-analysis was performed (using the “metagen” command from the R package “meta,” version 4.11–0). We predefined subgroups for meta-analysis based on the following characteristics: Ct value range, sampling and testing procedure in accordance with manufacturer’s instructions as detailed in the instructions for use (IFU) (henceforth called IFU-conforming) versus not IFU-conforming, age (<18 versus ≥18 years), sample type, presence or absence of symptoms, symptom duration (<7 days versus ≥7 days), viral load, and type of RT-PCR used. In an effort to use as much of the heterogeneous data as possible, the cutoffs for the Ct value groups were relaxed by 2–3 points within each range. The <20 group included values reported up to ≤20, the <25 group included values reported as ≤24 or <25 or 20–25, and the <30 group included values from ≤29 to ≤33 and 25–30. The ≥25 group included values reported as ≥25 or 25–30, and the ≥30 group included values from ≥30 to ≥35. 
For the same reason, when categorizing by age, the age group <18 years (children) included samples from persons whose age was reported as <16 or <18 years, whereas the age group ≥18 years (adults) included samples from persons whose age was reported as ≥16 years or ≥18 years. For categorization by sample type, we assessed (1) nasopharyngeal (NP) alone or combined with other (e.g., oropharyngeal [OP]), (2) OP alone, (3) anterior nasal (AN) or mid-turbinate (MT), (4) a combination of bronchoalveolar lavage and throat wash (BAL/TW), or (5) saliva. Analyses were performed using R 4.0.3 (R Foundation for Statistical Computing, Vienna, Austria). We aimed to perform meta-regression to examine the impact of covariates including symptom duration and Ct value range. We also performed the Deeks test for funnel-plot asymmetry, as recommended to investigate publication bias for diagnostic test accuracy meta-analyses [18] (using the “midas” command in Stata, version 15); a p-value < 0.10 for the slope coefficient indicates significant asymmetry.
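The univariate pooling step described above can be sketched outside R as well. The review used the bivariate “reitsma” model (“mada”) and inverse-variance pooling (“metagen” from “meta”); the Python sketch below implements only a simplified analogue, DerSimonian–Laird random-effects pooling of logit-transformed sensitivities, on hypothetical (true positives, RT-PCR positives) counts:

```python
import math

def pool_logit_sensitivity(datasets):
    """DerSimonian-Laird random-effects pooling of logit-transformed sensitivities.
    datasets: list of (true_positives, rt_pcr_positives) tuples (hypothetical data).
    Returns (pooled sensitivity, (95% CI lower, 95% CI upper))."""
    y, v = [], []
    for tp, n in datasets:
        # 0.5 continuity correction guards against 0% / 100% sensitivities
        succ, fail = tp + 0.5, n - tp + 0.5
        p = succ / (succ + fail)
        y.append(math.log(p / (1 - p)))          # logit-transformed sensitivity
        v.append(1.0 / succ + 1.0 / fail)        # approximate within-study variance
    w = [1.0 / vi for vi in v]                   # fixed-effect (inverse-variance) weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(y) - 1)) / c) if c > 0 else 0.0  # between-study variance
    w_star = [1.0 / (vi + tau2) for vi in v]     # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    expit = lambda x: 1.0 / (1.0 + math.exp(-x))
    return expit(mu), (expit(mu - 1.96 * se), expit(mu + 1.96 * se))
```

This is a sketch of the general technique only; it does not reproduce the review's bivariate model, which pools sensitivity and specificity jointly while accounting for their correlation.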

Sensitivity analysis

Two types of sensitivity analyses were planned: estimation of sensitivity and specificity excluding case–control studies, and estimation of sensitivity and specificity excluding non-peer-reviewed studies. We compared the results of each sensitivity analysis against the overall results to assess the potential bias introduced by considering case–control studies and non-peer-reviewed studies.

Results

Summary of studies

The systematic search resulted in 14,254 articles. After removing duplicates, 8,921 articles were screened, and 266 papers were considered eligible for full-text review. Of these, 148 were excluded because they did not present primary data [13,19-131] or the Ag-RDT was not commercially available [16,132-164], leaving 133 studies to be included in the systematic review (Fig 1) [4,165-296].
Fig 1

PRISMA flow diagram.

Based on Page et al. [297]. Ag-RDT, antigen rapid diagnostic test; IFU, instructions for use; sens., sensitivity; spec., specificity.

At the end of the data extraction process, 37 studies were still in preprint form [4,171,173,174,177,180,190,192,201,204,205,207,211,214-216,218,220,222,223,225,227,231,233,234,238,240,244,247,253,257,265,267,284,287,290,293]. All studies were written in English, except for 2 in Spanish [175,280]. Out of the 133 studies, 9 reported analytical accuracy [173,191,198,208,227,256,274,275,282], and the remaining 124 reported clinical accuracy. The clinical accuracy studies were divided into 214 datasets, while the 9 analytical accuracy studies accounted for 63 datasets. A total of 61 different Ag-RDTs were evaluated (48 lateral flow with visual readout and 12 requiring an automated reader), with 56 being assessed in a clinical accuracy study. Thirty-nine studies reported data for more than 1 test, and 19 of these studies conducted a head-to-head assessment, i.e., testing at least 2 Ag-RDTs on the same sample or participant. The reference method was RT-PCR in all except 1 study, which used viral culture [281]. The most common reasons for testing were the occurrence of symptoms (55 datasets; 19.9%), screening independent of symptoms (19 datasets; 6.9%), and close contact with a SARS-CoV-2-confirmed case (10 datasets; 3.6%). In 79 datasets (28.6%), persons were tested due to more than 1 of these reasons, and for 163 datasets (59.1%), the reason for testing was unclear. In total, 113,242 Ag-RDTs were performed, 112,323 (99.2%) in clinical accuracy studies and 919 (0.8%) in analytical accuracy studies. In the clinical accuracy studies, the mean number of samples per study was 525 (range 16 to 6,954). Only 4,752 (4.2%) tests were performed on pediatric samples (age group <18 years), and 21,351 (18.9%) on samples from adults (age group ≥18 years). For the remaining 87,139 (76.9%) samples, the age of the persons tested was not specified.
Symptomatic patients comprised 36,981 (32.7%) samples; 32,799 (29.0%) samples originated from asymptomatic patients, and for 42,462 (38.4%) samples, the patient’s symptom status could not be identified. The most common sample type evaluated was NP and mixed NP/OP (67,036 samples, 59.2%), followed by AN/MT (27,045 samples, 23.9%). There was substantially less testing done for the other sample types, with 6,254 (5.5%) tests done from OP samples, 1,351 (1.2%) from saliva, and 219 (0.2%) from BAL/TW, and for 11,337 (10.0%) tests, we could not identify the type of sample. Of the datasets assessing clinical accuracy, 89 (41.6%) involved testing according to the manufacturers’ recommendations (i.e., IFU-conforming), while 100 (46.7%) were not IFU-conforming, and for 25 (11.7%) it was unclear. The most common deviations from the IFU were (1) use of samples that were prediluted in transport media not recommended by the manufacturer (80 datasets; 7 unclear), (2) use of banked samples (60 datasets; 14 unclear), and (3) use of a sample type that was not recommended for the Ag-RDT (17 datasets; 8 unclear). A summary of the tests evaluated in clinical accuracy studies, including study identification, sample size, sample type, sample condition, and IFU conformity, can be found in Table 1. The Panbio test by Abbott (Germany; henceforth called Panbio) was reported the most frequently, with 39 (18.2%) datasets and 28,089 (25.0%) tests, while the Standard Q test by SD Biosensor (South Korea; distributed in Europe by Roche, Germany; henceforth called Standard Q) was assessed in 37 (17.3%) datasets, with 16,820 (15.0%) tests performed. Detailed results for each clinical accuracy study are available in S1 Fig.
Table 1

Clinical accuracy data for Ag-RDTs against SARS-CoV-2.

Reference, first author, dataset ID | Study location | Sample type | Sample condition | IFU-conforming | Sample size | Sensitivity (95% CI) | Specificity (95% CI)
AAZ, COVID-VIRO (LFA)
[287] Schwob, a35.3 | Switzerland | NP | Fresh | Yes | 324 | 84.1% (76.9%, 89.7%) | 100% (98.0%*, 100%*)
Abbott, BinaxNOW (LFA)
[224] Pollock, f17.1 | US | AN | Fresh | Yes | 2,308 | 77.4% (72.2%, 82.1%) | 99.4% (99.0%, 99.7%)
[283] Pilarowski, a29.1 | US | AN/MT | Fresh | Yes | 878 | 57.7% (36.9%*, 76.6%*) | 100%* (99.6%*, 100%*)
[197] James, f23.1 | US | AN | Fresh | Yes | 2,339 | 56.6% (48.3%*, 64.6%*) | 99.9% (99.6%*, 100%)
[217] Okoye, f51.1 | US | MT | Fresh | Yes | 2,645 | 53.3% (37.9%*, 68.3%*) | 100% (99.9%, 100%)
Abbott, Panbio (LFA)
[175] Domínguez Fernández, f49.1 | Spain | Unclear | Fresh | Unclear | 30 | 95.0% (75.1%*, 99.9%*) | 100% (69.2%*, 100%*)
[250] Alemany, a02.1 | Spain | NP | Banked | No | 919 | 93.4% (91.5%, 95.0%) | 100% (95.8%, 100%)
[184] FIND, f42.2 | Germany | NP | Fresh | Yes | 281 | 90.9% (78.3%*, 97.5%*) | 99.2% (97.0%, 99.9%*)
[276] Merino-Amador, a25.1 | Spain | NP | Fresh | Yes | 958 | 90.5% (87.0%*, 93.4%*) | 98.8% (97.6%*, 99.5%*)
[267] Krüger, a52.1 | Germany | NP | Fresh | Yes | 1,034 | 87.5% (79.6%*, 93.2%*) | 99.9% (99.4%, 100%)
[287] Schwob, a35.2 | Switzerland | NP | Fresh | Yes | 271 | 86.1% (78.6%, 91.7%) | 100% (97.6%*, 100%*)
[235] Stokes, f65.1 | Canada | NP | Fresh | Yes | 1,641 | 86.2%* (81.5%*, 90.1%*) | 99.9% (99.5%, 100%)
[252] Berger, a05.1 | Switzerland | NP | Fresh | Yes | 535 | 85.5% (78.0%, 91.2%) | 100% (99.1%, 100%)
[177] Faíco-Filho, f63.1 | Brazil | NP | Fresh | Yes | 127 | 84.3% (73.6%*, 91.9%*) | 98.2%* (90.6%*, 100%*)
[196] Jääskeläinen, f50.3 | Finland | NP | Banked | No | 190 | 82.9%* (76.0%*, 88.5%*) | 100% (90.7%*, 100%*)
[247] Abdulrahman, a01.1 | Bahrain | AN/MT | Fresh | No | 4,183 | 82.1% (79.2%, 84.8%) | 99.1% (98.8%, 99.4%)
[263] Gremmels, a12.2 | Netherlands | NP | Fresh | Yes | 208 | 81.0% (69.1%*, 89.8%*) | 100% (97.5%, 100%)
[214] Ngo Nsoga, f28.1 | Switzerland | OP | Fresh | No | 402 | 81.0% (74.2%, 86.6%) | 99.1% (96.9%, 99.9%)
[245] Yin, f82.2 | Belgium | NP | Fresh | Yes | 101 | 80.8% (68.1%, 89.2%) | Not provided
[249] Albert, a03.1 | Spain | NP | Fresh | Yes | 412 | 79.6% (66.5%*, 89.4%*) | 100% (99.0%, 100%)
[250] Alemany, a02.2 | Spain | AN/MT | Banked | No | 487 | 79.5% (71.0%, 86.4%) | 98.6%* (96.9%, 99.6%)
[258] Fenollar, a11.1 | France | NP | Fresh | Yes | 341 | 75.5% (69.0%*, 81.2%*) | 94.9% (89.8%*, 97.9%*)
[270] Linares, a20.1 | Spain | NP | Fresh | Unclear | 255 | 73.3% (60.3%*, 83.9%*) | 100% (98.1%*, 100%*)
[263] Gremmels, a12.1 | Netherlands | NP | Fresh | Yes | 1,367 | 72.7%* (64.5%, 79.9%) | 100% (99.7%, 100%)
[192] Halfon, f18.1 | France | NP | Unclear | No | 200 | 72.0% (62.1%*, 80.5%*) | 99.0% (94.6%*, 100%)
[253] Bulilete, a07.1 | Spain | NP | Fresh | Yes | 1,362* | 71.4% (63.2%*, 78.7%) | 99.8% (99.4%, 99.9%)
[165] Akingba, f30.1 | South Africa | NP | Fresh | Unclear | 657* | 69.7%* (61.5%*, 77.0%*) | 99.4%* (98.3%*, 99.9%*)
[174] Del Vecchio, f66.1 | Italy | Unclear | Fresh | Unclear | 1,441 | 68.9% (55.7%, 80.1%) | 99.9% (99.6%, 100%)
[178] Favresse, f31.2 | Belgium | NP | Fresh | No | 188 | 67.7% (57.4%, 76.9%) | 100% (96.1%, 100%)
[257] Drevinek, a10.1 | Czech Republic | NP | Fresh | Yes | 591 | 66.4% (59.8%*, 72.5%*) | 100% (99.0%, 100%)
[205] L’Huillier, f72.1 | Switzerland | NP | Fresh | Yes | 822 | 65.5%* (56.3%*, 74.0%) | 99.9%* (99.2%*, 100%)
[221] Pérez-García, f52.2 | Spain | NP | Banked | No | 320 | 60.0% (52.2%, 67.4%) | 100% (97.6%, 100%)
[248] Agulló, a56.1 | Spain | NP | Fresh | Yes | 652* | 57.6%* (48.7%*, 66.1%*) | 99.8% (98.9%*, 100%)
[267] Krüger, a52.2 | Germany | OP | Fresh | No | 74 | 50.0% (1.3%, 98.7%) | 100% (94.9%, 100%)
[286] Schildgen, a33.2 | Germany | BAL/TW | Unclear | No | 73 | 50.0% (34.2%*, 65.8%*) | 77.4% (58.9%*, 90.4%*)
[292] Torres, a37.1 | Spain | NP | Fresh | Yes | 634 | 48.1% (36.7%*, 59.6%*) | 100% (99.3%, 100%)
[244] Wagenhäuser, f89.2 | Germany | OP | Fresh | No | 1,029 | 46.7% (24.8%, 69.9%) | 99.6% (99.0%, 99.9%)
[243] Villaverde, f55.1 | Spain | NP | Fresh | Yes | 1,620 | 45.4% (34.1%, 57.2%) | 99.8% (99.4%, 99.9%)
[248] Agulló, a56.2 | Spain | AN/MT | Fresh | No | 659 | 44.7% (36.1%, 53.6%) | 100% (99.3%*, 100%)
[279] Olearo, a54.2 | Germany | OP | Unclear | No | 184 | 44.0%* (33.2%*, 55.3%*) | 100% (96.4%*, 100%)
[170] Caruana, f34.2 | Switzerland | NP | Fresh | No | 532 | 41.2% (32.1%*, 50.8%*) | 99.5% (98.3%*, 99.9%*)
[167] Baro, f33.1 | Spain | NP | Banked | No | 286 | 38.6% (29.1%, 48.8%) | 99.5% (97.0%, 100%)
[248] Agulló, a56.3 | Spain | Saliva | Fresh | No | 610 | 23.1% (16.0%*, 31.7%*) | 100% (99.2%*, 100%)
[213] Muhi, f90.1 | Australia | NP | Fresh | Yes | 2,413 | Not provided | 100% (99.7%, 100%)
Abbott, Panbio (nasal sampling) (LFA)
[184] FIND, f42.1 | Germany | AN/MT | Fresh | Yes | 281 | 86.4% (72.6%*, 94.8%*) | 99.2% (97.0%, 99.9%*)
Access Bio, CareStart COVID-19 Antigen Test (LFA)
[225] Pollock, f59.1 | US | AN | Fresh | Yes | 1,498 | 57.7% (51.1%, 64.1%) | 98.3% (97.5%, 99.0%)
Assure Tech, Ecotest COVID-19 Antigen Rapid Test (LFA)
[194] Homza, f87.1 | Czech Republic | NP | Fresh | Yes | 318 | 75.7% (66.5%, 83.5%) | 96.7% (93.3%, 98.7%)
Becton, Dickinson and Company, BD Veritor (requires reader)
[281] Pekosz, a28.1 | US | NP | Fresh | No | 251 | 96.4% (81.7%*, 99.9%*) | 98.7% (96.1%, 99.7%)
[293] Van der Moeren, a39.1 | Netherlands | MT/OP | Banked | No | 351* | 94.1% (71.1%, 100%) | 100% (98.9%, 100%)
[190] Gomez Marti, f46.2 | US | AN | Fresh | Unclear | Unknown | 93.8% (79.2%*, 99.2%*) | Not provided
[245] Yin, f82.1 | Belgium | NP | Fresh | Yes | 177 | 87.7% (80.0%, 92.7%) | Not provided
[296] Young, a43.1 | US | NP | Banked | No | 251 | 76.3%* (59.8%*, 88.6%*) | 99.5%* (97.4%*, 99.9%*)
[202] Kilic, f71.1 | US | AN | Fresh | Yes | 1,384 | 66.4% (57.0%, 74.9%) | 98.8% (98.1%, 99.3%)
[231] Schuit, f64.1 | Netherlands | NP | Fresh | No | 2,678 | 63.9% (57.4%, 70.1%) | 99.6% (99.3%, 99.8%)
[170] Caruana, f34.4 | Switzerland | NP | Fresh | No | 532 | 41.2% (32.1%*, 50.8%*) | 99.8%* (98.7%*, 100%*)
Becton, Dickinson and Company, Hometest (LFA)
[234] Stohr, f45.1 | Netherlands | AN | Fresh | Unclear | 1,604 | 48.9% (41.3%*, 56.5%*) | 99.9% (99.5%, 100%)
Beijing Savant Biotechnology, SARS-CoV-2 detection kit (LFA)
[295] Weitzel, a41.3 | Chile | NP/OP | Banked | No | 109 | 16.7% (9.2%*, 26.8%*) | 100% (88.8%*, 100%)
Biotime, COVID-19 Antigen Test Cassette (LFA)
[232] Seitz, f68.1 | Austria | Saliva | Fresh | Yes | 40 | 44.4% (21.5%*, 69.2%*) | 100% (84.6%*, 100%*)
Bionote, NowCheck (LFA)
[185] FIND, f91.1 | Brazil | AN | Fresh | Yes | 218 | 89.9% (81.0%*, 95.5%*) | 98.6% (94.9%, 99.8%*)
[185] FIND, f91.2 | Brazil | NP | Fresh | Yes | 218 | 89.9% (81.0%*, 95.5%*) | 98.6% (94.9%, 99.8%*)
[259] FIND, a61.1 | Brazil | NP | Fresh | Yes | 400 | 89.2% (81.5%*, 94.5%*) | 97.3% (94.8%, 98.8%*)
[228] Rottenstreich, f53.1 | Israel | NP | Unclear | Unclear | 1,326 | 55.6% (21.2%, 86.3%) | 100% (99.7%, 100%)
Biotical Health, SARS-CoV-2 Ag Card (LFA)
[178] Favresse, f31.1 | Belgium | NP | Fresh | No | 188 | 66.7% (56.3%, 76.0%) | 98.9% (94.1%, 99.9%)
Boditech Medical, iChroma COVID-19 Ag Test (requires reader)
[181] FIND, f39.1 | Switzerland | NP | Fresh | Yes | 232 | 73.2% (57.1%*, 85.8%*) | 100% (98.0%, 100%)
CerTest Biotec, SARS-CoV-2 one step test card (LFA)
[221] Pérez-García, f52.1 | Spain | NP | Banked | No | 320 | 53.5% (45.7%, 61.2%) | 100% (97.6%, 100%)
Coris BioConcept, COVID-19 Ag Respi-Strip (LFA)
[245] Yin, f82.3 | Belgium | NP | Fresh | Yes | 135 | 80.0% (69.2%, 87.7%) | Not provided
[277] Mertens, a48.1 | Belgium | NP | Banked | No | 328 | 57.6% (48.7%*, 66.1%*) | 99.5% (97.2%*, 100%*)
[269] Lambert-Niclot, a18.1 | France | NP | Fresh | No | 138 | 50.0% (39.5%, 60.5%) | 100% (92.0%*, 100%)
[4] Krüger, a17.3 | Germany/England | NP/OP | Unclear | No | 417 | 50.0% (21.5%*, 78.5%) | 95.8% (93.4%, 97.4%)
[172] Ciotti, f24.1 | Italy | NP | Fresh | Unclear | 50 | 30.8% (17.0%, 47.6%) | 100% (71.5%, 100%)
[288] Scohy, a34.1 | Belgium | NP | Fresh | No | 148 | 30.2% (21.7%, 39.9%) | 100% (91.6%*, 100%*)
[294] Veyrenche, a40.1 | France | NP | Fresh | No | 65 | 28.9%* (16.4%*, 44.3%*) | 100% (83.2%, 100%)
Denka, Quick Navi (LFA)
[237] Takeuchi, f12.1 | Japan | NP | Fresh | Unclear | 1,186 | 86.7% (78.6%, 92.5%) | 100% (99.7%, 100%)
[238] Takeuchi, f60.1 | Japan | AN | Fresh | Unclear | 862 | 72.5% (58.3%, 84.1%) | 100% (99.5%*, 100%)
DiaSorin, LIAISON SARS-CoV-2 Ag (LFA)
[206] Lefever, f70.1 | Belgium | NP | Banked | No | 414 | 67.6%* (60.8%*, 74.0%*) | 100% (98.3%*, 100%)
Dräger, Antigen Test SARS-CoV-2 (LFA)
[218] Osmanodja, f79.1 | Germany | NP/OP | Fresh | Yes | 379 | 88.6% (78.7%, 94.9%) | 99.7% (98.2%, 100%)
E25Bio, Rapid Diagnostic Test (LFA)
[223] Pickering, f73.2 | UK | AN/OP | Banked | No | 200 | 75.0% (65.3%*, 83.1%*) | 86% (77.6%*, 92.1%*)
ECO Diagnóstica, COVID-19 Ag (LFA)
[180] Filgueiras, f14.1 | Brazil | NP | Fresh | Unclear | 150 | 69.1% (55.2%*, 80.9%*) | 98.8% (93.5%*, 100%)
Fujirebio, ESPLINE SARS-CoV-2 (LFA)
[290] Takeda, a50.1 | Japan | NP | Unclear | No | 162 | 80.6%* (68.6%*, 89.6%*) | 100%* (96.4%*, 100%*)
[186] FIND, f92.1 | Germany | NP/OP | Fresh | No | 723 | 78.6% (69.8%*, 85.8%*) | 100% (99.4%, 100%)
[230] Sberna, f83.1 | Italy | Saliva | Unclear | Unclear | 136 | 8.1% (2.7%, 17.8%) | 100% (95.1%, 100%)
Fujirebio, Lumipulse G SARS-CoV-2 Ag (requires reader)
[189] Gili, f57.2 | Italy | NP | Banked | No | 226 | 100% (96.0%*, 100%*) | 92.1% (90.7%*, 93.4%*)
[189] Gili, f57.1 | Italy | NP | Fresh | No | 1,738 | 90.5% (82.8%*, 95.6%*) | 91.6% (85.5%*, 95.7%*)
[193] Hirotsu, f47.1 | Japan | NP | Banked | No | 1,033 | 92.5% (79.6%*, 98.4%*) | 100%* (99.6%*, 100%*)
[168] Basso, f10.1 | Italy | NP | Fresh | Yes | 234 | 81.6% (71.9%*, 89.1%*) | 93.9%* (88.7%*, 97.2%*)
[166] Asai, f74.1 | Japan | Saliva | Unclear | Yes | 305 | 77.8% (65.5%*, 87.3%*) | 98.3% (95.8%*, 99.5%*)
[168] Basso, f10.2 | Italy | Saliva | Fresh | Yes | 223 | 41.3% (30.4%, 52.8%) | 98.6% (95.0%, 99.8%)
Guangzhou Wondfo Biotech, 2019-nCoV Antigen Test (LFA)
[183] FIND, f41.1 | Switzerland | NP | Fresh | Yes | 328 | 85.7% (73.8%*, 93.6%*) | 100% (98.7%*, 100%*)
Humasis, COVID-19 Ag Test (LFA)
[169] Bruzzone, f86.2 | Italy | Unclear | Banked | No | 21 | 85.7% (63.7%*, 97%*) | Not provided
Healgen, Rapid COVID-19 Ag Test (LFA)
[178] Favresse, f31.3 | Belgium | NP | Fresh | No | 188 | 77.1% (67.4%, 85.1%) | 96.7% (90.8%, 99.3%)
Innova Medical Group, INNOVA SARS-CoV-2 Antigen Rapid Qualitative Test (LFA)
[223] Pickering, f73.1 | UK | AN/OP | Banked | No | 200 | 89.0% (81.2%*, 94.4%*) | 99.0% (94.6%, 100%)
[195] Houston, f25.1 | UK | NP | Fresh | Yes | 242 | 86.4% (81.9%*, 90.2%*) | 95.1% (92.7%*, 96.9%*)
[223] Pickering, f73.10 | UK | AN/OP | Banked | No | 23 | 82.6% (61.2%*, 95.0%*) | Not provided
[223] Pickering, f73.11 | UK | AN/OP | Banked | No | 23 | 82.6% (61.2%*, 95.0%*) | Not provided
[222] Peto, f21.1 | UK | Unclear | Unclear | Unclear | 6,954 | Not provided | 99.7% (99.5%*, 99.8%*)
[222] Peto, f21.4 | UK | Unclear | Unclear | Unclear | 198 | 78.8% (72.4%, 84.3%) | Not provided
[223] Pickering, f73.12 | UK | AN/OP | Banked | No | 23 | 78.3% (56.3%*, 92.5%*) | Not provided
[223] Pickering, f73.8 | UK | AN/OP | Banked | No | 110 | 78.2% (69.3%*, 85.5%*) | Not provided
[222] Peto, f21.3 | UK | Unclear | Unclear | Unclear | 223 | 70.0% (63.5%, 75.9%) | Not provided
[246] Young, f56.1 | UK | NP | Fresh | Unclear | 803 | 62.1%* (55.3%*, 68.7%*) | 100% (99.4%, 100%)
[222] Peto, f21.2 | UK | Unclear | Unclear | Unclear | 372 | 57.5% (52.3%, 62.6%) | Not provided
[179] Ferguson, f85.1 | UK | AN | Fresh | Yes | 720 | 3.2% (0.6%, 15.6%) | 100% (99.5%, 100%)
JOYSBIO Biotechnology, COVID-19 Antigen Rapid Test Kit (LFA)
[182] FIND, f40.1 | Switzerland | NP | Fresh | Yes | 265 | 70.5% (54.8%*, 83.2%*) | 99.1% (96.8%*, 99.9%*)
[194] Homza, f87.2 | Czech Republic | NP | Fresh | Yes | 225 | 57.8% (46.9%, 68.1%) | 98.5% (94.8%, 99.8%)
Lab Care Diagnostics, PathoCatch/ACCUCARE SARS-CoV-2 Antigen Test (LFA)
[239] Thakur, f88.1 | India | NP | Fresh | Yes | 677 | 34.5% (24.5%, 45.6%) | 99.8% (99.1%, 100%)
Lepu Medical Technology, SARS-CoV-2 Antigen Rapid Test Kit (LFA)
[167] Baro, f33.4 | Spain | NP | Banked | No | 286 | 45.5% (35.6%, 55.8%) | 89.2% (83.8%, 93.3%)
Liming Bio, SARS-CoV-2 Ag-RDT (LFA)
[295] Weitzel, a41.2 | Chile | NP/OP | Banked | No | 19 | 0% (0%, 29.9%) | 90.0% (59.6%, 98.2%)
LumiraDx, COVID-19 SARS-CoV-2 Antigen Test (requires reader)
[176] Drain, f43.1 | UK/US | AN | Fresh | Yes | 257 | 97.6% (91.6%*, 99.7%*) | 96.6% (92.6%*, 98.7%*)
[176] Drain, f43.2 | UK/US | NP | Fresh | Yes | 255 | 97.5% (86.8%*, 99.9%*) | 97.7% (94.7%, 99.2%*)
[204] Krüger, f58.1 | Germany | MT | Fresh | Yes | 761 | 82.2% (75.0%*, 88.0%*) | 99.3% (98.3%, 99.7%)
[169] Bruzzone, f86.6 | Italy | Unclear | Banked | No | 23 | 69.6% (47.1%*, 86.8%*) | Not provided
[203] Kohmer, f32.4 | Germany | NP | Fresh | No | 100 | 50.0% (38.1%, 61.9%) | 100% (86.8%, 100%)
[211] Micocci, f77.1 | UK | NP | Fresh | Unclear | 241 | 75.0%* (34.9%*, 96.8%*) | 96.1%* (92.7%*, 98.2%*)
MEDsan, SARS-CoV-2 Antigen Rapid Test (LFA)
[279] Olearo, a54.3 | Germany | OP | Unclear | No | 184 | 45.2%* (34.3%*, 56.5%) | 97.0% (91.5%, 99.4%*)
[244] Wagenhäuser, f89.3 | Germany | OP | Fresh | Yes | 3,221 | 36.5% (24.7%*, 49.6%*) | 99.6% (99.3%, 99.8%)
Mologic, COVID-19 Rapid Antigen Test (LFA)
[187] FIND, f93.1 | Germany | AN/MT | Fresh | Yes | 665 | 90.7% (85.7%*, 94.4%*) | 100% (99.2%, 100%)
nal von minden, NADAL (LFA)
[188] FIND, f94.1 | Switzerland | NP | Fresh | Yes | 462 | 88.4% (78.4%*, 94.9%*) | 99.2% (97.8%, 99.7%)
[236] Strömer, f11.1 | Germany | NP | Banked | No | 124 | 63.7%* (54.6%*, 72.2%*) | Not provided
[244] Wagenhäuser, f89.1 | Germany | OP | Fresh | Yes | 806 | 56.5% (34.5%*, 76.8%*) | 100% (99.5%, 100%)
[203] Kohmer, f32.3 | Germany | NP | Fresh | No | 100 | 24.3% (15.1%, 35.7%) | 100% (86.8%, 100%)
NanoEntek, FREND COVID-19 Ag (requires reader)
[169] Bruzzone, f86.7 | Italy | Unclear | Banked | No | 60 | 93.3% (83.8%*, 98.2%*) | Not provided
NDFOS, ND COVID-19 Ag Test (LFA)
[194] Homza, f87.3 | Czech Republic | NP | Fresh | Yes | 191 | 70.1% (58.6%, 80.0%) | 56.1% (46.4%, 65.4%)
Ortho Clinical Diagnostics, VITROS SARS-CoV-2 Antigen Test (requires reader)
[178] Favresse, f31.5 | Belgium | NP | Fresh | No | 188 | 83.3% (74.4%, 90.2%) | 100% (96.1%, 100%)
Precision Biosensor, Exdia COVID-19 Ag (requires reader)
[170] Caruana, f34.3 | Switzerland | NP | Fresh | No | 532 | 48.3% (38.8%*, 57.8%*) | 99.5% (98.3%*, 99.9%*)
PRIMA Lab, COVID-19 Antigen Rapid Test (LFA)
[169] Bruzzone, f86.3 | Italy | Unclear | Banked | No | 50 | 66.0% (51.2%*, 78.8%*) | Not provided
Quidel, Sofia SARS Antigen FIA (requires reader)
[284] Porte, a32.1 | Chile | NP/OP | Banked | No | 64 | 93.8% (79.2%*, 99.2%*) | 96.9% (83.8%*, 99.9%*)
[196] Jääskeläinen, f50.1 | Finland | NP | Banked | No | 188 | 80.4% (73.1%*, 86.5%*) | 100% (91.2%*, 100%*)
[251] Beck, a04.1 | US | NP | Fresh | Yes | 346 | 77.0% (64.5%*, 86.8%*) | 99.6% (98.1%*, 100%*)
[265] Herrera, a46.1 | US | Unclear | Unclear | Unclear | 1,172 | 76.8% (72.6%, 80.5%) | 99.2% (98.2%, 99.7%)
[190] Gomez Marti, f46.1 | US | MT | Fresh | Unclear | 427 | 72.0% (56.3%*, 84.7%*) | 99.7%* (98.6%*, 100%*)
RapiGEN, Biocredit Covid-19 Ag (LFA)
[289] Shrestha, a36.1 | Nepal | NP | Fresh | Yes | 113 | 85.0% (71.7%*, 93.8%*) | 100% (94.6%*, 100%*)
[260] FIND, a62.1 | Brazil | NP | Fresh | Yes | 476 | 74.4% (65.5%*, 82.0%*) | 98.9%* (97.2%, 99.7%*)
[295] Weitzel, a41.1 | Chile | NP/OP | Banked | No | 109 | 62.0% (50.4%*, 72.7%*) | 100% (88.4%*, 100%)
[233] Shidlovskaya, f61.1 | Russia | NP | Fresh | Yes | 106 | 56.4% (44.7%, 67.6%) | 100% (87.7%, 100%)
[260] FIND, a62.2 | Germany | NP | Fresh | Yes | 1,239 | 52.0% (31.3%*, 72.2%*) | 100% (99.7%, 100%)
[169] Bruzzone, f86.4 | Italy | Unclear | Banked | No | 23 | 39.1% (19.7%*, 61.5%*) | Not provided
[286] Schildgen, a33.1 | Germany | BAL/TW | Unclear | No | 73 | 33.3% (19.6%*, 49.6%*) | 87.1% (70.2%*, 96.4%*)
[200] Kenyeres, f84.1 | Hungary | NP | Fresh | No | 37 | 8.1% (1.7%*, 21.9%*) | Not provided
R-Biopharm, RIDA QUICK SARS-CoV-2 Antigen (LFA)
[291] Toptan, a55.1 | Germany | NP/OP | Banked | No | 67 | 77.6% (64.7%*, 87.5%*) | 100% (66.4%*, 100%*)
[291] Toptan, a55.2 | Germany | Unclear | Banked | No | 70 | 50.0% (31.9%*, 68.1%*) | 100% (90.8%*, 100%*)
[203] Kohmer, f32.1 | Germany | NP | Fresh | No | 100 | 39.2% (28.0%, 51.2%) | 96.2% (80.4%, 99.9%)
Roche, Elecsys SARS-CoV-2 Antigen Test (requires reader)
[216] Nörz, f78.1 | Germany | NP/OP | Banked | No | 3,139 | 60.2% (55.2%, 65.1%) | 99.9% (99.6%, 100%)
Roche, SARS-CoV-2 Rapid Antigen Test (LFA)
[240] Thell, f81.1 | Austria | Unclear | Fresh | Unclear | 591 | 80.3% (74.3%, 85.4%) | 99.1% (97.4%, 99.8%)
Salofa Oy, Sienna COVID-19 Antigen Rapid Test Cassette (LFA)
[209] Mboumba Bouassa, f67.1 | France | NP | Banked | No | 100 | 90.0% (82.4%*, 95.1%*) | 100% (92.9%*, 100%)
SD Biosensor, Standard F (requires reader)
[284] Porte, a32.2 | Chile | NP/OP | Banked | No | 64 | 90.6% (75.0%*, 98.0%*) | 96.9% (83.8%*, 99.9%*)
[169] Bruzzone, f86.5 | Italy | Unclear | Banked | No | 60 | 86.7% (75.4%*, 94.1%*) | Not provided
[261] FIND, a63.1 | Brazil | NP | Fresh | Yes | 453 | 77.5% (69.0%*, 84.6%*) | 97.9% (95.7%, 99.2%*)
[261] FIND, a63.2 | Germany | NP | Fresh | Yes | 676 | 69.2% (52.4%*, 83.0%*) | 96.9% (95.2%, 98.0%)
[257] Drevinek, a10.2 | Czech Republic | NP | Fresh | Yes | 591 | 62.3% (55.6%*, 68.7%*) | 99.5% (98.0%, 99.9%)
[219] Osterman, f20.1 | Germany | NP/OP | Unclear | No | 360 | 60.9% (53.5%*, 67.8%*) | 97.8% (95.7%, 99.0%*)
[273] Liotti, a22.1 | Italy | NP | Banked | No | 359 | 47.1% (37.1%, 57.1%) | 98.4% (96.0%, 99.6%)
SD Biosensor/Roche, Standard Q (LFA)
[255] Chaimayo, a57.1 | Thailand | NP/OP | Banked | No | 454 | 98.3% (91.1%, 100%) | 98.7% (97.1%, 99.6%)
[169] Bruzzone, f86.1 | Italy | Unclear | Banked | No | 16 | 93.8% (71.7%, 98.9%) | Not provided
[201] Kernéis, f69.1 | France | NP | Fresh | Unclear | 1,109 | 94.2%* (87.0%*, 98.1%*) | 99.0% (98.2%*, 99.5%*)
[287] Schwob, a35.1 | Switzerland | NP | Fresh | Yes | 333 | 92.9% (86.4%, 96.9%) | 100% (98.3%*, 100%*)
[215] Nikolai, f35.3 | Germany | NP | Fresh | Yes | 96 | 91.2% (76.3%*, 98.1%*) | 100% (94.2%, 100%)
[252] Berger, a05.2 | Switzerland | NP | Fresh | Yes | 529 | 89.0% (83.7%, 93.1%) | 99.7% (98.4%, 100%)
[262] FIND, a64.1 | Brazil | NP | Fresh | Yes | 400 | 88.7% (81.1%*, 94.0%*) | 97.6% (95.2%, 99.0%*)
[286] Schildgen, a33.3 | Germany | BAL/TW | Unclear | No | 73 | 88.1% (74.4%*, 96.0%*) | 19.4% (7.5%*, 37.5%*)
[207] Lindner, f15.1GermanyNPFreshYes 139 85.0% (70.2%*, 94.3%*)99.1% (94.9%*, 100%*)
[266] Iglὁi, a15.1NetherlandsNPFreshYes97084.9% (79.0%*, 89.8%*)99.5% (98.7%, 99.9%*)
[264] Gupta, a13.1IndiaNPFreshYes33081.8% (71.4%*, 89.7%*)99.6% (97.8%, 99.9%)
[196] Jääskeläinen, f50.2FinlandNPBankedNo 198 81.0% (74.0%*, 86.8%*)100% (91.2%*, 100%*)
[242] Turcato, f09.1ItalyNPFreshUnclear3,41080.3% (74.4%*, 85.3%*)99.1% (98.7%*, 99.4%*)
[272] Lindner, a21.2GermanyNPFreshYes 289 79.5% (63.5%*, 90.7%*)99.6% (97.8%, 100%)
[245] Yin, f82.4BelgiumNPFreshYes6578.3% (58.1%, 90.3%)Not provided
[4] Krüger, a17.1Germany/EnglandNP/OPUnclearNo1,26376.6% (62.0%*, 87.7%*)99.3% (98.6%, 99.7%*)
[212] Möckel, f19.1GermanyNP/OPFreshYes27175.3% (65.0%*, 83.8%*)100% (98.0%*, 100%)
[272] Lindner, a21.1GermanyAN/MTFreshNo 289 74.4% (57.9%*87.0%*)99.2% (97.1%, 99.9%*)
[271] Lindner, a53.1GermanyNPFreshYes 180 73.2%* (57.1%*, 85.8%*)99.3% (96.0%, 100%)
[229] Salvagno, f54.1ItalyNPUnclearNo32172.5% (64.6%, 79.5%)99.4% (96.8%, 100%)
[254] Cerutti, a08.1ItalyNPUnclearNo18572.1% (62.5%*, 80.5%*)100% (95.6%*, 100%*)
[212] Möckel, f19.2GermanyNP/OPFreshYes2,02072.0% (50.6%*, 87.9%*)99.4% (96.9%*, 100%*)
[268] Krüttgen, a16.1GermanyNPBankedNo15070.7% (59.0%*, 80.6%*)96.0% (88.8%*, 99.2%*)
[278] Nalumansi, a27.1UgandaNPFreshYes26270.0% (59.4%*, 79.2%*)92.4%* (87.4%*, 95.9%*)
[220] Pena, f36.1ChileNPFreshYes84269.9% (58.0%*, 80.1%*)99.6% (98.9%, 99.9%)
[178] Favresse, f31.4BelgiumNPFreshNo 188 69.8% (59.6%, 78.8%)100% (96.1%, 100%)
[219] Osterman, f20.2GermanyNP/OPUnclearNo 386 64.5% (58.3%*, 70.3%*)97.7% (95.6%, 98.9%*)
[194] Homza, f87.4Czech RepublicNPFreshYes13961.9% (45.6%, 76.4%)99.0% (94.4%, 100%)
[231] Schuit, f64.2NetherlandsNPFreshYes1,59662.9% (54.0%, 71.1%)99.5% (98.9%, 99.8%)
[226] Ristić, f44.1SerbiaNPFreshUnclear12058.1% (42.1%, 73.0%)100% (95.3%*, 100%*)
[199] Kannian, f26.1IndiaSalivaUnclearNo3755.6%* (35.3%*, 74.5%*)100% (69.2%*, 100%*)
[279] Olearo, a54.1GermanyOPUnclearNo 184 48.8%* (37.7%*, 60.0%*)100% (96.4%*, 100%)
[167] Baro, f33.3SpainNPBankedNo 286 43.6% (33.7%, 53.8%)96.2% (92.4%, 98.5%)
[203] Kohmer, f32.2GermanyNPFreshNo 100 43.2% (31.8%*, 55.3%)100% (86.8%, 100%)
[170] Caruana, f34.1SwitzerlandNPFreshNo 532 41.2% (32.1%*, 50.8%*)99.8%* (98.7%*, 100%*)
[254] Cerutti, a08.2ItalyNPFreshNo14540.0% (5.3%*, 85.3%*)100% (97.4%*, 100%*)
[171] Caruana, f75.1SwitzerlandNPFreshUnclear11628.6% (3.7%*, 71.0%*)98.2% (93.5%*, 99.8%*)
SD Biosensor/Roche, Standard Q (nasal sampling) (LFA)
[215] Nikolai, f35.4GermanyMTFreshYes 96 91.2% (76.3%*, 98.1%*)98.4% (91.3%*, 100%*)
[215] Nikolai, f35.2GermanyMTFreshYes 132 86.1% (70.5%*, 95.3%*)100% (96.2%*, 100%*)
[215] Nikolai, f35.1GermanyANFreshYes 132 86.1% (70.5%*, 95.3%*)100% (96.2%*, 100%*)
[207] Lindner, f15.2GermanyMTFreshYes 180 82.5% (67.2%*92.7%*)100% (96.5%, 100%)
[234] Stohr, f45.2NetherlandsANFreshUnclear1,61161.5% (54.2%*, 68.4%*)99.7% (99.3%, 99.9%)
[271] Lindner, a53.2GermanyANFreshYes 179 80.5% (65.1%*, 91.2%*)98.6% (94.9%, 99.8%*)
Shenzhen Lvshiyuan Biotechnology, Green Spring SARS-CoV-2-Antigen-Schnelltest-Set (LFA)
[223] Pickering, f73.4UKAN/OPBankedNo 200 77.0% (67.5%*, 84.8%*)98.0% (93.0%, 99.8%*)
Shenzhen Bioeasy Biotechnology, 2019-nCov Antigen Rapid Test Kit (requires reader)
[285] Porte, a31.1ChileNP/OPBankedNo12793.9% (86.3%*, 98.0%*)100% (92.1%*, 100%*)
[295] Weitzel, a41.4ChileNP/OPBankedNo 111 85.0% (75.3%*, 92.0%*)100% (88.8%*, 100%)
[280] Parada-Ricart, a58.1SpainNPFreshYes17273.1%* (52.2%*, 88.4%*)85.6%* (78.9%*, 90.9%*)
[4] Krüger, a17.2GermanyNP/OPFreshNo727*66.7% (41.7%, 84.8%)93.1% (91.0%, 94.8%)
Siemens Healthineers, CLINITEST Rapid COVID-19 Antigen Test (LFA)
[241] Torres, f29.1SpainNPFreshYes17880.2% (70.6%*, 87.8%*)100% (95.8%, 100%)
[241] Torres, f29.2SpainNPFreshYes9260.0% (38.7%*, 78.9%*)100% (94.6%, 100%)
[279] Olearo, a54.4GermanyOPUnclearNo 170 54.8%* (43.5%*, 65.7%*)100% (95.8%*, 100%)
[167] Baro, f33.2SpainNPBankedNo 286 51.5% (41.3%, 61.6%)98.4% (95.3%, 99.7%*)
Sugentech, SGTi-flex COVID-19 Ag (LFA)
[233] Shidlovskaya, f61.2RussiaNPFreshYes 106 52.6% (40.9%, 64.0%)96.4% (81.7%, 99.9%)
SureScreen Diagnostics, COVID-19 Rapid Antigen Visual Read (LFA)
[223] Pickering, f73.14UKAN/OPBankedNo 23 74.0%* (51.6%*, 89.8%*)Not provided
[223] Pickering, f73.3UKAN/OPBankedNo 200 65.0% (54.8%*, 74.3%*)100% (96.4%*, 100%*)
[223] Pickering, f73.15UKAN/OPBankedNo 23 65.2% (42.7%*, 83.6%*)Not provided
[223] Pickering, f73.13UKAN/OPBankedNo 23 61.0%* (38.5%*, 80.3%*)Not provided
[167] Baro, f33.5SpainNPBankedNo 286 28.8% (20.2%, 38.6%)97.8% (94.5%, 99.4%)
SureScreen Diagnostics, COVID-19 Rapid Antigen Fluorescent (requires reader)
[223] Pickering, f73.6UKAN/OPBankedNo 200 69.0% (59.0%*, 77.9%*)98.0% (93%, 99.8%*)
[223] Pickering, f73.7UKAN/OPBankedNo 141 60.3% (51.7%*, 68.4%*)Not provided
VivaCheck, VivaDiag SARS-CoV-2 Ag Rapid Test (LFA)
[194] Homza, f87.5Czech RepublicNPFreshYes26841.8% (31.5%, 52.6%)96.0% (92.0%, 98.4%)
Zhuhai Encode Medical Engineering, SARS-CoV-2 Antigen Rapid Test (LFA)
[223] Pickering, f73.5UKAN/OPBankedNo 200 74.0% (64.3%*, 82.3%*)100% (96.4%*, 100%)
[223] Pickering, f73.9UKAN/OPBankedNo 90 74.4% (64.2%*, 83.1%*)Not provided

Datasets with an underlined reference and first author had not undergone peer-review yet at the time of data extraction (1 May 2021). In datasets with an underlined sample size, the samples were used in head-to-head studies, i.e., performing different Ag-RDTs on the same patient.

*Values differ from those provided in the respective paper due to missing or contradictory data. A list including the original data can be found in S2 Table.

AN, anterior nasal; BAL/TW, bronchoalveolar lavage and throat wash; CI, confidence interval; IFU, instructions for use; FIND, Foundation for Innovative New Diagnostics; LFA, lateral flow assay; MT, mid-turbinate; NP, nasopharyngeal; OP, oropharyngeal.


Methodological quality of studies

The findings on study quality using the QUADAS-2 tool are presented in Figs 2 and 3. In 190 (88.8%) datasets a relevant patient population was assessed. However, for only 44 (20.6%) of the datasets was patient selection considered representative of the setting and population chosen (i.e., they avoided inappropriate exclusions and a case–control design, and enrollment occurred consecutively or randomly).
Fig 2

Methodological quality of the clinical accuracy studies: Risk of bias.

Proportion of studies with low, intermediate, high, or unclear risk of bias (percent).

Fig 3

Methodological quality of the clinical accuracy studies: Applicability.

Proportion of studies with low, intermediate, high, or unclear concerns regarding applicability (percent).

The conduct and interpretation of the index tests was considered to have low risk for introduction of bias in 113 (52.8%) datasets (through, e.g., appropriate blinding of persons interpreting the visual readout). However, for 99 (46.3%) datasets, sufficient information to clearly judge the risk of bias was not provided. In only 89 (41.6%) datasets were the Ag-RDTs performed according to IFU, while 100 (46.7%) were not IFU-conforming, potentially impacting the diagnostic accuracy (for 25 [11.7%] datasets the IFU status was unclear).

In 81 (37.9%) datasets, the reference standard was performed before the Ag-RDT, or the operator conducting the reference standard was blinded to the Ag-RDT results, resulting in a low risk of bias. In almost all other datasets (132 [61.7%]), this risk could not be assessed due to missing data. The applicability of the reference test was judged to be of low concern for all datasets, as cell culture and RT-PCR are expected to adequately define the target condition. In 209 (97.7%) datasets, the samples for the index test and reference test were obtained at the same time, while this was unclear in 5 (2.3%) datasets. All samples included in a dataset were subjected to the same type of RT-PCR in 145 (67.8%) datasets, while different types of RT-PCR were used within the same dataset in 50 (23.4%) datasets; for 19 (8.9%) datasets, this was unclear. Furthermore, for 11 (5.1%) datasets, there was a concern that not all selected patients were included in the analysis.

Finally, 32 (24.1%) of the studies received financial support from the Ag-RDT manufacturer, and in another 9 (6.8%) studies, employment of the authors by the manufacturer of the Ag-RDT studied was indicated. Overall, a competing interest was found in 33 (24.8%) of the studies.

Detection of SARS-CoV-2 infection

Out of 214 clinical datasets (from 124 studies), 20 were excluded from the meta-analysis because they included fewer than 20 RT-PCR positive samples. A further 21 datasets were missing either sensitivity or specificity and were only considered for univariate analyses. Across the remaining 173 datasets, including any test and type of sample, the pooled sensitivity and specificity were 71.2% (95% CI 68.2% to 74.0%) and 98.9% (95% CI 98.6% to 99.1%), respectively. If testing was performed in conformity with IFU, sensitivity increased to 76.3% (95% CI 73.1% to 79.2%), while non-IFU-conforming testing had a sensitivity of 65.9% (95% CI 60.6% to 70.8%). Pooled specificity was similar in both groups (99.1% [95% CI 98.8–99.4%] and 98.3% [95% CI 97.7% to 98.8%], respectively).
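To illustrate the pooling step, per-dataset sensitivities (TP/(TP + FN)) can be combined under a random-effects model. The sketch below uses a simple DerSimonian-Laird estimator on the logit scale; the review itself fitted a bivariate random-effects model that pools sensitivity and specificity jointly, so this is a simplified stand-in for the underlying idea, not the authors' exact method.

```python
import math

def pool_logit_dl(tp_fn_pairs):
    """DerSimonian-Laird random-effects pooling of logit-transformed
    proportions (e.g., per-dataset sensitivities TP / (TP + FN)).
    Returns (pooled estimate, 95% CI lower, 95% CI upper)."""
    # Per-study logit estimates and within-study variances;
    # the 0.5 continuity correction guards against 0% / 100% cells.
    y, v = [], []
    for tp, fn in tp_fn_pairs:
        a, b = tp + 0.5, fn + 0.5
        y.append(math.log(a / b))
        v.append(1 / a + 1 / b)
    w = [1 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # DerSimonian-Laird estimate of the between-study variance tau^2
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    # Re-weight including tau^2 and pool on the logit scale
    w_re = [1 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))

    def inv_logit(x):
        return 1 / (1 + math.exp(-x))

    return inv_logit(mu), inv_logit(mu - 1.96 * se), inv_logit(mu + 1.96 * se)
```

When between-study heterogeneity is negligible (tau^2 estimated as 0), this reduces to a fixed-effect inverse-variance average; with heterogeneity, the confidence interval widens accordingly.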

Analysis of specific tests

Based on 119 datasets with 71,424 tests performed, we were able to perform bivariate meta-analysis of the sensitivity and specificity for 12 different Ag-RDTs (Fig 4). Across these, the pooled estimates of sensitivity and specificity on all samples were 72.1% (95% CI 68.8% to 75.3%) and 99.0% (95% CI 98.7% to 99.2%), respectively, which were very similar to the overall pooled estimates across all meta-analyzed datasets (71.2% and 98.9%, respectively, above).
Fig 4

Bivariate analysis of 12 antigen rapid diagnostic tests.

Pooled sensitivity and specificity were calculated based on reported sample sizes, true positives, true negatives, false positives, and false negatives.

The highest pooled sensitivity was found for the SARS-CoV-2 Antigen Test by LumiraDx (UK; henceforth called LumiraDx) and the Lumipulse G SARS-CoV-2 Ag by Fujirebio (Japan; henceforth called Lumipulse G), with 88.2% (95% CI 59.0% to 97.5%) and 87.2% (95% CI 78.0% to 92.9%), respectively. The Sofia SARS Antigen FIA by Quidel (California, US; henceforth called Sofia) had a pooled sensitivity of 77.4% (95% CI 74.2% to 80.3%). Of the non-instrument tests, the Standard Q by SD Biosensor (South Korea; distributed in Europe by Roche, Germany) and its nasal-sampling variant (henceforth called Standard Q nasal) performed best, with pooled sensitivities of 74.9% (95% CI 69.3% to 79.7%) and 80.2% (95% CI 70.3% to 87.4%), respectively. The pooled sensitivity for Panbio was 71.8% (95% CI 65.4% to 77.5%). Of all Ag-RDTs, the COVID-19 Ag Respi-Strip by Coris BioConcept (Belgium; henceforth called Coris) had the lowest pooled sensitivity, at 40.0% (95% CI 28.7% to 52.4%). The pooled specificity was above 98% for all of the tests except the Standard F by SD Biosensor (South Korea) and Lumipulse G, with specificities of 97.7% (95% CI 96.6% to 98.5%) and 96.7% (95% CI 88.6% to 99.1%), respectively. Hierarchical summary receiver operating characteristic values for Standard Q and LumiraDx are available in S2 Fig.

Three Ag-RDTs did not have sufficient data to allow for a bivariate meta-analysis, so a univariate analysis was conducted (Fig 5). For the INNOVA SARS-CoV-2 Antigen Rapid Qualitative Test by Innova Medical Group (California, US), this resulted in a pooled sensitivity and specificity of 76.1% (95% CI 68.1% to 84.1%) and 99.4% (95% CI 98.7% to 100%), respectively.
For the NADAL by nal von minden (Germany) and the COVID-19 Rapid Antigen Visual Read by SureScreen Diagnostics (UK), sufficient data were available to analyze only sensitivity, resulting in pooled sensitivity estimates of 58.4% (95% CI 29.2% to 87.6%) and 58.0% (95% CI 38.3% to 77.6%), respectively.
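For reference, each dataset's sensitivity and specificity derive directly from its 2x2 table of Ag-RDT results against RT-PCR. A minimal sketch is shown below; it uses Wilson score intervals, whereas individual papers may report Clopper-Pearson or other interval types, which is one source of the asterisked recalculated values in Table 1.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

def accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity (each with a 95% CI) from a 2x2
    table of index test results against the RT-PCR reference."""
    sens = tp / (tp + fn)   # true positives among RT-PCR positives
    spec = tn / (tn + fp)   # true negatives among RT-PCR negatives
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))
```

For example, a dataset with 71 true positives, 29 false negatives, 1 false positive, and 99 true negatives yields a sensitivity of 71% and a specificity of 99%, with intervals comparable in width to those tabulated above.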
Fig 5

Univariate analysis of 3 antigen rapid diagnostic tests.

Pooled sensitivity and specificity were calculated based on reported sensitivity, specificity, and confidence intervals. SureScreen V, SureScreen Diagnostics COVID-19 Rapid Antigen Visual Read.

The remaining 35 Ag-RDTs did not present sufficient data for univariate or bivariate meta-analysis. However, 9 of the 35 had results presented in more than 1 dataset; these are summarized in Table 2. Here, the widest ranges of sensitivity were found for the ESPLINE SARS-CoV-2 by Fujirebio (Japan), with sensitivity reported between 8.1% and 80.7%, and the RIDA QUICK SARS-CoV-2 Antigen by R-Biopharm (Germany), with sensitivity between 39.2% and 77.6%, each with 3 datasets. In contrast, 2 other tests with 2 datasets each showed the least variability in sensitivity: the Zhuhai Encode Medical Engineering SARS-CoV-2 Antigen Rapid Test (China) reported sensitivity between 74.0% and 74.4%, and the COVID-19 Rapid Antigen Fluorescent by SureScreen Diagnostics (UK) reported sensitivity between 60.3% and 69.0%. However, for each of these tests, both datasets originated from the same study. Overall, the lowest sensitivity range was reported for the SARS-CoV-2 Antigen Rapid Test by MEDsan (Germany): 36.5% to 45.2% across 2 datasets. The specificity ranges were above 96% for most of the tests. A notable outlier was the 2019-nCov Antigen Rapid Test Kit by Shenzhen Bioeasy Biotechnology (China; henceforth called Bioeasy), with a specificity as low as 85.6% in 1 study. Forest plots for the datasets for each Ag-RDT are provided in S3 Fig. The remaining 26 Ag-RDTs that were evaluated in 1 dataset only are included in Table 1 and S3 Fig.
Table 2

Summary clinical accuracy data for major Ag-RDTs not included in the meta-analysis.

Manufacturer, Ag-RDT | Number of datasets | Sensitivity range | Specificity range | Comments
Bionote, NowCheck (LFA) | 3 | 55.6% to 89.9% | 97.3% to 100% | Two of the studies were IFU-conforming, whereas IFU conformity for the study reporting 55.6% sensitivity was unclear
Denka, Quick Navi (LFA) | 2 | 72.5% to 86.7% | 100%* | Both studies were conducted on fresh samples, but for the one reporting 72.5%, IFU conformity was unclear
Fujirebio, ESPLINE SARS-CoV-2 (LFA) | 3 | 8.1% to 80.7% | 100%* | The dataset reporting 8.1% sensitivity used saliva samples (not IFU-conforming), and the majority of samples showed a Ct value > 25
JOYSBIO Biotechnology, COVID-19 Antigen Rapid Test Kit (LFA) | 2 | 57.8% to 70.5% | 98.5% to 99.1% | The datasets used NP and AN samples, respectively; both were performed by IFU on symptomatic people or high-risk contacts
MEDsan, SARS-CoV-2 Antigen Rapid Test (LFA) | 2 | 36.5% to 45.2% | 97% to 99.6% | Both studies were conducted on OP samples, which is IFU-conforming for this test
R-Biopharm, RIDA QUICK SARS-CoV-2 Antigen (LFA) | 3 | 39.2% to 77.6% | 96.2% to 100% | Two datasets originate from the same study, and no study was conducted as per IFU; the dataset reporting 39.2% included only asymptomatic persons with Ct values between 22.1 and 36.4
Shenzhen Bioeasy Biotechnology, 2019-nCov Antigen Rapid Test Kit (requires reader) | 4 | 66.7% to 93.9% | 85.6% to 100% | The dataset reporting 85.6% specificity was IFU-conforming; the datasets reporting the highest sensitivity were drawn from just symptomatic patients, and for the others, symptomatic patients made up more than two-thirds of the study population
SureScreen Diagnostics, COVID-19 Rapid Antigen Fluorescent (requires reader) | 2 | 60.3% to 69.0% | 98%* | Both datasets originate from the same study and were not IFU-conforming, conducted on stored samples
Zhuhai Encode Medical Engineering, SARS-CoV-2 Antigen Rapid Test (LFA) | 2 | 74.0% to 74.4% | 100%* | Both datasets originate from the same study, a retrospective head-to-head comparison; stored AN/MT samples were assessed

*Only 1 dataset for specificity was provided.

Ag-RDT, antigen rapid diagnostic test; AN, anterior nasal; Ct, cycle threshold; IFU, instructions for use; LFA, lateral flow assay; MT, mid-turbinate; NP, nasopharyngeal; OP, oropharyngeal.

In total, 16 studies, accounting for 53 datasets, conducted head-to-head clinical accuracy evaluations of different tests using the same samples from the same participants. These datasets have underlined sample sizes in Table 1; 15 such studies included more than 100 samples, and 1 study included too few samples to draw clear conclusions [286]. Four studies performed their head-to-head evaluation as per manufacturers' instructions and on symptomatic patients.
Across 3 of them, Standard Q (sensitivity 73.2% to 91.2%) and Standard Q nasal (sensitivity 82.5% to 91.2%) showed a similar range of sensitivity [207,215,271]. The fourth reported a sensitivity of 56.4% (95% CI 44.7% to 67.6%) for the Biocredit Covid-19 Ag by RapiGEN (South Korea; henceforth called Rapigen) and 52.6% (95% CI 40.9% to 64.0%) for the SGTi-flex COVID-19 Ag by Sugentech (South Korea) [233]. All other head-to-head comparisons were not IFU-conforming. In one of these, the Rapid COVID-19 Ag Test by Healgen (sensitivity 77.1%) performed better than Standard Q and Panbio (sensitivity 69.8% and 67.7%, respectively) [178]. In contrast to the overall findings of the meta-analysis above, 2 other head-to-head studies found that both Standard Q (sensitivity 43.6% and 49.4%) and Panbio (sensitivity 38.6% and 44.6%) had lower performance than the CLINITEST Rapid COVID-19 Antigen Test by Siemens Healthineers (Germany; henceforth called Clinitest), with reported sensitivity of 51.5% and 54.9% [167,279]. However, another study found both Standard Q and Panbio (sensitivity 81.0% and 82.9%, respectively) to have a higher accuracy than Sofia (sensitivity 80.4%) [196].

Subgroup analyses

The results are presented in Figs 6–10. Detailed results for the subgroup analyses are available in S4–S9 Figs.
Fig 6

Pooled sensitivity by cycle threshold (Ct) values.

Low Ct values are the reverse transcription PCR semi-quantitative correlate for a high virus concentration.

Fig 10

Pooled sensitivity and specificity by age.


Subgroup analysis by Ct values

High sensitivity was achieved for Ct value < 20, at 96.5% (95% CI 92.6% to 98.4%). The pooled sensitivity for Ct value < 25 was markedly better, at 95.8% (95% CI 92.3% to 97.8%), compared to the group with Ct value ≥ 25, at 50.7% (95% CI 35.6% to 65.8%). A similar pattern was observed when the Ct values were analyzed using the cutoffs <30 and ≥30, resulting in a sensitivity of 79.9% (95% CI 70.3% to 86.9%) and 20.9% (95% CI 12.5% to 32.8%), respectively (Fig 6). In addition, it was possible to meta-analyze test-specific pooled sensitivity for Panbio: 97.7% sensitivity (95% CI 95.3% to 98.9%) for Ct value < 20, 95.8% (95% CI 92.3% to 97.8%) for Ct value < 25, and 83.4% (95% CI 69.1% to 91.9%) for Ct value < 30. Sensitivity was 61.2% (95% CI 38.8% to 79.7%) for Ct value ≥ 25 and 30.5% (95% CI 16.0% to 50.4%) for Ct value ≥ 30. For the other Ag-RDTs only limited data were available, which are presented in S5 Fig.

Subgroup analysis by IFU conformity

The summary results are presented in Fig 7. When assessing only studies with IFU-conforming testing, pooled sensitivity from 81 datasets with 49,643 samples was 76.3% (95% CI 73.1% to 79.2%). When non-IFU-conforming sampling (75 datasets, 31,416 samples) was performed, sensitivity decreased to 65.9% (95% CI 60.6% to 70.8%).
Fig 7

Pooled sensitivity and specificity by instructions for use (IFU) conformity.

For 5 tests it was possible to calculate pooled sensitivity estimates including only datasets with IFU-conforming testing: Panbio (sensitivity 76.5% [95% CI 69.5% to 82.3%]; 17 datasets, 12,856 samples), Standard Q (sensitivity 79.3% [95% CI 73.5% to 84.1%]; 15 datasets, 6,584 samples), BinaxNOW (sensitivity 61.8% [95% CI 48.0% to 74.0%]; 4 datasets, 8,163 samples), Rapigen (sensitivity 67.1% [95% CI 50.4% to 80.4%]; 4 datasets, 1,934 samples), and Standard Q nasal (sensitivity 83.8% [95% CI 77.8% to 88.4%]; 5 datasets, 683 samples). Specificity was above 98.6% for all tests. In contrast, when the Panbio (14 datasets, 9,233 samples) and Standard Q (14 datasets, 4,714 samples) tests were not performed according to IFU, pooled sensitivity decreased to 64.3% (95% CI 50.9% to 75.8%) and 67.4% (95% CI 57.2% to 76.2%), respectively.

Subgroup analysis by sample type

Most datasets evaluated NP or combined NP/OP swabs (122 datasets and 59,810 samples) as the sample type for the Ag-RDT. NP or combined NP/OP swabs achieved a pooled sensitivity of 71.6% (95% CI 68.1% to 74.9%). Datasets that used AN/MT swabs for Ag-RDTs (32 datasets and 25,814 samples) showed a summary estimate for sensitivity of 75.5% (95% CI 70.4% to 79.9%). This was confirmed by 2 studies that reported direct head-to-head comparison of NP and MT samples from the same participants using the same Ag-RDT (Standard Q), where the 2 sample types showed equivalent performance [271,272]. Analysis of performance with an OP swab (7 datasets, 5,165 samples) showed a pooled sensitivity of only 53.1% (95% CI 40.9% to 65.0%). Saliva samples (4 datasets, 1,088 samples) showed the lowest pooled sensitivity, at only 37.9% (95% CI 11.8% to 73.5%) (Fig 8).
Fig 8

Pooled sensitivity and specificity by sample type.

AN, anterior nasal; MT, mid-turbinate; NP, nasopharyngeal; OP, oropharyngeal.

We were not able to perform a subgroup meta-analysis for BAL/TW due to insufficient data: There was only 1 study with 73 samples evaluating Rapigen, Panbio, and Standard Q [286]. However, BAL/TW would in any case be considered an off-label use.

Subgroup analysis in symptomatic and asymptomatic patients

Within the datasets that could be meta-analyzed, 17,964 (54.1%) samples were from symptomatic patients and 15,228 (45.9%) from asymptomatic patients. The pooled sensitivity for symptomatic patients was markedly different from that of asymptomatic patients: 76.7% (95% CI 70.6% to 81.9%) versus 52.5% (95% CI 43.7% to 61.1%). Specificity was 99% for both groups (Fig 9). Median Ct values differed in symptomatic and asymptomatic patients. For those studies where it was possible to extract a median Ct value, it ranged from 20.5 to 27.0 in symptomatic patients [170,207,226,258,271,272] and from 27.2 to 30.5 in asymptomatic patients [170,201,258].
Fig 9

Pooled sensitivity and specificity by presence of symptoms and symptom duration.

Subgroup analysis comparing symptom duration

Data were analyzed for 5,538 patients with symptoms less than 7 days, but very limited data were available for patients with symptoms ≥7 days (397 patients). The pooled sensitivity for patients with onset of symptoms <7 days was 83.8% (95% CI 76.3% to 89.2%), which is markedly higher than the 61.5% (95% CI 52.2% to 70.0%) sensitivity for individuals tested ≥7 days from onset of symptoms (Fig 9).

Subgroup analysis by age

For adult patients (age ≥ 18 years), it was possible to pool estimates across 3,837 samples, whereas the pediatric group (age < 18 years) included 7,326 samples. Sensitivity and specificity were 64.3% (95% CI 54.7% to 72.9%) and 99.4% (95% CI 98.9% to 99.7%), respectively, in mostly symptomatic patients aged <18 years. In patients aged ≥18 years, sensitivity was higher, at 74.8% (95% CI 66.5% to 81.6%), while the specificity was similar (98.7%, 95% CI 97.2% to 99.4%) (Fig 10).

Subgroup analysis by type of RT-PCR and viral load

We were not able to perform a meta-analysis for the subgroups by type of RT-PCR or viral load (viral copies/mL) due to insufficient data. In 152 (71.0%) of the datasets only 1 type of RT-PCR was used, whereas 37 (17.3%) of the datasets tested samples in the same dataset using different RT-PCR methods. For 25 (11.7%) of the datasets, the type of RT-PCR was not reported. The Cobas SARS-CoV-2 Test from Roche (Germany) was used most frequently, in 63 (29.4%) of the datasets, followed by the Allplex 2019-nCoV Assay from Seegene (South Korea) in 41 (19.2%) and the TaqPath SARS-CoV-2 assay from Thermo Fisher Scientific (US) in 20 (9.3%) of the datasets. Median sensitivity was 72.4% (range 46.9% to 100%) in samples with viral load > 5 log10 copies/mL, 97.8% (range 71.4% to 100%) for >6 log10 copies/mL, and 100% (range 93.8% to 100%) for >7 log10 copies/mL, showing that sensitivity increases with increasing viral load.
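The correspondence between Ct values and viral load rests on the roughly log-linear RT-PCR standard curve. The sketch below shows the conversion; both the intercept (here 40) and the slope (here -3.32 cycles per log10, i.e., 100% amplification efficiency) are illustrative placeholders that vary by assay and laboratory, which is one reason Ct values are not directly comparable across the different RT-PCR methods noted above.

```python
def ct_to_log10_copies(ct, intercept_ct=40.0, slope=-3.32):
    """Approximate conversion from an RT-PCR Ct value to log10 viral
    copies/mL via a hypothetical standard curve:
        Ct = intercept_ct + slope * log10(copies/mL)
    A slope of about -3.32 cycles per log10 corresponds to 100% PCR
    efficiency (a perfect doubling each cycle); the intercept is
    assay-specific and must come from calibration."""
    return (ct - intercept_ct) / slope
```

Under these placeholder parameters, a Ct of about 20 corresponds to roughly 6 log10 copies/mL and a Ct of about 30 to roughly 3 log10 copies/mL, consistent with the direction of the sensitivity gradient reported here.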

Meta regression

We were not able to perform a meta-regression due to the considerable heterogeneity in reporting subgroups, which resulted in too few studies with sufficient data for comparison.

Publication bias

The result of the Deeks test (p = 0.001) shows significant asymmetry in the funnel plot for all datasets with complete results. This indicates there may be publication bias from studies with small sample sizes. The funnel plot is presented in S10 Fig.
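The Deeks test assesses funnel plot asymmetry by regressing the log diagnostic odds ratio against the inverse square root of the effective sample size. The sketch below is a simplified, unweighted version built on hypothetical (tp, fp, fn, tn) tuples; the published test uses a weighted regression, so this is illustrative only.

```python
import math

def deeks_slope(datasets):
    """Simplified sketch of the Deeks funnel plot asymmetry test:
    ordinary least squares regression of the log diagnostic odds
    ratio (lnDOR) against 1/sqrt(effective sample size, ESS).
    Returns (slope, t_statistic); a slope far from zero suggests
    small-study effects. Each dataset is a (tp, fp, fn, tn) tuple."""
    xs, ys = [], []
    for tp, fp, fn, tn in datasets:
        tp, fp, fn, tn = tp + 0.5, fp + 0.5, fn + 0.5, tn + 0.5  # continuity correction
        ys.append(math.log((tp * tn) / (fp * fn)))               # lnDOR
        n1, n2 = tp + fn, fp + tn                                # diseased / non-diseased
        ess = 4 * n1 * n2 / (n1 + n2)                            # effective sample size
        xs.append(1 / math.sqrt(ess))
    # Ordinary least squares slope and its t statistic
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    sse = sum((y - (ybar + slope * (x - xbar))) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(sse / (n - 2) / sxx)
    return slope, slope / se
```

Comparing the t statistic against a t distribution with n - 2 degrees of freedom gives the p value; a significant positive slope would indicate that smaller studies report systematically higher diagnostic odds ratios.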

Comparison with analytical studies

The 9 analytical studies were divided into 63 datasets, evaluating 23 different Ag-RDTs. Only 7 studies reported a sample size; of these samples, 833 (90.6%) originated from NP swabs, while for 86 (9.4%) the sample type was unclear. One of the 2 studies not reporting a sample size used saliva samples [198], while the other used the sample type specified in the respective Ag-RDT's IFU [173]. Overall, the reported analytical sensitivity (limit of detection [LOD]) in the studies resembled the results of the meta-analysis presented above. Rapigen (LOD, in log10 copies per swab: 10.2) and Coris (LOD 7.46) were found to perform worse than Panbio (LOD 6.6 to 6.1) and Standard Q (LOD 6.8 to 6.0), whereas Clinitest (LOD 6.0) and BinaxNOW by Abbott (LOD 4.6 to 4.9) performed better [191,256,282]. Similar results were found in another study, where Standard Q showed the lowest LOD (detecting virus up to an equivalent Ct value of 26.3 to 28.7), compared to Rapigen and Coris (detecting virus only up to an equivalent Ct value of 18.4 for both) [208,274,275]. However, another study found Panbio, Standard Q, Coris, and BinaxNOW to have similar LODs of 5.0 × 10³ plaque-forming units (PFU)/mL, while the ESPLINE SARS-CoV-2 by Fujirebio (Japan), the COVID-19 Rapid Antigen Test by Mologic (UK), and the Sure Status COVID-19 Antigen Card Test by Premier Medical Corporation (India) performed markedly better (LOD 2.5 × 10² to 5.0 × 10² PFU/mL) [173]. An overview of all LOD values reported in the studies can be found in S3 Table.

Sensitivity analysis

When the datasets from case–control studies (25/173) were excluded, the estimated sensitivity did not differ greatly, at 70.9% (95% CI 67.7% to 73.9%), compared to 71.2% (95% CI 68.2% to 74.0%) in the overall analysis, with no change in pooled specificity. When the datasets from preprints (64/173) were excluded, sensitivity decreased slightly, to 67.2% (95% CI 62.9% to 71.3%), compared to the overall analysis.

Discussion

In this comprehensive systematic review and meta-analysis, we have summarized the data of 133 studies evaluating the accuracy of 61 different Ag-RDTs. Across all meta-analyzed samples, our results show a pooled sensitivity and specificity of 71.2% (95% CI 68.2% to 74.0%) and 98.9% (95% CI 98.6% to 99.1%), respectively. Over half of the studies did not perform the Ag-RDT in accordance with the test manufacturer's recommendations, or conformity was unknown, which negatively impacted the sensitivity. When we considered only IFU-conforming studies, the sensitivity increased to 76.3% (95% CI 73.1% to 79.2%). While we found the sensitivity to vary across specific tests, the specificity was consistently high.

The 2 Ag-RDTs that have been approved through the WHO emergency use listing procedure, Abbott Panbio and SD Biosensor Standard Q (distributed by Roche in Europe), have not only attracted the largest research interest, but also perform at or above average when their pooled accuracy is compared to that of all Ag-RDTs (sensitivity of 71.8% for Panbio and 74.9% for Standard Q). Standard Q nasal demonstrated an even higher pooled sensitivity (80.2%, compared to 74.9% for the NP version), although this is likely due to variability in the populations tested, as head-to-head comparisons showed comparable sensitivity. Three other Ag-RDTs showed even higher accuracy, with sensitivities ranging from 77.4% to 88.2% (namely Sofia, Lumipulse G, and LumiraDx), but were assessed only on relatively small sample sizes (ranging from 1,373 to 3,532), and all required an instrument/reader.

Not surprisingly, lower Ct values, the RT-PCR semi-quantitative correlate for high virus concentration, resulted in significantly higher Ag-RDT sensitivity than higher Ct values (pooled sensitivity 96.5% and 95.8% for Ct value < 20 and <25, respectively, versus 50.7% and 20.9% for Ct value ≥ 25 and ≥30, respectively).
This confirms prior data suggesting that antigen concentrations and Ct values are highly correlated in NP samples [16]. Ag-RDTs also showed higher sensitivity in patients within 7 days after symptom onset compared to patients later in the course of the disease (pooled sensitivity 83.8% versus 61.5%), which is to be expected given that samples from patients within the first week after symptom onset have been shown to contain the highest virus concentrations [298]. In line with this, studies reporting an unexpectedly low overall sensitivity either had a small sample size with a high average Ct value [230,273,288] or did not perform the Ag-RDT as per IFU, e.g., using saliva or prediluted samples [167,170,203,248,279]. In contrast, studies with an unusually high Ag-RDT sensitivity were based on study populations with a low median Ct value, between 18 and 22 [189,255,284]. Our analysis also found that the accuracy of Ag-RDTs is substantially higher in symptomatic patients than in asymptomatic patients (pooled sensitivity 76.7% versus 52.5%). This is not surprising, as studies that enrolled symptomatic patients showed a lower range of median Ct values (i.e., higher viral load) than studies enrolling asymptomatic patients. Given that other studies found symptomatic and asymptomatic patients to have comparable viral loads [299,300], the differences found in our analysis are likely explained by the variable point in the course of the disease at which asymptomatic patients presenting for one-time screening are tested. Because symptoms start in the early phase of the disease, when viral load is still high, studies testing only symptomatic patients have a higher chance of including patients with high viral loads.
In contrast, study populations drawn only from asymptomatic patients have a higher chance of including patients at any point of disease (i.e., including late in disease, when PCR is still positive but viable virus is rapidly decreasing) [301]. With regard to the sampling and testing procedure, we found Ag-RDTs to perform similarly across upper respiratory swab samples (e.g., NP and AN/MT), particularly when considering the most reliable comparisons from head-to-head studies. Similar to a previous assessment [7], the methodological quality of the included studies revealed a very heterogeneous picture. In the future, aligning the design of clinical accuracy studies with commonly agreed-upon minimal specifications (e.g., by WHO or the European Centre for Disease Prevention and Control) and reporting the results in a standardized way [302] would improve data quality and comparability. The main strengths of our study lie in its comprehensive approach and continuous updates. By linking this review to our website, https://www.diagnosticsglobalhealth.org, we strive to equip decision makers with the latest research findings on Ag-RDTs for SARS-CoV-2 and, to the best of our knowledge, are the first to do so. The website is updated at least once per week by continuing the literature search and review process described above. We plan to update the meta-analysis on a monthly basis and publish it on the website. Furthermore, our study used rigorous methods: both the study selection and data extraction were performed by one author and independently validated by a second, we conducted blinded pilot extractions before the actual data extraction, and we prepared a detailed interpretation guide for the QUADAS-2 tool. The study may be limited by the inclusion of both preprints and peer-reviewed literature, which could affect the quality of the data extracted.
However, we aimed to mitigate this potential effect by thoroughly assessing all included clinical studies with the QUADAS-2 tool and by performing a sensitivity analysis that excluded preprint manuscripts. In addition, the studies included in our analysis varied widely in the reported range of viral loads, limiting the comparability of their results. To control for this, we analyzed the Ag-RDTs’ performance at different levels of viral load. Finally, even though we are aware that further data exist from other sources, for example from governmental research institutes [303], such data could not be included because sufficiently detailed descriptions of the methods and results are not publicly available.
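
The pooling behind the sensitivity estimates and sensitivity analyses discussed above can be illustrated with a minimal univariate DerSimonian-Laird random-effects model on logit-transformed sensitivities. This is a simplification for illustration: the review itself used a bivariate random-effects model, and the per-study 2×2 counts below are hypothetical.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pool_sensitivity(counts):
    """Pool per-study sensitivities (list of (TP, FN) tuples) with a
    DerSimonian-Laird random-effects model on the logit scale."""
    ys, vs = [], []
    for tp, fn in counts:
        p = (tp + 0.5) / (tp + fn + 1.0)           # continuity-corrected sensitivity
        ys.append(logit(p))
        vs.append(1.0 / (tp + 0.5) + 1.0 / (fn + 0.5))
    w = [1.0 / v for v in vs]                       # fixed-effect (inverse-variance) weights
    y_fe = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, ys))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)        # between-study variance estimate
    w_re = [1.0 / (v + tau2) for v in vs]           # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (inv_logit(y_re - 1.96 * se), inv_logit(y_re + 1.96 * se))
    return inv_logit(y_re), ci

# Hypothetical datasets: (true positives, false negatives) per study
pooled, (lo, hi) = pool_sensitivity([(70, 30), (80, 20), (60, 40)])
```

Excluding a subset of datasets (e.g., case–control studies or preprints) amounts to re-running the same pooling on the remaining (TP, FN) pairs.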

Conclusion

In summary, Ag-RDTs are available that detect SARS-CoV-2 infection with high sensitivity, particularly when performed in the first week of illness when viral load is high, and with excellent specificity. However, our analysis also highlights the variability in results between tests, which is not reflected in the manufacturer-reported data, indicating the need for independent validations. Furthermore, the analysis highlights the importance of performing tests in accordance with the manufacturers’ recommended procedures and in alignment with standard diagnostic evaluation and reporting guidelines. The accuracy achievable by the best-performing Ag-RDTs, combined with their rapid turnaround time compared to RT-PCR, suggests that these tests could have a significant impact on the pandemic if applied in thoughtful testing and screening strategies.
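
The viral load dependence that runs through these conclusions can be made concrete with a small sketch. The per-sample data and the function name below are hypothetical; the review pooled per-study estimates rather than raw samples.

```python
def sensitivity_by_ct(samples, cutoff):
    """Ag-RDT sensitivity among RT-PCR-positive samples, split at a Ct cutoff.

    `samples` is a list of (ct_value, ag_rdt_positive) pairs for RT-PCR-positive
    specimens; a lower Ct value corresponds to a higher viral load.
    """
    low_ct = [pos for ct, pos in samples if ct < cutoff]    # high viral load
    high_ct = [pos for ct, pos in samples if ct >= cutoff]  # low viral load
    share = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return share(low_ct), share(high_ct)

# Hypothetical RT-PCR-positive samples: (Ct value, Ag-RDT result)
samples = [(18, True), (22, True), (24, True), (27, True), (29, False), (33, False)]
sens_low_ct, sens_high_ct = sensitivity_by_ct(samples, cutoff=25)
```

With a cutoff of 25, all of the high-viral-load (low-Ct) samples above are detected, while only one of the three low-viral-load samples is.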

Supporting information

Detailed results of the QUADAS-2 assessment. (PDF)
Hierarchical summary receiver operating characteristic curve for Standard Q Ag-RDT. (PDF)
Forest plots of all Ag-RDTs. (PDF)
Forest plots for subgroup analysis by Ct value. (PDF)
Forest plots for subgroup analysis by Ct value per test. (PDF)
Forest plots for subgroup analysis by IFU versus non-IFU. (PDF)
Forest plots for subgroup analysis by sample type. (PDF)
Forest plots for subgroup analysis by symptomatic versus asymptomatic. (PDF)
Forest plots for subgroup analysis by symptom duration. (PDF)
Funnel plot test for all datasets included in the meta-analysis. (PDF)
List of data items extracted from studies. (XLSX)
List of original data. (XLSX)
Summary of analytical studies. (XLSX)
Study protocol submitted to PROSPERO (registration: CRD42020225140). (DOCX)
Search strategy. (DOCX)
QUADAS-2 assessment interpretation guide. (DOCX)

Peer review history

4 Mar 2021

Dear Dr Denkinger,

Thank you for submitting your manuscript entitled "The accuracy of novel antigen rapid diagnostics for SARS-CoV-2: a living systematic review and meta-analysis." for consideration by PLOS Medicine. Your manuscript has now been evaluated by the PLOS Medicine editorial staff as well as by an academic editor with relevant expertise, and I am writing to let you know that we would like to send your submission out for external assessment. However, before we can send your manuscript to reviewers, we need you to complete your submission by providing the metadata that is required for full assessment. To this end, please login to Editorial Manager, where you will find the paper in the 'Submissions Needing Revisions' folder on your homepage. Please click 'Revise Submission' from the Action Links and complete all additional questions in the submission questionnaire. Please re-submit your manuscript within two working days, i.e. by .

Login to Editorial Manager here: https://www.editorialmanager.com/pmedicine

Once your full submission is complete, your paper will undergo a series of checks in preparation for external assessment. Feel free to email us at plosmedicine@plos.org if you have any queries relating to your submission.

Kind regards,
Richard Turner, PhD
Senior Editor, PLOS Medicine
rturner@plos.org

12 May 2021

Dear Dr. Denkinger,

Thank you very much for submitting your manuscript "The accuracy of novel antigen rapid diagnostics for SARS-CoV-2: a living systematic review and meta-analysis." (PMEDICINE-D-21-01004R1) for consideration at PLOS Medicine. Your paper was discussed with an academic editor with relevant expertise and sent to independent reviewers, including a statistical reviewer.
The reviews are appended at the bottom of this email and any accompanying reviewer attachments can be seen via the link below: [LINK]

In light of these reviews, we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to invite you to submit a revised version that addresses the reviewers' and editors' comments fully. You will appreciate that we cannot make a decision about publication until we have seen the revised manuscript and your response, and we expect to seek re-review by one or more of the reviewers. In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (eg: page and line number) where each change can be found. Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked up manuscript.

In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300dpi resolution at resubmission; please read our figure guidelines for more information on our requirements: http://journals.plos.org/plosmedicine/s/figures. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool.
If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org. We hope to receive your revised manuscript by Jun 02 2021 11:59PM. Please email us (plosmedicine@plos.org) if you have any questions or concerns.

***Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.***

We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. You can see our competing interests policy here: http://journals.plos.org/plosmedicine/s/competing-interests.

Please use the following link to submit the revised manuscript: https://www.editorialmanager.com/pmedicine/ Your article can be found in the "Submissions Needing Revision" folder.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Please ensure that the paper adheres to the PLOS Data Availability Policy (see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it.

Please let me know if you have any questions, and we look forward to receiving your revised manuscript shortly.

Sincerely,
Richard Turner, PhD
Senior Editor, PLOS Medicine
rturner@plos.org

-----------------------------------------------------------

Requests from the editors:

Please update your search to the end of March 2021, say. Please remove the "Summary" on the title page (this could be repurposed in the author summary, below). Please combine the "Methods" and "Findings" subsections of your abstract. Please add a new final sentence to the combined subsection, which should begin "Study limitations include ..." or similar and should quote 2-3 of the study's main limitations. At line 61, please adapt the style of the "Conclusions" subsection of your abstract as follows: "In this study, we found that ... detected most cases ...", or similar. After the abstract, we will need to ask you to add a new and accessible "Author Summary" section in non-identical prose. You may find it helpful to consult one or two recent research papers in PLOS Medicine to get a sense of the preferred style.
Noting the "living" element quoted in your title, we would suggest moving the references to the website, currently at line 105 and in the Methods, to a single mention in the Discussion section. Here we suggest noting how frequently the data will be updated. Please also explain to the editors how you plan to proceed with any future peer-reviewed analyses of these data. Throughout the text, please adapt the style of reference call-outs as follows: "...[12,13]." (noting the absence of spaces within the square brackets). Noting references 17 and 31, for example, please ensure that all citations have full access details. Noting the preprints cited, can DOI numbers, for example, be added so that these can be accessed easily? Thank you for including the PRISMA checklist. May we suggest using PRISMA 2020 (https://doi.org/10.1371/journal.pmed.1003583) in future? Please break the checklist out into a separate attachment, labelled "S1_PRISMA_Checklist" or similar and referred to as such in the main text. In the checklist, please refer to individual items by section (e.g., "Methods") and paragraph number, not by line or page numbers as these generally change in the event if publication. Comments from the reviewers: *** Reviewer #1: Estimated Authors, I've read with great interest your paper on the accuracy of novel antigen rapid diagnostics for SARS-CoV-2. The paper is well written, and addresses both the potential pros and the actuans cons of these new instruments. Materials are clear, tables and images included in the main text are appropriate. Discussion is consistent with the results and analyzes available evidence. I've no further recommendations and suggest the eventual acceptance of this paper. *** Reviewer #2: Alex McConnachie, Statistical Review Brummer et al report a systematic review and meta-analysis of the diagnostic accuracy of rapid antigen tests for SARS-CoV-2. This review considers the use of statistics in the paper. Overall, this is very impressive. 
It reads quite well, and seems to cover most of the elements that are required. My comments are relatively minor. Line 384 makes a statement that the confidence intervals for the pooled sensitivity in symptomatic and asymptomatic patients are overlapping. I do not like this: overlapping CIs do not necessarily mean that there is no evidence of a difference between the two groups. It would be better to report a p-value, or a confidence interval for the difference in sensitivity. The same principle applies to other subgroup analyses: when looking at subgroups, I want to know whether there is a difference in the quantity being estimated (e.g. sensitivity) between the subgroups. The subheading on line 408 mentions viral load, though the subgroup analysis by viral load is reported in the previous section. Lines 423-426 mention publication bias, but this is very brief. P-values are given, but no interpretation. In what way are the results biased? Can the pooled estimates be corrected for this bias in some way? How does this affect the overall results? Finally, the forest plots do not include pooled estimates, or measures of heterogeneity. Would these be worth showing here?

***

Reviewer #3: Review of the paper entitled "The accuracy of novel antigen rapid diagnostics for SARS-CoV-2: a living systematic review and meta-analysis."

General comment: The topic of this article is important, as there is a great need to follow the improvement in the quality of lateral-flow rapid antigen tests for COVID-19, as has been done for malaria rapid tests by FIND and WHO and for other infectious disease rapid tests by WHO (HIV, HBV, etc.). Having a living review is very innovative and welcome. For such an enterprise, there is a need for a well-defined reference method that allows comparing one brand or test version to another.
This method can be based either on samples of patients with a pre-defined distribution (or at least a pre-defined threshold) of viral loads measured by quantitative PCR, or on laboratory-prepared samples containing defined concentrations of antigen. No international standard for the validation of COVID RDTs is unfortunately available yet. Therefore, the present review has to target studies that include patients whose viral load distribution is very variable from one study to the other, which strongly limits the comparability between studies and the interpretation of pooled results. Stratifying results by viral load thresholds, as the authors have done, is therefore key, and the main results should be these stratified results (using also a threshold of <20 would have been ideal) rather than the overall result.

Specific comments:

Abstract: In "Results", in addition to the pooled sensitivity for Ct values <30 (significant viral loads), the pooled sensitivity for Ct <25 (moderate and high viral loads) should be added, as it is an important result, as mentioned above under the general comment and mentioned by the authors in "Methods". If available, it would be good to also have the result for Ct <20 (high viral loads). In "Results", in order to show the performance of RDTs in usual testing conditions, e.g. using the good-quality brands (SD Biosensor/Roche or Abbott), on the best sample type (nasopharyngeal), during the first week of symptoms (the vast majority of patients seen in testing centers), it would be good to add the pooled sensitivity found in the corresponding relevant studies, if feasible. In "Conclusion", replace the word "screening" by "diagnosis", as you speak here about symptomatic patients. Also, RDTs do detect more than "most cases with high viral load". Indeed, they detect most cases with any viral load and the vast majority of patients with significant viral load (a viral load equivalent to a Ct of 29 is not a high viral load; it is still a very low viral load, as most of these patients have no cultivable virus and are not contagious).

Introduction: Line 92: according to most experts (e.g.: Mina MJ, et al. Rethinking Covid-19 Test Sensitivity - A Strategy for Containment. New England Journal of Medicine. 2020;383(22):e120), RDTs for SARS-CoV-2 are central to the fight against the epidemic, not just complementary.

Methods: Lines 143-147: a comment should be added on the lack of an international standard on the viral load categories, or at least the viral load thresholds, that should be studied, the absence of which impedes accurate comparisons.

Results: Lines 380-386: Subgroup analysis in symptomatic and asymptomatic patients. Presenting overall results could be misleading, as it would suggest that the fact of presenting symptoms or not has a direct influence on the sensitivity, while it could be a confounding factor. Indeed, one hypothesis for the observed difference is that the main influencing factor is rather the distribution of viral loads, which is shifted towards lower loads in asymptomatic persons compared to symptomatic patients, as well mentioned by the authors in the discussion. Therefore, it is essential to present results by viral load categories (if available), or at least viral load thresholds, to know if RDTs are able to detect significant viral loads (and thus most people who are contagious) also in asymptomatic persons. Lines 395-406: Subgroup analysis by Ct values. As mentioned in the general comment, these results are key and should even be the primary outcome measures of such a review. I would thus propose to put this chapter as the second chapter in the overall Results section.

Discussion: Line 449: I would also mention here the 2 results with the Ct thresholds of 25 and 30.
Conclusion: As commented above, a viral load corresponding to a Ct of up to 30 is not a high viral load. Indeed, most studies suggest that the Ct threshold differentiating a culturable from a non-culturable virus is around 25, and that above that threshold most people do not seem to be contagious anymore.

***

Any attachments provided with reviews can be seen via the following link: [LINK]

13 Jun 2021

Submitted filename: Rebuttal Letter AgRDT SR_MA_v5.docx

8 Jul 2021

Dear Dr. Denkinger,

Thank you very much for re-submitting your manuscript "The accuracy of novel antigen rapid diagnostics for SARS-CoV-2: a living systematic review and meta-analysis." (PMEDICINE-D-21-01004R2) for consideration at PLOS Medicine. I have discussed the paper with our academic editor and it was also seen again by one reviewer. I am pleased to tell you that, provided the remaining editorial and production issues are fully dealt with, we expect to be able to accept the paper for publication in the journal. The remaining issues that need to be addressed are listed at the end of this email. Any accompanying reviewer attachments can be seen via the link below. Please take these into account before resubmitting your manuscript: [LINK]

In revising the manuscript for further consideration here, please ensure you address the specific points made by each reviewer and the editors. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file.
A version with changes marked must also be uploaded as a marked up manuscript file. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract. We hope to receive your revised manuscript within 1 week. Please email us (plosmedicine@plos.org) if you have any questions or concerns.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. Please note, when your manuscript is accepted, an uncorrected proof of your manuscript will be published online ahead of the final version, unless you've already opted out via the online submission form. If, for any reason, you do not want an earlier version of your manuscript published online or are unsure if you have already indicated as such, please let the journal staff know immediately at plosmedicine@plos.org.

Please let me know if you have any questions, and we look forward to receiving the revised manuscript.

Sincerely,
Richard Turner, PhD
Senior Editor, PLOS Medicine
rturner@plos.org

------------------------------------------------------------

Requests from Editors:

We are happy to discuss future publications from this project at a suitable time. Please remove "The" from the start of the title, and capitalize "A" immediately following the colon. We note that you quote the pooled specificity with 95% CI at line 552, for example, and suggest that you also do so in the abstract. In the abstract, we suggest removing "... making it difficult to draw conclusions from.". At line 61 we suggest "... cases of SARS-CoV-2 infection" or similar. At line 181, please make the data extraction file available or remove the statement, noting PLOS' policy on "data not shown" (https://journals.plos.org/plosmedicine/s/data-availability). At line 333, we suggest making that "... a competing interest". At line 337, please make that "fewer than". Please remove the trademarks at lines 386, 411 and 506, and all other instances (including display items). Data "exists" at line 614? At line 618, again we suggest adding "... for detection of SARS-CoV-2 infection" or similar. Please use the general style "... 5 randomly selected papers" throughout the text, although numbers should be spelt out at the start of sentences. In the reference list, please use the journal name abbreviation "PLoS ONE". Can references 271 & 278 be updated?

Comments from Reviewers:

***

Reviewer #2: Alex McConnachie, Statistical Review

I thank the authors for their consideration of my original points. On the issue of talking about overlapping CIs, I have read the Amrhein paper, and whilst I would not consider this to be "guidance", I think the point of the paper is not that p-values or CIs should not be reported, simply that they should not be interpreted as binary indicators of a difference or not. I still think a p-value is a good way to describe the strength of evidence for a difference between subgroups, and a point estimate and CI is a good way to describe the magnitude of a difference. My point was more about whether it is OK to draw conclusions from whether two CIs are overlapping or not. A p-value, or better, an estimate and CI for the difference between subgroups, is a little more informative than a statement about CIs overlapping. However, this is a minor point, and I do not think the authors are interpreting the data incorrectly. I take the point about not reporting heterogeneity estimates for meta-analyses of diagnostic tests - I had not appreciated the nuances in this setting. However, the paper does report pooled estimates of sensitivity and specificity, so I am less clear as to why these are not added to the forest plots. Nevertheless, I accept that this is not the norm, so I will not insist on it. I welcome the addition of some commentary on the issue of publication bias, though I note that the authors rely on the "significance" of the Deeks' test p-values as a measure of the magnitude of any bias. There may be the same degree of publication bias for all types of diagnostic test; the fact that the Deeks' tests are not statistically significant within individual subgroups of papers does not mean that there is no publication bias. Looking at the funnel plots, and the Deeks' test regression lines, the extent of publication bias in each subgroup shown is quite plausibly the same. I would remove the comments on lines 520-522 about there being less publication bias for these tests.

***

Any attachments provided with reviews can be seen via the following link: [LINK]

14 Jul 2021

Dear Dr Denkinger,

On behalf of my colleagues and the Academic Editor, Dr Suthar, I am pleased to inform you that we have agreed to publish your manuscript "Accuracy of novel antigen rapid diagnostics for SARS-CoV-2: a living systematic review and meta-analysis." (PMEDICINE-D-21-01004R3) in PLOS Medicine. Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. Please be aware that it may take several days for you to receive this email; during this time no action is required by you. Once you have received these formatting requests, please note that your manuscript will not be scheduled for publication until you have made the required changes. Prior to final acceptance, we suggest spelling out "Ct value" at first use in the abstract.

In the meantime, please log into Editorial Manager at http://www.editorialmanager.com/pmedicine/, click the "Update My Information" link at the top of the page, and update your user information to ensure an efficient production process.

PRESS

We frequently collaborate with press offices. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximise its impact. If the press office is planning to promote your findings, we would be grateful if they could coordinate with medicinepress@plos.org. If you have not yet opted out of the early version process, we ask that you notify us immediately of any press plans so that we may do so on your behalf. We also ask that you take this opportunity to read our Embargo Policy regarding the discussion, promotion and media coverage of work that is yet to be published by PLOS. As your manuscript is not yet published, it is bound by the conditions of our Embargo Policy. Please be aware that this policy is in place both to ensure that any press coverage of your article is fully substantiated and to provide a direct link between such coverage and the published work. For full details of our Embargo Policy, please visit http://www.plos.org/about/media-inquiries/embargo-policy/.

Thank you again for submitting to PLOS Medicine. We look forward to publishing your paper.

Sincerely,
Richard Turner, PhD
Senior Editor, PLOS Medicine
rturner@plos.org
  236 in total

1.  Rapid SARS-CoV-2 antigen detection potentiates early diagnosis of COVID-19 disease.

Authors:  Ying Lv; Yuanyuan Ma; Yanhui Si; Xiaoyi Zhu; Lin Zhang; Haiyan Feng; Di Tian; Yixin Liao; Tiefu Liu; Hongzhou Lu; Yun Ling
Journal:  Biosci Trends       Date:  2021-03-26       Impact factor: 2.400

2.  Field evaluation of COVID-19 antigen tests versus RNA based detection: Potential lower sensitivity compensated by immediate results, technical simplicity, and low cost.

Authors:  Elaine Monteiro Matsuda; Ivana Barros de Campos; Isabela Penteriche de Oliveira; Daniela Rodrigues Colpas; Andreia Moreira Dos Santos Carmo; Luís Fernando de Macedo Brígido
Journal:  J Med Virol       Date:  2021-04-08       Impact factor: 2.327

3.  Detection of the SARS-CoV-2 spike protein in saliva with Shrinky-Dink© electrodes.

Authors:  Julia A Zakashansky; Amanda H Imamura; Darwin F Salgado; Heather C Romero Mercieca; Raphael F L Aguas; Angelou M Lao; Joseph Pariser; Netzahualcóyotl Arroyo-Currás; Michelle Khine
Journal:  Anal Methods       Date:  2021-02-25       Impact factor: 3.532

Review 4.  A critical review of point-of-care diagnostic technologies to combat viral pandemics.

Authors:  Micaela L Everitt; Alana Tillery; Martha G David; Nikita Singh; Aviva Borison; Ian M White
Journal:  Anal Chim Acta       Date:  2020-10-11       Impact factor: 6.558

5.  Diagnostic Accuracy of the Panbio Severe Acute Respiratory Syndrome Coronavirus 2 Antigen Rapid Test Compared with Reverse-Transcriptase Polymerase Chain Reaction Testing of Nasopharyngeal Samples in the Pediatric Population.

Authors:  Serena Villaverde; Sara Domínguez-Rodríguez; Gema Sabrido; Conchita Pérez-Jorge; Marta Plata; María Pilar Romero; Carlos Daniel Grasa; Ana Belén Jiménez; Elena Heras; Antonio Broncano; María Del Mar Núñez; Marta Illán; Paloma Merino; Beatriz Soto; David Molina-Arana; Amanda Bermejo; Pablo Mendoza; Manuel Gijón; Begoña Pérez-Moneo; Cinta Moraleda; Alfredo Tagarro
Journal:  J Pediatr       Date:  2021-01-21       Impact factor: 4.406

6.  Implementing SARS-CoV-2 Rapid Antigen Testing in the Emergency Ward of a Swiss University Hospital: The INCREASE Study.

Authors:  Giorgia Caruana; Antony Croxatto; Eleftheria Kampouri; Antonios Kritikos; Onya Opota; Maryline Foerster; René Brouillet; Laurence Senn; Reto Lienhard; Adrian Egli; Giuseppe Pantaleo; Pierre-Nicolas Carron; Gilbert Greub
Journal:  Microorganisms       Date:  2021-04-10

7.  Evaluation of Lumipulse® G SARS-CoV-2 antigen assay automated test for detecting SARS-CoV-2 nucleocapsid protein (NP) in nasopharyngeal swabs for community and population screening.

Authors:  Alessio Gili; Riccardo Paggi; Carla Russo; Elio Cenci; Donatella Pietrella; Alessandro Graziani; Fabrizio Stracci; Antonella Mencacci
Journal:  Int J Infect Dis       Date:  2021-02-26       Impact factor: 3.623

8.  Immunochromatographic test for the detection of SARS-CoV-2 in saliva.

Authors:  Katsuhito Kashiwagi; Yoshikazu Ishii; Kotaro Aoki; Shintaro Yagi; Tadashi Maeda; Taito Miyazaki; Sadako Yoshizawa; Katsumi Aoyagi; Kazuhiro Tateda
Journal:  J Infect Chemother       Date:  2020-12-23       Impact factor: 2.211

9.  An enzyme-based immunodetection assay to quantify SARS-CoV-2 infection.

Authors:  Carina Conzelmann; Andrea Gilg; Rüdiger Groß; Desiree Schütz; Nico Preising; Ludger Ständker; Bernd Jahrsdörfer; Hubert Schrezenmeier; Konstantin M J Sparrer; Thomas Stamminger; Steffen Stenger; Jan Münch; Janis A Müller
Journal:  Antiviral Res       Date:  2020-07-29       Impact factor: 5.970

  82 in total

1.  Highly Sensitive and Quantitative Diagnosis of SARS-CoV-2 Using a Gold/Platinum Particle-Based Lateral Flow Assay and a Desktop Scanning Electron Microscope.

Authors:  Hideya Kawasaki; Hiromi Suzuki; Kazuki Furuhashi; Keita Yamashita; Jinko Ishikawa; Osanori Nagura; Masato Maekawa; Takafumi Miwa; Takumi Tandou; Takahiko Hariyama
Journal:  Biomedicines       Date:  2022-02-15

Review 2.  Rapid, point-of-care antigen tests for diagnosis of SARS-CoV-2 infection.

Authors:  Jacqueline Dinnes; Pawana Sharma; Sarah Berhane; Susanna S van Wyk; Nicholas Nyaaba; Julie Domen; Melissa Taylor; Jane Cunningham; Clare Davenport; Sabine Dittrich; Devy Emperador; Lotty Hooft; Mariska Mg Leeflang; Matthew Df McInnes; René Spijker; Jan Y Verbakel; Yemisi Takwoingi; Sian Taylor-Phillips; Ann Van den Bruel; Jonathan J Deeks
Journal:  Cochrane Database Syst Rev       Date:  2022-07-22

3.  Guidelines for COVID-19 Laboratory Testing for Emergency Departments From the New Diagnostic Technology Team of the Taiwan Society of Emergency Medicine.

Authors:  Chien-Chang Lee; Yi-Tzu Lee; Chih-Hung Wang; I-Min Chiu; Weide Tsai; Yan-Ren Lin; Chih-Huang Li; Chin Wang Hsu; Pei-Fang Lai; Jiann-Hwa Chen; Jeffrey Che-Hung Tsai; Shih-Hung Tsai; Chorng-Kuang How
Journal:  J Acute Med       Date:  2022-06-01

Review 4.  Two Years into the COVID-19 Pandemic: Lessons Learned.

Authors:  Severino Jefferson Ribeiro da Silva; Jessica Catarine Frutuoso do Nascimento; Renata Pessôa Germano Mendes; Klarissa Miranda Guarines; Caroline Targino Alves da Silva; Poliana Gomes da Silva; Jurandy Júnior Ferraz de Magalhães; Justin R J Vigar; Abelardo Silva-Júnior; Alain Kohl; Keith Pardee; Lindomar Pena
Journal:  ACS Infect Dis       Date:  2022-08-08       Impact factor: 5.578

5.  Comparative analyses of eighteen rapid antigen tests and RT-PCR for COVID-19 quarantine and surveillance-based isolation.

Authors:  Chad R Wells; Abhishek Pandey; Seyed M Moghadas; Burton H Singer; Gary Krieger; Richard J L Heron; David E Turner; Justin P Abshire; Kimberly M Phillips; A Michael Donoghue; Alison P Galvani; Jeffrey P Townsend
Journal:  Commun Med (Lond)       Date:  2022-07-09

Review 6.  Clinical and Genetic Characteristics of Coronaviruses with Particular Emphasis on SARS-CoV-2 Virus.

Authors:  Joanna Iwanicka; Tomasz Iwanicki; Marcin Kaczmarczyk; Włodzimierz Mazur
Journal:  Pol J Microbiol       Date:  2022-06-19

7.  Rapid antigen testing by community health workers for detection of SARS-CoV-2 in Dhaka, Bangladesh: a cross-sectional study.

Authors:  Ayesha Sania; Ahmed Nawsher Alam; A S M Alamgir; Joanna Andrecka; Eric Brum; Fergus Chadwick; Tasnuva Chowdhury; Zakiul Hasan; Davina L Hill; Farzana Khan; Mikolaj Kundegorski; Seonjoo Lee; Mahbubur Rahman; Yael K Rayport; Tahmina Shirin; Motahara Tasneem; Katie Hampson
Journal:  BMJ Open       Date:  2022-06-01       Impact factor: 3.006

8.  Assessment of the Quality of COVID-19 Antigen Rapid Diagnostic Testing in the Testing Sites of Ekiti State, Nigeria: A Quality Improvement Cross-Sectional Study.

Authors:  Olufunmilola Kolude; Eyitayo E Emmanuel; Ayomide O Aibinuomo; Tope M Ipinnimo; Mary O Ilesanmi; John A Adu
Journal:  Cureus       Date:  2022-04-16

9.  Accuracy of rapid point-of-care antigen-based diagnostics for SARS-CoV-2: An updated systematic review and meta-analysis with meta-regression analyzing influencing factors.

Authors:  Lukas E Brümmer; Stephan Katzenschlager; Sean McGrath; Stephani Schmitz; Mary Gaeddert; Christian Erdmann; Marc Bota; Maurizio Grilli; Jan Larmann; Markus A Weigand; Nira R Pollock; Aurélien Macé; Berra Erkosar; Sergio Carmona; Jilian A Sacks; Stefano Ongarello; Claudia M Denkinger
Journal:  PLoS Med       Date:  2022-05-26       Impact factor: 11.613

10.  Low testing rates limit the ability of genomic surveillance programs to monitor SARS-CoV-2 variants: a mathematical modelling study.

Authors:  Alvin X Han; Amy Toporowski; Jilian A Sacks; Mark Perkins; Sylvie Briand; Maria van Kerkhove; Emma Hannay; Sergio Carmona; Bill Rodriguez; Edyth Parker; Brooke E Nichols; Colin A Russell
Journal:  medRxiv       Date:  2022-05-23
