Literature DB >> 35536792

Field evaluation of the performance of seven antigen rapid diagnostic tests for the diagnosis of SARS-CoV-2 infection in Uganda.

Josephine Bwogi1, Tom Lutalo1, Phionah Tushabe1, Henry Bukenya1, James Peter Eliku1, Isaac Ssewanyana2, Susan Nabadda2, Christopher Nsereko3, Matthew Cotten4, Robert Downing1, Julius Lutwama1, Pontiano Kaleebu1,4.   

Abstract

OBJECTIVE: The objective of this study was to evaluate the performance of seven antigen rapid diagnostic tests (Ag RDTs) in a clinical setting to identify those that could be recommended for use in the diagnosis of SARS-CoV-2 infection in Uganda.
METHODS: This was a cross-sectional prospective study. Nasopharyngeal swabs were collected consecutively from COVID-19 PCR positive and COVID-19 PCR negative participants at isolation centers and points of entry, and tested with the SARS-CoV-2 Ag RDTs. Test sensitivity and specificity were generated by comparing results against qRT-PCR results (Berlin Protocol) at a cycle threshold (Ct) cut-off of ≤39. Sensitivity was also calculated at Ct cut-offs ≤29 and ≤33.
RESULTS: None of the Ag RDTs had a sensitivity of ≥80% at Ct cut-off values of ≤33 or ≤39. Two kits, Panbio™ COVID-19 Ag and VivaDiag™ SARS-CoV-2 Ag, had a sensitivity of ≥80% at a Ct cut-off value of ≤29. Four kits (BIOCREDIT COVID-19 Ag, COVID-19 Ag Respi-Strip, MEDsan® SARS-CoV-2 Antigen Rapid Test and Panbio™ COVID-19 Ag Rapid Test) had a specificity of ≥97%.
CONCLUSIONS: This evaluation identified one Ag RDT, the Panbio™ COVID-19 Ag, whose performance at high viral load (Ct value ≤29) reached that recommended by WHO. This kit was recommended for screening of patients with COVID-19-like symptoms presenting at health facilities.


Year:  2022        PMID: 35536792      PMCID: PMC9089886          DOI: 10.1371/journal.pone.0265334

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Coronavirus disease 2019 (COVID-19) was first confirmed in Uganda in March 2020 using real-time reverse transcription polymerase chain reaction (qRT-PCR) tests. Over time, the number of specimens tested and the number of COVID-19 cases increased, with a second wave of the epidemic starting in May 2021. By 6th October 2021, an estimated 1,680,863 cumulative PCR tests had been performed, while the cumulative number of COVID-19 cases in Uganda stood at 123,445 with 3,152 cumulative deaths, giving a case fatality rate of 1.1% [1]. However, it is unlikely that all cases were detected, since not all suspected cases were tested [1]. There is an increasing demand for testing, and this sometimes leads to increased turn-around times (72 hours instead of the desired 24 hours). The qRT-PCR tests are also expensive (50–65 USD per test) when personal protective equipment and labour costs are included. In a search for cheaper diagnostic tests with shorter turn-around times than qRT-PCR [2], the Ministry of Health requested the Uganda Virus Research Institute (UVRI), a COVID-19 national, Africa CDC and WHO reference laboratory, to evaluate the performance of COVID-19 Ag RDTs. UVRI had previously evaluated and reported on one Ag RDT, the STANDARD Q COVID-19 Ag Test [3]. Here we provide a performance report for an additional seven Ag RDTs: BIOCREDIT COVID-19 Ag (RapiGEN, Inc., Gyeonggi-do, Korea), COVID-19 Ag Respi-Strip (Coris BioConcept, Gembloux, Belgium), PCL COVID19 Ag Rapid FIA (PCL, Inc., Geumcheon-gu, Seoul, Korea), MEDsan® SARS-CoV-2 Antigen Rapid test (MEDsan®, Hamburg, Germany), Panbio™ COVID-19 Ag Rapid Test (Abbott Rapid Diagnostics, Jena, Germany), Novegent COVID-19 Antigen Rapid Test Kit (colloidal gold) (Chongqing Novegent Biotech Co., Ltd, Chongqing, China) and VivaDiag™ SARS-CoV-2 Ag Rapid Test kit (VivaChek Biotech (Hangzhou) Co., Ltd, Hangzhou, China).
According to the manufacturers’ ‘Information for Use’ (IFU) inserts, these Ag RDTs were designed to directly detect SARS-CoV-2 antigens in respiratory secretions. With the exception of the PCL kit, the six other Ag RDTs evaluated were Conformité Européenne (CE) marked (S1 Table). Although the manufacturers reported high sensitivities (60.0–100%) and specificities (97.8–100%) for their Ag RDTs (S1 Table), the limited peer-reviewed reports on their performance in clinical settings [4-8] showed conflicting results. The objective of this study was to evaluate the performance of the above Ag RDTs in clinical settings, as compared to qRT-PCR, for detecting SARS-CoV-2 in nasopharyngeal samples, in order to recommend Ag RDTs that can be used for COVID-19 diagnosis in Uganda.

Materials and methods

Study sites, study design, and implementation

This was a prospective cross-sectional study conducted from August 2020 to March 2021. The study was carried out at national and regional COVID-19 isolation centres, established by the Ministry of Health, that received patients from all over the country, and at points of entry into the country. The study enrolled travellers requiring testing at points of entry (POE) and patients admitted at isolation centres. At isolation centres, we used results from previous PCR tests and records of symptoms in patient files to identify participants for recruitment, while at points of entry, participants with and without COVID-19 symptoms were enrolled. Patients who had difficulty in breathing and were receiving oxygen therapy or were on ventilators were excluded from the evaluation. Participants were enrolled consecutively by convenience sampling at the selected sites. Data collected from each participant were entered in an Excel sheet and included socio-demographic characteristics (age, sex, health unit of isolation or point of entry), presence or absence of symptoms, date of admission and date of first symptom(s).

Sample collection and testing

The evaluation of each Ag RDT was performed during a different time period, using a different set of samples drawn from participants enrolled at that time, because the RDTs were received at UVRI from suppliers at different times and in different quantities. Two nasopharyngeal samples were collected, one from each nostril: one sample was tested on the Ag RDT under evaluation and the second was used for the qRT-PCR test. The sequence of collection of the nasopharyngeal samples for the Ag RDT and the qRT-PCR test was alternated. Samples for qRT-PCR and for the COVID-19 Ag Respi-Strip were collected in a tube containing virus transport medium (Hanks balanced salt solution, fetal bovine serum, gentamicin sulfate and amphotericin B). Samples for the other six Ag RDTs under evaluation (BIOCREDIT COVID-19 Ag, PCL COVID 19 Ag Rapid FIA, MEDsan® SARS-CoV-2 Antigen Rapid test, Panbio™ COVID-19 Ag Rapid Test, Novegent COVID-19 Antigen Rapid Test Kit (colloidal gold), and VivaDiag™ SARS-CoV-2 Ag Rapid Test kit) were placed in the buffer provided by the kit manufacturer. Antigen RDT testing was done within one hour of specimen collection, without any cooling, following the manufacturer’s instructions. The results of six Ag RDTs (BIOCREDIT COVID-19 Ag, COVID-19 Ag Respi-Strip, MEDsan® SARS-CoV-2 Antigen Rapid test, Panbio™ COVID-19 Ag Rapid Test, Novegent COVID-19 Antigen Rapid Test Kit (colloidal gold), and VivaDiag™ SARS-CoV-2 Ag Rapid Test kit) were read visually by trained UVRI laboratory staff, while results from the PCL COVID19 Ag Rapid FIA were not visible to the naked eye and were read using the PCLOK EZ instrument supplied with the test kit. Interpretation of antigen RDT results was as per the respective manufacturer’s guidelines. Those reading and interpreting the RDT results sometimes had access to the clinical information of the participant. Samples collected for qRT-PCR were stored at 2–8°C and transported to UVRI within one week of collection.
At UVRI, samples were stored at -80°C before qRT-PCR testing: for 1 to 28 days (median 4–7 days) for six of the RDT evaluations, and for 1 to 37 days (median 21 days) for the Respi-Strip evaluation. The personnel carrying out the qRT-PCR testing were blinded to the antigen RDT results. qRT-PCR testing was carried out at the EPI laboratory, one of the UVRI laboratories that tests for SARS-CoV-2, as described below:

RNA extraction

Viral RNA was extracted from the samples using the QIAGEN QIAamp Viral RNA extraction kit (QIAGEN, Hilden, Germany) following the manufacturer’s guidelines.

Real-time reverse-transcription PCR

qRT-PCR testing was carried out using the Charité-Berlin protocol [9]. This assay was selected as the reference standard for the kit evaluation because the protocol has good sensitivity, with a limit of detection (LoD) of 3.9 RNA copies per reaction for the E-gene assay and 3.6 RNA copies per reaction for the RdRp-gene assay using in vitro transcribed RNA identical to 2019 novel coronavirus sequences, and a specificity of 100% [9]. Screening for SARS-related coronaviruses was done using the SuperScript™ III Platinum™ One-Step qRT-PCR kit (Invitrogen, Carlsbad, CA) and the LightMix® SarbecoV E-gene primer/probe mix (TIB MOLBIOL, Berlin, Germany) following the manufacturer’s guidelines. All samples that had Ct values <45 in this assay were subjected to a SARS-CoV-2 confirmatory qRT-PCR using the SuperScript III real-time RT-PCR kit (Invitrogen, Carlsbad, CA) and the LightMix® Modular SARS-CoV-2 (COVID-19) RdRp primer/probe mix (TIB MOLBIOL, Berlin, Germany) following the manufacturer’s guidelines. A sample was considered positive if it had a Ct value ≤39 on the confirmatory qRT-PCR. A sample was considered negative if the Ct value was undetermined on the screening qRT-PCR, or if it was positive on the screening qRT-PCR but undetermined or >39 on the confirmatory qRT-PCR. The thermal cycling conditions consisted of a reverse transcription step at 50°C for 10 minutes, an activation step at 95°C for 10 minutes, and 45 cycles of 95°C for 15 seconds and 60°C for 1 minute. The PCR platform used was the Applied Biosystems 7500 Real-Time PCR System (Marsiling, Singapore).
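The two-step decision rule above (E-gene screening, then RdRp confirmation) can be summarised as a short sketch in Python. This is illustrative only; the function and constant names are ours, and a Ct value of None stands for "undetermined" (no amplification within 45 cycles):

```python
from typing import Optional

SCREEN_CT_MAX = 45   # E-gene screening assay: Ct < 45 triggers confirmatory testing
CONFIRM_CT_MAX = 39  # RdRp confirmatory assay: Ct <= 39 is called positive

def classify_sample(screen_ct: Optional[float], confirm_ct: Optional[float]) -> str:
    """Apply the two-step Berlin-protocol decision rule described in the text.

    None represents an 'undetermined' result (no Ct reported by the assay).
    """
    if screen_ct is None or screen_ct >= SCREEN_CT_MAX:
        return "negative"   # undetermined on the screening qRT-PCR
    if confirm_ct is None or confirm_ct > CONFIRM_CT_MAX:
        return "negative"   # screening positive but not confirmed
    return "positive"       # Ct <= 39 on the confirmatory qRT-PCR
```

A screening-positive sample with a confirmatory Ct of 40, for example, is classified negative under this rule, even though the screening assay amplified.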

Data management and analysis

Ct values were recorded against each sample’s identifier and the data entered in an MS Excel sheet, which was imported into STATA® v15 (StataCorp LP, College Station, TX, USA) for analysis. Inconsistency checks were performed in STATA and data cleaning was done in collaboration with the field and laboratory staff. Samples with missing antigen RDT or qRT-PCR results were excluded from the analysis. Univariate analysis was performed to generate descriptive summaries of demographic and sample-source characteristics. Frequencies, means, confidence intervals and medians were generated as summary statistics.

Performance evaluation

Sensitivity

Sensitivity was calculated as the number of specimens determined as positive by the Ag RDT under evaluation divided by the number of specimens determined as positive by PCR and expressed as a percentage with confidence intervals.

Specificity

Specificity was calculated as the number of specimens determined as negative by the Ag RDT under evaluation divided by the number of specimens determined as negative by PCR and expressed as a percentage with confidence intervals.

Accuracy

The accuracy was calculated as the proportion of results determined by the Ag RDT under evaluation that agreed with the PCR results and expressed as a percentage with confidence intervals.

False positive rate

The false positive rate was calculated as False Positive/(False Positive + True Negative) and expressed as a percentage.

False negative rate

The false negative rate was calculated as False Negative/(False Negative + True Positive) and expressed as a percentage. Sensitivity, specificity, accuracy, false positive rate and false negative rate calculations were performed using the proportion command in STATA 15, and confidence intervals were produced with the Wilson score method [10]. Sensitivity of the Ag RDTs was also determined at Ct cut-off values of ≤29, ≤33 and ≤39. Sensitivity and specificity of all the Ag RDTs were also determined according to whether symptom onset was within 7 days of specimen collection or after 7 days. A sub-analysis of sensitivity was also performed for each Ag RDT at different Ct cut-off values, following discussions with relevant stakeholders at the Ministry of Health and based on the literature [11-13]: a strong positive was defined as a Ct value ≤29, a moderate positive as a Ct value of 30–37, and a low positive as a Ct value of 38–39.
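The five metrics defined above, together with Wilson score intervals, can be reproduced from a 2×2 table in a few lines of Python. This is a sketch with illustrative counts (not the study data pipeline, which used STATA's proportion command); the counts below were chosen so the sensitivity matches the BIOCREDIT example in the Results:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

def rdt_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, accuracy, FPR and FNR as defined in the text."""
    n = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),   # RDT-positive among PCR-positive
        "specificity": tn / (tn + fp),   # RDT-negative among PCR-negative
        "accuracy": (tp + tn) / n,       # agreement with PCR
        "fpr": fp / (fp + tn),           # False Positive / (False Positive + True Negative)
        "fnr": fn / (fn + tp),           # False Negative / (False Negative + True Positive)
    }

# Illustrative counts only: 135 PCR-positive (37 detected) and 112 PCR-negative (110 correct).
m = rdt_metrics(tp=37, fp=2, fn=98, tn=110)
lo, hi = wilson_ci(37, 135)  # Wilson interval around the sensitivity estimate
```

With these counts the sensitivity is 37/135 ≈ 27.4% and the Wilson interval spans roughly 21–35%, close to the values reported for the BIOCREDIT kit in Table 2.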

Ethical considerations

The study protocol [14] was approved by Uganda Virus Research Institute’s Research Ethics Committee (No. GC/127/20) and the Uganda National Council for Science and Technology (No HS637ES). Informed written consent was obtained from each of the participants before enrolment into the study.

Results

Participant demographics

A total of 1,533 participants were included in the evaluation of the seven Ag kits, as detailed in Table 1 below. The mean age was 36 years; 1,024 (66.8%) participants were male, 493 (32.2%) were female, and 16 (1.0%) had missing information on sex.
Table 1

Demographic characteristics of participants.

| Antigen RDT | BIOCREDIT COVID-19 Ag | COVID-19 Ag Respi-Strip | PCL COVID19 Ag Rapid FIA | MEDsan® SARS-CoV-2 Antigen Rapid test | Panbio COVID-19 Ag Rapid test | Novegent COVID-19 Antigen Rapid test kit (colloidal gold) | VivaDiag SARS-CoV-2 Ag Rapid Test kit |
|---|---|---|---|---|---|---|---|
| N | 247 | 194 | 172 | 243 | 185 | 229 | 263 |
| **Sex** | | | | | | | |
| M | 165 (66.8) | 156 (80.4) | 142 (82.6) | 156 (64.2) | 136 (73.5) | 132 (57.6) | 137 (52.1) |
| F | 76 (30.8) | 38 (19.6) | 30 (17.4) | 77 (31.7) | 49 (26.5) | 97 (42.4) | 126 (47.9) |
| Missing | 6 (2.4) | – | – | 10 (4.1) | – | – | – |
| **Age in years** | | | | | | | |
| <20 | 2 (0.8) | 4 (2.1) | 3 (1.7) | 5 (2.1) | 6 (3.2) | 1 (0.4) | 4 (1.5) |
| 20–29 | 68 (27.5) | 54 (27.8) | 49 (28.5) | 39 (16.0) | 53 (28.7) | 28 (12.2) | 77 (29.3) |
| 30–39 | 87 (35.2) | 62 (32.0) | 48 (27.9) | 39 (16.0) | 59 (31.9) | 52 (22.7) | 66 (25.1) |
| 40–49 | 29 (11.8) | 34 (17.5) | 22 (12.8) | 36 (14.8) | 33 (17.8) | 46 (20.1) | 41 (15.6) |
| 50 and above | 61 (24.7) | 17 (8.8) | 50 (29.1) | 41 (16.9) | 34 (18.4) | 97 (42.4) | 71 (27.0) |
| Missing | – | 23 (11.9) | – | 83 (34.2) | – | 5 (2.2) | 4 (1.5) |
| **Duration of symptoms** | | | | | | | |
| 0–7 days | 97 (39.3) | No information | 32 (18.6) | 68 (28.0) | 117 (63.2) | 137 (59.8) | 235 (89.4) |
| >7 days | 42 (17.0) | – | 11 (6.4) | 49 (20.2) | 20 (10.8) | 88 (38.4) | 21 (8.0) |
| No symptoms/missing | 108 (43.7) | – | 129 (75.0) | 126 (51.9) | 48 (26.0) | 4 (1.8) | 7 (2.7) |
| **PCR (Berlin protocol)** | | | | | | | |
| Positive | 135 (54.7) | 67 (34.5) | 93 (54.1) | 123 (50.6) | 85 (46.0) | 100 (43.7) | 43 (16.4) |
| Negative | 112 (45.3) | 127 (65.5) | 79 (45.9) | 120 (49.4) | 100 (54.1) | 129 (56.3) | 220 (83.6) |

BIOCREDIT COVID-19 Ag test

Participant characteristics

Samples for the evaluation of the BIOCREDIT COVID-19 Ag RDT were collected in August 2020 from 247 participants (S1 Fig): 122 (49.4%) from Mulago National Referral Hospital (NRH), 27 (10.9%) from Entebbe Regional Referral Hospital (RRH) and 98 (39.7%) from Namboole National Stadium isolation centre. One hundred and thirty-five participants (54.7%) were PCR positive, with 48 (36.1%) of these being females. The median age of the participants was 33 years (IQR 28–39 years). There were 97 participants with symptom onset 0–7 days prior to the evaluation, 42 with symptom onset more than 7 days prior to the evaluation, and 108 participants who were asymptomatic before the Ag RDT test (Table 1).

BIOCREDIT COVID-19 Ag test performance

There were 39 (15.7%) specimens that were BIOCREDIT COVID-19 Ag test positive (S1 Fig). The sensitivity of the test was 27.4% (95% CI: 20.5%–35.6%) at a Ct cut-off of ≤39, 44.7% (95% CI: 33.8%–56.3%) at a Ct cut-off of ≤33, and 60.0% (95% CI: 45.5%–72.9%) at a Ct cut-off of ≤29 (Table 2).
Table 2

Field performance of 7 antigen RDTs for the diagnosis of SARS-CoV-2 virus in Uganda.

| Kit name | Total participants recruited | Sensitivity % at Ct ≤39 (95% CI) | Sensitivity % at Ct ≤33 (95% CI) | Sensitivity % at Ct ≤29 (95% CI) | Specificity % (95% CI) | Accuracy % at Ct ≤39 (95% CI) |
|---|---|---|---|---|---|---|
| BIOCREDIT COVID-19 Ag | 247 | 27.4 (20.5–35.6) | 44.7 (33.8–56.3) | 60 (45.5–72.9) | 98.2 (93.1–99.6) | 59.8 (53.6–65.8) |
| COVID-19 Ag Respi-Strip | 194 | 19.4 (11.5–30.9) | 35.1 (21.1–52.4) | 46.4 (28.2–65.7) | 99.2 (94.5–99.9) | 71.6 (64.8–77.6) |
| PCL COVID19 Ag Rapid FIA | 172 | 37.6 (28.2–48.1) | 71.0 (51.8–84.8) | 77.8 (57.0–90.2) | 89.9 (80.8–94.9) | 61.6 (54.1–68.7) |
| MEDsan® SARS-CoV-2 Antigen Rapid test | 243 | 13.0 (8.1–20.3) | 23.7 (44.7–74.3) | 43.5 (24.0–65.3) | 100 (96.9–100) | 56.0 (49.6–62.1) |
| Panbio COVID-19 Ag Rapid test | 185 | 49.4 (38.7–60.1) | 72 (57.6–83.0) | 85.7 (66.0–94.9) | 100 (96.4–100) | 76.8 (70.1–82.3) |
| Novegent COVID-19 Antigen Rapid test kit (colloidal gold) | 229 | 46 (36.3–56.0) | 58.2 (44.5–70.8) | 66.7 (46.0–82.5) | 89.9 (83.3–94.1) | 70.7 (64.5–76.3) |
| VivaDiag COVID-19 Ag | 263 | 30.2 (18.0–46.1) | 31.6 (18.4–48.6) | 80 (37.8–96.3) | 94.1 (90.1–96.6) | 83.7 (78.6–87.7) |
Among the participants with 0–7 days between symptom onset and the date of the Ag RDT test, the sensitivity of the test was 39.3% (95% CI: 27.7%–52.4%). The sensitivity was 11.1% (95% CI: 3.3%–31.1%) for participants with more than 7 days from symptom onset to the date of the Ag RDT test, and 21.3% (95% CI: 11.6%–35.8%) for asymptomatic participants. The Ag RDT specificity was 98.2% (95% CI: 93.1%–99.6%) and the accuracy of the test was 59.8% (95% CI: 53.6%–65.8%) (Table 2). The false positive rate (FPR) was 5.1% (95% CI: 1.2%–19.3%) and the false negative rate (FNR) was 72.6% (95% CI: 64.4%–79.5%) (Table 2). The BIOCREDIT COVID-19 Ag test result was negative for a large proportion of strong, moderate and low positive specimens by qRT-PCR (40%, 90.4% and 100% respectively), with the poorest performance at Ct values of 30 and above (Table 3).
Table 3

COVID-19 Antigen Rapid test results compared to the Ct values on RT-PCR.

| Kit | Result | Strong positive (Ct ≤29) N (%) | Moderate positive (Ct 30–37) N (%) | Low positive (Ct 38–39) N (%) |
|---|---|---|---|---|
| BIOCREDIT COVID-19 Ag | Positive | 30 (60) | 7 (9.6) | 0 (0.0) |
| | Negative | 20 (40) | 66 (90.4) | 12 (100) |
| PCL COVID-19 | Positive | 21 (77.8) | 12 (20.7) | 2 (25.0) |
| | Negative | 6 (22.2) | 46 (79.3) | 6 (75.0) |
| Respi-Strip COVID-19 Ag | Positive | 13 (46.4) | 0 (0) | 0 (0) |
| | Negative | 15 (53.6) | 37 (100) | 2 (100) |
| MEDsan® SARS-CoV-2 Antigen Rapid test | Positive | 10 (43.5) | 6 (6.5) | 0 (0.0) |
| | Negative | 13 (56.5) | 87 (93.6) | 7 (100) |
| Panbio COVID-19 Ag Rapid test | Positive | 24 (85.7) | 17 (30.9) | 1 (50.0) |
| | Negative | 4 (14.3) | 38 (69.1) | 1 (50.0) |
| Novegent COVID-19 Antigen Rapid test kit (colloidal gold) | Positive | 18 (66.7) | 27 (40.3) | 1 (16.7) |
| | Negative | 9 (33.3) | 40 (59.7) | 5 (83.3) |
| VivaDiag COVID-19 Ag | Positive | 8 (80.0) | 5 (15.6) | – |
| | Negative | 2 (20.0) | 27 (84.4) | – |

COVID-19 Ag Respi-Strip test (Respi-Strip)

Samples for the Respi-Strip evaluation were collected in August 2020 from 194 participants (S2 Fig): 84 (43.3%) from Mulago NRH, 71 (36.6%) from Malaba POE, 33 (17.0%) from Entebbe RRH and 6 (3.1%) from Mbale RRH. Seventy-six (76) participants were tested with both the COVID-19 Ag Respi-Strip and the PCL COVID19 Ag Rapid FIA, fifteen (15) with both the COVID-19 Ag Respi-Strip and the BIOCREDIT COVID-19 Ag Test, and one hundred and three (103) with the COVID-19 Ag Respi-Strip only. Sixty-seven (34.5%) of the 194 participants were positive on PCR, with 6 (9.0%) of these positives being females. The median age was 33 years (IQR 28–41 years).

Respi-Strip test performance

There were 14 (7.2%) specimens that were Respi-Strip test positive (S2 Fig). The test showed a sensitivity of 19.4% (95% CI: 11.5%–30.9%) at Ct values ≤39, 35.1% (95% CI: 21.1%–52.4%) at Ct values ≤33, and 46.4% (95% CI: 28.2%–65.7%) at Ct values ≤29 (Table 2). The specificity of the test was 99.2% (95% CI: 94.5%–99.9%). The accuracy of the Respi-Strip test was 71.6% (95% CI: 64.8%–77.6%) at Ct values ≤39 (Table 2); the FPR was 0.8% (95% CI: 0.1%–5.5%); and the FNR was 80.6% (95% CI: 69.1%–88.5%) (Table 2). The Respi-Strip determined more than half (53.6%) of the qRT-PCR strong positives, and all (100%) of the moderate and low positives, as negative (Table 3).

PCL COVID19 Ag Rapid FIA (PCL)

Samples for the evaluation were collected between August and September 2020 from 172 participants (S3 Fig): 155 (90.1%) from Mulago NRH and 17 (9.9%) from Entebbe RRH. Ninety-three participants (54.1%) were PCR positive, with 12 (12.9%) of these being females. The median age was 32 years (IQR 28–39 years). There were 32 participants with 0–7 days between symptom onset and the date of the Ag RDT test; of these, 19 were PCR positive. Another 15 participants reported symptom onset more than 7 days prior to the Ag RDT test, with 9 (60%) being qRT-PCR positive (Table 1).

PCL Ag Rapid FIA test performance

There were 43 (25%) specimens that were Ag RDT positive (S3 Fig). Overall, the PCL Ag test had a sensitivity of 37.6% (95% CI: 28.2%–48.1%) at Ct values ≤39, 71.0% (95% CI: 51.8%–84.8%) at Ct values ≤33, and 77.8% (95% CI: 57.0%–90.2%) at Ct values ≤29 (Table 2). The sensitivity of the Ag RDT was 26.3% (95% CI: 10.4%–52.4%) for participants tested within 0–7 days of symptom onset and 11.1% (95% CI: 0.9%–62.6%) for those tested more than 7 days after symptom onset. For asymptomatic participants, the sensitivity of the Ag RDT was 46.0% (95% CI: 33.9%–58.7%). The specificity of the test was 89.9% (95% CI: 80.8%–94.9%) and the accuracy was 61.6% (95% CI: 54.1%–68.7%) (Table 2). The FPR was 10.1% (95% CI: 5.1%–19.2%) and the FNR was 62.4% (95% CI: 51.9%–71.8%) at a Ct cut-off of ≤39 (Table 2). The PCL test determined a substantial proportion of strong, moderate and low positive samples by qRT-PCR as negative: 22.2%, 79.3% and 75.0% respectively (Table 3).

MEDsan® SARS-CoV-2 Antigen Rapid test

A total of 243 samples (S4 Fig) were collected in November and December 2020 from 228 participants (some participants gave multiple samples): 137 samples (56.4%) from Mulago NRH, 10 (4.1%) from Entebbe RRH and 96 (39.5%) from Namboole National Stadium isolation centre. Of the 243 samples, 123 (50.6%) were qRT-PCR positive; 37 (30.1%) of these positives were from females. The median age of the participants was 37 years (IQR 28–49 years). A total of 75 (30%) participants had onset of symptoms 0–7 days before the antigen test (Table 1). Among the qRT-PCR positive participants, 25 were tested within 0–7 days of symptom onset and 23 more than 7 days after symptom onset.

MEDsan® SARS-CoV-2 Antigen Rapid test performance

There were 16 (6.6%) specimens that were MEDsan® SARS-CoV-2 Antigen Rapid test positive (S4 Fig). The MEDsan® SARS-CoV-2 Antigen Rapid test showed a sensitivity of 13.0% (95% CI: 8.1%–20.3%) at a Ct cut-off of ≤39, 23.7% (95% CI: 44.7%–74.3%) at Ct values ≤33, and 43.5% (95% CI: 24.0%–65.3%) at Ct values ≤29 (Table 2). The Ag RDT had a sensitivity of 12.0% (95% CI: 3.96%–33.3%) for participants tested within 0–7 days of symptom onset and 21.7% (95% CI: 6.7%–44.8%) for participants tested more than 7 days after symptom onset. The specificity of the test was 100% (95% CI: 96.9%–100%) and the accuracy was 56.0% (95% CI: 49.6%–62.1%) (Table 2). The FPR was 0.0% (95% CI: 0.0%–19.4%) and the FNR was 87.0% (95% CI: 79.7%–91.9%). The MEDsan® SARS-CoV-2 Antigen Rapid test determined a large proportion of strong, moderate and low positives by qRT-PCR as negative: 56.5%, 93.6% and 100% respectively (Table 3).

Panbio™ COVID-19 Ag Rapid test

One hundred and eighty-five (185) participants (S5 Fig) were recruited in October 2020: 133 (71.9%) from Namboole Stadium isolation centre, 40 (21.6%) from Mulago NRH and 12 (6.5%) from Entebbe RRH. Of the 185 participants, 85 were qRT-PCR positive and 100 were qRT-PCR negative. One hundred and seventeen (117; 63.2%) participants were tested within 0–7 days of symptom onset (Table 1); of these, forty-five (45) were positive by qRT-PCR. Twenty (20) participants had symptom onset more than 7 days prior to the test, and 9 of the 20 (45%) tested positive by qRT-PCR. The majority of the participants were male (73.5%) and the median age was 34 years (IQR 28–44 years) (Table 1).

Panbio™ COVID-19 Ag Rapid test performance

There were 42 (22.7%) specimens that were Panbio™ COVID-19 Ag Rapid test positive (S5 Fig). The Panbio™ COVID-19 Ag Rapid test showed a sensitivity of 49.4% (95% CI: 38.7%–60.1%) compared to qRT-PCR at a Ct cut-off of ≤39, 72% (95% CI: 57.6%–83.0%) at Ct values ≤33, and 85.7% (95% CI: 66.0%–94.9%) at Ct values ≤29 (Table 2). The sensitivity of the Panbio™ COVID-19 Ag Rapid test was 51.1% (95% CI: 36.3%–65.8%) for participants tested within 0–7 days of symptom onset and 44.4% (95% CI: 13.4%–80.5%) for participants tested more than 7 days after symptom onset. In asymptomatic participants, the sensitivity of the test was 48.4% (95% CI: 30.8%–66.4%). The test had a specificity of 100% (95% CI: 96.4%–100%) and an accuracy of 76.8% (95% CI: 70.1%–82.3%) (Table 2). The FPR was 0% (95% CI: 0%–8.4%) and the FNR was 50.6% (95% CI: 39.9%–61.2%). The Panbio™ COVID-19 Ag Rapid test was most likely to determine a sample as positive when the sample had abundant target nucleic acid (85.7% of samples with Ct ≤29 detected). However, among samples that were moderate positives by qRT-PCR, 69.1% were determined as negative by the Panbio™ COVID-19 Ag Rapid test (Table 3).

Novegent COVID-19 antigen rapid test kit (colloidal gold)

A total of 229 participants (S6 Fig) were included in the evaluation: 32 (14.0%) from Entebbe RRH, 111 (48.5%) from Namboole National Stadium isolation centre, 85 (37.1%) from Mulago NRH and 1 (0.4%) from Kisubi Hospital. The samples for the evaluation were collected between January and February 2021. Of these 229 participants, 100 were qRT-PCR positive and 129 were qRT-PCR negative (Table 1). There were 137 participants tested within 0–7 days of symptom onset, of whom 56 were qRT-PCR positive. Forty-two (42) qRT-PCR positive participants had symptom onset more than 7 days prior to the test (Table 1). The majority of the participants were male (57.6%) and the median age was 47 years (IQR 34–58 years) (Table 1).

Novegent COVID-19 Antigen Rapid test kit (colloidal gold) performance

There were 59 (25.8%) participants that were Novegent COVID-19 Antigen Rapid test positive (S6 Fig). The test showed a sensitivity of 46% (95% CI: 36.3%–56.0%) compared to qRT-PCR at a Ct cut-off of ≤39, 58.2% (95% CI: 44.5%–70.8%) at a Ct cut-off of ≤33, and 66.7% (95% CI: 46.0%–82.5%) at a Ct cut-off of ≤29 (Table 2). For participants tested within 0–7 days of symptom onset, the antigen RDT sensitivity was 65.0% (95% CI: 38.6%–71.4%), while for those tested more than 7 days after symptom onset it was 42.9% (95% CI: 28.4%–58.7%). The test had a specificity of 89.9% (95% CI: 83.3%–94.1%) and an accuracy of 70.7% (95% CI: 64.5%–76.3%) (Table 2); the FPR was 22.0% (95% CI: 13.0%–34.8%) and the FNR was 31.8% (95% CI: 25.1%–39.2%) (Table 2). The Novegent COVID-19 Antigen Rapid test (colloidal gold) was most likely to determine a sample as positive (66.7%) when the qRT-PCR result was a strong positive; however, it determined most of the moderately and weakly qRT-PCR positive samples as negative, 59.7% and 83.3% respectively (Table 3).

VivaDiag™ SARS-CoV-2 Antigen Rapid test

Between January and March 2021, samples for the evaluation were collected from 263 participants (S7 Fig): 122 (46.4%) from Kiruddu NRH, 59 (22.4%) from CASE Hospital, 77 (29.3%) from Mulago NRH and 5 (1.9%) from Entebbe RRH. Forty-three (16.4%) were positive on qRT-PCR, with 21 (48.8%) of these positives being females. The median age of the participants was 36 years (IQR 28–51 years) (Table 1). There were 235 (89.3%) participants tested within 0–7 days of symptom onset (Table 1); of these, thirty-eight (38) were positive by qRT-PCR. Twenty-one participants reported symptom onset more than 7 days prior to the antigen test, with 5 (23.8%) determined as qRT-PCR positive.

VivaDiag™ SARS-CoV-2 Antigen Rapid test performance

There were 26 (7.8%) participants that were VivaDiag™ antigen test positive (S7 Fig). The VivaDiag™ Ag test showed a sensitivity of 30.2% (95% CI: 18.0%–46.1%) at qRT-PCR Ct values ≤39, 31.6% (95% CI: 18.4%–48.6%) at Ct values ≤33, and 80% (95% CI: 37.8%–96.3%) at Ct values ≤29 (Table 2). For participants tested within 0–7 days of symptom onset, the sensitivity of the antigen test was 26.3% (95% CI: 14.4%–43.2%). The specificity of the test was 94.1% (95% CI: 90.1%–96.6%) and the accuracy was 83.7% (95% CI: 78.6%–87.7%) (Table 2); the FPR was 50.0% (95% CI: 30.5%–69.5%) and the FNR was 12.7% (95% CI: 9.0%–17.6%). The VivaDiag™ test determined a large proportion of the qRT-PCR strong positive cases as positive (80%), although there were only ten cases in this category. However, it determined a very large proportion (84.4%) of the qRT-PCR moderately positive cases (Ct values 30–37) as negative (Table 3).

Discussion

WHO recommends the use of antigen RDTs for SARS-CoV-2 diagnosis because they provide results within 15–30 minutes, compared to qRT-PCR tests where results may not be available for more than 24 hours. RDTs cost far less than PCR tests, and do not require specialized laboratories or highly trained staff [2]. The use of accurate COVID-19 antigen RDTs will enable faster identification and management of COVID-19 patients, leading to better control of the pandemic. A large number of antigen RDTs have been developed, with varying sensitivities and specificities reported in the manufacturers’ IFUs. However, WHO recommends field evaluations of antigen RDTs before they are adopted in a national setting [2], and recommends the use of antigen RDTs with a sensitivity of ≥80% and a specificity of ≥97% [2]. In this study, none of the seven antigen RDTs evaluated reached the sensitivity reported by the manufacturers, even when considering only samples strongly positive by qRT-PCR (Ct values ≤29). These RDTs nonetheless had good specificity, with only three showing a specificity slightly lower than that reported in the IFUs. Poor performance in real-world settings compared to that reported in the manufacturers’ IFUs has also been reported by others [4,7,8,15-18], emphasizing the need for in-country performance evaluations of antigen RDTs before adoption for use. In our study, we noted that the sensitivity of all seven Ag RDTs improved at lower qRT-PCR Ct values, as has also been observed in other studies [6-8,18-20]. Lower qRT-PCR Ct values in most cases correlate with higher viral loads and hence with increased transmissibility of the virus, usually in the early phases of the infection [21]. Two antigen RDTs reached the WHO recommended sensitivity of ≥80% at a Ct value of ≤29: the Panbio™ COVID-19 Ag Rapid test and the VivaDiag™ SARS-CoV-2 Ag Rapid test, with sensitivities of 85.7% and 80% respectively.
However, for the VivaDiag, the sample size was very small with only 10 samples at this Ct value. Of the two kits, Panbio™ COVID-19 Ag Rapid test and VivaDiag™ SARS-CoV-2 Ag Rapid test, only the PanBio™ COVID-19 Ag Rapid Test had the WHO recommended specificity performance of ≥ 97%. There was no correlation of antigen RDT performance with the presence or absence of symptoms. High viral loads usually appear in the pre-symptomatic and early symptomatic phases of the illness (within the first 5–7 days of symptom onset), however asymptomatic individuals can also have high viral loads in the early days of infection [22]. The Panbio™ COVID-19 Ag Rapid Test met the sensitivity and specificity performance levels recommended by WHO at qRT-PCR Ct values of ≤29. This antigen RDT together with the Standard Q (by SD Biosensor) antigen RDT we previously evaluated [3], have been recommended in Uganda for COVID-19 diagnostic intervention in a phased approach as more experience and confidence is gained in their use through continuous field evaluation and the generation of additional data. These antigen RDTs have been recommended for rapid screening of the following populations; symptomatic alerts and symptomatic contacts of confirmed cases, patients with COVID -19-like symptoms presenting at health facilities. The Uganda MOH recommends that, any antigen RDT positive case from the above mentioned populations will be considered “COVID– 19 positive” and managed accordingly and will not require additional qRT-PCR confirmation except in special cases such as genomic sequencing or routine quality control monitoring. Furthermore, any antigen RDT negative case with highly suggestive symptoms will be considered “a suspect COVID– 19 case” until confirmed to be uninfected by qRT-PCR and should be managed with enhanced infection prevention control (IPC). 
Laboratories at health units are expected to collect an additional specimen from symptomatic clients who test antigen RDT negative and send it for qRT-PCR testing.
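The screening rules described above (RDT-positive cases managed as confirmed; RDT-negative symptomatic cases held as suspects pending qRT-PCR) amount to a simple decision rule, sketched below. The function and label strings are our own illustrative choices, not wording from the MOH guidelines.

```python
# Illustrative sketch of the MOH screening algorithm described in the
# text. Function and label names are hypothetical, not official.

def triage(rdt_positive: bool, symptomatic: bool) -> str:
    """Classify a screened client per the rules described above."""
    if rdt_positive:
        # Managed as confirmed; no qRT-PCR confirmation required except
        # for special cases (genomic sequencing, quality control).
        return "COVID-19 positive: manage as confirmed case"
    if symptomatic:
        # Suspect case: collect a second specimen for qRT-PCR and apply
        # enhanced infection prevention and control (IPC).
        return "suspect case: confirm by qRT-PCR, enhanced IPC"
    return "negative: routine follow-up"

print(triage(rdt_positive=True, symptomatic=True))
print(triage(rdt_positive=False, symptomatic=True))
```

The design choice encoded here is the asymmetry of the two test results: a positive RDT is acted on immediately, while a negative RDT in a symptomatic client is never taken as final and always escalates to the more sensitive qRT-PCR reference test.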

Study limitations

For four of the antigen tests (BIOCREDIT COVID-19 Ag, MEDsan® SARS-CoV-2 Antigen Rapid test, PCL COVID 19 Ag Rapid FIA, and Panbio™ COVID-19 Ag Rapid test), the antigen and qRT-PCR swabs were collected from different nostrils. The two swabs can therefore be considered different samples, which may lead to variation in the virus content of the material tested. The Panbio™ COVID-19 Ag Rapid Test IFU recommends using the test within 5 days of symptom onset; however, specimens from some asymptomatic cases were included in its evaluation. In asymptomatic cases it is difficult to determine the stage of infection (early or late), which affects the interpretation of results: a negative antigen result could reflect either a client in the early stage of disease with low antigen levels or poor performance of the test kit. Participants were recruited at different time periods, so the prevalence of SARS-CoV-2 varied, which could have affected the specificity of the tests. There were also significant differences in the population mix of the participants for the different tests (male/female ratio, mean age, and numbers of PCR-positive and PCR-negative participants), which may affect the comparability of the tests. Most participants were tested with only one of the evaluated RDTs, meaning that a different population was used to evaluate each RDT; this makes it difficult to compare the performance of the different RDTs.

Conclusions

This field evaluation of seven SARS-CoV-2 antigen RDTs (COVID-19 Ag Respi-Strip, BIOCREDIT COVID-19 Ag, MEDsan® SARS-CoV-2 Antigen Rapid test, PCL COVID 19 Ag Rapid FIA, Panbio™ COVID-19 Ag Rapid test, Novegent COVID-19 Antigen Rapid test kit (colloidal gold), and VivaDiag™ SARS-CoV-2 Ag Rapid test) showed poorer performance than that reported by the manufacturers. Only the Panbio™ COVID-19 Ag Rapid Test reached the WHO-recommended performance of a sensitivity ≥80% and a specificity ≥97%, at qRT-PCR Ct values ≤29. This RDT can be recommended for COVID-19 diagnosis among symptomatic patients at health facilities, so that patient management is not delayed while waiting for qRT-PCR results.

Supporting information

Flowchart summarizing test results using the BIOCREDIT COVID-19 Ag RDT. (TIF)

Flowchart summarizing test results using the COVID-19 Ag Respi-Strip. (TIF)

Flowchart summarizing test results using the PCL COVID 19 Ag Rapid FIA. (TIF)

Flowchart summarizing test results using the MEDsan® SARS-CoV-2 Antigen Rapid test. (TIF)

Flowchart summarizing test results using the Panbio™ COVID-19 Ag Rapid test. (TIF)

Flowchart summarizing test results using the Novegent COVID-19 Antigen Rapid test kit. (TIF)

Flowchart summarizing test results using the VivaDiag™ SARS-CoV-2 Ag Rapid test kit. (TIF)

Principle and kit performance of the seven evaluated COVID-19 Rapid Antigen test kits. (DOCX)
PONE-D-21-24051
Field Evaluation of the Performance of Seven Antigen Rapid Diagnostic Tests for the Diagnosis of SARs-CoV-2 Virus Infection in Uganda
PLOS ONE Dear Dr. Bwogi, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
Please attend to the all the concerns that have been raised by all the reviewers. Among some of the things that have been highlighted include: 1. Information in the tables must be well presented and numbers must tally with those that are given in the text. The order of presentation of results in the text must follow the order in which they appear in the tables.. 2. Provide more information about the participants in this study and how they were selected. Do they represent the population on which such a the tests are intended to be used on? Was each test applied to every participant and if not, what impact would this have on the comparison of the test results. 3. Provide more information about the validity of the RT-PCR that was used as a Gold Standard in this study. Please submit your revised manuscript by Oct 25 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. 
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Martin Chtolongo Simuunza, PhD Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information. 3. Thank you for stating the following in the Competing Interests section: [RD is a consultant at Abbott. Therefore he did not participate in the investigation of Panbio™ COVID-19 Ag Rapid test performance.]. 
Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf. 4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. 
Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter. 5. One of the noted authors is a group or consortium [EPI Laboratory team and UVRI COVID 19 Technical team.] In addition to naming the author group, please list the individual authors and affiliations within this group in the acknowledgments section of your manuscript. Please also indicate clearly a lead author for this group along with a contact email address. 6. We note that you have included the phrase “data not shown” in your manuscript. Unfortunately, this does not meet our data sharing requirements. PLOS does not permit references to inaccessible data. We require that authors provide all relevant data within the paper, Supporting Information files, or in an acceptable, public repository. Please add a citation to support this phrase or upload the data that corresponds with these findings to a stable repository (such as Figshare or Dryad) and provide and URLs, DOIs, or accession numbers that may be used to access these data. Or, if the data are not a core part of the research being presented in your study, we ask that you remove the phrase that refers to these data. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. 
The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Yes Reviewer #3: Partly ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: No Reviewer #3: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: No Reviewer #2: Yes Reviewer #3: No ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: The manuscript is interesting and well written. The authors should acknowledge more limitations of the study. 
The comparison between the tests is not fair, because the tests are not compared in a randomized way. Sensitivity and specificity can be different if samples come from populations with different disease spectrum/prevalence. The STARD checklist should be used (https://www.equator-network.org/reporting-guidelines/stard). Table 1 should completed with percentages for all the variables (e.g. age groups). Tables should be formatted in a better way. Is the time lag between Antigen Test and PCR a possible explanation for the difference observed with vendors diagnostic performance? Reviewer #2: Reviewer Recommendation and Comments for Manuscript Number PONE-D-21-24051 The comparison of the panel of tests is key to improving the use of these tests in local situation. Such is very important and carrying out such investigations improve the performance as mitigation measures can be put in place in real time. The fact that such tests can be performed at the bedside therefore hasten the recovery process and facilitates a speedy intervention and therefore recovery. Observation 1. There is need for the authors to revisit the figures (number/values) in the text as well as those in the tables in order to align them e.g. the Table 1 says 245 (instead of 249) and Table 2 says 66* while the text in Line No. 220 is referring to sixty seven. 2. The presentation of the Table is little complicated as it is not clarified as to what is presented in the Table and what is not. In would be helpful for the reader to cite the Table or Figure at the first mention. The way the results are presented, the reader will only know where the table being referred to is when they reach the end of the paragraph. 3. In addition, navigating through the text and tables is a little tedious as the data is not presented according to the tables order. In the process, making locating what is being referred to in the text a little hard for the reader. 
For instance, Lines 192-197 appears to be referring to both Table 1 (asymptomatic participants) and Table 2(sensitivity). It would also be helpful for the reader if the authors can state when the results are "not shown" or are found in the supplementary. Reviewer #3: This is a nice evaluation of seven RDTs for covid. However, diagnostic accuracy depends a lot on the study sample and participant mix. This is information that is lacking in this study and it is what I have most concerns about. 1. Participants were selected from travellers and people in isolation. To what extent are they representative for the population in which the RDTs will be used? For example, the conclusion says: "This RDT can be recommended for use among symptomatic patients in health facilities in order not to delay patient management while waiting for qRT-PCR results". But these are not the participants included. 2. For the reason mentioned in the previous comment, I do not think that this conclusion is valid. So please rephrase. 3. How were particpants selected? In a consecutive way? On particular days? This needs to be clarified. 4. How were the different tests allocated to the right person? At random, or in different time-periods? Please explain. 5. How many samples/tests per participant were included? Did some participants receive more than two tests? Please explain. 6. There are distinct differences in population mix (males/females; cases/controls) between the different tests. This could have influenced the accuracy, but the authors say little about these differences. Maybe this can be added? 7. In general, I think this manuscript would benefit from a better reporting. Please refer to the STARD guidelines for reporting of diagnostic accuracy studies. 8. RT-PCR is the reference standard, but very little information is provided. For example, I can imagine that having only one negative RT-PCR is not very reliable in determining a 'control' participant. 
Would it be possible to explian a bit more about the reference standard? And could the reference standard have led to bias? (which should then be mentioned in a limitations section in the Discussion) 9. The authors have analyzed the data against different Ct thresholds. As far as I can see, the Ct thresholds are different from what I have seen in other RDT analyses. Would it be possible for the authors to comment on these Ct thresholds? And how do they relate to clinical practice? Why were those Ct's chosen? ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: Yes: Ngonda Saasa Reviewer #3: Yes: Mariska Leeflang [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 
Submitted filename: Reviewer Recommendation and Comments for Manuscript Number PONE.docx Click here for additional data file. 5 Nov 2021 Response to Reviewers PONE-D-21-24051 Field Evaluation of the Performance of Seven Antigen Rapid Diagnostic Tests for the Diagnosis of SARs-CoV-2 Virus Infection in Uganda 1. Information in the tables must be well presented and numbers must tally with those that are given in the text. The order of presentation of results in the text must follow the order in which they appear in the tables. Response: The information in the tables and numbers in text have been reviewed and now tally. The order of the results in the text has been reorganised to follow the order in which they appear in the tables. 2. Provide more information about the participants in this study and how they were selected. Do they represent the population on which such the tests are intended to be used on? Was each test applied to every participant and if not, what impact would this have on the comparison of the test results. Response The participants were enrolled consecutively whenever RDT kits for evaluation were supplied to UVRI. The evaluation did not receive the RDTs at the same time and quantities supplied for the evaluation were different (depended on how many the suppliers would be able to supply). The majority of participants considered for the evaluation were those found admitted with symptoms known to be for COVID-19 patients. Since RDTs were not received at the same time and because of the urgency due to the pandemic, we were not able to evaluate the kits using the same participants as we would have wanted but rather to enrol those who consented to the evaluation. This is further complicated by the acute nature of the disease and rapid changes in viral loads in individuals. We observed that our sample had significantly more men than women and this was also observed in the centres used (more men in health facilities and border crossings). 
This means that if the RDTs are to be used at health facilities where individuals with COVID-19 like symptoms would go for health care, or at border crossings, then it is more likely that a similar profile of individuals will be the ones to be tested using recommended RDTs. 3. Provide more information about the validity of the RT-PCR that was used as a Gold Standard in this study. Response More information on the sensitivity and specificity of the RT-PCR that was used as the Gold Standard in this study has been included in the manuscript. Line162-166: The protocol has good sensitivity with a limit of detection (LoD) of 3.9 RNA copies per reaction for E-gene assay and 3.6 RNA copies per reaction for RdRp gene assay using in-vitro transcribed RNA identical to 2019 novel coronavirus sequences and specificity of 100% If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. Response Protocol used has been referenced. Not deposited in the protocols.io . When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. 
The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf Response: Plos One formatting requirements have been addressed 2.Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information. Response: No questionnaire has been included. Because the survey form was in excel. A copy of the Excel dataset used for data collection and which was transformed to STATA for the analysis can be accessed using the link below;- http://eaccr.org/sites/default/files/2021-11/Field%20Evaluation%20of%20the%20Performance%20of%20Seven%20Antigen%20Rapid%20Diagnostic%20Tests.xlsx 3. Thank you for stating the following in the Competing Interests section: [RD is a consultant at Abbott. Therefore, he did not participate in the investigation of Panbio™ COVID-19 Ag Rapid test performance.]. Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf. 
Response Updated conflict of interest statement: RD is a consultant at Abbott. Therefore he did not participate in the investigation of Panbio™ COVID-19 Ag Rapid test performance. This does not alter our adherence to PLOS ONE policies on sharing data and materials. 4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter. Response: The minimum Dataset has been made available. 
It can be accessed using the link below;- http://eaccr.org/sites/default/files/2021-11/Field%20Evaluation%20of%20the%20Performance%20of%20Seven%20Antigen%20Rapid%20Diagnostic%20Tests.xlsx 5. One of the noted authors is a group or consortium [EPI Laboratory team and UVRI COVID 19 Technical team.] In addition to naming the author group, please list the individual authors and affiliations within this group in the acknowledgments section of your manuscript. Please also indicate clearly a lead author for this group along with a contact email address. Response: Line 651-670: The members in EPI Laboratory team and UVRI COVID 19 Technical team have been named in the acknowledgement section. The author affiliation has been included and the Lead authors in the group named. 6. We note that you have included the phrase “data not shown” in your manuscript. Unfortunately, this does not meet our data sharing requirements. PLOS does not permit references to inaccessible data. We require that authors provide all relevant data within the paper, Supporting Information files, or in an acceptable, public repository. Please add a citation to support this phrase or upload the data that corresponds with these findings to a stable repository (such as Figshare or Dryad) and provide and URLs, DOIs, or accession numbers that may be used to access these data. Or, if the data are not a core part of the research being presented in your study, we ask that you remove the phrase that refers to these data. Response: The statement with this reference has been deleted from the manuscript. It was not a core part of the research being presented. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. 
Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Yes Reviewer #3: Partly 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: No Reviewer #3: Yes 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: No Reviewer #2: Yes Reviewer #3: No Response: The data has been made available. It can be found using the following URL http://eaccr.org/sites/default/files/2021-11/Field%20Evaluation%20of%20the%20Performance%20of%20Seven%20Antigen%20Rapid%20Diagnostic%20Tests.xlsx 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. 
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: The manuscript is interesting and well written. The authors should acknowledge more limitations of the study. The comparison between the tests is not fair, because the tests are not compared in a randomized way. Sensitivity and specificity can be different if samples come from populations with different disease spectrum/prevalence. The STARD checklist should be used (https://www.equator-network.org/reporting-guidelines/stard). Response: Line 625-631 : Limitations why we could not use the mentioned randomization design have been added in the manuscript. Basically these kits were evaluated as and when the suppliers brought them to the UVRI. This was at different times and in different quantities. Because of the urgency to identify an RDT that would help in the diagnosis of COVID-19 we evaluated as soon as we received the kits and used participant sources established by the Ugandan Ministry of Health (Isolation centres and points of entry) -We have followed the STARD checklist when reporting in the manuscript. Table 1 should be completed with percentages for all the variables (e.g. age groups). Response Table has been completed with percentages for all the variables (age groups and the grouping according to duration of symptoms now have percentages). Tables should be formatted in a better way. Responses Table 1, 2 and 3 have been formatted. Is the time lag between Antigen Test and PCR a possible explanation for the difference observed with vendors diagnostic performance? Response: There is a time lag between the Antigen test and PCR which range from 1 to 37 days with a median of 4-7 days except for RESPI Strip which had a median of 21 days. 
We think this did not affect the diagnostic performance of the kits since the samples were stored at -80°C during that time. Please see the table below; this information has been included in the manuscript.

Table: Days between the Rapid Diagnostic Test and the PCR test

Ag RDT               Median days   Mean   Min   Max
Biocredit                 4         7.3     1    21
Respi                    21        22.7     1    37
PCL                       7         9.7     1    24
MedSan                    5         4.9     2    12
PanBio                    4         3.7     2     7
Colloidal/Novegent        4         4.3     1    11
VivaDiag                  5         7.5     2    28

Reviewer #2: Reviewer Recommendation and Comments for Manuscript Number PONE-D-21-24051. The comparison of the panel of tests is key to improving the use of these tests in the local situation. This is very important, and carrying out such investigations improves performance, as mitigation measures can be put in place in real time. The fact that such tests can be performed at the bedside hastens the recovery process and facilitates a speedy intervention.

Observation 1. There is a need for the authors to revisit the figures (numbers/values) in the text as well as those in the tables in order to align them, e.g. Table 1 says 245 (instead of 249) and Table 2 says 66* while the text in line 220 refers to sixty-seven.

Response: We have aligned the figures in the tables and text.

2. The presentation of the tables is a little complicated, as it is not clarified what is presented in each table and what is not. It would be helpful for the reader to cite the table or figure at its first mention. The way the results are presented, the reader will only know which table is being referred to when they reach the end of the paragraph.

Response: We have maintained the reference to the table at the end, since we did not report per table but per kit. Referring to the table at the beginning for each kit would make the presentation monotonous.

3. In addition, navigating through the text and tables is a little tedious, as the data are not presented according to the order of the tables.
This makes it a little hard for the reader to locate what is being referred to in the text. For instance, lines 192-197 appear to refer to both Table 1 (asymptomatic participants) and Table 2 (sensitivity). It would also be helpful for the reader if the authors could state when results are "not shown" or are found in the supplementary material.

Response: We have rearranged the text to follow the order of the tables presented for the specific kit being evaluated.

Reviewer #3: This is a nice evaluation of seven RDTs for COVID-19. However, diagnostic accuracy depends a lot on the study sample and participant mix. This information is lacking in this study, and it is what I have most concerns about.

1. Participants were selected from travellers and people in isolation. To what extent are they representative of the population in which the RDTs will be used? For example, the conclusion says: "This RDT can be recommended for use among symptomatic patients in health facilities in order not to delay patient management while waiting for qRT-PCR results". But these are not the participants included.

Response: The participants were enrolled consecutively whenever RDT kits for evaluation were supplied to UVRI. The RDTs were not received at the same time, and the quantities supplied for the evaluation differed (depending on how many the suppliers were able to provide). The majority of participants considered for the evaluation were those found admitted with symptoms known to occur in COVID-19 patients. Since the RDTs were not received at the same time, and because of the urgency of the pandemic, we were not able to evaluate the kits using the same participants as we would have wanted, but rather enrolled those who consented to the evaluation. This was further complicated by the acute nature of the disease and the rapid changes in viral loads in individuals.
We observed that our sample had significantly more men than women, and this was also observed in the centres used (more men in health facilities and at border crossings). This means that if the RDTs are to be used at health facilities, where individuals with COVID-19-like symptoms would go for health care, or at border crossings, then a similar profile of individuals is likely to be tested using the recommended RDTs. In addition, the isolation centres had patients admitted from different parts of the country.

2. For the reason mentioned in the previous comment, I do not think that this conclusion is valid. So please rephrase.

Response: Lines 593-604: The conclusion has been rephrased.

3. How were participants selected? In a consecutive way? On particular days? This needs to be clarified.

Response: Participants were recruited consecutively at the recruitment sites. Recruitment for the different kits took place as and when the kits were received from the suppliers. We have clarified this in the manuscript.

4. How were the different tests allocated to the right person? At random, or in different time periods? Please explain.

Response: The different kits were allocated in different time periods, as the different tests were received from the suppliers. The dates when the participants for the different kits were recruited are mentioned in the manuscript.

5. How many samples/tests per participant were included? Did some participants receive more than two tests? Please explain.

Response: Most participants/samples had only one test. This was because of how we received the samples, and also because when samples were taken, they were put in the buffer supplied with that kit. A few samples were used to evaluate more than one test; however, a comparison of those results has not been included in the manuscript. Additionally, one RDT (Respi) did not have a buffer but shared a sample with the real-time PCR.
Seventy-six samples had both the PCL COVID-19 Ag Rapid FIA and the COVID-19 Ag Respi-Strip. Fifteen samples had both the BIOCREDIT COVID-19 Ag Test and the COVID-19 Ag Respi-Strip. As mentioned above, no comparison between the results from these tests has been included in the manuscript. Lines 332-335: The information about the numbers of samples used to evaluate more than one RDT has been included in the manuscript.

6. There are distinct differences in population mix (males/females; cases/controls) between the different tests. This could have influenced the accuracy, but the authors say little about these differences. Maybe this can be added?

Response: The population mix (cases/controls) between the different tests has been added as a study limitation. However, we do not think the difference in the male/female mix could have affected the accuracy of the different tests.

7. In general, I think this manuscript would benefit from better reporting. Please refer to the STARD guidelines for reporting of diagnostic accuracy studies.

Response: We have referred to the STARD guidelines for reporting diagnostic accuracy studies and have made some edits to the manuscript.

8. RT-PCR is the reference standard, but very little information is provided. For example, I can imagine that having only one negative RT-PCR is not very reliable in determining a 'control' participant. Would it be possible to explain a bit more about the reference standard? And could the reference standard have led to bias?
(which should then be mentioned in a limitations section in the Discussion)

Response: Lines 162-166: We have added more information concerning the RT-PCR we used as the reference standard. This assay was selected as the reference standard for the kit evaluation because the protocol has good sensitivity, with a limit of detection (LoD) of 3.9 RNA copies per reaction for the E-gene assay and 3.6 RNA copies per reaction for the RdRp-gene assay, using in vitro transcribed RNA identical to 2019 novel coronavirus sequences, and a specificity of 100%. It was also recommended by WHO. Since the specificity of the test was 100%, we believe that using only one negative RT-PCR result for the control participants did not cause any bias in our study.

9. The authors have analyzed the data against different Ct thresholds. As far as I can see, the Ct thresholds are different from what I have seen in other RDT analyses. Would it be possible for the authors to comment on these Ct thresholds? And how do they relate to clinical practice? Why were those Ct's chosen?

Response: The authors chose the Ct thresholds with reference to the literature and after discussions in-country. A Ct ≤39 is the cut-off for positivity provided by the Charité Berlin PCR protocol that we used in the analysis (Corman VM et al., 2020). A Ct ≤29 was used because the literature showed that patients with a Ct value below 30 were found to be more infectious due to the high viral load, and these Ct values were more likely to be found in the first days of the infection. Since we are looking for kits that will detect the infection early, we decided to include this cut-off in our analysis. https://www.sciencemag.org/news/2020/09/one-number-could-help-reveal-how-infectious-covid-19-patient-should-test-results (Bayat SA et al. 2021). Patients with an E-gene Ct value above 33 were found not to be contagious: no viruses were cultured from patients with a Ct value greater than 34 in a study carried out in France.
Therefore, these were taken to be of less clinical value (La Scola B et al. 2020). Since we still need to identify these patients, we included this cut-off value, although our final report is based on the RdRp-gene Ct value.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: Yes: Ngonda Saasa Reviewer #3: Yes: Mariska Leeflang

Submitted filename: Response to Reviewers.docx Click here for additional data file.

22 Nov 2021
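The Ct cut-off rationale in the responses above (assay positivity at ≤39, loss of culturability above roughly Ct 33, high infectiousness below Ct 30) amounts to recomputing sensitivity over progressively restricted subsets of the PCR-positive samples. A minimal sketch of that calculation, using hypothetical paired Ag RDT/qRT-PCR results rather than the study data:

```python
# Hypothetical (RdRp Ct value, Ag RDT result) pairs for PCR-positive
# samples. Illustrative values only -- not the study data.
samples = [
    (18, True), (22, True), (26, True), (28, True),
    (31, False), (34, False), (37, False), (39, False),
]

def sensitivity(pairs, ct_cutoff):
    """Fraction of PCR-positive samples with Ct <= cutoff that the RDT detects."""
    subset = [rdt_positive for ct, rdt_positive in pairs if ct <= ct_cutoff]
    return sum(subset) / len(subset) if subset else None

for cutoff in (29, 33, 39):
    print(f"Ct <= {cutoff}: sensitivity = {sensitivity(samples, cutoff):.2f}")
```

Restricting the denominator to Ct ≤29 keeps only high-viral-load samples, which is why a kit can meet a sensitivity target at that cut-off while missing it at ≤33 and ≤39.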
PONE-D-21-24051R1
Field Evaluation of the Performance of Seven Antigen Rapid Diagnostic Tests for the Diagnosis of SARs-CoV-2 Virus Infection in Uganda
PLOS ONE

Dear Dr. Bwogi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The reviewers are of the view that you did not adequately attend to their concerns. Please pay particular attention to those raised by reviewer No. 3. Attend to all the concerns raised and return a revised manuscript as advised in this letter. Please submit your revised manuscript by Jan 06 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Martin Chtolongo Simuunza, PhD Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. 
If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: (No Response) Reviewer #2: All comments have been addressed Reviewer #3: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Partly ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. 
If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: No ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thanks for having addressed all my comments. I think that the manuscript is improved but there is still one thing missing. The STARD checklist should be uploaded as supplementary material and the authors should explicitly state in what page/section each checklist item has been covered. https://www.equator-network.org/wp-content/uploads/2015/10/STARD-2015-Checklist.docx

Reviewer #2: The authors have adequately addressed the concerns. The tables have been realigned and some of the inconsistencies in the numbers in the tables and text have been attended to.

Reviewer #3: The authors have improved their manuscript. However, there are a few remaining comments that have not been addressed, or have been addressed insufficiently.

1. The selection criteria have not been clearly described. The authors now describe that participants were enrolled consecutively, but it is not clear in what setting the study was done or whether the patients enrolled in this study were representative. Also, the authors write that they included cases and controls, but that is ambiguous language.
Please check the STARD guidelines about what to report about the selection criteria and the selection process.

2. The authors state that they followed the STARD guidelines, but there is still information missing. Please check. Also, the STARD guidelines recommend a flow chart to describe visually how patients enroll in the study, the tests they undergo and what the test results are. Such a flow chart would be really helpful in this case.

3. From the response to my comment about the timing of the tests, I understand that every patient is tested by only one of the index tests (and the reference standard). So patients enrolled at the start of the study will have received a different test than patients enrolled at the end of the study. That makes comparisons between tests a bit difficult. So actually, the authors should have mentioned this as a limitation. (See also my previous comment 6.)

4. The authors have now provided more information about the reference standard, but not how it was really used to make a diagnosis (e.g. for a participant who does not have COVID-19, should the test have been negative twice?). ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No Reviewer #3: Yes: Mariska Leeflang [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
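The per-kit day-lag summary given in the first-round responses (median, mean, minimum and maximum days between the Ag RDT and the PCR test) uses ordinary summary statistics. A minimal sketch with hypothetical lag values, not the study data:

```python
import statistics

# Hypothetical days between the Ag RDT and the qRT-PCR for one kit
# (illustrative values only -- not the study data).
lags = [1, 2, 4, 4, 5, 7, 21]

summary = {
    "median": statistics.median(lags),
    "mean": round(statistics.mean(lags), 1),
    "min": min(lags),
    "max": max(lags),
}
print(summary)
```

The same computation, applied to each kit's lag values, reproduces the rows of the day-lag table in the first-round responses; a single long outlier pulls the mean well above the median, as seen for the Respi-Strip.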
20 Jan 2022

Responses to Reviewers PONE-D-21-24051R1 Field Evaluation of the Performance of Seven Antigen Rapid Diagnostic Tests for the Diagnosis of SARs-CoV-2 Virus Infection in Uganda

1. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Response: The reference list has been reviewed and changes have been made to references No. 6 and 7.

Reference No. 6: Bulilete O, Lorente P, Leiva A, Carandell E, Oliver A, Rojo E, et al. Evaluation of the Panbio rapid antigen test for SARS-CoV-2 in primary health care centers and test sites. medRxiv. 2020. Replaced with the more up-to-date reference below: Bulilete O, Lorente P, Leiva A, Carandell E, Oliver A, Rojo E, Pericas P, Llobera J, COVID-19 Primary Care Research Group. Panbio™ rapid antigen test for SARS-CoV-2 has acceptable accuracy in symptomatic patients in primary health care. Journal of Infection. 2021 Mar 1;82(3):391-8.

Reference No. 7: Olearo F, Noerz D, Heinrich F, Sutter JP, Roedel K, Schultze A, et al. Handling and accuracy of four rapid antigen tests for the diagnosis of SARS-CoV-2 compared to RT-qPCR. medRxiv. 2020. This preprint is no longer available on the website; it has been replaced with the published reference below: Olearo F, Nörz D, Heinrich F, Sutter JP, Roedl K, Schultze A, Zur Wiesch JS, Braun P, Oestereich L, Kreuels B, Wichmann D. Handling and accuracy of four rapid antigen tests for the diagnosis of SARS-CoV-2 compared to RT-qPCR. Journal of Clinical Virology. 2021 Apr 1;137:104782.

2.
Thanks for having addressed all my comments. I think that the manuscript is improved but there is still one thing missing. The STARD checklist should be uploaded as supplementary material and the authors should explicitly state in what page/section each checklist item has been covered.

Response: The STARD checklist has been attached to the rebuttal letter. In the checklist we have indicated the line numbers where each checklist item is covered. We have not included the STARD checklist in the supplementary material, since we could not reference it in the manuscript.

3. The selection criteria have not been clearly described. The authors now describe that participants were enrolled consecutively, but it is not clear in what setting the study was done or whether the patients enrolled in this study were representative. Also, the authors write that they included cases and controls, but that is ambiguous language. Please check the STARD guidelines about what to report about the selection criteria and the selection process.

Response: Lines 84-93: More information has been included in the section on the selection criteria and process. Lines 27-28: "Cases" and "controls" have been replaced with "PCR-positive" and "PCR-negative" participants.

4. The authors state that they followed the STARD guidelines, but there is still information missing. Please check. Also, the STARD guidelines recommend a flow chart to describe visually how patients enroll in the study, the tests they undergo and what the test results are. Such a flow chart would be really helpful in this case.

Response: We have reviewed the STARD guidelines to guide us on the inclusion of missing information in the manuscript. The STARD checklist has been attached to indicate where the different sections of the checklist are in the paper. Flow charts showing patient enrollment and tests have been placed as figures in the supplementary material.

5.
From the response to my comment about the timing of the tests, I understand that every patient was tested with only one of the index tests (and the reference standard). So patients enrolled at the start of the study will have received a different test than patients enrolled at the end of the study. That makes comparisons between tests a bit difficult. So actually, the authors should have mentioned this as a limitation. (See also my previous comment 6.)

Response: Lines 491-493: The difficulty of comparing the tests, because of the timing of the tests and because every patient was tested with only one of the index tests, has been included as a study limitation.

6. The authors provided now more information about the reference standard, but not how it was really used to make a diagnosis (e.g. for a participant who does not have COVID-19, should the test have been negative twice?).

Response: Lines 137-146: This has now been described. A sample was considered positive if it had a Ct value ≤39 on the confirmatory qRT-PCR. A sample was considered negative if the Ct value was undetermined on the screening qRT-PCR, or if it was positive on the screening qRT-PCR but undetermined or >39 on the confirmatory qRT-PCR.

Submitted filename: Responses to Reviewers.docx Click here for additional data file.

1 Mar 2022

Field Evaluation of the Performance of Seven Antigen Rapid Diagnostic Tests for the Diagnosis of SARs-CoV-2 Virus Infection in Uganda PONE-D-21-24051R2

Dear Dr. Bwogi, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.
To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Martin Chtolongo Simuunza, PhD Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed Reviewer #3: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #3: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? 
Reviewer #1: Yes Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #3: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #3: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: (No Response) Reviewer #3: I have no further comments, because all my previous comments have been addressed in one way or another. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. 
Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #3: Yes: Mariska M.G. Leeflang 23 Mar 2022 PONE-D-21-24051R2 Field Evaluation of the Performance of Seven Antigen Rapid Diagnostic Tests for the Diagnosis of SARs-CoV-2 Virus Infection in Uganda Dear Dr. Bwogi: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Martin Chtolongo Simuunza Academic Editor PLOS ONE
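The reference-standard interpretation described in the final responses (positive only if the confirmatory qRT-PCR gives Ct ≤39; negative if the screening qRT-PCR is undetermined, or screening-positive but confirmatory undetermined or >39) can be written as a small decision rule. A minimal sketch, where the function name and the use of `None` for "undetermined" are assumptions of this illustration:

```python
def classify_sample(screening_ct, confirmatory_ct=None, cutoff=39):
    """Reference-standard call as described in the authors' responses.

    Ct values are numbers, with None meaning 'undetermined'. A sample is
    positive only when the confirmatory qRT-PCR yields Ct <= cutoff.
    """
    if screening_ct is None:
        return "negative"  # undetermined on the screening qRT-PCR
    if confirmatory_ct is not None and confirmatory_ct <= cutoff:
        return "positive"
    return "negative"  # confirmatory undetermined or above the cutoff

print(classify_sample(None))      # screening undetermined
print(classify_sample(30, 28))    # confirmed positive
print(classify_sample(30, None))  # screening-positive, confirmatory undetermined
print(classify_sample(30, 40))    # confirmatory above the Ct cutoff
```

The two-step structure means a single screening result never yields a positive call on its own, which addresses the reviewer's question about how the reference standard was used to make a diagnosis.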
References: 17 in total

1.  Two-sided confidence intervals for the single proportion: comparison of seven methods.

Authors:  R G Newcombe
Journal:  Stat Med       Date:  1998-04-30       Impact factor: 2.373

2.  Can the cycle threshold (Ct) value of RT-PCR test for SARS CoV2 predict infectivity among close contacts?

Authors:  Soha Al Bayat; Jesha Mundodan; Samina Hasnain; Mohamed Sallam; Hayat Khogali; Dina Ali; Saif Alateeg; Mohamed Osama; Aiman Elberdiny; Hamad Al-Romaihi; Mohammed Hamad J Al-Thani
Journal:  J Infect Public Health       Date:  2021-08-13       Impact factor: 7.537

3.  Development and Potential Usefulness of the COVID-19 Ag Respi-Strip Diagnostic Assay in a Pandemic Context.

Authors:  Pascal Mertens; Nathalie De Vos; Delphine Martiny; Christian Jassoy; Ali Mirazimi; Lize Cuypers; Sigi Van den Wijngaert; Vanessa Monteil; Pierrette Melin; Karolien Stoffels; Nicolas Yin; Davide Mileto; Sabrina Delaunoy; Henri Magein; Katrien Lagrou; Justine Bouzet; Gabriela Serrano; Magali Wautier; Thierry Leclipteux; Marc Van Ranst; Olivier Vandenberg
Journal:  Front Med (Lausanne)       Date:  2020-05-08

4.  Implementation of rapid SARS-CoV-2 antigenic testing in a laboratory without access to molecular methods: Experiences of a general hospital.

Authors:  Laurent Blairon; Alain Wilmet; Ingrid Beukinga; Marie Tré-Hardy
Journal:  J Clin Virol       Date:  2020-05-30       Impact factor: 3.168

5.  Viral dynamics in asymptomatic patients with COVID-19.

Authors:  Rui Zhou; Furong Li; Fengjuan Chen; Huamin Liu; Jiazhen Zheng; Chunliang Lei; Xianbo Wu
Journal:  Int J Infect Dis       Date:  2020-05-11       Impact factor: 3.623

6.  Detection of 2019 novel coronavirus (2019-nCoV) by real-time RT-PCR.

Authors:  Victor M Corman; Olfert Landt; Marco Kaiser; Richard Molenkamp; Adam Meijer; Daniel Kw Chu; Tobias Bleicker; Sebastian Brünink; Julia Schneider; Marie Luisa Schmidt; Daphne Gjc Mulders; Bart L Haagmans; Bas van der Veer; Sharon van den Brink; Lisa Wijsman; Gabriel Goderski; Jean-Louis Romette; Joanna Ellis; Maria Zambon; Malik Peiris; Herman Goossens; Chantal Reusken; Marion Pg Koopmans; Christian Drosten
Journal:  Euro Surveill       Date:  2020-01

7.  COVID-19 Response in Sub-Saharan Low-Resource Setting: Healthcare Soldiers Need Bullets.

Authors:  Denis Mukwege; Guy-Bernard Cadière; Olivier Vandenberg
Journal:  Am J Trop Med Hyg       Date:  2020-06-20       Impact factor: 2.345

8.  Handling and accuracy of four rapid antigen tests for the diagnosis of SARS-CoV-2 compared to RT-qPCR.

Authors:  Flaminia Olearo; Dominik Nörz; Fabian Heinrich; Jan Peter Sutter; Kevin Roedl; Alexander Schultze; Julian Schulze Zur Wiesch; Platon Braun; Lisa Oestereich; Benno Kreuels; Dominic Wichmann; Martin Aepfelbacher; Susanne Pfefferle; Marc Lütgehetmann
Journal:  J Clin Virol       Date:  2021-03-03       Impact factor: 3.168

9.  Panbio™ rapid antigen test for SARS-CoV-2 has acceptable accuracy in symptomatic patients in primary health care.

Authors:  Oana Bulilete; Patricia Lorente; Alfonso Leiva; Eugenia Carandell; Antonio Oliver; Estrella Rojo; Pau Pericas; Joan Llobera
Journal:  J Infect       Date:  2021-02-13       Impact factor: 6.072

10. [Review] Implementing COVID-19 (SARS-CoV-2) Rapid Diagnostic Tests in Sub-Saharan Africa: A Review.

Authors:  Jan Jacobs; Vera Kühne; Octavie Lunguya; Dissou Affolabi; Liselotte Hardy; Olivier Vandenberg
Journal:  Front Med (Lausanne)       Date:  2020-10-30
Cited by (1 in total)

1.  Are SARS-CoV-2 rapid antigen tests useful for the control of latest variants spreading?

Authors:  Nadia Marascio; Angela Quirino; Giuseppe Guido Maria Scarlata; Giorgio Settimo Barreca; Aida Giancotti; Angelo Giuseppe Lamberti; Luigia Gallo; Fabio Foti; Domenico Luca Laurendi; Daniela Dattola; Antonino Marsico; Antonia La Rocca; Giovanni Matera
Journal:  Infez Med       Date:  2022-09-01
