Literature DB >> 33983971

Evaluation of the test accuracy of a SARS-CoV-2 rapid antigen test in symptomatic community dwelling individuals in the Netherlands.

Nathalie Van der Moeren1, Vivian F Zwart1, Esther B Lodder2, Wouter Van den Bijllaardt1, Harald R J M Van Esch1, Joep J J M Stohr1, Joost Pot2, Ineke Welschen2, Petra M F Van Mechelen2, Suzan D Pas1, Jan A J W Kluytmans1,3.   

Abstract

BACKGROUND: SARS-CoV-2 real-time reverse transcriptase polymerase chain reaction (qRT-PCR) is well suited for the diagnosis of clinically ill patients requiring treatment. Application for community testing of symptomatic individuals for disease control purposes however raises challenges. SARS-CoV-2 rapid antigen tests might offer an alternative, but quality evidence on their performance is limited.
METHODS: We conducted an evaluation of the test accuracy of the 'BD Veritor System for Rapid Detection of SARS-CoV-2' (VRD) compared to qRT-PCR on combined nose/throat swabs obtained from symptomatic individuals at Municipal Health Service (MHS) COVID-19 test centers in the Netherlands. In part one of the study, with the primary objective to evaluate test sensitivity and specificity, all adults presenting at one MHS test center were eligible for inclusion. In part two, with the objective to evaluate test sensitivity stratified by Ct (cycle threshold)-value and time since symptom onset, adults who had a positive qRT-PCR obtained at a MHS test center were eligible.
FINDINGS: In part one (n = 352) SARS-CoV-2 prevalence was 4.8%, overall specificity 100% (95%CI: 98·9%-100%) and sensitivity 94·1% (95%CI: 71·1%-100%). In part two (n = 123) the sensitivity was 78·9% (95%CI: 70·6%-85·7%) overall, 89·4% (95%CI: 79·4%-95·6%) for specimens obtained within seven days after symptom onset and 93% (95%CI: 86%-97.1%) for specimens with a Ct-value below 30.
INTERPRETATION: The VRD is a promising diagnostic for COVID-19 testing of symptomatic community-dwelling individuals within seven days after symptom onset in context of disease control. Further research on practical applicability and the optimal position within the testing landscape is needed.

Entities:  

Mesh:

Year:  2021        PMID: 33983971      PMCID: PMC8118553          DOI: 10.1371/journal.pone.0250886

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Background

Accurate and sustainable test strategies are essential for the control of SARS-CoV-2 [1]. The current test used to establish an acute SARS-CoV-2 infection in the Netherlands is real-time reverse transcriptase polymerase chain reaction (qRT-PCR). This test is highly sensitive and specific and therefore well suited for the diagnosis of clinically ill patients. However, application of the test for large-scale community testing of symptomatic individuals for disease control purposes raises substantial challenges. qRT-PCR can only be performed in specialised laboratories, has a relatively long turnaround time (TAT) and depends on the availability of scarce extraction and PCR reagents and disposables. The massive qRT-PCR demand created by community screening greatly burdens microbiological laboratories and puts routine clinical diagnostic care at risk. Furthermore, logistic and administrative challenges lead to substantial delays in testing and reporting of the results. Rapid testing and reporting are, however, key in the control of SARS-CoV-2 community spread [2]. COVID-19 community screening requires a low-cost diagnostic test with a short TAT which can be performed close to the community. Lateral flow assay (LFA) SARS-CoV-2 antigen tests can be performed at the point of care, give results within 15–30 minutes and are relatively inexpensive to produce [3, 4]. Numerous SARS-CoV-2 LFAs are available, but quality data on their performance are limited. The available studies are often based on remnant laboratory samples and contain little information on the clinical setting or disease stage. The current literature is insufficient to determine whether SARS-CoV-2 rapid antigen tests can be useful in clinical practice, and prospective evaluation of the antigen tests in clinically relevant settings is needed [5].
Hence, we evaluated the test accuracy of the ‘BD Veritor System for Rapid Detection of SARS-CoV-2’ (VRD) performed on combined nose/throat swabs obtained from symptomatic individuals at two COVID-19 test centers of the Dutch Municipal Health Service (MHS).

Methods

Objectives

The primary objective of part one of the study, a prospective performance evaluation, was to determine the specificity and sensitivity of the VRD on clinical samples compared to qRT-PCR. The secondary objective was to evaluate the concordance between visual interpretation of VRD test results and analysis using the reading device provided by the manufacturer, the BD Veritor Analyzer (VA). The primary objective of part two of the study was to determine the sensitivity for different Ct-value groups (Ct <20, Ct 20–25, Ct 25–30 and Ct ≥30) and different intervals since symptom onset (<7 days, ≥7 days).

Setting

COVID-19 testing of non-hospitalized symptomatic patients in the Netherlands is coordinated by the MHS. Individuals who state that they have COVID-19-like symptoms (rhinitis, cough, elevated temperature (not further specified), shortness of breath or sudden loss of sense of taste or smell) can make an appointment at a regional MHS test center. These criteria remained unchanged during the study period. A single swab is used to collect the specimen from throat and nose and is sent to the microbiological laboratory for qRT-PCR. Swabs are obtained by specifically trained MHS employees, who do not always have a medical background. Individuals with a positive qRT-PCR result are informed by an MHS employee and approached with a questionnaire for the purpose of source and contact tracing. The study was conducted from September 26th to October 7th 2020 in the region West-Brabant, the Netherlands. The local MHS had three operational test centers during the study, conducting 1200 SARS-CoV-2 qRT-PCRs daily. For logistic reasons (travel distance to the laboratory), part one of the study was performed at one MHS center (Breda). As a portion of the samples from one of the three test centers was sent to an external laboratory, samples from the other two centers (Breda and Roosendaal) were considered for part two of the study. In the third week of September 2020, 5–6% of individuals tested at a West-Brabant MHS test center had a positive qRT-PCR (data on file).

BD veritor system for rapid detection of SARS-CoV-2 (VRD)

The VRD is a chromatographic lateral flow immunoassay for the detection of SARS-CoV-2 nucleocapsid antigens in respiratory specimens. The manufacturer reports a test specificity of 100% (95%CI: 98%-100%) and a sensitivity of 84% (95%CI: 67%-93%) compared to qRT-PCR as a reference standard during the first 5 days after disease onset. The test was validated by the manufacturer for use on superficial nasal specimens. The manual prescribes interpretation of the results after 15 minutes with a reading device provided by the manufacturer (VA) [3]. Nevertheless, a test and control line can be seen with the naked eye.

Real-time reverse transcriptase PCR

Two CE-IVD-labelled qRT-PCR platforms were used according to the manufacturers’ protocols. The first was the Cobas 6800 platform (Roche) with the Cobas® SARS-CoV-2–192 PCR assay (Roche Diagnostics), detecting the RdRp and E genes. The second was the m2000SP and m2000RT platform (Abbott), used in combination with the Abbott mSample Prep System kit and the Abbott RealTime SARS-CoV-2 Amplification Reagent kit, detecting both the E and N genes. Swabs for qRT-PCR were stored in a 1:1 mixture of lysis buffer and virus transport medium.

Patient recruitment

In part one of the study, all adults (≥18 years) presenting at the MHS test center Breda for a COVID-19 test between September 28 and 30, 2020 were invited to participate. Individuals who were able and willing to give verbal informed consent were included. In part two of the study, adults (≥18 years) who had been tested at one of the two included MHS test centers between September 26 and October 6 and had a positive qRT-PCR were approached by an MHS employee and invited to participate. Individuals who were able and willing to give verbal informed consent, who confirmed that they were or had been symptomatic, and who had a positive qRT-PCR at the moment of the home visit (see ‘Study procedure’ below) were included.

Ethics

The planning, conduct and reporting of the study were in line with the Declaration of Helsinki, as revised in 2013. The study was registered at the Netherlands Trial Register (identification number NL9018). In part one of the study, individuals were informed about the study through local media, through MHS communication channels (full participant information letter on the website) and by information signs at the participating test centers. Verbal informed consent was obtained separately by two independent MHS employees. No written informed consent was obtained, as this would have compromised the strictly needed high throughput of individuals being tested at the test centers (3 minutes per client). In part two of the study, potential participants were informed about the study and asked for verbal informed consent a first time by telephone. Verbal informed consent was obtained a second time by a different MHS employee during a home visit, before the study samples were obtained. No written informed consent was obtained, as handling of documents from confirmed infectious participants was considered a potential safety hazard. The study protocol was submitted to the medical ethics board ‘Medical research Ethics Committees United’ (MEC-U) and was granted an exemption from the Dutch Medical Research Involving Human Subjects Act (WMO).

Study procedure

In part one of the study, one swab was used to obtain a specimen from the throat and nasal cavity up to the nasal bridge for routine qRT-PCR in accordance with the Dutch national COVID-19 test protocol. In addition to and directly following this first swab, the same MHS employee obtained an additional swab to acquire a specimen from the throat and the superficial nasal cavities (bilateral, 2·5 cm proximal from the nostril) for VRD. The swabs for VRD were immediately deposited dry in sterile test tubes and stored and transported on dry ice until processing at the laboratory. The VRD tests were performed by trained laboratory technicians within 6 hours of sample collection. Samples were left for 15 minutes at room temperature before analysis, in accordance with the manufacturer’s operating procedure. Test results were read visually after 15 minutes and thereafter with the VA. No clinical information or information on qRT-PCR results was available to the technicians performing the VRD. Information on the first day of illness was extracted later from the MHS files intended for source and contact tracing. In part two of the study, participants were visited at home by MHS employees within 72 hours after their initial positive qRT-PCR at the MHS test center. Analogous to the procedure in part one of the study, specimens for both qRT-PCR and VRD were obtained, stored and analyzed; only the results of visual interpretation were used. In addition, participants were asked what the day of symptom onset was and whether they still had symptoms at the time of the home visit.

Sample size

The sample size calculation was based on an expected sensitivity of 80% in accordance with the performance data reported by the manufacturer [3]. Based on a margin of error of 7%, type I error of 5% and power of 80%, we aimed to include 125 qRT-PCR positive samples.
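The stated target of 125 positives is approximately reproduced by the standard precision-based formula for estimating a binomial proportion. The sketch below is one plausible calculation under that assumption, not necessarily the authors' exact method (which also cites 80% power):

```python
import math

def sample_size_for_proportion(p, margin, z=1.96):
    """Smallest n for which a proportion p is estimated within +/- margin
    at ~95% confidence (normal approximation to the binomial)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Expected sensitivity 80%, margin of error 7% -> roughly 125 positive samples
n = sample_size_for_proportion(0.80, 0.07)
```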

Analysis

The primary outcome of part one of the study was the VRD sensitivity and specificity on clinical samples compared to qRT-PCR, based on interpretation of the results with the naked eye and with the VA. Furthermore, the overall positive predictive value (PPV) and negative predictive value (NPV) were calculated for population prevalences of 10% and 20% using MedCalc version 19.6.4. For part two of the study, the primary outcome was VRD sensitivity compared to qRT-PCR, stratified by Ct-value category (Ct <20, Ct 20–25, Ct 25–30 and Ct ≥30) and time since symptom onset (<7 days, ≥7 days). The 7-day cut-off was based on the results of Bullard et al., who found no viral growth in Vero cells for samples obtained more than 8 days after symptom onset [5]. Differences between groups were compared using chi-squared tests with N−1 correction for categorical variables. Clopper–Pearson exact confidence intervals were calculated for sensitivity and specificity. All data were analyzed using Excel, MedCalc version 19.6.4 and SPSS version 24.
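The Clopper–Pearson exact intervals reported throughout can be reproduced by inverting the binomial tail probabilities. A minimal stdlib-only sketch follows (the MedCalc implementation used by the authors may differ slightly in rounding):

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided (1 - alpha) CI for a binomial proportion k/n,
    found by bisection on the binomial tail probabilities."""
    def boundary(pred):
        lo, hi = 0.0, 1.0
        for _ in range(60):            # bisection: ample precision
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else boundary(lambda p: 1 - binom_cdf(k - 1, n, p) < alpha / 2)
    upper = 1.0 if k == n else boundary(lambda p: binom_cdf(k, n, p) >= alpha / 2)
    return lower, upper

# Sensitivity 16/17 from part one -> CI of roughly 71.3%-99.9%,
# close to the reported 71.1%-100%.
# Specificity 334/334 -> lower bound ~98.9%, matching the reported 98.9%-100%.
```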

Results

In part one of the study, 354 individuals (men and women aged 18 years and above) who presented at the test center were initially included. A diagram of the flow of participants is shown in Fig 1. Two (0·6%) specimens with a negative VRD result were excluded because the qRT-PCR result could not be retrieved (error in sample number registration). Seventeen samples had detectable SARS-CoV-2 RNA, resulting in a prevalence of 4.8 per 100 participants. Amongst the 17 qRT-PCR positive specimens, 12 (70·6%) were obtained within seven days after disease onset, one (5·9%) was obtained later and for four specimens (23·5%) the time since symptom onset could not be determined. One qRT-PCR negative specimen yielded an uninterpretable result on visual interpretation and an invalid result on the analyzer. VRD was positive for 16 specimens based on visual interpretation and for 18 specimens based on interpretation with the VA. The two samples that were positive with the analyzer but negative by visual reading had a negative qRT-PCR. All 16 samples that were positive based on visual interpretation were qRT-PCR positive. Specificity was 100% (95%CI: 98·9%-100%) based on visual interpretation and 99·4% (95%CI: 97·9%-100%) based on interpretation with the analyzer; the sensitivity was 94·1% (95%CI: 71·1%-100%) (Table 1). The single qRT-PCR positive sample that tested negative with VRD had a Ct-value of 32·7 and an unknown time since symptom onset.
Fig 1

Diagram for the flow of participants for part 1 of the study (prospective cohort).

Table 1

VRD performance compared to qRT-PCR in study part one.

                           Visual interpretation    Interpretation with analyzer
Total (n)                  352                      352
Invalid (n)                1                        1
True positive (n)          16                       16
False positive (n)         0                        2
True negative (n)          334                      332
False negative (n)         1                        1
Sensitivity (%) [95% CI]   94·1% [71·1%-100%]       94·1% [71·1%-100%]
Specificity (%) [95% CI]   100% [98·9%-100%]        99·4% [97·9%-100%]
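The point estimates in Table 1 follow directly from the tabulated counts. For the visual reading, for instance:

```python
# Counts from Table 1, visual interpretation (the single invalid result
# is excluded from the 2x2 table)
tp, fp, tn, fn = 16, 0, 334, 1

sensitivity = tp / (tp + fn)   # 16/17, i.e. ~94.1%
specificity = tn / (tn + fp)   # 334/334, i.e. 100%
```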
For the prevalence of 4.8% in the study cohort, the positive predictive value (PPV) based on visual interpretation of the test results was 100%, as the specificity was 100%, and the negative predictive value (NPV) was 99.7% (95% CI: 98.1%-99.7%). The NPV remained above 98% for population prevalences up to 20%, while the PPV was always 100% as specificity was 100% (Table 2).
Table 2

Negative predictive values (NPV) and positive predictive values (PPV) based on visual interpretation of VRD results for different population prevalence.

                   Population prevalence
                   4.8%                  10%                   20%
NPV (%) [95% CI]   99.7% [98.1%-99.7%]   99.4% [95.8%-99.9%]   98.6% [91.0%-99.8%]
PPV (%)            100%                  100%                  100%
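The prevalence-dependent values in Table 2 are an application of Bayes' rule to the part-one point estimates; a short sketch of the calculation:

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV from sensitivity, specificity and prevalence (Bayes' rule)."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Part-one point estimates, visual interpretation
sens, spec = 16 / 17, 334 / 334

# NPV for the study prevalence and the two hypothetical prevalences in Table 2
npv_by_prev = {prev: predictive_values(sens, spec, prev)[1]
               for prev in (0.048, 0.10, 0.20)}
```

With a specificity of exactly 100% there are no false positives, so the PPV is 100% at any prevalence, as the table shows.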
In part two of the study, 132 participants were initially eligible for inclusion. Three individuals were excluded because they stated not to have been symptomatic at any point in time. One of them tested qRT-PCR and VRD positive, one qRT-PCR positive and VRD negative, and one tested negative on both. Furthermore, six (4·5%) symptomatic individuals had a negative qRT-PCR at the time of the home visit; all of them had a negative VRD (Fig 2). The ages of the 123 finally included individuals varied from 18 to 83 years (mean = 44, SD = 16). All but one of the Ct-values were obtained on the Roche Cobas 6800 platform; the one exception was tested on the Abbott platform and had a Ct-value below 20. The sensitivity of the VRD in symptomatic individuals was 78·9% (95%CI: 70·6%-85·7%). When stratified by Ct-value category, sensitivity was higher in the lower Ct-value categories (higher viral loads) than in the highest Ct-value category (Ct <20: 100% (95%CI: 83·2%-100%) (p<0.001), Ct 20–25: 93·3% (95%CI: 81·7%-98·6%) (p<0.001), Ct 25–30: 88·2% (95%CI: 72·6%-96·7%) (p<0.001), Ct ≥30: 20·8% (95%CI: 7·1%-42·2%)). When stratified by time since symptom onset (shorter than seven days versus seven days or more), clinical sensitivity was higher for specimens obtained within seven days after symptom onset, both overall and for every Ct-value category (p = 0.002) (Table 3).
Fig 2

Diagram for the flow of participants for part 2 of the study (qRT-PCR positive participants only).

Table 3

Test results of 123 qRT-PCR positive specimen of symptomatic individuals from study part two.

Days since symptom onset   Ct-value category   qRT-PCR+ samples (n)   VRD+ (n)   VRD- (n)   Sensitivity (%) [95% CI]
< 7 days                   Ct < 20             17                     17         0          100% [80·1%-100%]
                           Ct 20–25            29                     29         0          100% [88·1%-100%]
                           Ct 25–30            12                     11         1          91·7% [61·5%-99·8%]
                           Ct ≥ 30             8                      2          6          25·0% [3·2%-65·1%]
                           Overall             66                     59         7          89·4% [79·4%-95·6%]
                           Ct < 30             58                     57         1          98·3% [90·8%-100%]
≥ 7 days                   Ct < 20             3                      3          0          100% [29·2%-100%]
                           Ct 20–25            16                     13         3          81·3% [54·4%-96·0%]
                           Ct 25–30            22                     19         3          86·4% [65·1%-97·1%]
                           Ct ≥ 30             16                     3          13         18·8% [4·1%-45·7%]
                           Overall             57                     38         19         66·7% [52·9%-78·6%]
                           Ct < 30             41                     35         6          85·4% [70·8%-94·4%]

Ct-value: cycle threshold value, qRT-PCR: real-time reverse transcriptase polymerase chain reaction, VRD: ‘BD Veritor System for Rapid Detection of SARS-CoV-2’.
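As a consistency check, the overall sensitivities in Table 3 and in the abstract follow from the stratum counts:

```python
# (VRD-positive, qRT-PCR-positive) counts from the "Overall" rows of Table 3
early = (59, 66)    # specimens obtained < 7 days after symptom onset
late = (38, 57)     # specimens obtained >= 7 days after symptom onset

def sensitivity_pct(vrd_pos, pcr_pos):
    """Sensitivity in percent from paired positive counts."""
    return 100 * vrd_pos / pcr_pos

# Pooling both strata recovers the overall cohort of 123 qRT-PCR positives
overall = (early[0] + late[0], early[1] + late[1])
```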

Discussion

We found an overall clinical specificity of 100% (95%CI: 98·9%-100%) and sensitivity of 94·1% (95%CI: 71·1%-100%) for the VRD, when results were interpreted visually, compared to qRT-PCR, based on the prospective cohort in part one of the study. For the cohort in part two of the study, sensitivity was higher for lower Ct-value categories (p<0.001) and for specimens obtained within the first days after disease onset (p = 0.002). For specimens obtained within seven days after symptom onset, the sensitivity was 89·4% (95%CI: 79·4%-95·6%) overall and 98·3% (95%CI: 90·8%-100%) for samples with a qRT-PCR Ct-value beneath 30. To our knowledge, no independent validation reports on the VRD have been published to date. Although numerous SARS-CoV-2 antigen tests are available on the market, quality data on test performance are limited. A review identified 5 performance evaluation studies evaluating a total of 8 SARS-CoV-2 antigen tests. The reported average specificity of 99·5% (95%CI: 98·1%-99·9%) was in line with the results of our study. Sensitivity varied strongly across studies (from 0% to 94%) with an average of 56·2% (95%CI: 29·5%-79·8%). The included studies were performed on remnant specimens stored in virus transport medium and often contained little information on days since disease onset or the clinical setting in which they were obtained, all of which may explain the discrepancy with the clinical sensitivity of the VRD observed in this study [5]. Preliminary results of two performance evaluation studies of the Panbio Antigen test (Abbott), with protocols similar to this study, performed on a total of 1397 samples, were largely in line with the results observed here: an overall specificity of 100% and sensitivities of 73·2% (Utrecht) and 81·8% (Aruba). Similar to the VRD, the Panbio Antigen test was reported to perform better for lower Ct-value categories [6].
The prospective design of part one of the study and the collection of samples in the target setting of potential use are major strengths of this study. As waiting times to make an appointment at a COVID-19 test center were long during the study period due to high demand, no specimens collected within two days after disease onset could be included. Although we expect this group to have high viral loads, we cannot ascertain this assumption; the lack of data on this early window is a limitation of the study. As samples for part two of the study were gathered during a home visit 24–48 hours after the initial positive test, participants in this cohort were likely to be further along in the disease process, on average, than the population presenting at the MHS test centers. When the results of part two of the study were stratified by time since symptom onset and Ct-value category (Table 3), the numbers of participants per stratum were relatively small, leading to broad confidence intervals. COVID-19 infectivity peaks during the period shortly before and after the onset of symptoms, when maximal viral loads in upper respiratory tract material are also measured [7, 8]. In this context, the test performance for specimens with a qRT-PCR Ct-value beneath 30 was calculated. As this cut-off was based on the obtained data, it remains to be confirmed by prospective evaluations. In order to optimize standardization, specimens were transported to the laboratory, where the VRD was performed by trained technicians. The final objective is, however, to perform the VRD at the COVID-19 test centers by MHS personnel. Performance of the test by trained laboratory technicians might overestimate the test accuracy in the definitive clinical setting. Furthermore, in order to perform the VRD at the laboratory, samples were stored and transported on dry ice.
Partial destruction of antigen due to freezing could therefore not be excluded and could have resulted in an underestimation of the clinical test sensitivity. Following the study, routine use of the VRD, with performance of the test at the center on fresh material, was implemented at one MHS test center (Breda). During a follow-up period after this implementation, samples for both qRT-PCR and VRD were obtained and analyzed from 979 individuals. Of these, 161 samples were qRT-PCR positive and 817 qRT-PCR negative. We observed 128 true positive, 2 false positive, 815 true negative, 33 false negative and one uninterpretable VRD result, resulting in an overall clinical sensitivity of 79·5% (95%CI 72·4% -96·8%) and specificity of 99·8% (95%CI: 99·1%-100%). Likewise, the clinical sensitivity of 93·2% (95%CI: 87·5%-96·8%) for samples (n = 132) with a Ct-value beneath 30 was comparable with the results found during our study. The presence of COVID-19-like symptoms is a prerequisite to be tested at an MHS test center. As clients make their own appointments through a digital system, we cannot exclude a small number of asymptomatic individuals amongst those included in part one of the study. In part two of the study, three asymptomatic subjects were excluded. Because the high client turnover at the MHS test centers (3 minutes per test) could not be compromised, no information on non-participants was gathered. As a consequence, systematic differences between participants and non-participants could not be excluded. The current reference standard for diagnosis of an active SARS-CoV-2 infection is qRT-PCR. This highly sensitive and specific test is optimal for the diagnosis of clinically ill patients with a possible indication for treatment, and of individuals working in or staying at settings at high risk of outbreaks with severe consequences (e.g. long-term care facilities and hospitals).
qRT-PCR is, however, less suited for large-scale testing of symptomatic community-dwelling individuals for the purpose of disease control. The immense qRT-PCR demand created greatly stresses microbiological laboratories, and the logistic and administrative challenges intrinsic to qRT-PCR lead to substantial waiting times to get tested and to receive results. Rapid testing and feedback are, however, essential for control of SARS-CoV-2 community spread [2]. LFA SARS-CoV-2 antigen tests, low-cost rapid diagnostic tests that can be performed close to the community, could potentially offer an alternative [3, 4]. For subjects tested within 7 days after symptom onset, the negative predictive value was 98% for a test population with a 20% prevalence. This value increases as the test-population prevalence becomes lower. At the time of writing, a second wave of COVID-19 infections was observed in the Netherlands, with a prevalence of 10% to 20% in the test populations. In a questionnaire conducted by the Dutch National Institute for Public Health and the Environment (RIVM) amongst 50,000 citizens in June 2020, only 12% of the interviewees who developed symptoms reported having been tested [9]. When 10% of COVID-19 infected individuals are tested with a 100% sensitive test, 900 in 1000 infected individuals will remain undetected. This strongly supports the use of additional tests with slightly lower sensitivity. We believe the beneficial effect on the willingness to get tested of optimizing test accessibility, both geographically and in time, will far outweigh the limited decrease in test sensitivity. Furthermore, COVID-19 infectivity and viral load in the upper respiratory tract generally peak around the time of symptom onset and decrease gradually during the following days [8, 10]. Infected individuals should be detected in this first timeframe in order to optimize the effect of quarantine measures and contact tracing.
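The arithmetic behind this argument can be made explicit. The 30% uptake and 80% sensitivity figures below are hypothetical illustrations, not figures from the study:

```python
def detected_per_1000(uptake, sensitivity):
    """Infected individuals detected per 1000 infections, given the fraction
    of infected individuals who get tested (uptake) and the test sensitivity."""
    return round(1000 * uptake * sensitivity)

# The paper's scenario: 10% uptake with a perfect test leaves 900 undetected
baseline = detected_per_1000(0.10, 1.00)

# Hypothetical: tripling uptake with a less sensitive, more accessible test
# still detects considerably more infections
scenario = detected_per_1000(0.30, 0.80)
```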
For the purpose of COVID-19 control, it is preferable to test early with suboptimal analytical sensitivity for low viral loads rather than to use a 100% sensitive test only later in the disease process [2]. In conclusion, the VRD is a promising diagnostic test for testing symptomatic community-dwelling individuals within seven days after symptom onset for the purpose of disease control. Large-scale performance of the test is, however, likely to impose specific logistic challenges. Furthermore, the optimal position of the test within the current testing landscape remains to be determined. Further research on practical applicability, appropriate test populations, indications and settings, and the potential impact on disease control is needed.

Supporting information

S1. Cross tabulation of the BD Veritor System for Rapid Detection of SARS-CoV-2 (VRD) compared to qRT-PCR based on visual interpretation of the results (a) and interpretation with the BD Veritor Analyzer (b), excluding invalid test results (n = 1).
S2. Anonymized data of study part one.
S3. Anonymized data of study part two.
As per the detailed peer review feedback concerns have been raised including limitations of  the study design presented (combining single cohort design and cases only cohort), better clarity of methods used, better explanation of how the sample size was calculated and how the results and discussions of limitations have been reported. In addition, please recheck the STARD reporting guidelines to check if the relevant items have been reported in your study. Do consider revising your study's title and acknowledging published systematic reviews on the accuracy of SARS-CoV-2 antigen tests in your discussion as the reviewers have recommended. We will appreciate your responses to all the major and minor comments highlighted by the three peer-reviewers. Please submit your revised manuscript by 1st February 2021. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. 
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Eleanor Ochodo, M.D., PhD Academic Editor PLOS ONE Journal requirements: When submitting your revision, we need you to address these additional requirements. 1.) Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2.) Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please describe how verbal consent was documented and witnessed, and why written consent was not obtained. If your study included minors, state whether you obtained consent from parents or guardians. 3.) To comply with PLOS ONE submission guidelines, in your Methods section, please provide additional information regarding your statistical analyses. In addition, please report your p-values to support your claims. For more information on PLOS ONE's expectations for statistical reporting, please see https://journals.plos.org/plosone/s/submission-guidelines.#loc-statistical-reporting.” 4.) PLOS ONE requires experimental methods to be described in enough detail to allow suitably skilled investigators to fully replicate and evaluate your study. See https://journals.plos.org/plosone/s/submission-guidelines#loc-materials-and-methods for more information. 
To meet PLOS ONE submission guidelines, in your Methods section, please provide a more detailed description of your RT-qPCR methodology, including the primer sequences used. 5.) We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide. 6.) Please include a caption for figure 1, 2 and 3. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Partly Reviewer #3: Partly ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: No Reviewer #3: No ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. 
If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: No Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Thank you for the opportunity to review this paper which reports the results of community-based COVID-19 antigen testing in two cohorts of symptomatic participants. The study follows good methods for diagnostic test evaluation and on the whole is reported according to the STARD reporting guideline extension for DTA studies. I have identified a couple of areas that the authors may want to consider expanding on; however, overall this is a welcome addition to the literature on the potential role of antigen testing for COVID-19. Pg 7 -Title – could more clearly identify the study as reporting measures of diagnostic test accuracy; however, this is covered in the Abstract so probably sufficient Pg 8, line 44-49. Abstract – Results – It is not sufficiently clear that the main result presented in the Abstract reports data from a single cohort in part 1 (people presenting for testing at a single centre) combined with a cases-only cohort in part 2 (people with a positive PCR who then agreed to have a subsequent second set of swabs for antigen testing and repeat PCR). 
Combining data in this way creates a sample that is enriched with PCR positive cases, and opens the study to biases associated with diagnostic case-control studies with artificially inflated prevalence of disease (see comment below regarding this point). As currently written, the study could be misidentified as one single-gate study that includes all participants meeting criteria for testing. Pg 10, line 106-7. Methods – Setting – A summary of the criteria for testing in place at the time of the study would be helpful, particularly as national criteria for testing may change over time. Pg 10, line 121-123. Methods - BD Veritor System – Could mention that the manufacturer states that the Analyzer “must be used for interpretation of all test results”. Pg 11, line 148-149. Methods – Study procedure – Given the evidence for variation in test accuracy according to who obtained the swabs for testing, please describe whether the GGD employees obtaining the swabs were trained health care workers or non-health care workers. Pg 11, line 166-167. Sample size – Should provide some indication of how the target sample sizes were derived. Pg 12, line 161-162. Please report whether visual or analyzer-based results were used for part 2 of the study. Pg 12, line 182-183. Results – some baseline characteristics of included participants should be reported. Pg 13, line 202-203. Results and Figure 2. What is of key interest here is the number of participants who had a positive PCR test, the number who were invited to have a home visit, and in particular, the number who consented to the home visit and were tested. It is possible that there are systematic differences between those who agreed to a home visit and those who did not, e.g. they could be older or younger, they may have been more likely to be experiencing continued symptoms or to have more severe symptoms, and this in turn could affect the observed sensitivity of the antigen test. 
Ideally the authors should report the full participant flow in Figure 2, and tabulate key characteristics according to those who consented and who did not so that the reader can judge the generalisability of the population. Pg 14, line 234. Results and Table 3. As per the point made above, I have concerns about combining data for cases in part 1 with part 2 of the study in Table 3, given the different participant flow. I am not sure it adds much to Table 2 anyway, as the pattern in results is the same with only small differences in point estimates and CIs. Pg 14, line 235-239. Results. PPV is only 100% if the results using visual inspection of the assay are used. As mentioned above this is against the manufacturer's instructions for use and this needs to be explicit in this sentence. Pg 14, line 235-239. Results. This paragraph may be difficult for readers to follow without some graphical explanation or tabulation of data. It might be more useful to include this as a replacement for Table 3. I would also suggest using alternative estimates of test sensitivity for calculating NPV. The use of test sensitivity at <7 days and Ct <30 appears highly selective to make the data more favourable to the test. In the field, one does not know what an individual’s Ct value will be - some cases of SARS-CoV-2 infection do not achieve high viral loads but that does not necessarily mean that individuals are not infectious. There is also the caveat made above that these results from people agreeing to a second test may not apply more widely. I agree that antigen testing has a role to play for community testing of symptomatic people; however, I suggest the authors take care with over-interpreting the data. Pg 15-18. Discussion. On the whole I agree with the points made in the discussion and the authors cover the majority of limitations in their data. 
The results for routine use of Ag testing in the community are interesting and potentially reassuring in regard to the concerns that I have raised about the participants in part 2 of the study. Pg 17, There are a couple of spelling errors on line 291 - trough instead of through and line 296 our instead of or. Pg 18, line 329-335. Fully agree with the authors' conclusions which are in line with the data presented. Reviewer #2: The aim of this study was to determine the clinical performance of the SARS-CoV-2 rapid antigen test ‘BD Veritor System for Rapid Detection of SARS-CoV-2’. The authors found a high specificity and a reasonable sensitivity. This study is of high importance and clinically relevant in the current pandemic. However, we found the method section unclear and we have concerns about the study designs that were not addressed in the discussion section. They should explain their limitations and the impact of these limitations on the results of the study in the discussion section. Below we give specific comments for the manuscript. Title The title includes the word 'performance' twice. The title could be adjusted to: Clinical performance of a SARS-CoV-2 rapid antigen test in a Dutch community. Background - Clear background with a clear description of what is known and what they will add to the literature. Detailed remarks - Line 79: qualitative data is not correct as this involves non-numerical data to understand the concepts and opinions. Maybe the authors mean quality data? The term qualitative should be removed. (see also lines 85 and 118) - Line 79: It is unclear what is meant by performance. The authors determine the diagnostic accuracy (sensitivity, specificity, negative and positive predictive value). It is clearer when the authors use the term diagnostic accuracy (or clinical performance) and specify this in the introduction. 
- Line 84 to 87: sentence is not clear and should be rewritten - Line 87: GGD is a Dutch abbreviation and I am not sure if this is known in other countries. Therefore, use the abbreviation of MHS (Municipal Health Service) or explain the Dutch abbreviation GGD (Gemeentelijke Gezondheids Dienst). Methods - Major problem is that they did not use one large prospective cohort study in which they could determine the sensitivity and specificity in the same cohort. The second cohort in the study was based on the selection of confirmed cases, which could overestimate the sensitivity of the test. In addition, the sample size was not specified. How did you determine to include 300 negatives for part one and 100 positives for part two of the study? The following reference could be used for determining the sample size: Buderer NM. Statistical methodology: I. Incorporating the prevalence of disease into the sample size calculation for sensitivity and specificity. Acad Emerg Med 1996;3:895-900. - Explain your choices in the methods in more detail. On what criteria were the various Ct-value groups based, why did you use 7 days as a threshold while the manufacturer uses 5 days after disease onset, why were only two of the three test centres included for this study (or was only Breda included)? - The two or three parts of the study are not very clear throughout the manuscript and the text on these parts is not consistent throughout the method section. For example, what will be determined in each part and at which center are the patients recruited. - Why was the VRD performed by trained laboratory technicians and not by the GGD personnel. Trained laboratory technicians could make fewer mistakes compared to the GGD personnel, which will overestimate the sensitivity and specificity in clinical practice. Detailed remarks - Line 98: explain abbreviation Ct-value (cycle threshold) when this is used for the first time in your manuscript. 
- Line 106: second time that GGD/MHS abbreviation is explained. This is unnecessary as this is already specified in the introduction. - Line 106: What were the COVID-19 like symptoms? - Line 110: BCO (Bron contact Onderzoek) is also a Dutch abbreviation. Change this to the English abbreviation. - Line 134 and 139: Was verbal informed consent enough? Was a written informed consent not necessary? - Line 148: reference of Dutch national COVID-19 test protocol is missing. - Line 170: part three of the study is not explained before and it is unclear what is meant by this. - Line 174: why were the NPV and PPV not calculated with the prevalence of study one (2x2 table), which represents the prevalence in the patients suspected of COVID-19 presenting at the GGD/MHS? This could be added to table 1. - Line 176: explain abbreviation VA. Results - A table with baseline characteristics of the patient population is missing. This could provide insight into the tested population (e.g. what were the COVID-19 like symptoms) - The subgroups in table 2 had very small numbers, resulting in large 95% confidence intervals. This should be mentioned in the discussion section. - The number of patients included in part two is not clear: Figure 2 mentioned 129, table 2 mentioned 123 and the abstract also mentioned 123. - Why were parts one and two combined? This is not explained in the method section. Detailed remarks - Figure 1: how many patients were asked to participate in the study and did not give informed consent? - Line 203: mention that the three asymptomatic patients were excluded from the analysis of part two and explain in the methods why you excluded these patients. - Table 2: the abbreviations used were not explained underneath the table. - Line 239: 95% confidence intervals of PPV and NPV are missing. In addition, it is not clear how NPV was calculated and this should be explained in the method section. 
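For context on the reviewer's point about deriving predictive values from the observed prevalence: PPV and NPV follow directly from sensitivity, specificity and prevalence via Bayes' theorem. A minimal sketch (the function name is illustrative; the example figures are the part-one point estimates from the abstract, used here purely for demonstration):

```python
def predictive_values(sens: float, spec: float, prev: float) -> tuple[float, float]:
    """Return (PPV, NPV) for a test with the given sensitivity,
    specificity and disease prevalence, via Bayes' theorem."""
    ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Illustration with the part-one point estimates from the abstract
# (sensitivity 94.1%, specificity 100%, prevalence 4.8%):
ppv, npv = predictive_values(0.941, 1.00, 0.048)
```

This makes the reviewer's concern concrete: with specificity at 100% the PPV is 1.0 regardless of prevalence, but the NPV depends on the prevalence plugged in, so reporting which prevalence was used matters.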
Discussion - They mention that the literature on clinical performance is scarce; however, there is a Cochrane review on point-of-care tests for SARS-CoV-2 infection that included 22 publications and this should be discussed in the discussion section: Dinnes, J., Deeks, J.J., Adriano, A., Berhane, S., Davenport, C., Dittrich, S., Emperador, D., Takwoingi, Y., Cunningham, J., Beese, S. and Dretzke, J., 2020. Rapid, point-of-care antigen and molecular-based tests for diagnosis of SARS-CoV-2 infection. Cochrane Database of Systematic Reviews, (8). - They mention that strengths of the study are the prospective design and large sample size. However, only part one had a prospective design, while part two had more of a case-control design that could overestimate the sensitivity. In addition, they did not include one large prospective cohort. If a sample size calculation were performed according to Buderer 1996, slightly more than 1000 patients would be needed. Therefore, they cannot argue that they included a large number of patients. In addition, the number of patients in the subgroup analysis was very small, resulting in low precision. - RT-PCR is not the gold standard, but more a reference standard. There are also guidelines and clinical presentations used as a reference standard. This issue could be discussed in the discussion section. Detailed remarks - Line 246: It is not common to refer to a figure in the discussion section. - Line 282 to 289: it is not common to include new results of a follow-up study in the discussion section. Why is this not included in the methods and results section? - Line 294 to 308: This is repetition of information that was also stated in the introduction and could be deleted. - Line 315 to 322: The use of a study performed in the United States is not clear, as the situation is quite different in the United States compared to the Netherlands. 
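The Buderer (1996) reference the reviewer cites gives a closed-form sample-size formula that incorporates prevalence: the number of cases needed to estimate sensitivity with a given confidence-interval half-width is inflated by dividing by the expected prevalence. A sketch under assumed, illustrative inputs (the target sensitivity, precision and prevalence below are hypothetical, not the study's parameters):

```python
from math import ceil

def buderer_sample_size(sens: float, precision: float, prev: float,
                        z: float = 1.96) -> int:
    """Total participants needed to estimate sensitivity within
    +/- `precision` (95% CI half-width) at the given disease
    prevalence, following Buderer (1996)."""
    cases_needed = (z ** 2) * sens * (1 - sens) / precision ** 2
    return ceil(cases_needed / prev)

# Hypothetical inputs: expected sensitivity 80%, +/-7% precision,
# 5% prevalence.
n = buderer_sample_size(0.80, 0.07, 0.05)
```

The division by prevalence is what drives the large totals the reviewer alludes to: at low prevalence, most enrolled participants are disease-negative and contribute nothing to the sensitivity estimate.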
Reviewer #3: The authors report a diagnostic performance study comparing SARS-CoV-2 rapid antigen testing to batched laboratory RT-PCR testing using 2 commercial PCR assays. The study reports on a high-prevalence setting in a symptomatic population presenting to an outpatient testing facility in the Netherlands in the early fall of 2020. Two cohorts are used, one prospectively collected, blinded for the outcome and one cohort assessing test characteristics in prior test positives only. They also aim to report on agreement between the application of an automated reader compared to manual reading of the test result. The manuscript is in general replicable and reasonably well described. It however combines a technically very sound study design with a more flawed second arm (positives only) and the agreement rates are not quantified. It is unclear on which basis the sample size is calculated and this sample size remains rather small. The main comment is the duality of the study question and objective: it is unclear if the authors try to answer a more population-health testing question in a clinically sick population (which this study cannot) or can agree that their data are those from a clinical diagnostic setting and the conclusions should remain focused to that extent (as clearly stated in their objective section – but circled around both in the abstract, introduction and discussion section). A further elaboration on counterfactual further implications is possible, but should be left for the discussion section only to be appropriate. However, diagnostic performance studies on test positives and negatives remain rare, more high-quality quantitative data are necessary, so I thank the authors for the effort of writing up their findings. Major comments: -The article turns back and forth between the antigen test being a test for clinical practice vs for community testing. 
Community testing, at this point, has however rather the connotation of screening for infection and infectiousness and being used as a public health intervention tool (non-pharmaceutical intervention). This is not what the study is investigating. This study looks at the performance in a cohort of ill and symptomatic individuals to make or refute the diagnosis of SARS-CoV-2 infection and COVID-19 disease. There is a lack of evidence of the use of antigen testing both in a (primary care) clinical setting and in a public health-focused context. This study is performed in a population with a prevalence of 29% and all patients are or are supposed to be symptomatic and thus neither a representative nor a generalizable population for the population health test setting. The objective is clearly stated as to assess clinical spec and sens – for disease diagnosis among symptomatic individuals. How the data can be interpreted in the broader picture of community testing in the PH sense would best be kept for the discussion (and not in the background – introduction – in neither the manuscript nor the abstract – given it gives a false expectation of the setting and results) – given this study is limited in its evidence to that extent. In addition, the patient group that was assessed for, as is mentioned, the sens, is clinically a group further away from how a rapid test would be applied in the public health setting – given those individuals are already at the further right side of their infection time-curve. This study thus gives mainly data regarding this right tail of infected and symptomatic individuals. -What is meant by clinical sensitivity is not completely clear. -A reference to the symptoms that were accepted/suggested for being tested in the region at the time of testing is informative and can be added in the references. -Data on the patient population characteristics are missing: any age distribution? 
-A cut-off for timing was 7 days after onset of symptoms: there is no explanation why 7 days was chosen – given this categorizes a continuous variable into a binary variable and leads to loss of information; at minimum a reason for this choice should be given (this can be based on evidence on infectiousness or Ct evolution and discussed with your Fig 3). -The specimen sample type is mentioned in the manuscript – but preferably also has a place in the abstract, mainly given the importance of specimen type in the accuracy of diagnostic tests. -The STARD checklist should be completed. Major comments on the elements that are part of STARD are (additional others are integrated in the detailed comments): *Incomplete reporting of elements in the title *Introduction: the intended use and clinical role are, as mentioned prior, mixed and it is best to improve clarity. *The reference standard misses detail, as does the participant description. Detailed comments: Title: It is key that this was investigated in symptomatic individuals – and this information is best included in the title. The term “community testing” is a bit misleading – given it is in a primary care setting that was serving clinical diagnosis. Mainly because already in the introduction community testing is described as a general testing strategy. Abstract: -“Application for large-scale community testing for disease control purposes”: This is also not what this article is about – rather a place in the discussion. -“to qRT-PCR”: best to add if in-house or a commercial assay. -What is low Ct value: best to give the cut-off -what is magical about 7 days: is this 7 days because days since onset of symptoms was investigated as a continuous variable? (to add in text) Background & Methods: -line 65: to establish COVID-19 infection: best to add “acute” and it is not COVID-19 infection, but SARS-CoV-2 infection, given COVID-19 is the disease. 
-line 69: the PCR technology is not limited to specialized laboratories – the batched testing is, however point-of-care PCR instruments and assays are available – when receiving a CLIA waiver they can be used close to the patient. The sample can be procured everywhere – even by self-sampling. It is the whole cascade with mostly high-throughput batched PCR and non-self sampling that does not allow for broader community-based testing and that is not focused on clinical diagnosis but on community screening in the context of a public health intervention that is referred to but not the real setting of your work. -line 72: the word “pressurizes”: I understand the meaning, but it is not a term used as such: better to replace by “stresses or burdens”. -line 74: rapid testing: rapid testing and reporting (if you indeed want the whole circle to be happening fast). -line 79: what is meant by qualitative? (might be literal translation) What is probably meant and needed: high-quality quantitative data -line 80: missing “the” in front of clinical setting -line 86: “respiratory specimen” is rather broad: at least upper resp tract specimen and better to name which specimen type is used (saliva vs nose vs throat vs NPS…): This is relevant information that needs to be mentioned here in this last paragraph where the objective and PICOT of the study is stated. (it does come back in your manuscript later, which is appreciated). -line 98: what is meant by clinical sens for the Ct values? How is clinical sens here defined? -line 108: missing “a” prior to “specimen” -line 112: an additional word about: testing strategy that was in place at that moment; is this rural – city? – population catchment area? -line 120: the manufacturer does not report a CI? -line 124: check English expression: visible to the naked eye -line 128: the reference standard is insufficiently described. The described platforms are not assays – a platform cannot have a target. 
On a platform one can use an assay – it is thus necessary to name the assay. Additional: reference to where the assay is described – approved by authorities? – some reference to its performance – mainly if validated for the specimen type and performance for the in the study used specimen collection process. -line 151: one time “in” too many -line 166-167: when there is mentioning of the sample size: why was this sample size chosen if such a specific number. To elaborate. -line 175: “for a range of prevalence” : this is the methods section, so one can as well be precise and quote the range. As well: it is stated that agreement is assessed: using which methods? The analysis section misses the methods used to assess agreement and the calculations. More info on the methods is needed here. How are CI calculated? Results -line 184: re-write - more literal translation -line 189: thank you to report on uninterpretable data – important to mention and that this proportion was very small. -line 209: Best to refer to: PCR pos - the pos rate by VRD was x among PCR positives. -line 224: it is unwarranted and incorrect to merge the 2 cohorts and give the PPV and NPV – which merges 2 groups with a very different pre-test probability – which, and thank you to report is unrealistically high: 60%. (latter comment on line 234 and on). -line 225: plural of specimen is specimens -line 239: here it is clear that you are assessing its performance in a high prevalence setting – thus clinical setting – because the prevalence reported on is still too high for general screening, where it will be lower than 10%. The NPV of course will be even more improved, given very low prevalence. Discussion: -line 264: the prospective arm here is overstated. It remains unclear if this is your target setting or not. -line 273: “As this cut off was based…”: unclear what the message is of this sentence. 
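Regarding the reviewer's question about how the confidence intervals were calculated: one standard choice for binomial proportions such as sensitivity is the Wilson score interval. Whether the authors used Wilson, exact Clopper-Pearson, or another method is exactly what the reviewer is asking them to state; the sketch below merely illustrates one common approach, with an example count chosen to match the abstract's part-one sensitivity of 94.1% (16/17):

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion,
    e.g. sensitivity = antigen-positive results / all PCR positives."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# e.g. 16 antigen-positive results among 17 PCR positives (94.1%)
low, high = wilson_ci(16, 17)
```

Note that different interval methods give visibly different bounds at such small n, which is why reporting the method used matters.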
-line 282: A key element to mention is if the selection criteria to be tested were the same at the Breda sample in this later period – where we know that testing criteria have shifted over time. This discussion and clarity about test criteria might have a better place earlier in the discussion. -line 319: What is written supports the opportunity and the counterfactual of: with more tests, even with lower sens, this strategy will capture an absolute total of infected individuals that is larger compared to those now. It is appreciated that this is now discussed purely in the scheme of the symptomatic non-tested individuals – where this manuscript has the data to support its use in that specific group (compared to not having data to support PH community screening). This sentence might need some re-formulation, however. -line 326-328: are there data to back this up? Reference? At least some more elaborated discussion – reference – opinion – study – even modelling… -line 330: the conclusion that this test has had an impact on disease control is not warranted – the study did not look at this outcome in any way – but presumably this is why it is written that it is promising – but the conclusion of your study should rather be based on what your data support. Rather, a proposal on how further research can meaningfully and feasibly be performed and implemented can be of added value. -line 331: Performance: might what is rather meant be “Implementation of the test”… Tables and figures: -Table 1: this is not a table that shows a real comparison – given it purely lists the effect estimates – it is not a real agreement evaluation either. Cross-tabulation is necessary. -Figure 1: the legend and info below a flow diagram and all tables and figures should be complete and self-explanatory. Part one better to be replaced by prospective cohort… (readers might be only looking at the graphs). 
As well: potentially eligible: they were or they were not – their status of being eligible was not potential: it was real. Or what is the potential of being eligible? ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Dr Jac Dinnes Reviewer #2: Yes: Gea A. Holtman Reviewer #3: Yes: Joanna Merckx [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 12 Feb 2021 Dear Editor, We would like to thank you for the thorough evaluation of our manuscript 'Performance evaluation of a SARS-CoV-2 rapid antigen test : test performance in the community in the Netherlands' (PONE-D-20-36174) and the opportunity to resubmit a revised copy. 
We are very grateful for the valuable comments and feedback from the editor and reviewers, which we believe have resulted in a greatly improved revised manuscript. Please find the responses to the points raised by the academic editor and the reviewers - following the original comment in italics - beneath. Two versions of the revised manuscript, one with and one without track changes, were uploaded along with this document. Thank you for your consideration of our revised manuscript. Sincerely yours, Nathalie Van der Moeren on behalf of the co-authors Points raised by the Academic Editor 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The manuscript was adapted in accordance with the PLOS ONE style requirements. 2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please describe how verbal consent was documented and witnessed, and why written consent was not obtained. If your study included minors, state whether you obtained consent from parents or guardians. In the first part of the study, individuals were informed about the study through local media, by MHS communication channels (full participant information letter on the website, …) and by information signs at the participating test centres. Verbal informed consent was obtained separately by two independent MHS employees. No written informed consent was obtained as this would have compromised the strictly needed high flow of individuals being tested at the test centres (3 minutes per client). In the second part of the study, potential participants were informed about the study and asked for verbal informed consent a first time by telephone. Verbal informed consent was obtained by a different MHS employee a second time during a home visit before collection of the study samples. 
No written informed consents were obtained as handling of documents obtained from confirmed infectious participants was considered a potential safety hazard. This information was added to the manuscript (lines 157-167). No minors were included as stated in the paragraph ‘Patient recruitment’. (line 143) 3. To comply with PLOS ONE submission guidelines, in your Methods section, please provide additional information regarding your statistical analyses. In addition, please report your p-values to support your claims. For more information on PLOS ONE's expectations for statistical reporting, please see https://journals.plos.org/plosone/s/submission-guidelines#loc-statistical-reporting. Thank you for this valuable remark; we elaborated on the statistical analysis used (lines 200-211) and added p-values to support our claims (lines 257-259, 262). 4. PLOS ONE requires experimental methods to be described in enough detail to allow suitably skilled investigators to fully replicate and evaluate your study. See https://journals.plos.org/plosone/s/submission-guidelines#loc-materials-and-methods for more information. To meet PLOS ONE submission guidelines, in your Methods section, please provide a more detailed description of your RT-qPCR methodology, including the primer sequences used. Thank you for this valid remark; we elaborated on the RT-qPCR methods used (lines 135-140). As commercial kits were used and primer sequences are proprietary information of the manufacturer, we are not able to provide more information about the primer sequences used. 5. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. 
If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide. We would like to change our Data Availability statement. We would like to add the anonymised data to the manuscript as two supplementary tables, which were uploaded along with this ‘Responses to the Authors’ letter. 6. Please include a caption for figures 1, 2 and 3. The captions for figures 1, 2 and 3 were added to the manuscript at the bottom of the appropriate paragraphs. Points raised by Reviewer 1 1. Pg 7 -Title – could more clearly identify the study as reporting measures of diagnostic test accuracy; however, this is covered in the Abstract so probably sufficient. The title was adjusted (lines 1-2). 2. Pg 8, line 44-49. Abstract – Results – It is not sufficiently clear that the main result presented in the Abstract reports data from a single cohort in part 1 (people presenting for testing at a single centre) combined with a cases-only cohort in part 2 (people with a positive PCR who then agreed to have a subsequent second set of swabs for antigen testing and repeat PCR). Combining data in this way creates a sample that is enriched with PCR positive cases, and opens the study to biases associated with diagnostic case-control studies with artificially inflated prevalence of disease (see comment below regarding this point). As currently written, the study could be misidentified as one single-gate study that includes all participants meeting criteria for testing. We are grateful for this very valuable remark. We clarified the nature and existence of the two cohorts in the abstract and removed the part where both cohorts were combined from the abstract, methods and results section. 3. Pg 10, line 106-7. 
Methods – Setting – A summary of the criteria for testing in place at the time of the study would be helpful, particularly as national criteria for testing may change over time. Individuals can – provided they report having COVID-19-like symptoms (rhinitis, cough, elevated temperature (not further specified), shortness of breath or sudden loss of sense of taste or smell) – make an appointment at a regional MHS test centre. These criteria remained unchanged during the whole study period. This information was added to the manuscript. (line 108-112)

4. Pg 10, line 121-123. Methods - BD Veritor System – Could mention that the manufacturer states that the Analyzer “must be used for interpretation of all test results”. The sentence stating the manufacturer's instruction to use the reader was adapted to ‘The manual prescribes interpretation of the results after 15 minutes with a reading device provided by the manufacturer (VA).’ (line 132-133)

5. Pg 11, line 148-149. Methods – Study procedure – Given the evidence for variation in test accuracy according to who obtained the swabs for testing, please describe whether the GGD employees obtaining the swabs were trained health care workers or non-health care workers. The GGD employees were specifically trained to obtain nasopharyngeal samples, but were, as a rule, not trained healthcare workers. This information was added to the manuscript. (line 113-114)

6. Pg 11, line 166-167. Sample size – Should provide some indication of how the target sample sizes were derived. We would like to thank the reviewer for this valuable remark; the methods used to determine the sample size were added to the manuscript. (line 195-197)

7. Pg 12, line 161-162. Please report whether visual or analyzer-based results were used for part 2 of the study. In part two of the study only results of the visual interpretation were used; this was added to the manuscript. (line 190)

8. Pg 12, line 182-183.
Results – some baseline characteristics of included participants should be reported. For part one of the study the included participants were men and women aged 18 years and above; unfortunately, we do not have any further demographic data available. For part two of the study the participants’ ages varied from 18 to 84 years (M = 44, SD = 16). The available demographic data was added to the manuscript. (line 215 and 251-252)

9. Pg 13, line 202-203. Results and Figure 2. What is of key interest here is the number of participants who had a positive PCR test, the number who were invited to have a home visit, and in particular, the number who consented to the home visit and were tested. It is possible that there are systematic differences between those who agreed to a home visit and those who did not, e.g. they could be older or younger, they may have been more likely to be experiencing continued symptoms or to have more severe symptoms, and this in turn could affect the observed sensitivity of the antigen test. Ideally the authors should report the full participant flow in Figure 2, and tabulate key characteristics according to those who consented and who did not, so that the reader can judge the generalisability of the population. We are thankful for this valid remark; unfortunately, this information was not gathered, as the high turnover of clients at the MHS test centres (1 test every 3 minutes) could not be compromised. The lack of data on non-participants was added as a limitation to the discussion section of the article. (line 333-335)

10. Pg 14, line 234. Results and Table 3. As per the point made above, I have concerns about combining data for cases in part 1 with part 2 of the study in Table 3, given the different participant flow.
I am not sure it adds much to Table 2 anyway, as the pattern in results is the same with only small differences in point estimates and CIs. We are grateful for this justified remark; the section on the combined data (including table 3) was deleted from the abstract, methods, results and discussion sections.

11. Pg 14, line 235-239. Results. PPV is only 100% if the results using visual inspection of the assay are used. As mentioned above this is against the manufacturer’s instructions for use and this needs to be explicit in this sentence. This information was emphasized in the manuscript. (line 275)

12. Pg 14, line 235-239. Results. This paragraph may be difficult for readers to follow without some graphical explanation or tabulation of data. It might be more useful to include this as a replacement for Table 3. I would also suggest using alternative estimates of test sensitivity for calculating NPV. The use of test sensitivity at <7 days and Ct <30 appears highly selective to make the data more favourable to the test. In the field, one does not know what an individual’s Ct value will be - some cases of SARS-CoV-2 infection do not achieve high viral loads but that does not necessarily mean that those individuals are not infectious. There is also the caveat made above that these results from people agreeing to a second test may not apply more widely. I agree that antigen testing has a role to play for community testing of symptomatic people; however, I suggest the authors take care with over-interpreting the data. We would like to thank the reviewer for this helpful comment. We added a table with the PPV and NPV (table 2) and used the overall test sensitivity estimate from part one of the study instead of the sensitivity based on the combined data from part three. (line 237-244)

13. Pg 15-18. Discussion. On the whole I agree with the points made in the discussion and the authors cover the majority of limitations in their data.
The results for routine use of Ag testing in the community are interesting and potentially reassuring in regard to the concerns that I have raised about the participants in part 2 of the study. We would like to thank the reviewer for this remark.

14. Pg 17. There are a couple of spelling errors: on line 291 - trough instead of through, and on line 296 - our instead of or. Thank you, corrections were made.

15. Pg 18, line 329-335. Fully agree with the authors’ conclusions, which are in line with the data presented. We would like to thank the reviewer for this positive feedback.

Points raised by Reviewer 2

The aim of this study was to determine the clinical performance of the SARS-CoV-2 rapid antigen test ‘BD Veritor System for Rapid Detection of SARS-CoV-2’. The authors found a high specificity and a reasonable sensitivity. This study is of high importance and clinically relevant in the current pandemic. However, we found the method section unclear and we have concerns about the study designs that were not addressed in the discussion section. They should explain their limitations and the impact of these limitations on the results of the study in the discussion section. Below we give specific comments on the manuscript.

1. The title includes the word performance twice. The title could be adjusted to: Clinical performance of a SARS-Cov-2 rapid antigen test in a Dutch community. The title was adjusted according to this remark and the points raised by reviewers one and three. (line 1-2)

2. Clear background with a clear description of what is known and what they will add to the literature. We would like to thank the reviewer for this positive feedback.

3. Line 79: qualitative data is not correct, as this involves non-numerical data to understand concepts and opinions. Maybe the authors mean quality data? The term qualitative should be removed. (see also line 85 and 118) Thank you for this helpful remark; the sentences were adapted accordingly. (line 35, 79, 283)

4.
Line 79: It is unclear what is meant by performance. The authors determine the diagnostic accuracy (sensitivity, specificity, negative and positive predictive value). It is clearer when the authors use the term diagnostic accuracy (or clinical performance) and specify this in the introduction. The term performance was replaced by test accuracy. (line 1, 37, 85, ..)

5. Line 84 to 87: sentence is not clear and should be rewritten. The sentence was adapted. (line 85-88)

6. Line 87: GGD is a Dutch abbreviation and I am not sure if this is known in other countries. Therefore, use the abbreviation MHS (Municipal Health Service) or explain the Dutch abbreviation GGD (Gemeentelijke Gezondheids Dienst). The abbreviation GGD was replaced by MHS throughout the manuscript.

7. A major problem is that they did not use one large prospective cohort study in which they could determine the sensitivity and specificity in the same cohort. The second cohort in the study was based on the selection of confirmed cases, which could overestimate the sensitivity of the test. We are grateful for this justified remark. We deleted the part in which the results of part one and two of the study were combined and clarified the characteristics of the two cohorts with specific objectives in the abstract and the methods section.

8. In addition, the sample size was not specified. How did you determine to include 300 negatives for part one and 100 positives for part two of the study? The following reference could be used for determining the sample size: Buderer NM. Statistical methodology: I. Incorporating the prevalence of disease into the sample size calculation for sensitivity and specificity. Acad Emerg Med 1996;3:895–900. We would like to thank the reviewer for this valuable remark; the methods used to determine the sample size were different from the table stated in Buderer et al. and were added to the manuscript. (line 195-197)

9. Explain your choices in the methods in more detail.
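For reference, the Buderer (1996) calculation the reviewer cites (which, as noted above, differs from the method the authors actually used) can be sketched as follows. The 90% expected sensitivity, ±7% absolute precision and 5% prevalence used below are illustrative assumptions, not the study's actual design inputs.

```python
import math

def buderer_n(expected_value: float, precision: float, prevalence: float,
              for_sensitivity: bool = True, z: float = 1.96) -> int:
    """Total sample size for estimating sensitivity (or specificity) to a
    given absolute precision, per Buderer (1996).

    n_subgroup = z^2 * p * (1 - p) / d^2 is the number of diseased
    (or non-diseased) subjects needed; dividing by the prevalence
    (or 1 - prevalence) gives the total cohort size to recruit.
    """
    p = expected_value
    n_subgroup = (z ** 2) * p * (1 - p) / precision ** 2
    denom = prevalence if for_sensitivity else (1 - prevalence)
    return math.ceil(n_subgroup / denom)

# Illustrative: to estimate an expected ~90% sensitivity to within +/-7%
# at a 5% prevalence, well over a thousand participants would be needed,
# consistent with the reviewer's general point that low prevalence
# inflates the required cohort size.
total = buderer_n(expected_value=0.90, precision=0.07, prevalence=0.05)
```

This illustrates why the reviewer's suggested approach yields much larger cohorts at low prevalence: nearly all recruits are non-diseased and contribute only to the specificity estimate.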
On what criteria were the various Ct-value groups based, why did you use 7 days as a threshold while the manufacturer uses 5 days after disease onset, and why were only two of the three test centres included in this study (or was only Breda included)? The cut-off of 7 days was based on the results of Bullard et al. (Bullard, J. et al. Predicting infectious SARS-CoV-2 from diagnostic samples. Clin. Infect. Dis. 71, 2663–2666 (2020)), which showed no viral growth after incubation on Vero cells in positive samples obtained more than 8 days after symptom onset. This information was added to the manuscript. (line 206) Only one test centre was included in part one of the study for logistic reasons (logistic set-up in one place, distance from the laboratory to the test centre); in the second part of the study all qRT-PCR-positive samples from test centres in the region were included, provided that the qRT-PCR was performed in a Microvida laboratory. This was clarified in the manuscript. (line 120-124)

10. The two or three parts of the study are not very clear throughout the manuscript and the text on these parts is not consistent throughout the method section. For example, what will be determined in each part and at which centre are the patients recruited? Thank you for this valuable remark. Part three of the study (combined data) was deleted from the manuscript. The cohorts and objectives of parts one and two were clarified in the abstract and method section. We clarified in which centres patients were recruited. (line 120-124)

11. Why were the VRD performed by trained laboratory technicians and not by the GGD personnel? Trained laboratory technicians could make fewer mistakes compared to the GGD personnel, which would overestimate the sensitivity and specificity in clinical practice. In consultation with the MHS it was decided to use trained laboratory technicians at the start of the practical implementation of the VRD at the MHS test centres.
This is why trained laboratory technicians were chosen in the study. As the goal would in time be to have the test performed by MHS personnel, this is nevertheless a valuable point; it was added as a limitation to the discussion section. (line 314-316)

12. Line 98: explain the abbreviation Ct-value (cycle threshold) when it is used for the first time in your manuscript. The explanation of the abbreviation was added to the manuscript. (line 43)

13. Line 106: second time that the GGD/MHS abbreviation is explained. This is unnecessary as it is already specified in the introduction. The second explanation of the abbreviation was removed from the manuscript.

14. Line 106: What were the COVID-19-like symptoms? In the MHS guidelines these are described as rhinitis, cough, elevated temperature (not specified), shortness of breath or sudden loss of sense of taste or smell. This information was added to the manuscript. (line 109)

15. Line 110: BCO (Bron contact Onderzoek) is also a Dutch abbreviation. Change this to the English abbreviation. The abbreviation was deleted from the manuscript.

16. Line 134 and 139: Was verbal informed consent enough? Was written informed consent not necessary? For part one of the study verbal informed consent was obtained separately by two independent MHS employees. Written informed consent could not be obtained, as this would have compromised the strictly needed high flow of individuals being tested at the test centres (3 minutes per client). In the second part of the study, potential participants were asked for verbal informed consent a first time by telephone and a second time before the study sample was obtained. No written informed consents were obtained, as handling of documents obtained from confirmed infectious participants was considered a potential safety hazard. The protocol as such was granted an exemption from the Dutch medical scientific research act (WMO). This information was also added to the manuscript. (line 158-168)

17.
Line 148: reference to the Dutch national COVID-19 test protocol is missing. The reference number of the Netherlands Trial Register (NL9018) was added to the manuscript. (line 156-15)

18. Line 170: part three of the study is not explained before and it is unclear what is meant by this. Thank you for this valuable remark; references to part three of the study (combined data of parts one and two) were removed from the manuscript. (see also above)

19. Line 174: why were the NPV and PPV not calculated with the prevalence of study one (2x2 table), which represents the prevalence in the patients suspected of COVID-19 presenting at the GGD/MHS? This could be added to table 1. Thank you for this valid and helpful remark; the NPV and PPV for the prevalence found in cohort one (4.8%) were added. The NPV and PPV for a population prevalence of 10% and 20% were calculated based on the sensitivity and specificity estimates found in part one of the study. (line 237-241)

20. Line 176: explain the abbreviation VA. Thank you for this remark; the abbreviation was however already explained in lines 100-101 as ‘reading device provided by the manufacturer’.

21. A table with baseline characteristics of the patient population is missing. This could provide insight into the tested population (e.g. what were the COVID-19-like symptoms). For part one of the study the included participants were men and women aged 18 years and above; unfortunately, we do not possess any further demographic data. For part two of the study the participants’ ages varied from 18 to 84 years (M = 44, SD = 16). The available demographic data was added to the manuscript. (line 215 and 251-252)

22. The subgroups in table 2 had very small numbers, resulting in large 95% confidence intervals. This should be mentioned in the discussion section. Thank you for this valid remark; the point was added to the limitations in the discussion section. (line 305-307)

23.
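The predictive values discussed in point 19 follow directly from sensitivity, specificity and prevalence via Bayes' rule. A minimal sketch, using the part-one point estimates (94.1% sensitivity, 100% specificity) and treating them as fixed values rather than carrying their confidence intervals:

```python
def predictive_values(sens: float, spec: float, prev: float) -> tuple[float, float]:
    """PPV and NPV for a given sensitivity, specificity and pre-test prevalence."""
    tp = sens * prev                # true positives per unit population
    fp = (1 - spec) * (1 - prev)   # false positives
    tn = spec * (1 - prev)         # true negatives
    fn = (1 - sens) * prev         # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Part-one point estimates, evaluated at the observed 4.8% prevalence
# and at the 10% and 20% population prevalences used in the revision.
for prev in (0.048, 0.10, 0.20):
    ppv, npv = predictive_values(sens=0.941, spec=1.0, prev=prev)
    # With specificity at its point estimate of 100%, PPV stays 1.0
    # at any prevalence, while NPV falls as prevalence rises.
```

This also makes the reviewer's point concrete: PPV and NPV are prevalence-dependent, so values derived from an artificially enriched cohort would not transfer to the test-centre population.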
The number of patients included in part two is not clear: Figure 2 mentions 129, table 2 mentions 123 and the abstract also mentions 123. 129 symptomatic participants were included, of whom 123 still had a positive qRT-PCR at the moment of the second sampling. This was emphasized in the table caption and the results section. (line 247-250)

24. Why were parts one and two combined? This is not explained in the method section. We are grateful for this valid and valuable remark; part three of the study was removed from the manuscript (abstract, methods, results and discussion).

25. Figure 1: how many patients were asked to participate in the study and did not give informed consent? Unfortunately, this information was not gathered, as the high turnover of clients at the MHS test centres (1 test every 3 minutes) could not be compromised. We realise this is a limitation of our study; the lack of data on non-participants was added to the limitations in the discussion section. (line 333-335)

26. Line 203: mention that the three asymptomatic patients were excluded from the analysis of part two and explain in the methods why you excluded these patients. Being or having been symptomatic was an inclusion criterion for part two of the study; this was clarified in the method section. (line 150)

27. Table 2: the abbreviations used were not explained underneath the table. The explanations of the abbreviations were added underneath the table.

28. Line 239: 95% confidence intervals of PPV and NPV are missing. In addition, it is not clear how NPV was calculated and this should be explained in the method section. The PPV and NPV were calculated using MedCalc; this information and the 95% CI for NPV were added to the manuscript. (line 204)

29.
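MedCalc reports exact (Clopper-Pearson) intervals; as a standard-library-only illustration of how a binomial confidence interval for a proportion such as sensitivity behaves, the sketch below computes the closely related Wilson score interval. The count of 97/123 VRD-positives among qRT-PCR positives is an inference from the reported 78.9% overall part-two sensitivity, not a figure taken from the dataset.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% CI for a binomial proportion. This is an
    approximation; the exact Clopper-Pearson intervals that MedCalc
    reports differ slightly, especially for small n."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# 97/123 is consistent with (inferred from) the reported overall
# part-two sensitivity of 78.9%; the interval comes out close to the
# published 70.6%-85.7%.
lo, hi = wilson_ci(97, 123)
```

The width of such intervals grows quickly as n shrinks, which is exactly the reviewer's concern about the small subgroup counts in table 2.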
They mention that the literature on clinical performance is scarce; however, there is a Cochrane review on point-of-care tests for SARS-CoV-2 infection that included 22 publications and this should be discussed in the discussion section: Dinnes, J., Deeks, J.J., Adriano, A., Berhane, S., Davenport, C., Dittrich, S., Emperador, D., Takwoingi, Y., Cunningham, J., Beese, S. and Dretzke, J., 2020. Rapid, point‐of‐care antigen and molecular‐based tests for diagnosis of SARS‐CoV‐2 infection. Cochrane Database of Systematic Reviews, (8). We discussed the results of Dinnes et al. in the discussion section of the original manuscript (reference 5, lines 406-413). The review in question included only 5 studies on rapid antigen tests; the remaining studies were on rapid molecular tests, which we found to be out of the scope of the discussion of our manuscript. Furthermore, the included studies were often performed on remnant specimens stored in virus transport medium and often contained little information on days since disease onset and the clinical setting they were obtained in. We changed the sentences stating ‘literature is scarce’ to ‘quality literature is limited’. (lines 36, 284,..)

30. They mention that strengths of the study are the prospective design and large sample size. However, only part one had a prospective design, while part two had more of a case-control design that could overestimate the sensitivity. In addition, they did not include one large prospective cohort. When a sample size calculation is performed according to Buderer 1996, slightly more than 1000 patients would be needed. Therefore, they cannot argue that they included a large number of patients. In addition, the numbers of patients in the subgroup analyses were very small, resulting in low precision. We are grateful for this valid remark. The differences in design between parts one and two of the study were emphasized throughout the manuscript. (see above)
The discussion paragraph on strengths of the study was limited to ‘The prospective design of part one of the study and the obtainment of samples in the target setting of potential use are great assets of this study.’ (line 296) The valuable remark on the small numbers of participants in the strata was also addressed in the discussion section. (line 305-307)

31. RT-PCR is not the gold standard, but rather a reference standard. There are also guidelines and clinical presentations used as a reference standard. This issue could be discussed in the discussion section. Thank you for this valuable remark; adjustments were made in the manuscript. (line 130, 333)

32. Line 246: It is not common to refer to a figure in the discussion section. The reference to the figure was removed from the discussion.

33. Line 282 to 289: it is not common to include new results of a follow-up study in the discussion section. Why is this not included in the methods and results sections? After implementation, there was a short period in which qRT-PCR was performed alongside the VRD. This was not in a study setting, but a way to monitor the performance in the first period after implementation.

34. Line 294 to 308: This is a repetition of information that was also stated in the introduction and could be deleted. Thank you for this remark; we substantially shortened the paragraph.

35. Line 315 to 322: The use of a study performed in the United States is not clear, as the situation is quite different in the United States compared to the Netherlands. The reference was deleted from the manuscript.

Points raised by Reviewer 3

The authors report a diagnostic performance study comparing SARS-CoV-2 rapid antigen testing to batched laboratory RT-PCR testing using 2 commercial PCR assays. The study reports on a high-prevalence setting in a symptomatic population presenting to an outpatient testing facility in the Netherlands in the early fall of 2020.
Two cohorts are used, one prospectively collected and blinded for the outcome, and one cohort assessing test characteristics in prior test positives only. They also aim to report on agreement between the application of an automated reader compared to manual reading of the test result. The manuscript is in general replicable and reasonably well described. It however combines a technically very sound study design with a more flawed second arm (positives only), and the agreement rates for the second aim are not quantified. It is unclear on which basis the sample size is calculated and this sample size remains rather small. The main comment is the duality of the study question and objective: it is unclear if the authors try to answer a more population-health testing question in a clinically sick population (which this study cannot) or can agree that their data are those from a clinical diagnostic setting and the conclusions should remain focused to that extent (as clearly stated in their objective section – but circled around both in the abstract, introduction and discussion sections). A further elaboration on counterfactual further implications is possible, but should be left for the discussion section only to be appropriate. However, diagnostic performance studies on test positives and negatives remain rare, and more high-quality quantitative data are necessary, so I thank the authors for the effort of writing up their findings. We would like to thank the reviewer for this general comment. We will elaborate on the concerns about ‘the duality of the study question’ in the response to comment 1.

1. The article turns back and forth between the antigen test being a test for clinical practice vs for community testing. Community testing, at this point, has however rather the connotation of screening for infection and infectiousness and being used as a public health intervention tool (non-pharmaceutical intervention). This is not what the study is investigating.
This study looks at the performance in a cohort of ill and symptomatic individuals to make or refute the diagnosis of SARS-CoV-2 infection and COVID-19 disease. The target group of the MHS test centres are individuals who are symptomatic, but do not require hospitalisation or attendance by a physician. The goal is, in other words, to detect infectious individuals, promptly quarantine them and start contact tracing. This is why the tests are performed by the Dutch MHS. Testing of clinically ill patients who need medical attention is the responsibility of family doctors and hospitals. The MHS test centres are, in other words, a public health intervention tool and not a centre for the clinical diagnosis of patients in order to direct treatment.

2. There is a lack of evidence on the use of antigen testing both in a (primary care) clinical setting and in a public-health-focused context. This study is performed in a population with a prevalence of 29% and all patients are or are supposed to be symptomatic; thus it is neither a representative nor a generalizable population for the population-health test setting. The objective is clearly stated as to assess clinical spec and sens – for disease diagnosis among symptomatic individuals. How the data can be interpreted in the broader picture of community testing in the PH sense would best be kept for the discussion (and not in the background – introduction – of either the manuscript or the abstract, given it creates a false expectation of the setting and results), given this study is limited in its evidence to that extent. In addition, the patient group that was assessed for, as is mentioned, the sens, is clinically a group further away from how a rapid test would be applied in the public health setting – given those individuals are already at the further right side of their infection time-curve. This study thus gives mainly data regarding this right tail of infected and symptomatic individuals.
We would like to thank the reviewer for this remark. The study part with combined data was deleted from the manuscript, resulting in one prospective cohort in part one of the study with a prevalence of 4.8% and a second cohort in part two with known qRT-PCR-positive participants only. We think it is a valuable comment that the individuals from the latter cohort are more on the right tail of the symptomatic individuals due to the delay in obtaining the second test during the home visit; we added this concern to the discussion section of the paper. (302-304) Furthermore, we emphasized the difference between the two cohorts throughout the revised manuscript.

3. What is meant with clinical sensitivity is not completely clear. Thank you for this justified remark. With clinical sensitivity and specificity we refer to sensitivity and specificity on clinical samples, as opposed to analytical performance. We realise this might be an unclear formulation. We deleted the terms clinical sensitivity and specificity from the manuscript and replaced them with sensitivity/specificity on clinical samples.

4. A reference to the symptoms that were accepted/suggested for being tested in the region at the time of testing is informative and can be added in the references. This information was added to the manuscript. (lines 109-110)

5. Data on the patient population characteristics are missing: any age distribution? Thank you for this valuable remark. For part one of the study the included participants were men and women aged 18 years and above; unfortunately, we do not have any further demographic data available. For part two of the study the participants’ ages varied from 18 to 84 years (M = 44, SD = 16). The available demographic data was added to the manuscript. (line 215 and 251-252)

6.
A cut-off for timing was 7 days after onset of symptoms: there is no explanation why 7 days was chosen – given this categorises a continuous variable into a binary variable and leads to loss of information, at minimum a reason for this choice should be given (this can be based on evidence on infectiousness or Ct evolution and discussed with your Fig 3). The cut-off of 7 days was based on the results of Bullard et al. (Bullard, J. et al. Predicting infectious SARS-CoV-2 from diagnostic samples. Clin. Infect. Dis. 71, 2663–2666 (2020)), which showed no viral growth after incubation on Vero cells in positive samples obtained more than 8 days after symptom onset. This information was added to the manuscript. (line 206-208)

7. The specimen sample type is mentioned in the manuscript – but preferably also has a place in the abstract, mainly given the importance of specimen type for the accuracy of diagnostic tests. This information was added to the abstract. (line 38)

8. The STARD checklist should be completed. Major comments on the elements that are part of STARD are (additional others are integrated in the detailed comments):
* Incomplete reporting of elements in the title.
* Introduction: the intended use and clinical role are, as mentioned prior, mixed, and it is best to improve clarity.
* The reference standard misses detail, as well as the participant description.
* The title was adapted accordingly to ‘Evaluation of the test accuracy of a SARS-CoV-2 rapid antigen test in symptomatic community dwelling individuals in the Netherlands’. (line 1 and 2)
* For the reply to this comment we kindly refer to the response to comment one.
* Thank you for this valid remark; we elaborated on the RT-qPCR methods used. (line 136-141)

9. It is key that this was investigated in symptomatic individuals – and this information is best included in the title. The term “community testing” is a bit misleading – given it is in a primary care setting that was serving clinical diagnosis.
Mainly because already in the introduction community testing is described as a general testing strategy. ‘Symptomatic’ was added to ‘individuals’ in the title. (line 1 and 2) As explained in the response to comment one, the goal of testing at the MHS test centres is prompt isolation and initialisation of contact tracing; patients requiring medical attention are tested by the MD they attend.

10. “Application for large-scale community testing for disease control purposes”: this is also not what this article is about – rather a place in the discussion. We would kindly like to refer to the answer formulated on comment one.

11. “to qRT-PCR”: best to add whether an in-house or a commercial assay. Thank you for this valid remark. Commercial assays were used; we added this information to the manuscript and elaborated further on the RT-qPCR methods used. (line 136-141)

12. What is a low Ct-value: best to give a cut-off. Thank you for this remark; the sentence was deleted from the manuscript.

13. What is magical about 7 days: is this 7 days because days since onset of symptoms was investigated as a continuous variable? (to add in text) The cut-off of 7 days was based on the results of Bullard et al. (Bullard, J. et al. Predicting infectious SARS-CoV-2 from diagnostic samples. Clin. Infect. Dis. 71, 2663–2666 (2020)), which showed no viral growth after incubation on Vero cells in positive samples obtained more than 8 days after symptom onset. This information was added to the manuscript. (line 206-208)

14. line 65: to establish COVID-19 infection: best to add “acute”, and it is not COVID-19 infection but SARS-CoV-2 infection, given COVID-19 is the disease. Thank you for these valid remarks; the manuscript was adapted. (line 65)

15. line 69: the PCR technology is not limited to specialized laboratories – the batched testing is; however, point-of-care PCR instruments and assays are available – when receiving a CLIA waiver they can be used close to the patient.
The sample can be procured everywhere – even by self-sampling. It is the whole cascade with mostly high-throughput batched PCR and non-self-sampling that does not allow for broader community-based testing, and that is not focused on clinical diagnosis but on community screening in the context of a public health intervention, that is referred to but is not the real setting of your work. We have discussed this remark in our group and we are unsure what point the reviewer puts forward here; therefore, we were not able to formulate a response.

16. line 72: the word “pressurizes”: I understand the meaning, but it is not a term used as such: better to replace by “stresses or burdens”. Thank you for this correction; this was replaced in the manuscript. (line 72)

17. line 74: rapid testing: rapid testing and reporting (if you indeed want the whole circle to happen fast). ‘Reporting’ was added to the manuscript. (line 75)

18. line 79: what is meant with qualitative? (might be a literal translation) What is probably meant and needed: high-quality quantitative data. Thank you for this correction; quantitative was replaced by high quality. (lines 35, 72, 283)

19. line 80: missing “the” in front of clinical setting. ‘The’ was added in the manuscript. (line 81)

20. line 86: “respiratory specimen” is rather broad: at least upper respiratory tract specimen, and better to name which specimen type is used (saliva vs nose vs throat vs NPS…): this is relevant information that needs to be mentioned here in this last paragraph where the objective and PICOT of the study are stated. (it does come back in your manuscript later, which is appreciated) Thank you for this valuable remark; this sentence was rewritten in accordance with the comments of one of the other reviewers.

21. line 98: what is meant with clinical sens for the Ct values? How is clinical sens here defined?
We have discussed this remark in our group and we are unsure what point the reviewer puts forward here; therefore we were not able to formulate a response.

22. Line 108: a word is missing prior to "specimen".
'The' was added. (line 112)

23. Line 112: an additional word about the testing strategy that was in place at that moment: is this rural or city? What is the population catchment area?
All community-dwelling individuals with COVID-19-like symptoms (cf. supra) are tested at the MHS test centres; clinically ill patients requiring medical attention are tested by the M.D. they attend. The test centres serve individuals from the cities in which they are located as well as individuals from the more rural surroundings. The total test capacity of the three test centres was 1200 tests per day.

24. Line 120: the manufacturer does not report a CI.
The 95% CIs were added to the manuscript. (line 129)

25. Line 124: check the English expression "visible to the naked eye".
This was adapted in the manuscript. (line 133)

26. Line 128: the reference standard is insufficiently described. The described platforms are not assays – a platform cannot have a target. On a platform one can run an assay; it is thus necessary to name the assay. Additionally: a reference to where the assay is described – approved by authorities? – some reference to its performance, mainly whether it is validated for the specimen type and for the specimen collection process used in the study.
Thank you for this valuable remark; we added the assays used to the manuscript and elaborated further on the RT-qPCR methods used. (lines 136-141)

27. Line 151: one "in" too many.
The paragraph was rewritten.

28. Lines 166-167: where the sample size is mentioned: why was such a specific sample size chosen? To elaborate.
We would like to thank the reviewer for this valuable remark; the methods used to determine the sample size were added to the manuscript. (lines 195-197)

29.
Line 175: "for a range of prevalence": this is the methods section, so one can just as well be precise and quote the range. Also: it is stated that agreement is assessed – using which methods? The analysis section misses the methods used to assess agreement and the calculations. More information on the methods is needed here. How are the CIs calculated?
'For a range of prevalence' was changed to 'for a population prevalence of 10% and 20%'. Furthermore, we elaborated on the analytical methods used. (lines 200-211)

30. Line 184: re-write – too literal a translation.
Thank you for this correction; the sentence was rewritten as 'Two (0·6%) specimens with a negative VRD result were excluded because qRT-PCR could not be recovered (error in sample number registration).' (lines 217-218)

31. Line 189: thank you for reporting on uninterpretable data – important to mention, and that this proportion was very small.

32. Line 209: best to refer to PCR-positives: the positive rate by VRD was x among PCR positives.
We have discussed remarks 31 and 32 in our group and we are unsure what point the reviewer puts forward here; therefore we were not able to formulate a response.

33. Line 224: it is unwarranted and incorrect to merge the two cohorts and give the PPV and NPV, as this merges two groups with a very different pre-test probability – which, thank you for reporting it, is unrealistically high: 60%. (The latter comment applies to line 234 and onwards.)
Thank you for this justified and valuable remark; the paragraphs on the combined data were removed from the manuscript. The reference to a population prevalence of 60% was deleted from the manuscript.

34. Line 225: the plural of specimen is specimens.
Thank you, this was adapted in the manuscript. (line 225)

35. Line 239: here it is clear that you are assessing performance in a high-prevalence setting – thus a clinical setting – because the prevalence reported on is still too high for general screening, where it will be lower than 10%.
The NPV of course will be even better, given very low prevalence.
Thank you for this remark; we hope it is sufficiently answered by the clarification of parts one and two of the study, the removal of the part on combined data, and the clarification of the setting (reviewer comment one).

36. Line 264: the prospective arm is overstated here. It remains unclear whether this is your target setting or not.
We would kindly like to refer to the response to comment 35.

37. Line 273: "As this cut-off was based…": unclear what the message of this sentence is.
As the cut-off of 30 was based on the results found in the study (data-driven), a prospective evaluation is needed.

38. Line 282: a key element to mention is whether the selection criteria for being tested were the same for the Breda sample in this later period, as we know that testing criteria have shifted over time. This discussion and clarity about test criteria might have a better place earlier in the discussion.
We would like to thank the reviewer for this comment. The criteria to get tested did not change during the study period; this was added to the methods section. (line 111)

39. Line 319: what is written supports the opportunity and the counterfactual that, with more tests, even with lower sensitivity, this strategy will capture a larger absolute number of infected individuals than are captured now. It is appreciated that this is now discussed purely in the scheme of the symptomatic non-tested individuals, where this manuscript has the data to support its use in that specific group (compared to not having data to support public-health community screening). This sentence might need some re-formulation, however.
We have discussed this remark in our group and we are unsure what exact point the reviewer puts forward here; we hope the remark is sufficiently answered by the response to comment one.

40. Lines 326-328: is there data to back this up? A reference?
At least some more elaborate discussion – a reference, opinion, study, even modelling…
Thank you for this justified remark; the reference (Mina MJ, Parker R, Larremore DB. Rethinking Covid-19 Test Sensitivity - A Strategy for Containment. N Engl J Med. 2020 Sep 30) was added behind the sentence. (line 361)

41. Line 330: the conclusion that this test has, or had, an impact on disease control is not warranted – the study did not look at this outcome in any way. Presumably this is why it is written that the test is promising, but the conclusion of your study should rather be based on what your data support. A proposal on how further research can be meaningfully and feasibly performed and implemented would also be of added value.
Thank you for this remark. As stated in the answer to comment one, the goal of testing at the GGD test centres is prompt initiation of infection control measures. As we did not look at this outcome, we write that the test is promising in this context (as opposed to the indication of testing clinically ill patients in need of medical care). The 'impact on disease control' was added to the topics needing further research. (lines 366-367)

42. Line 331: "performance": might "implementation of the test" be meant instead?
'Performance' was changed to 'execution'. (line 364)

43. Table 1: this is not a table that shows a real comparison, given that it purely lists the effect estimates; it is not a real agreement evaluation either. Cross tabulation is necessary.
Thank you for this remark; cross tabulations for both visual interpretation and interpretation with the VRD were added as supplementary tables. (S1)

44. Figure 1: the legend and information below a flow diagram – and all tables and figures – should be complete and self-explanatory. 'Part one' is better replaced by 'prospective cohort'… (readers might be looking only at the graphs).
Also: "potentially eligible": they were or they were not – their status of being eligible was not potential: it was real. Or what is the potential of being eligible?
The captions for figures 1 and 2 were adapted: 'prospective cohort' and 'qRT-PCR positive participants only' were added, respectively. 'Potentially eligible' was adapted to 'eligible'.

Submitted filename: Response to Reviewers.docx

29 Mar 2021

PONE-D-20-36174R1
Evaluation of the test accuracy of a SARS-CoV-2 rapid antigen test in symptomatic community dwelling individuals in the Netherlands
PLOS ONE

Dear Nathalie Van der Moeren,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Minor comments about grammatical errors and sections with missing information have been raised by peer reviewer #3. Please note that PLOS ONE does not copy-edit accepted manuscripts; hence we would appreciate your revisions to the highlighted areas.

Please submit your revised manuscript by 15 April 2021. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript: a rebuttal letter that responds to each point raised by the academic editor and reviewer(s), uploaded as a separate file labeled 'Response to Reviewers', and a marked-up copy of your manuscript that highlights changes made to the original version.
You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes should be uploaded as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Eleanor Ochodo, M.D., PhD
Academic Editor
PLOS ONE

Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #3: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #3: Yes

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #3: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #3: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #3: Yes

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: I thank the authors for their detailed responses to my comments and improvements made to the manuscript. I have no further comments or suggestions.

Reviewer #3: Thank you for the revised manuscript. The decision to exclude part 3 and other major changes improved the quality of the manuscript and its interpretation. The questions were answered. Short remarks (mainly language - short words missing):
- Line 184, tracked-changes version: "(full participant information letter on the website, ...)": will a URL address be added?
- Line 186: "No written informed consent was obtained as this would have compromised the strictly needed high flow of individuals being tested at...": the explanation is not necessary here. As long as this was discussed with the ethical committee and the written consent was waived, this is valid. It is written later in the text as well; not necessary here.
- Line 233: "The cut-off of 7 days was chosen based...": results OF a study by Bullard... (some rewriting of the English of this sentence is necessary)
- Line 288: "18 to 83 years (M = 44,": mean or median?
- Line 296: "to be the": take out "the" (and from line 301 as well)
- Line 337: "Based the cohort in part two of the study": re-write, word missing...
- Line 384: "overestimate test accuracy in final clinical setting."... grammar

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?).
If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous, but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Jac Dinnes
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

2 Apr 2021

Additional points raised by Reviewer 2

- Line 184, tracked-changes version: "(full participant information letter on the website, ...)": will a URL address be added?
Thank you for this comment. As the participant information letter was written in Dutch, we doubt the added value of adding the URL.
(https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjnt9aWydrvAhUIg_0HHc0aAaUQFjAAegQIAxAD&url=https%3A%2F%2Fwww.ggdwestbrabant.nl%2Fcoronavirus%2F-%2Fmedia%2FA70F8404BCC24075B03934AA764D9FBD.ashx&usg=AOvVaw1Hx9cTDPy_qh-0miug_rY0)

- Line 186: "No written informed consent was obtained as this would have compromised the strictly needed high flow of individuals being tested at...": the explanation is not necessary here. As long as this was discussed with the ethical committee and the written consent was waived, this is valid. It is written later in the text as well; not necessary here.
Thank you for this comment. The extra information on why no written informed consent was obtained was added to the manuscript at the request of reviewer 2 and the academic editor, and was clarified for study parts one and two separately.

- Line 233: "The cut-off of 7 days was chosen based...": results OF a study by Bullard... (some rewriting of the English of this sentence is necessary)
Thank you for this remark; the sentence was rewritten. (line 206)

- Line 288: "18 to 83 years (M = 44,": mean or median?
M stands for mean; this was added to the manuscript. (line 259)

- Line 296: "to be the": take out "the" (and from line 301 as well)
Thank you for this correction; 'the' was deleted from both lines. (lines 263 and 267)

- Line 337: "Based the cohort in part two of the study": re-write, word missing...
Thank you for this correction; the sentence was adapted. (line 286)

- Line 384: "overestimate test accuracy in final clinical setting."... grammar
The sentence was rewritten. (line 324)

Submitted filename: Response to Reviewers.docx

7 Apr 2021

PONE-D-20-36174R2
Evaluation of the test accuracy of a SARS-CoV-2 rapid antigen test in symptomatic community dwelling individuals in the Netherlands
PLOS ONE

Dear Nathalie Van der Moeren,

Thank you for submitting your manuscript to PLOS ONE.
After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

PLOS ONE does not copy-edit accepted manuscripts. We therefore request a minor modification of the statement in the ethics section, lines 158-159: "In part one of the study, individuals were informed about the study trough local media, by MHS communication channels (full participant information letter on the website, …)". We accept not adding the URL, as the letter will be in Dutch. However, the statement as it is ("website.....") may confuse readers into thinking there is missing data. Kindly delete the dots after "website". In addition, please change the word "trough" to "through". A final spelling and grammar check of your manuscript before you resubmit would be appreciated.

Please submit your revised manuscript by 13 April 2021. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript: a rebuttal letter that responds to each point raised by the academic editor and reviewer(s), uploaded as a separate file labeled 'Response to Reviewers'; a marked-up copy of your manuscript that highlights changes made to the original version, uploaded as a separate file labeled 'Revised Manuscript with Track Changes'; and an unmarked version of your revised paper without tracked changes, uploaded as a separate file labeled 'Manuscript'.

We look forward to receiving your revised manuscript.
Kind regards,
Eleanor Ochodo, M.D., PhD
Academic Editor
PLOS ONE

10 Apr 2021

- PLOS ONE does not copy-edit accepted manuscripts. We therefore request a minor modification of the statement in the ethics section, lines 158-159: "In part one of the study, individuals were informed about the study trough local media, by MHS communication channels (full participant information letter on the website, …)". We accept not adding the URL as the letter will be in Dutch. However, the statement as it is ("website.....") may confuse readers into thinking there is missing data. Kindly delete the dots after "website".
The dots in question were removed. (line 159)

- In addition, please change the word "trough" to "through".
Thank you for this correction; 'trough' was changed to 'through'. (line 158)

- A final spelling and grammar check of your manuscript before you resubmit would be appreciated.
We performed a final spelling and grammar check; minor corrections were made.

Submitted filename: Response to Reviewers.docx

16 Apr 2021

Evaluation of the test accuracy of a SARS-CoV-2 rapid antigen test in symptomatic community dwelling individuals in the Netherlands
PONE-D-20-36174R3

Dear Nathalie Van Der Moeren,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up to date.
If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Eleanor Ochodo, M.D., PhD
Academic Editor
PLOS ONE

6 May 2021

PONE-D-20-36174R3
Evaluation of the test accuracy of a SARS-CoV-2 rapid antigen test in symptomatic community dwelling individuals in the Netherlands

Dear Dr. Van der Moeren:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Eleanor Ochodo
Academic Editor
PLOS ONE
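Several of the statistical points raised in the review (comment 29 on confidence-interval methods, comment 33 on prevalence-dependent predictive values, and comment 43 on cross tabulation) can be illustrated with a short sketch. This is illustrative only: the counts are hypothetical, back-calculated from the part-one estimates in the abstract (sensitivity 94.1% ≈ 16/17, specificity 100% = 335/335, n = 352), and it uses Wilson score intervals, whereas the manuscript may have used exact (Clopper-Pearson) intervals, so the CI bounds differ slightly from the published 71.1%-100%.

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for a binomial proportion x/n."""
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

def accuracy_from_2x2(tp, fn, fp, tn):
    """Sensitivity and specificity, each with a 95% Wilson CI, from a cross table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))

def npv(sens, spec, prev):
    """Negative predictive value at an assumed pre-test probability (prevalence)."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# Hypothetical 2x2 counts consistent with part one (n = 352, prevalence ~4.8%)
(sens, sens_ci), (spec, spec_ci) = accuracy_from_2x2(tp=16, fn=1, fp=0, tn=335)
print(f"sensitivity {sens:.1%} (95% CI {sens_ci[0]:.1%}-{sens_ci[1]:.1%})")
print(f"specificity {spec:.1%}")

# Predictive values shift with the assumed prevalence, which is why merging
# two cohorts with very different pre-test probabilities (comment 33) is invalid:
for prev in (0.048, 0.20):
    print(f"NPV at prevalence {prev:.0%}: {npv(sens, spec, prev):.1%}")
```

Because specificity is 100% in this sketch, PPV is 100% at any prevalence, while NPV falls as prevalence rises; that prevalence dependence is the reviewer's argument against pooling the two study parts.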
References (5 in total):

1. Mina MJ, Parker R, Larremore DB. Rethinking Covid-19 Test Sensitivity - A Strategy for Containment. N Engl J Med. 2020 Sep 30.
2. He X, Lau EHY, Wu P, et al. Temporal dynamics in viral shedding and transmissibility of COVID-19. Nat Med. 2020 Apr 15.
3. Tom MR, Mina MJ. To Interpret the SARS-CoV-2 Test, Consider the Cycle Threshold Value. Clin Infect Dis. 2020 Nov 19.
4. Widders A, Broom A, Broom J. SARS-CoV-2: The viral shedding vs infectivity dilemma. Infect Dis Health. 2020 May 20.
5. Dinnes J, Deeks JJ, Adriano A, et al. Rapid, point-of-care antigen and molecular-based tests for diagnosis of SARS-CoV-2 infection. Cochrane Database Syst Rev. 2020 Aug 26.