
Diagnostic accuracy of serological tests for the diagnosis of Chikungunya virus infection: A systematic review and meta-analysis.

Anna Andrew1,2, Tholasi Nadhan Navien1, Tzi Shien Yeoh1, Marimuthu Citartan1, Ernest Mangantig1, Magdline S H Sum3, Ewe Seng Ch'ng1, Thean-Hock Tang1.   

Abstract

BACKGROUND: Chikungunya virus (CHIKV) causes febrile illness and is often misdiagnosed as other viral infections, such as dengue and Zika; thus, a laboratory test is needed. Serological tests are commonly used to diagnose CHIKV infection, but their accuracy is questionable owing to the varying degrees of reported sensitivities and specificities. Herein, we conducted a systematic review and meta-analysis to evaluate the diagnostic accuracy of the serological tests currently available for CHIKV.
METHODOLOGY AND PRINCIPAL FINDINGS: A literature search was performed in the PubMed, CINAHL Complete, and Scopus databases from 1 December 2020 until 22 April 2021. Studies reporting the sensitivity and specificity of serological tests for CHIKV that used whole blood, serum, or plasma were included. The QUADAS-2 tool was used to assess the risk of bias and applicability, while R software was used for the statistical analyses. Thirty-five studies were included in this meta-analysis; 72 index test datasets were extracted and analysed. Rapid and ELISA-based antigen tests had pooled sensitivities of 85.8% and 82.2%, respectively, and pooled specificities of 96.1% and 96.0%, respectively. According to our meta-analysis, antigen detection tests serve as a good diagnostic test for acute-phase samples. The IgM detection tests had more than 90% diagnostic accuracy for ELISA-based tests, immunofluorescence assays, in-house developed tests, and samples collected after seven days of symptom onset. Conversely, low sensitivity was found for the IgM rapid tests (42.3%), commercial tests (78.6%), and samples collected less than seven days after symptom onset (26.2%). Although IgM antibodies start to develop on day 2 of CHIKV infection, our meta-analysis revealed that the IgM detection test is not recommended for acute-phase samples. The diagnostic performance of the IgG detection tests was more than 93% regardless of test format and of whether the test was commercially available or developed in-house. The use of samples collected after seven days of symptom onset for the IgG detection tests indicates that IgG antibodies can be detected in convalescent-phase samples. Additionally, we evaluated commercial IgM and IgG tests for CHIKV and found that ELISA-based and IFA commercial tests manufactured by Euroimmun (Lübeck, Germany), Abcam (Cambridge, UK), and InBios (Seattle, WA) had diagnostic accuracies above 90%, similar to the manufacturers' claims.
CONCLUSION: Based on our meta-analysis, antigen- or antibody-based serological tests can be used to diagnose CHIKV reliably, depending on the time of sample collection. The antigen detection tests serve as a good diagnostic test for samples collected during the acute phase (≤7 days post symptom onset) of CHIKV infection. Likewise, the IgM and IgG detection tests can be used for samples collected in the convalescent phase (>7 days post symptom onset). Together with the clinical presentation of the patient, the combination of IgM and IgG tests can differentiate recent from past infections.


Year:  2022        PMID: 35120141      PMCID: PMC8849447          DOI: 10.1371/journal.pntd.0010152

Source DB:  PubMed          Journal:  PLoS Negl Trop Dis        ISSN: 1935-2727


1. Introduction

Chikungunya virus (CHIKV) is transmitted to humans through the bite of Aedes mosquitoes. First isolated in Tanzania in 1953 [1], CHIKV was initially restricted to sporadic outbreaks in Africa and Asia. The three genotypes of CHIKV are named after their geographical origins: East/Central/South African (ECSA), West African, and Asian [2]. A genotypic shift of CHIKV from the Asian to the ECSA genotype was observed during the massive Indian Ocean outbreak in 2004, which affected millions of people [3]. The ECSA genotype of CHIKV then continued to cause outbreaks in India and other parts of Asia [4,5]. Owing to increased human movement and virus adaptability within vectors, CHIKV has been recorded in nonendemic regions of the world [6,7]. To date, CHIKV is widespread in the Americas, Asia, and Africa [8], and the risk of reemergence and transmission remains a public health concern. Chikungunya fever, caused by CHIKV, is characterised by fever, rashes, and severe joint pain. The symptoms can progress to chronic joint pain, affecting the patient’s quality of life [9]. Since no licensed vaccines or therapies are yet available against CHIKV, early diagnosis may allow early control strategies, preventing further outbreaks. As the clinical symptoms of CHIKV infection are similar to those of other viral illnesses, a reliable, sensitive, and specific laboratory test that can distinguish CHIKV infection from other viral infections is urgently needed. According to World Health Organization (WHO) guidelines, the three main laboratory tests for diagnosing CHIKV infection are virus isolation, serological tests, and the molecular technique of polymerase chain reaction (PCR) [10]. The choice of test depends on the number of days since symptom onset. Virus isolation and quantitative reverse transcription-PCR (qRT-PCR) are recommended for samples collected within the first five days of illness, whereas serological tests are used for samples collected five or more days after the onset of illness.
According to the WHO, the immunoglobulin M (IgM) ELISA is the most prevalent serological test used to diagnose CHIKV infection. Compared with standard methods such as virus isolation, qRT-PCR, and the plaque reduction neutralisation test (PRNT), antigen- and antibody-based serological tests are easier to perform, cost-effective, and require minimal resources. Following the Indian Ocean outbreaks in 2004, studies on CHIKV serological tests increased tremendously [11]. However, the diagnostic accuracy of these serological tests is uncertain owing to the varying degrees of reported sensitivities and specificities. To assess the diagnostic accuracy of the existing CHIKV serological assays, we performed a systematic review and meta-analysis. As different analytes are detected at different time points of sample collection, the diagnostic performance of serological tests detecting CHIKV antigen, IgM, and IgG antibodies was determined separately.

2. Methods

2.1 Study registration

We adopted the Preferred Reporting Items for Systematic Reviews and Meta-Analyses of Diagnostic Test Accuracy (PRISMA-DTA) guideline in preparing this report [12]. This systematic review was registered in the PROSPERO database under CRD42021227523.

2.2 Inclusion and exclusion criteria

Inclusion criteria in this systematic review were studies that 1) enrolled suspected chikungunya patients regardless of age, gender, or other health status; 2) assessed the diagnostic performance of either antigen- or antibody-based serological tests; 3) used virus isolation, cell culture, or molecular methods as the reference standard for antigen detection tests; 4) used human serum, plasma, or whole blood as samples; and 5) contained sufficient information to tabulate a 2 × 2 contingency table. Other research materials such as conference abstracts, commentaries, review articles, editorials, notes, and studies that did not specify the reference methods were excluded.
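Each included index test therefore reduces to four counts. As a minimal sketch (in Python for illustration only; the authors ran their analyses in R), sensitivity and specificity follow directly from the 2 × 2 table. The helper name below is ours, and the example counts are taken from the Huits 2018 rapid antigen test row of Table 1:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2 x 2 contingency table."""
    sensitivity = tp / (tp + fn)  # probability of correctly identifying disease
    specificity = tn / (tn + fp)  # probability of correctly excluding disease
    return sensitivity, specificity

# Huits 2018 rapid antigen test (Table 1): TP=18, FP=12, FN=21, TN=46
sens, spec = diagnostic_accuracy(18, 12, 21, 46)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
# sensitivity = 46.2%, specificity = 79.3%
```

Pooled estimates in the meta-analysis are weighted combinations of such per-study values, not a simple sum of the counts.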

2.3 Literature search strategy

The literature search was performed in the PubMed, CINAHL Complete, and Scopus databases from 1 December 2020 until 22 April 2021. The search was limited to journal articles written in English and published from the year 2000 onwards. The year 2000 was chosen as the cutoff because CHIKV infection had been neglected before the outbreak of unprecedented magnitude in the Indian Ocean territories in 2004 [11]; consequently, few studies on CHIKV serological tests were available before 2000. We also screened the reference lists of all included studies to identify relevant literature. The detailed search strategies for each database are shown in the S1 Appendix. All articles were imported into EndNote X9.2 (Clarivate Analytics, USA) for study selection. After the full-text screening stage, we documented the reasons for excluding studies in a PRISMA flow diagram.

2.4 Data extraction

Data extraction was done independently by two reviewers (AA and YTS) according to the inclusion criteria mentioned above. In addition to the true positive, false positive, false negative, and true negative counts, information such as author details, study design, sample size, index test format, reference test description, and time of sample collection was extracted from the articles. Any ambiguities in the extracted data were resolved by mutual agreement among the authors. A study could evaluate more than one index test, and all index test data reported in each study were extracted. One study [13] reported diagnostic accuracy from three different laboratories, namely CDC, CARPHA, and NML. As each of these laboratories evaluated a different set of index tests, we named these datasets according to the laboratories (i.e., Johnson (CDC), Johnson (CARPHA), and Johnson (NML)). For studies developing serological tests with different antigens or antibodies in the same test format, only the optimised index test data (highest diagnostic accuracy) were extracted for analysis.

2.5 Quality assessment

2.5.1 Study design

Analysis based on study design was done to determine each study’s reliability and quality of evidence. We divided the study designs into cohort, case-control, and partial cohort/partial case-control. A cohort study used samples from suspected chikungunya patients (patients presenting with fever and/or rash, myalgia, or arthralgia) to determine the accuracy of a test. A case-control study used samples from confirmed chikungunya-positive patients to determine test sensitivity and serum samples from healthy individuals to determine test specificity. A partial cohort/partial case-control study assessed the diagnostic accuracy of a test using cohort samples as well as samples positive for other pathogens (for example, dengue virus, Ross River virus (RRV), and O’nyong-nyong virus (ONN)). For this analysis, cohort studies were pooled with partial cohort/partial case-control studies and compared with case-control studies.

2.5.2 Risk of bias and applicability

The Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool was used to evaluate the quality and bias of each study [14]. The four domains evaluated were patient selection, index test, reference standard, and flow and timing (flow of patients through the study and timing of the index test and reference standard). The risk of bias was rated as low, high, or unclear in each domain, while concerns regarding applicability were assessed only for the first three domains. Slight modifications were made to the signalling questions from the original tool: when more than one signalling question in a domain was answered “no” or “unclear”, that domain was rated as having a high risk of bias (see S2 Appendix). Two reviewers (AA and NTN) independently assessed the quality of each study, and any disagreements were resolved through a consensual approach. The graphs for the risk of bias and the applicability concerns were generated using Review Manager 5.4 software.
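The modified rating rule can be sketched as follows. This is a hypothetical helper (the function name and the collapsing of "low" and "unclear" into a single non-high outcome are our simplifications, not the authors' exact procedure), illustrating the "more than one flagged signalling question means high risk" threshold:

```python
def rate_domain(answers):
    """Rate one QUADAS-2 domain from its signalling-question answers.

    answers: iterable of "yes", "no", or "unclear" (one per signalling question).
    Per the modified rule, more than one "no"/"unclear" answer means high risk
    of bias; otherwise the domain is treated as low risk here (a simplification;
    in practice a single flag calls for reviewer judgement).
    """
    flagged = sum(1 for a in answers if a in ("no", "unclear"))
    return "high" if flagged > 1 else "low"

print(rate_domain(["yes", "no", "unclear"]))  # two flags -> "high"
print(rate_domain(["yes", "yes", "no"]))      # one flag  -> "low"
```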

2.6 Data analysis

A meta-analysis was performed in R software version 4.0.5 using the "meta" package. Pooled estimates of sensitivity (the probability of a test correctly identifying those with the disease) and specificity (the probability of a test correctly excluding those without the disease) with 95% confidence intervals were calculated using a random-effects model (maximum-likelihood estimation), and the summary was presented in a paired forest plot. A random-effects model was chosen to account for the heterogeneity present within and between studies [15]. Heterogeneity between studies was estimated using the I² statistic (the proportion of total variation across studies attributable to heterogeneity). An I² value of 75% and above was rated as high, 50–74% as medium, and 25–49% as low heterogeneity. A funnel plot asymmetry test was used to assess publication bias [16].
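The I² statistic can be illustrated with a short stdlib-only sketch (shown in Python for brevity; the authors' analysis used the R "meta" package, whose maximum-likelihood random-effects machinery is more involved). The helper below is hypothetical: it computes Higgins' I² from Cochran's Q under simple inverse-variance fixed-effect pooling:

```python
def i_squared(estimates, variances):
    """Higgins' I^2 (%) from study estimates and their within-study variances."""
    weights = [1.0 / v for v in variances]  # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, estimates))  # Cochran's Q
    df = len(estimates) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Identical study estimates -> no between-study heterogeneity
print(i_squared([0.9, 0.9, 0.9], [0.01, 0.01, 0.01]))  # 0.0
# Widely spread estimates with small variances -> high heterogeneity (~93.4)
print(i_squared([0.4, 0.7, 0.95], [0.005, 0.005, 0.005]))
```

In practice the proportions are typically transformed (e.g. logit) before pooling; the interpretation of the I² bands (25–49% low, 50–74% medium, ≥75% high) is as in the text.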

2.6.1 Subgroup analysis

The source of heterogeneity was investigated by stratifying the data based on the analytes detected by the serological tests, namely antigen, IgM, and IgG antibodies. We further assessed the source of heterogeneity by classifying the data by test format (ELISA-based, IFA, and rapid test), commercial versus in-house developed tests, and time of sample collection (samples collected on days 1 to 7 versus after 7 days from the onset of clinical symptoms). For commercial tests (specific brands) with two or more diagnostic accuracy studies, meta-analyses were done for each individual commercial kit; only samples collected after 7 days from the onset of clinical symptoms were included in this analysis. The kit sensitivity and specificity reported by the manufacturers were also compared with the accuracy reported in this study. All analyses were done in R software to calculate pooled estimates of sensitivity and specificity. The Mann-Whitney or Kruskal-Wallis test was used to compare sensitivity and specificity values between groups.
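As a minimal sketch of the between-group comparison (in Python with made-up illustrative sensitivities, not the study data; the authors ran these tests in R), the Mann-Whitney U statistic counts pairwise wins between two subgroups:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for group x versus group y.

    Counts the pairs (xi, yj) where xi > yj, with ties counting 0.5.
    A p-value would normally be derived from U's null distribution
    (e.g. via scipy.stats.mannwhitneyu); only U is computed here.
    """
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

# Illustrative per-test sensitivities (hypothetical numbers, not study data)
elisa_sens = [0.93, 0.88, 0.95, 0.81]
rapid_sens = [0.42, 0.35, 0.55]
print(mann_whitney_u(elisa_sens, rapid_sens))  # 12.0: every ELISA value beats every rapid value
```

The Kruskal-Wallis test generalises this rank-based comparison to three or more subgroups (e.g. ELISA-based vs rapid vs IFA).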

3. Results

3.1 Literature search results

A total of 563 articles were identified through the aforementioned databases. After removing duplicates, the remaining articles underwent title and abstract screening. Thereafter, 40 articles were subjected to full-text evaluation against the inclusion criteria. Three studies did not specify the reference standard [17-19], while one did not provide sufficient details for constructing the 2 × 2 contingency table [20]. In addition, one study involving cerebrospinal fluid (CSF) samples was excluded [21]. The remaining 35 articles were included in the meta-analysis (Fig 1).
Fig 1

PRISMA flow diagram.

3.2 Characteristics of the included studies

We tabulated 72 sets of data from the 35 studies. Of the 72 index tests assessed, 7 were antigen detection tests, 48 were IgM, 15 were IgG, and 2 were neutralising antibody detection tests. Tables 1–4 show the data for each analyte, and Table 5 summarises the characteristics of the included studies. A total of 10563 participants were included in this study, with 880 participants tested for antigen, 7613 for IgM, 1539 for IgG, and 531 for neutralising antibodies. Most of the studies (70%) did not specify the time of sample collection or the clinical background of the study participants. Only five studies (14.3%) specified that the samples were collected from hospitalised patients, and six studies (17.1%) used patient samples collected during CHIKV outbreaks.
Table 1

Characteristics of the studies on antigen detection tests included in the meta-analysis.

Author | Year | Study design | Reference test | Index test format | Index test (commercial/in-house) | Time of sample collection (days post symptom onset) | Total samples | TP | FP | FN | TN | Ref
Huits | 2018 | Partial cohort and case-control | RT-PCR | Rapid test | In-house | 1 to 10 | 97 | 18 | 12 | 21 | 46 | [22]
Jain | 2018 | Case-control | qRT-PCR | Rapid test | In-house | 1 to 15 | 123 | 74 | 2 | 5 | 42 | [23]
Kashyap | 2010 | Cohort | RT-PCR or qRT-PCR or virus isolation | Antigen indirect ELISA | In-house | 1 to >20 | 128 | 98 | 2 | 11 | 17 | [24]
Khan | 2014 | Cohort | RT-PCR | Antigen capture ELISA | In-house | NA | 60 | 35 | 0 | 3 | 22 | [25]
Okabayashi | 2015 | Cohort | RT-PCR | Rapid test | In-house | NA | 112 | 68 | 2 | 8 | 34 | [26]
Reddy | 2020 | Cohort | qRT-PCR | Antigen indirect ELISA | In-house | 1 to 5 | 160 | 51 | 2 | 49 | 58 | [27]
Suzuki | 2020 | Partial cohort and case-control | RT-PCR | Rapid test | In-house | 1 to 7 | 200 | 92 | 0 | 8 | 100 | [28]

Note: TP, true positive; FP, false positive; FN, false negative; TN, true negative; Ref, reference; NA, not available

Table 4

Characteristics of studies on neutralising antibodies detection tests.

Author | Year | Study design | Reference test | Index test format | Index test (commercial/in-house) | Time of sample collection (days post symptom onset) | Total samples | TP | FP | FN | TN | Ref
Goh | 2015 | Case-control | Indirect immunofluorescence antibody assay and haemagglutination inhibition | Epitope blocking ELISA | In-house | NA | 80 | 60 | 1 | 0 | 19 | [54]
Morey | 2010 | Cohort | RT-PCR and/or qRT-PCR or virus isolation | Peptide ELISA | In-house | NA | 28 | 17 | 2 | 2 | 7 | [55]

Note: TP, true positive; FP, false positive; FN, false negative; TN, true negative; Ref, reference; NA, not available

Table 5

Characteristics of the Index tests (n = 72) from the 35 included studies.

Characteristic | No. (%)
Analyte
    IgM antibodies | 48 (66.7)
    IgG antibodies | 15 (20.8)
    Neutralising antibodies | 2 (2.8)
    Antigen | 7 (9.7)
Index test
    Commercial assay | 39 (54.2)
    In-house developed assay | 33 (45.8)
Index test format
    ELISA-based | 46 (63.9)
    Rapid test | 20 (27.8)
    Immunofluorescence assay | 6 (8.3)
Study design
    Cohort | 17 (23.6)
    Case-control | 18 (25.0)
    Partial cohort and partial case-control | 37 (51.4)

3.3 Diagnostic accuracy of serological tests for CHIKV infection

A meta-analysis based on the analytes (CHIKV antigen, IgM, and IgG antibodies) was done in this study. The forest plots for antigen, IgM, IgG, and neutralising antibodies (see S1 Fig) show that the sensitivity across studies ranged from 0 to 1.0, while the specificity ranged from 0.73 to 1.0. Based on the available information, the source of heterogeneity was further evaluated by test format, in-house developed versus commercial test, and time of sample collection. As there were only two studies on neutralising antibody detection tests, subgroup analysis was not performed for them.

3.4 Antigen detection test

All seven antigen detection studies used molecular methods and/or virus isolation as the reference test, and none of the antigen detection tests was commercially available. The samples used for the antigen detection tests were collected from 1 to more than 20 days post symptom onset (Table 1). The forest plot for the antigen detection tests by test format is shown in Fig 2. The meta-analysis showed no difference in diagnostic performance between rapid and ELISA-based tests (P > 0.05) (Table 6). The heterogeneity of the sensitivity was high for both test formats, while moderate heterogeneity was observed for the specificity of the rapid antigen detection tests.
Fig 2

Forest plot for antigen detection test based on test format; CI, confidence interval; TP, true positive; FP, false positive; FN, false negative; TN, true negative.

Table 6

Analysis for antigen detection tests.

Test format | No. of index tests | Sample size | Pooled sensitivity, % [95% CI] | Sensitivity I² [95% CI] | P-value | Pooled specificity, % [95% CI] | Specificity I² [95% CI] | P-value
Rapid test | 4 | 532 | 85.8 [65.6; 95.1] | 93.0% [85.2; 96.7] | 1 a | 96.1 [81.9; 99.3] | 56.9% [0.0; 85.7] | 0.721 a
ELISA-based | 3 | 348 | 82.2 [55.6; 94.4] | 95.1% [89.1; 97.8] | | 96.0 [89.9; 98.5] | 0.0% [0.0; 85.1] |

Abbreviations: CI, confidence interval; ELISA, enzyme-linked immunosorbent assay; I², inconsistency index

a Mann-Whitney test


3.5 IgM detection test

A variety of reference standards were used in the diagnostic accuracy studies of the IgM detection tests, including molecular methods, in-house developed serology tests, and commercial kits (Table 2). Some studies used a molecular method to confirm CHIKV infection in samples collected in the first days after symptom onset; later samples from the same patients were then used to evaluate the IgM detection test.
Table 2

Characteristics of studies on IgM detection tests included in the meta-analysis.

AuthorYearStudy designReference testIndex test formatIndex test (Commercial/ In-house)Time of sample collection (day of post symptom onset)Total number of samplesTPFPFNTNRef
Bagno2020Partial cohort and case-controlAnti-chikungunya IgG ELISA kit (Euroimmun, Germany)IgM Indirect ELISAIn-houseNA1445711076[29]
Bhatnagar2015Case-controlRT-PCR and IgM kitIgM Indirect ELISAIn-house7 to 23 b90450045[30]
Blacksell2011CohortHemagglutination inhibition (HI) and/or IgM antibody capture ELISA and/or RT-PCRRapid testCommercial (SD Diagnostics)3 to 7 a29221550225[31]
Blacksell2011CohortHemagglutination inhibition (HI) and/or IgM antibody capture ELISA and/or RT-PCRMAC-ELISACommercial (SD Diagnostics)3 to 7 a29221850222[31]
Blacksell2011CohortHemagglutination inhibition (HI) and/or IgM antibody capture ELISA and/or RT-PCRMAC-ELISACommercial (SD Diagnostics)19 to 30 b29244218219[31]
Cho2008Case-controlIgM capture ELISA (Lyon, France)IgM Indirect ELISA (E1)In-houseNA60310920[32]
Cho2008Case-controlIgM capture ELISA (Lyon, France)IgM Indirect ELISA (E2)In-houseNA60360420[32]
Cho2008Case-controlIgM capture ELISA (Lyon, France)IgM Indirect ELISA (Capsid)In-houseNA60340620[33]
Cho2008Case-controlIgM capture ELISA (Lyon, France)Rapid test (Capsid)In-houseNA60350520[33]
Damle2016CohortMAC-ELISA (National Institute of Virology, Pune)MAC-ELISA (Capsid)In-houseNA24867010171[34]
Galo2017CohortCDC-MAC-ELISA (Atlanta, Georgia, United States)MAC-ELISAIn-house~5.9 a1981131381[35]
Johnson (CDC)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTIgM Indirect ELISACommercial (Euroimmun)2 to 3392511139[13]
Johnson (CDC)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTIFACommercial (Euroimmun)2 to 3375343038[13]
Johnson (CDC)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTMAC-ELISACommercial (Abcam)2 to 3370361033[13]
Johnson (CDC)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTMAC-ELISACommercial (InBios)2 to 3371360035[13]
Johnson (CDC)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTMAC-ELISACommercial (CTK Biotech)2 to 332020144[13]
Johnson (CDC)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTMAC-ELISACommercial (Genway)2 to 3343002716[13]
Johnson (CDC)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTMAC-ELISACommercial (SD Diagnostics)2 to 33441221911[13]
Johnson (CDC)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTRapid testCommercial (SD Diagnostics)2 to 333100247[13]
Johnson (CDC)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTRapid testCommercial (CTK Biotech)2 to 332730204[13]
Johnson (CARPHA)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTIndirect ELISACommercial (Euroimmun)NA36260010[13]
Johnson (CARPHA)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTIFACommercial (Euroimmun)NA33211011[13]
Johnson (CARPHA)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTMAC-ELISACommercial (Abcam)NA46360010[13]
Johnson (CARPHA)2016Partial cohort and case-controlCDC MAC-ELISA and PRNTMAC-ELISACommercial (InBios)NA41271013[13]
Johnson (NML)2016Partial cohort and case-controlCDC MAC-ELISA and PRNT and/or qRT-PCR and/or hemagglutination inhibition assayIndirect ELISACommercial (Euroimmun)NA2479466141[13]
Khan2014CohortRT-PCR and in-house indirect IgM ELISAIndirect ELISAIn-houseNA96682026[25]
Khan2014CohortRT-PCR and in-house indirect IgM ELISAMAC-ELISAIn-houseNA96670128[25]
Kikuti2020CohortRT-PCRMAC-ELISACommercial (InBios)1 to 7 a36965144214[36]
Kikuti2020CohortRT-PCRMAC-ELISACommercial (InBios)8 to >30 b26661195181[36]
Kikuti2020CohortRT-PCRIndirect ELISACommercial (Euroimmun)1 to 7 a3541524130185[36]
Kikuti2020CohortRT-PCRIndirect ELISACommercial (Euroimmun)8 to >30 b25863312162[36]
Kosasih2012Partial cohort and case-controlIn-house IgM capture ELISA and/or RT-PCRRapid testCommercial (CTK Biotech)1 to ≥2120627010574[37]
Kosasih2012Partial cohort and case-controlIn-house IgM capture ELISA and/or RT-PCRRapid testCommercial (SD Diagnostics)1 to ≥212066786566[37]
Lee2020Case-controlEuroimmun and Inbios IgM ELISARapid testCommercial (Boditech Med Inc)NA2205710162[38]
Litzba2008Case-controlIn-house IgM capture ELISA or in-house IIFTIFACommercial (Euroimmun)NA24612724113[39]
Matheus2015CohortqRT-PCR and/or MAC-ELISAMAC-ELISAIn-house>5 b58151042[40]
Mendoza2019Case-controlPlaque reduction neutralization test (PRNT) and/or RT-PCRIgM Indirect ELISACommercial (Euroimmun)NA2121540751[41]
Prat2014Partial cohort and case-controlIn-house MAC-ELISA and PRNTRapid testCommercial (SD Diagnostics)NA2534711[42]
Prat2014Partial cohort and case-controlIn-house MAC-ELISA and PRNTRapid testCommercial (CTK Biotech)NA2521814[42]
Prat2014Partial cohort and case-controlIn-house MAC-ELISA and PRNTMAC-ELISACommercial (IBL International)NA53223622[42]
Prat2014Partial cohort and case-controlIn-house MAC-ELISA and PRNTIgM Indirect ELISACommercial (Euroimmun)NA50225419[42]
Priya2014Partial cohort and case-controlSD IgM ELISA (Standard Diagnostics, South Korea)IgM Indirect ELISAIn-house3 to 10 b90482040[43]
Rianthavorn2010CohortSemi-nested RT-PCR and ELISA kit (SD BIOLINE)Rapid testCommercial (SD Diagnostics)1 to 6 a3673317153164[44]
Rianthavorn2010CohortSemi-nested RT-PCR and ELISA kit (SD BIOLINE)Rapid testCommercial (SD Diagnostics)7 to >14 b16067231456[44]
Theillet2019Case-controlIn-house MAC-ELISARapid testIn-houseNA782411043[45]
Verma2014Case-controlRT-PCR or IgM kitIgM Indirect ELISAIn-house7 to 15 b1951150872[46]
Wang2019Partial cohort and case-controlELISA kit (Euroimmun)Rapid testIn-houseNA109103294[47]
Wasonga2015CohortIgM-capture ELISA (CDC) and focus reduction neutralization testMAC-ELISAIn-houseNA148513589[48]
Yap2010Partial cohort and case-controlRT-PCR and IgM serologyRapid testCommercial (CTK Biotech)1 to 6 a1412406750[49]
Yap2010Partial cohort and case-controlRT-PCR and IgM serologyRapid testCommercial (CTK Biotech)7 to 40 b932302050[49]
Yap2010Partial cohort and case-controlRT-PCR and IgM serologyIFACommercial (Euroimmun)1 to 6 a2409209850[49]
Yap2010Partial cohort and case-controlRT-PCR and IgM serologyIFACommercial (Euroimmun)7 to 40 b145950050[49]
Yap2010Partial cohort and case-controlRT-PCR and IgM serologyMAC-ELISA (226A)In-house1 to 6 a2409629448[49]
Yap2010Partial cohort and case-controlRT-PCR and IgM serologyMAC-ELISA (226A)In-house7 to 40 b145952048[49]
Yap2010Partial cohort and case-controlRT-PCR and IgM serologyMAC-ELISA (226V)In-house1 to 6 a24011827248[49]
Yap2010Partial cohort and case-controlRT-PCR and IgM serologyMAC-ELISA (226V)In-house7 to 40 b145952048[49]

Note: TP, true positive; FP, false positive; FN, false negative; TN, true negative; Ref, reference; NA, not available

a Acute samples

b Convalescent samples

Subgroup analyses were conducted for the IgM detection tests based on test format, in-house developed versus commercial tests, and sampling time. The three test formats available for the IgM detection tests were rapid, ELISA-based, and immunofluorescence assay (IFA). Regardless of the test format, the forest plot (Fig 3) shows that the sensitivity estimates vary more widely than the specificity estimates. The meta-analyses revealed that the rapid tests had the poorest sensitivity at 42.3% (95% CI 19.2 to 69.4) (Table 7), which was statistically different from the ELISA-based tests (93.4%; 95% CI 81.7 to 97.8; P = 0.002) and IFA (99.3%; 95% CI 69.4 to 100; P = 0.027), while no significant difference was found between the sensitivities of the IFA and ELISA-based tests (P = 0.414).
Fig 3

Forest plot for IgM detection test based on test format; CI, confidence interval; TP, true positive; FP, false positive; FN, false negative; TN, true negative.

Table 7

Analysis for IgM detection tests.

Subgroup | No. of index tests | Sample size | Pooled sensitivity, % [95% CI] | Sensitivity I² [95% CI] | P-value | Pooled specificity, % [95% CI] | Specificity I² [95% CI] | P-value
Test format
ELISA-based | 31 | 5169 | 93.4 [81.7; 97.8] | 93.0% [91.3; 94.4] | 0.003 a, b | 96.8 [95.0; 98.0] | 37.4% [6.2; 58.2] | 0.796 a
Rapid test | 13 | 2040 | 42.3 [19.2; 69.4] | 92.2% [88.8; 94.6] | | 97.1 [92.0; 99.0] | 72.0% [52.9; 83.3] |
IFA | 4 | 739 | 99.3 [69.4; 100] | 91.0% [82.0; 95.5] | | 98.0 [93.6; 99.4] | 0.0% [0.0; 72.4] |
Commercial vs in-house
Commercial | 30 | 5388 | 78.6 [51.0; 92.8] | 94.0% [92.5; 95.1] | <0.001 c | 95.9 [93.3; 97.6] | 59.3% [41.2; 71.8] | 0.006 c
In-house | 18 | 2560 | 94.7 [87.7; 97.8] | 86.4% [80.4; 90.6] | | 98.0 [96.9; 98.8] | 0.0% [0.0; 0.0] |
Time of sample collection
≤7 days | 10 | 2733 | 26.2 [9.0; 56.0] | 96.5% [95.0; 97.5] | <0.001 c | 95.8 [92.5; 97.7] | 52.4% [2.5; 76.8] | 0.914 c
>7 days | 12 | 1936 | 98.4 [90.7; 99.7] | 73.7% [53.3; 85.2] | | 96.6 [91.0; 98.8] | 69.9% [45.6; 83.4] |

Abbreviations: CI, confidence interval; ELISA, enzyme-linked immunosorbent assay; IFA, immunofluorescence assay; I², inconsistency index

a Kruskal-Wallis test

b pairwise tests ELISA-based vs rapid test, P = 0.002; pairwise test rapid test vs IFA, P = 0.027; pairwise test ELISA-based vs IFA, P = 0.414.

c Mann-Whitney test

More than half of the IgM detection tests investigated (60%) were commercially available, and the sensitivity of these tests was highly variable compared with the in-house developed tests (Fig 4). According to our meta-analysis, the diagnostic accuracy of the in-house developed tests was significantly higher than that of the commercial IgM tests (Table 7).
Fig 4

Forest plot for IgM detection test based on in-house developed and commercial test; CI, confidence interval; TP, true positive; FP, false positive; FN, false negative; TN, true negative.

The sample collection times for the IgM detection tests ranged from day 1 to day 40 after the onset of symptoms. For studies that provided sample collection times, we categorised samples collected ≤7 days post symptom onset as acute-phase samples and samples collected >7 days post symptom onset as convalescent-phase samples (Table 2). The forest plot (Fig 5) shows that the sensitivity estimates for samples collected ≤7 days after symptom onset mostly lie on the left side of the plot. Consistent with this observation, our meta-analysis shows that the sensitivity for samples collected ≤7 days after symptom onset was significantly lower than for samples collected >7 days post symptom onset (Table 7). These results indicate that the IgM detection tests had low accuracy for acute-phase samples.
Fig 5

Forest plot for IgM detection test based on time of sampling; CI, confidence interval; TP, true positive; FP, false positive; FN, false negative; TN, true negative.

The heterogeneity of the sensitivity was moderate to high (I² of 73.7 to 96.5%) across all subgroups of the IgM detection tests, whereas the specificity showed low to moderate heterogeneity (I² of 0 to 72.0%) (Table 7).

3.6 IgG detection test

The reference standards used in the IgG detection test studies included commercial kits, in-house developed ELISAs, IFA, and PRNT. The time of sample collection for the IgG detection tests ranged from 7 to 90 days post symptom onset. Subgroup analyses based on test format and in-house developed versus commercial tests were done for the IgG detection tests. The forest plot for the three test formats (ELISA-based, rapid test, and IFA) is shown in Fig 6. We found no difference (P > 0.05) in the diagnostic performance of the three test formats, although the rapid tests showed the highest point estimate of accuracy (Table 8). This result has to be interpreted with caution, as the sample sizes for the IFA and rapid IgG detection tests were relatively small compared with the ELISA-based tests.
Fig 6

Forest plot for IgG detection test based on test format; CI, confidence interval; TP, true positive; FP, false positive; FN, false negative; TN, true negative.

Table 8

Analysis for IgG detection tests.

Subgroup | No. of index tests | Sample size | Pooled sensitivity, % [95% CI] | I2 [95% CI] | P-value | Pooled specificity, % [95% CI] | I2 [95% CI] | P-value
Test format
IFA | 2 | 243 | 96.0 [89.9; 98.5] | 0.0% [0.0; 0.0] | 0.269 a | 99.1 [61.0; 100] | 0.0% [0.0; 0.0] | 0.220 a
ELISA-based | 10 | 1147 | 93.0 [85.9; 96.6] | 83.6% [71.3; 90.6] | | 96.4 [91.2; 98.6] | 4.0% [0.0; 63.9] |
Rapid test | 3 | 438 | 99.3 [28.8; 100] | 0.0% [0.0; 0.0] | | 100 [0.0; 100] | 0.0% [0.0; 0.0] |
Commercial vs in-house
Commercial | 9 | 1038 | 95.3 [87.4; 98.4] | 82.3% [67.6; 90.3] | 0.475 b | 97.8 [91.6; 99.4] | 0.0% [0.0; 50.9] | 0.238 b
In-house | 6 | 790 | 93.2 [82.8; 97.5] | 72.4% [36.3; 88.0] | | 99.6 [89.5; 100] | 0.0% [0.0; 59.9] |

Abbreviations: CI, confidence interval; ELISA, enzyme-linked immunosorbent assay; IFA, immunofluorescence assay; I2, inconsistency index

a Kruskal-Wallis test

b Mann-Whitney test

We compared the diagnostic performance of commercial and in-house developed IgG tests. Fig 7 shows the forest plot for the commercial and in-house developed IgG tests; our analysis showed no difference in the diagnostic accuracy of the two groups (Table 8). In summary, the CHIKV IgG detection tests had high diagnostic accuracy, with more than 93% sensitivity and specificity regardless of test format and of whether the test was commercial or developed in-house.
Fig 7

Forest plot for IgG detection test based on in-house developed and commercial test; CI, confidence interval; TP, true positive; FP, false positive; FN, false negative; TN, true negative.

Sensitivity heterogeneity in the subgroup analysis of the IgG tests ranged from moderate to high (I2 of 72.4% to 83.6%), except for the IFA and rapid tests, which showed no heterogeneity (Table 8). There was no heterogeneity in the specificity of any of the IgG detection tests.

3.7 Subgroup analysis of commercial serological tests for CHIKV

A meta-analysis was performed for nine commercial tests detecting IgM and IgG antibodies (Table 9). The data for the meta-analysis based on test format are available in sections A-C of S1 Table. Most commercial kits indicate testing with samples taken between 6 and 8 days after symptom onset; therefore, data from samples collected less than 7 days after symptom onset were excluded from this analysis. The commercial kit studies mostly used cohort or partial cohort partial case-control study designs; a case-control design was used in only two of the studies [39,41].
Table 9

Subgroup analysis for commercial tests.

Test | Manufacturer | No. of studies | Sample size | Pooled sensitivity, % [95% CI] | I2 [95% CI] | Sensitivity reported by manufacturer | Pooled specificity, % [95% CI] | I2 [95% CI] | Specificity reported by manufacturer
ELISA-based
Anti-CHIKV ELISA (IgM) | Euroimmun, Lübeck, Germany | 6 | 895 | 95.3 [92.9; 97.0] | 25.5% [0.0; 64.0] | 98.1 | 95.2 [84.9; 98.6] | 66.6% [20.3; 86.0] | 98.9
Anti-CHIKV ELISA (IgG) | Euroimmun, Lübeck, Germany | 3 | 295 | 95.5 [91.6; 97.6] | 30.4% [0.0; 92.8] | NA | 91.5 [78.0; 97.1] | 55.0% [0.0; 87.2] | NA
SD Chikungunya IgM ELISA | Standard Diagnostics Inc., Yongin-si, Korea | 2 | 336 | 65.3 [28.9; 89.8] | 93.9% [80.7; 98.1] | 93.6 | 90.9 [86.7; 93.9] | 0.0% [0.0; 0.0] | 95.9
Anti-Chikungunya Virus IgM Human ELISA Kit | Abcam, UK | 2 | 116 | 100 [0; 100] | 0.0% [0.0; 0.0] | >90 | 97.7 [85.6; 99.7] | 0.0% [0.0; 0.0] | >90
CHIKjj Detect MAC-ELISA | InBios, Seattle, WA, USA | 3 | 378 | 98.6 [64.9; 100] | 0.0% [0.0; 0.0] | >90 | 92.0 [87.9; 94.8] | 0.0% [0.0; 0.0] | >90
Immunofluorescence assay (IFA)
Anti-CHIKV IIFT (IgG) | Euroimmun, Lübeck, Germany | 2 | 243 | 96.0 [89.9; 98.5] | 0.0% [0.0; 0.0] | 95 | 99.1 [61; 100] | 0.0% [0.0; 0.0] | 96
Anti-CHIKV IIFT (IgM) | Euroimmun, Lübeck, Germany | 4 | 499 | 98.1 [91.5; 99.6] | 0.0% [0.0; 0.0] | 100 | 98.6 [95.8; 99.5] | 0.0% [0.0; 72.6] | 96
Rapid test
On-site CHIK IgM Combo Rapid test | CTK Biotech Inc., San Diego, CA, USA | 3 | 145 | 27.9 [10.8; 55.2] | 81.0% [40.5; 93.9] | 90.4 | 98.7 [84.9; 99.9] | 0.0% [0.0; 0.0] | 98
SD BIOLINE Chikungunya IgM | Standard Diagnostics Inc., Yongin-si, Korea | 3 | 216 | 19.1 [0.6; 90.0] | 80.7% [39.4; 93.8] | 97.1 | 73.3 [63.8; 81.0] | 0.0% [0.0; 0.0] | 98.9

Abbreviations: CI, confidence interval; NA, not available; I2, inconsistency index.

There were three types of commercial tests: ELISA-based, IFA, and rapid tests. All of the tests (ELISA and IFA) developed by Euroimmun (Lübeck, Germany) showed more than 90% sensitivity and specificity, and no heterogeneity was found in the diagnostic performance of the IFAs (Table 9). Two further ELISA-based assays were developed by Abcam (Cambridge, UK) and InBios (Seattle, WA, USA); both showed high diagnostic performance with no heterogeneity. Meanwhile, the ELISA-based and rapid IgM tests developed by Standard Diagnostics Inc. (Yongin-si, South Korea) performed poorly compared with tests of the same format from other manufacturers. The sensitivity of another IgM rapid test, developed by CTK Biotech Inc. (San Diego, CA, USA), was equally poor (27.9%; 95% CI 10.8 to 55.2). The sensitivity of both rapid tests was markedly lower than that claimed by their manufacturers. In summary, among the commercial tests, ELISA-based tests and IFAs outperformed the rapid tests in diagnostic performance.

3.8 Quality assessment

3.8.1 Study design

The diagnostic performance of the case-control and cohort/partial cohort partial case-control study designs was compared (Table 10). No such analysis was done for the antigen detection tests, since only one antigen study used a case-control design (Table 1). For the IgM detection tests, both the sensitivity and the specificity differed significantly (P < 0.05) between the two study designs, whereas for the IgG detection tests only the specificity differed significantly. Overall, the case-control studies yielded higher diagnostic accuracy than the cohort/partial cohort partial case-control studies.
Table 10

Subgroup analysis for study design.

Study design | No. of index tests | Pooled sensitivity, % [95% CI] | I2 [95% CI] | P-value | Pooled specificity, % [95% CI] | I2 [95% CI] | P-value
IgM
Case-control | 10 | 93.1 [86.3; 96.7] | 72.5% [47.9; 85.5] | 0.001 a | 99.3 [98.1; 99.7] | 0.0% [0.0; 0.0] | <0.001 a
Cohort/partial cohort partial case-control | 38 | 83.2 [62.2; 93.7] | 92.4% [90.5; 93.9] | | 96.1 [94.0; 97.5] | 47.4% [23.0; 64.0] |
IgG
Case-control | 6 | 95.0 [89.6; 97.7] | 76.3% [46.8; 89.4] | 0.905 a | 99.8 [84.1; 100] | 0.0% | 0.015 a
Cohort/partial cohort partial case-control | 9 | 94.3 [82.6; 98.3] | 65.3% [29.4; 83.0] | | 94.6 [89.0; 97.4] | 0.0% [0.0; 58.1] |

Abbreviations: CI, confidence interval; I2, inconsistency index

a Mann-Whitney test
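The Mann-Whitney comparisons footnoted in the tables above reduce to counting concordant pairs of per-study estimates between the two groups. An illustrative stdlib-only sketch (the review used R; the sensitivities below are hypothetical, and the p-value step, from exact tables or a normal approximation, is omitted):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for two independent groups:
    the number of (x, y) pairs with x > y, counting ties as 0.5."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical per-study sensitivities by design
case_control = [0.93, 0.95, 0.91]
cohort = [0.83, 0.80, 0.88, 0.79]
print(mann_whitney_u(case_control, cohort))  # U near len(xs)*len(ys) => xs tends higher
```

A U close to its maximum (here 3 x 4 = 12) indicates that nearly every case-control estimate exceeds every cohort estimate, consistent with the overestimation pattern discussed in section 4.4.1.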


3.8.2 Risk of bias and application

Based on the QUADAS-2 tool, nine (24.3%) and six (16.2%) studies had a high risk of bias with regard to patient selection and the index test, respectively (Fig 8). All studies showed low applicability concerns. The risk of bias and applicability assessments of the individual studies are available in S2 Fig.
Fig 8

Overall percentage of risk of bias and applicability concern using the QUADAS-2 tool.

3.9 Publication bias

The funnel plot was symmetrical, suggesting no publication bias (P = 0.236) (Fig 9).
Fig 9

Funnel plot asymmetry test to assess publication bias.

Each dot represents an individual study, and the dashed line represents the regression line. P-value = 0.236.
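Funnel plot asymmetry is commonly assessed with Egger's regression, which regresses each study's standard normal deviate on its precision; an intercept near zero is consistent with a symmetric plot. The paper does not state which asymmetry test produced P = 0.236, so the sketch below is an illustrative Egger-style model with invented data, not a reproduction of the authors' analysis:

```python
def egger_intercept(effects, ses):
    """Egger's test regression: standard normal deviate (effect / SE)
    regressed on precision (1 / SE) by ordinary least squares.
    Returns the intercept; values near zero suggest funnel symmetry."""
    ys = [e / s for e, s in zip(effects, ses)]
    xs = [1.0 / s for s in ses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx

# Symmetric case: identical effects at varying precision -> intercept ~ 0
print(egger_intercept([0.5, 0.5, 0.5], [0.1, 0.2, 0.4]))
```

With real data, the intercept's t-test against zero yields the p-value reported alongside the funnel plot.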


4. Discussion

CHIKV is a mosquito-borne virus that causes an acute febrile illness with severe joint pain. This study reviewed and analysed serological tests detecting CHIKV antigen, IgM antibodies, and IgG antibodies. During CHIKV infection, once the virus enters the host, it replicates and causes viremia, which lasts about 7 days. Clinical manifestations such as fever are closely related to the high viral load during this period [56,57]. The appearance of antibodies in the following phase is linked to a decline in viremia. In this meta-analysis, the acute phase is defined as days 1 to 7 after symptom onset, and the convalescent phase as more than 7 days after symptom onset. Since different analytes are detectable at different time points of CHIKV infection (acute and convalescent), we discuss the findings of this meta-analysis below in terms of the utility of these tests during each phase.

4.1 Acute phase

Our meta-analysis demonstrates that antigen detection tests serve as a good diagnostic test for samples collected during the acute phase of CHIKV infection. According to the CHIKV testing algorithm developed by the Centers for Disease Control and Prevention (CDC), qRT-PCR is the standard test for samples collected less than 6 days after symptom onset [58]. Nevertheless, qRT-PCR has limitations, such as the need for expensive reagents and equipment that are not available in most laboratories, especially in rural areas where CHIKV is prevalent. Less complicated tests, such as rapid and ELISA-based antigen detection tests, can be utilised as an alternative. For the antigen detection tests, most of the studies in this meta-analysis used samples from the early stage of infection (1 to 20 days). Virus isolation or molecular-based assays were used as reference standards to confirm the presence of viral particles (antigen). Only one study [23] used a case-control design, and all antigen tests were developed in-house; as a result, no further analysis based on these variables could be performed to identify the source of heterogeneity. Variable sensitivity against different CHIKV genotypes could be one source of heterogeneity for the antigen detection tests. The rapid test developed by Okabayashi et al. [26] was shown to be less sensitive in detecting CHIKV of the Asian genotype [22]. Suzuki et al. [28] generated new monoclonal antibodies and showed that their improved rapid test was more sensitive to cultured Asian and West African genotypes than the test developed by Okabayashi et al. [26]. To further augment the diagnostic accuracy of these tests, we suggest that different populations covering different genotypes be tested in the future. In this meta-analysis, the time of sample collection for the IgM detection tests ranged from day 1 to day 40 post symptom onset.
This wide sampling window is theoretically acceptable, as IgM antibodies are known to appear as early as day 2 of illness and can persist for up to 3 months [59]. However, our meta-analysis revealed that the sensitivity of the IgM detection tests was low for acute-phase samples (1 to 7 days post symptom onset; 26.2%) compared with convalescent-phase samples (>7 days post symptom onset; 98.4%). This result is consistent with the findings of Natrajan et al. [60], who reported that IgM tests detected CHIKV with 100% accuracy in samples taken more than 6 days after symptom onset. In summary, while IgM antibodies begin to develop from day 2 of CHIKV infection, their level can be well below the detection limit of most serological assays; thus, the IgM detection test is not recommended for samples taken during the acute phase of infection.

4.2 Convalescent phase

As mentioned above, our meta-analysis showed that the diagnostic accuracy of the IgM detection tests was high for convalescent-phase samples. According to WHO guidelines, a confirmed CHIKV case is defined by the presence of CHIKV IgM antibodies in a single serum sample taken during the acute or convalescent phase, indicating recent infection [10]. IgM rapid tests had the lowest diagnostic performance compared with ELISA-based tests and IFA. Despite having the highest accuracy, IFA requires more expensive equipment and reagents. In addition, we found that in-house developed IgM tests had higher diagnostic performance than commercial tests. This finding is consistent with an external quality assurance report that found in-house developed ELISA tests to be more sensitive than commercial ELISA tests [60]. We were concerned that the case-control design would lead to an overestimation of the performance of the in-house developed IgM tests; nonetheless, excluding the case-control studies from the meta-analysis gave a similar result (S2 Table), strengthening the notion that the accuracy of in-house developed tests is better than that of commercial tests. According to the CDC testing algorithm, PRNT is required to confirm a positive IgM test in diagnosing CHIKV disease [56]. Our meta-analysis showed that the IgM tests had more than 97% specificity regardless of test format. More than half of the index tests evaluated in this meta-analysis included samples positive for other pathogens (e.g., dengue, ONN, and RRV) to determine the cross-reactivity of the tests (partial cohort partial case-control studies). These results validate the high specificity of the IgM tests and could imply that PRNT may not be needed as a confirmatory test for IgM-positive cases. On the other hand, IgG antibodies become detectable approximately 7 to 10 days post symptom onset and remain detectable for months to years [56].
Correspondingly, our meta-analysis showed that the IgG detection tests had more than 93% sensitivity and specificity for samples collected between days 7 and 90 post symptom onset. As CHIKV IgG antibodies persist for years, a second sample should be collected at least three weeks after the first to rule out past infection; as stated in the WHO guidelines, a recent CHIKV infection can be confirmed if there is a fourfold increase in IgG titer between the paired samples [10]. However, obtaining a second sample from the patient is not always possible. In such situations, the presence of CHIKV IgG antibody in a single sample should be interpreted in correlation with the patient's clinical presentation. There was no difference in the diagnostic performance of the IgG rapid test, IFA, and ELISA-based test. Among these, rapid tests are attractive because they are easy to perform, do not require expensive equipment, and yield results within minutes. Two commercial IgG rapid tests with promising diagnostic accuracy have recently become available [38,47], but further evaluation in multiple prospective cohort studies is needed to provide comprehensive data for meta-analysis. In summary, IgM and IgG antibody detection tests had high accuracy (>90%) for samples collected in the convalescent phase of CHIKV infection. The detection of IgM indicates recent infection, while a second sample collected at least 3 weeks after the first is needed for a positive IgG test to rule out past infection.
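The WHO fourfold-rise criterion for paired samples is straightforward to encode. A sketch with hypothetical reciprocal-dilution titers (the function name and values are illustrative, not from any guideline text):

```python
def fourfold_igg_rise(acute_titer: int, convalescent_titer: int) -> bool:
    """WHO criterion sketch: a >=4-fold rise in IgG titer between paired
    samples (collected at least ~3 weeks apart) supports recent, rather
    than past, CHIKV infection. Titers are reciprocal dilutions."""
    return convalescent_titer >= 4 * acute_titer

# Titer rising from 1:40 to 1:320 is an 8-fold increase
print(fourfold_igg_rise(40, 320))
```

A single positive IgG result without such a paired comparison cannot distinguish recent from past infection, which is why the text recommends interpreting it alongside the clinical presentation.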

4.3 Diagnostic performance of CHIKV commercial test kits

To the best of our knowledge, this is the first review to assess the diagnostic accuracy of commercial tests for CHIKV. As mentioned above, the accuracy of the IgM detection tests was very low for samples collected <7 days post symptom onset. Most commercial test kits recommend testing samples collected between 6 and 8 days post symptom onset; thus, we omitted acute-phase samples (<7 days post symptom onset) from this analysis and found that heterogeneity was low for almost all the commercial kits tested. Our meta-analysis supports the findings reported by Johnson et al. [13], who showed high diagnostic performance for the test kits manufactured by Euroimmun (Lübeck, Germany), Abcam (Cambridge, UK), and InBios (Seattle, WA, USA). However, according to those authors, equivocal results from the IFA developed by Euroimmun (Lübeck, Germany) required additional testing because of background fluorescence, which may not be practical in a real clinical setting. We also found that the diagnostic accuracy of most of the commercial tests reported in this review was lower than that claimed by the manufacturers, except for the ELISA-based tests developed by Abcam (Cambridge, UK) and InBios (Seattle, WA). Although the accuracy of these two tests was high, more studies using diverse sample populations should be carried out to ascertain their utility in other regions. Of note, among all the commercial tests evaluated in this review, only the CHIKjj Detect MAC-ELISA (InBios, Seattle, WA, USA) carries Conformité Européenne (CE) marking.

4.4 The impact of the study quality

4.4.1 Study design

The partial cohort partial case-control studies included samples positive for other pathogens to evaluate the cross-reactivity of the tests. The ability of the tests to discern CHIKV from other pathogens is important because alphaviruses such as ONN and RRV are prevalent, especially in Sub-Saharan Africa and Australia. In tropical countries, DENV-positive samples are routinely used for specificity checks because CHIKV and DENV co-circulate within the same regions. The inclusion of these well-defined samples in partial cohort partial case-control studies is unlikely to increase the risk of bias, and such studies were therefore grouped with the cohort studies. One of the issues identified in most diagnostic accuracy studies is a flaw in study design [61]. Because CHIKV patient samples are difficult to obtain, case-control studies are often used in CHIKV diagnostic accuracy research. In this design, the spectrum of individuals without chikungunya disease is widely separated from that of individuals with the disease, making it much easier to discern between the two groups. The case-control design is therefore expected to cause an overestimation of diagnostic accuracy, which we observed in our analysis: the sensitivity and specificity of the case-control studies were higher than those of the cohort and partial cohort partial case-control studies, except for the sensitivity of the IgG detection tests, for which there was no statistical difference between the two designs. Nevertheless, case-control studies draw samples from two distinct populations (healthy and CHIKV positive), which do not represent the population in a real clinical setting [62]. In summary, the cohort design is the ideal study design for determining diagnostic accuracy, but it is not always feasible, especially in countries with a low prevalence of chikungunya. Although the accuracy estimates from a case-control design may not represent the true values, this design is an alternative to the cohort design, especially for determining the accuracy of a test in its developmental phase.

4.4.2 Quality assessment of bias and application

The high risk of bias in the patient selection domain was mainly contributed by studies that applied a case-control design [23,30,38,39,41,45,46,52,54]. As mentioned previously, the case-control design can exaggerate test accuracy and thus may not reflect the actual accuracy. For the index test domain, almost none of the studies stated whether the index test results were interpreted without knowledge of the reference standard results; this carries a high risk of bias, as the interpretation of index test results can be influenced by knowledge of the reference standard. ELISA-based test results were categorised as positive, borderline (or equivocal), or negative based on the obtained optical density (OD, absorbance) or a ratio. To simplify the analysis, some studies coded borderline or equivocal samples as positive [41] or as negative [13,50]. One study coded equivocal results for an immunochromatographic test (ICT) as negative [31]; although not described in that study, an equivocal ICT result can be taken to mean that ambiguous test lines were observed. Including inconclusive (borderline or equivocal) results in the analysis increases the risk of bias; however, as the number of borderline and equivocal samples in this study was very small (19 out of 10,563), their inclusion should not affect the overall result of the meta-analysis. No bias was recorded for the reference standard domain. Different reference standards were used across the diagnostic accuracy studies, since no gold standard is available for diagnosing CHIKV. This meta-analysis specified direct detection methods, such as virus isolation and molecular-based methods, as the reference standard for antigen detection tests to ensure that the samples were collected during the viremic phase. Some studies used molecular tests as the reference standard [36,41], with subsequent samples collected from the same patients used for the IgM or IgG accuracy studies. Although not wrong, each patient's immune response to CHIKV varies; some patients may not develop antibodies against CHIKV, and the use of such samples could lower the apparent sensitivity of the test. Analysis based on the reference standard used was not performed in this review because reference standards varied even within single studies; most studies mentioned the reference standard only briefly and did not provide detailed data, making it difficult to extract data by reported reference standard for further analysis. For the flow and timing domain, the reference and index tests should be performed at the same, or almost the same, time point. However, as chikungunya cases do not occur year-round, most of the studies in this review used retrospective samples (samples pre-defined in other studies) to determine test accuracy; we therefore rated this domain as low risk of bias.

5. Strengths and limitations of the review

This systematic review and meta-analysis followed a standard protocol registered in the PROSPERO database (CRD42021227523) and the PRISMA-DTA review methodology. We carried out our literature search and analysis based on the quality of the study design, test formats, and type of analyte. We evaluated the diagnostic accuracy of serological tests detecting CHIKV antigen, IgM, and IgG antibodies, which are applicable in different phases (acute and convalescent) of CHIKV infection. Furthermore, we analysed the diagnostic performance of the available commercial test kits for CHIKV and compared it with the diagnostic accuracy reported by the manufacturers. Our review also has several limitations. Following the subgroup analysis, there was heterogeneity between groups that could not be explained by the findings of our study. This heterogeneity might be explained by analyses of other possible sources, such as the CHIKV lineage used to prepare the antigen or antibody, the nature of the antigen (recombinant protein or inactivated virus), the sample population (country, origin, or CHIKV lineage), and the type of reference standard. Sample size is one of the characteristics that affect the width of a confidence interval [63]; due to small sample sizes, some of our analyses showed wide 95% confidence intervals. For example, the heterogeneity (I2) of the specificity of the rapid (95% CI 0 to 85.7) and ELISA-based (95% CI 0 to 85.1) antigen detection tests had wide 95% confidence intervals, and a similar observation was made for the specificity of the rapid IgG tests (95% CI 0.0 to 100). Wide confidence intervals indicate that the estimates provide less certain information; therefore, at this point, we have a low level of certainty for analyses with wide confidence intervals. In addition, only a subset of the included studies provided information on the time of sample collection. The sampling time matters because the performance of a test depends on the presence of the analyte in the patient's sample at that time. Other information, such as blinding to the reference test result when interpreting the index test, the expertise of the person performing IFA, and sample conditions, is also important for understanding sources of variance and evaluating applicability. We strongly recommend employing a prospective cohort study design and full reporting of the methodology associated with the reference and index tests for a more accurate estimation of the diagnostic accuracy of CHIKV serological testing.
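The link between sample size and confidence interval width noted above can be illustrated with the Wilson interval for a proportion; the interval narrows roughly with the square root of n. This is a simplified sketch (the review's intervals come from its R meta-analysis models, not from this formula):

```python
import math

def wilson_width(p_hat: float, n: int, z: float = 1.96) -> float:
    """Full width of the 95% Wilson score interval for a proportion
    observed as p_hat in n samples."""
    denom = 1 + z**2 / n
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return 2 * margin

# The same observed 90% sensitivity yields very different interval widths
for n in (20, 100, 500):
    print(n, round(wilson_width(0.9, n), 3))
```

With only a handful of samples, even a high point estimate of sensitivity is accompanied by an interval too wide to support firm conclusions, which is the situation described for the rapid IgG and antigen-test subgroups.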

6. Conclusion

According to our meta-analysis, depending on the time of sample collection, antigen- and antibody-based serological tests can accurately diagnose CHIKV infection. Antigen detection tests are effective for samples obtained during the acute phase (1 to 7 days post symptom onset), whereas IgM and IgG detection tests can be used for samples collected in the convalescent phase (>7 days post symptom onset). Interpreted alongside the clinical presentation of the patient, the combination of IgM and IgG tests can differentiate recent from past infection. Several commercial IgM and IgG assays appear promising, including kits from Euroimmun (Lübeck, Germany), Abcam (Cambridge, UK), and InBios (Seattle, WA). The caveats to the findings of this meta-analysis are the incomplete reporting of data in the included studies and the generally low quality of reporting in diagnostic test accuracy studies.

Supporting information

- PRISMA-DTA checklist. Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies. (DOCX)
- PRISMA-DTA for the abstract checklist. (DOCX)
- Search strategy used for the PubMed, CINAHL Complete, and Scopus databases. (DOCX)
- QUADAS-2 validation form. (DOCX)
- S1 Fig. Forest plot for antigen, IgM, IgG, and neutralising antibody tests; CI, confidence interval; TP, true positive; FP, false positive; FN, false negative; TN, true negative; [S1 Reference list]. (TIF)
- S2 Fig. Risk of bias and applicability concerns assessment of individual studies using the QUADAS-2 tool; [S2 Reference list]. (TIF)
- Reference list for S1 Fig. (PDF)
- Reference list for S2 Fig. (PDF)
- S1 Table. Section A: Characteristics of commercial ELISA-based tests included in the meta-analysis. Section B: Characteristics of commercial immunofluorescence assays included in the meta-analysis. Section C: Characteristics of commercial rapid tests included in the meta-analysis. (DOCX)
- S2 Table. Analysis of commercial versus in-house developed IgM tests with the exclusion of case-control studies. (DOCX)
(DOCX) Click here for additional data file. 14 Sep 2021 Dear Prof. Dr. Tang, Thank you very much for submitting your manuscript "Diagnostic accuracy of serological tests for the diagnosis of Chikungunya virus infection: a systematic review and meta-analysis" for consideration at PLOS Neglected Tropical Diseases. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments. Both reviewers raised important concerns about the study. I agree with them that the overall pooled sensitivity analysis is not worthwhile, as it is important to know (and compare) the performance of tests according to their characteristics and uses. Combining tests that detect IgM antibodies, IgG antibodies, neutralizing antibodies and antigen makes no sense as they have different uses and applications. Likewise, the combination of rapid tests (which show poor performance) with ELISA and IFA to obtain an overall sensitivity does not contribute to the understanding of the performance of these diagnostic methodologies. Just as important, it is not worth combining studies that evaluated samples from the acute and convalescent phases of the disease to determine the overall pooled sensitivity. I suggest that overall pooled analysis be removed and that the paper focus on the subgroup analysis, comparing test performance according to the type of test (RDT, ELISA, IFA), the type of antibody detected (IgM, IgG, …), and the timing of sampling (acute, convalescent). Such an approach will best answer which antibody (and which test method) should be used in the acute and in the convalescent-phases of the disease. If possible, further assessment of tests sensitivity according to study design and reference standard for comparison will be good. 
Another concern is the fact that: “Most of the studies did not specify the time of sample collection”. The lack of this information introduces an important bias, making it difficult to properly interpret sensitivity because sampling time is critical to determine the performance of serological tests. As stated above, meta-analysis should be performed for subgroups based on the time of sampling (acute vs. convalescent). Please address these issues so that the manuscript can be considered for publication on PLOS NTD. We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation. When you are ready to resubmit, please upload the following: [1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. [2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file). Important additional instructions are given below your reviewer comments. Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts. Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. 
Please don't hesitate to contact us if you have any questions or comments. Sincerely, Guilherme S. Ribeiro, M.D., M.Sc., Ph.D Associate Editor PLOS Neglected Tropical Diseases Emma Wise Deputy Editor PLOS Neglected Tropical Diseases *********************** Both reviewers raised important concerns about the study. I agree with them that the overall pooled sensitivity analysis is not worthwhile, as it is important to know (and compare) the performance of tests according to their characteristics and uses. Combining tests that detect IgM antibodies, IgG antibodies, neutralizing antibodies and antigen makes no sense as they have different uses and applications. Likewise, the combination of rapid tests (which show poor performance) with ELISA and IFA to obtain an overall sensitivity does not contribute to the understanding of the performance of these diagnostic methodologies. Just as important, it is not worth combining studies that evaluated samples from the acute and convalescent phases of the disease to determine the overall pooled sensitivity. I suggest that overall pooled analysis be removed and that the paper focus on the subgroup analysis, comparing test performance according to the type of test (RDT, ELISA, IFA), the type of antibody detected (IgM, IgG, …), and the timing of sampling (acute, convalescent). Such an approach will best answer which antibody (and which test method) should be used in the acute and in the convalescent-phases of the disease. If possible, further assessment of tests sensitivity according to study design and reference standard for comparison will be good. Another concern is the fact that: “Most of the studies did not specify the time of sample collection”. The lack of this information introduces an important bias, making it difficult to properly interpret sensitivity because sampling time is critical to determine the performance of serological tests. 
As stated above, meta-analysis should be performed for subgroups based on the time of sampling (acute vs. convalescent). Please address these issues so that the manuscript can be considered for publication in PLOS NTD. Reviewer's Responses to Questions Key Review Criteria Required for Acceptance? As you describe the new analyses required for acceptance, please consider the following: Methods -Are the objectives of the study clearly articulated with a clear testable hypothesis stated? -Is the study design appropriate to address the stated objectives? -Is the population clearly described and appropriate for the hypothesis being tested? -Is the sample size sufficient to ensure adequate power to address the hypothesis being tested? -Were correct statistical analyses used to support conclusions? -Are there concerns about ethical or regulatory requirements being met? Reviewer #1: Objectives are clearly stated, and methods are appropriate for an overall assessment of test accuracy. However, some methodological choices need to be better justified. For example, the authors considered a study at high risk of bias if it excluded borderline or equivocal results from the analysis or used multiple reference tests. However, equivocal results are usually not to be interpreted as either positive or negative; thus, assigning these cases as TP/FP/FN/TN can bias the results. This is of particular importance if studies are assigning these cases differently. To that point, an overall description of how each study classified equivocal results would help in understanding this potential source of heterogeneity. Similarly, for diseases in which no gold standard testing is adequate throughout most of the course of the disease, it is reasonable that a combination of tests is used to assure a correct classification. The authors also mention that only studies that used direct diagnosis methods (antigen detection, viral isolation or PCR) as reference standards were included. 
This limits the studied population to the cases that sought medical care early enough in the course of disease. The authors also opted to combine acute- and convalescent-phase data (when available) for an overall description of IgM and IgG testing accuracy. Although not wrong, these tests are not intended for acute-phase testing; thus, the overall accuracy combining both phases has limited applicability. Reviewer #2: This is a well written manuscript that is of particular interest to diagnosticians of tropical infections. -------------------- Results -Does the analysis presented match the analysis plan? -Are the results clearly and completely presented? -Are the figures (Tables, Images) of sufficient quality for clarity? Reviewer #1: Results are clearly presented, although additional information might be useful to interpret the analysis. In Table 1, it would be important to know in which clinical setting the studies took place (outpatient, hospitalized, etc.). It would also be useful to know which type of reference test was used (antigen detection, viral isolation, PCR). The average and range of days of symptoms of the tested samples will also be important for interpretation of Table 3 summarizing IgM/IgG sensitivity. Similarly, in the subgroup analysis of commercial tests it might also be important to summarize the type of study and average days of symptoms of the samples used for these calculations (at a quick glance it seems like most are partial cohort/case-control for most companies and cohort for most SD). For Table 7, it is unclear what the number of index tests means. Is it the number of studies evaluating the test? Euroimmun only has one CHIKV ELISA IgM commercial test, for example, hence my confusion. Reviewer #2: The results are well presented as are the figures. -------------------- Conclusions -Are the conclusions supported by the data presented? -Are the limitations of analysis clearly described? 
-Do the authors discuss how these data can be helpful to advance our understanding of the topic under study? -Is public health relevance addressed? Reviewer #1: Conclusions are supported by the data presented. Some important limitations were not discussed. The authors state that most of the studies did not specify the time of sample collection. This can be an issue, since the reference tests they selected for are all direct methods. If IgM/IgG testing is being done on the same sample as the reference test, this can artificially decrease sensitivity because the test is being used at an inappropriate timing. This is likely one of the reasons for the heterogeneity found. Similarly, another source of heterogeneity not discussed is the intentional pooling of acute- and convalescent-phase samples for the overall sensitivity calculations, particularly for IgM/IgG. In the subgroup analysis, there is likely a combination of factors that were only assessed individually that potentially explains the heterogeneity (e.g. acute- vs convalescent-phase samples, study type, etc.). Additionally, although generally the statement “Detection of IgG alone indicates past infection, while IgM positive with or without IgG indicates recent infection” is correct, I would point to caution, as detection of IgG alone might not necessarily indicate past infection if a false negative in the IgM testing occurs. Reviewer #2: The authors have missed some of the main limitations, especially sample timing, reference comparators and sample composition. They are mentioned, but only in passing - more emphasis needs to be placed on these important potential bias factors. I am concerned about the lack of detail on the timing of the sample (days of illness) provided in this meta-analysis. While the authors state that this was a problem in the abstract, it is not mentioned as a limitation of the study. 
It is noted that this information is included in Table 5 of the study and illustrates the point well regarding sample timing, especially in the acute phase. The authors have overlooked the importance of this observation, as it is imperative that the days of illness be included in any diagnostic accuracy study: if a large number of acute-phase samples is included, it is unlikely that there will be detectable levels of antibodies, and this will be reflected in results indicating low sensitivity, as illustrated in the last rows of Table 5. This conclusion is fine, because you cannot detect something that is not present. Conversely, it is simple to make any insensitive test look good by including large numbers of convalescent samples in the study group. It is strongly recommended that additional comment or information be provided in the Discussion section regarding the impact of the timing of the sample in the context of days of illness at presentation. There is little or no mention of the influence of chikungunya prevalence on the outcome of diagnostic accuracy studies. While there is mention of this in the section on bias, the implications are not really clear to the reader (i.e., high vs low prevalence in a prospective study design, and high versus low numbers of positives in a case-control design). Please provide additional information regarding the implications of such bias in the sample composition, especially in case-control studies. There is an absence of information regarding the influence of the reference comparator in the diagnostic accuracy study. This is an especially important source of bias: often an inappropriate reference comparator is selected or an inappropriate diagnostic cutoff is used, which can significantly bias the results. 
Furthermore, it is often more appropriate that a final patient case result (true disease status) of “Chikungunya positive” or “Chikungunya negative” be used for calculating the sensitivity and specificity, because the true disease status is more relevant than comparing positivity scores against a randomly assigned diagnostic cutoff. Line 420. The authors have stated the following: “Second, we received no response from the corresponding authors (24, 32) regarding discrepancies in the raw data and diagnostic accuracy.” It is not clear what the authors mean by this statement, as there is no additional information in the manuscript regarding these two papers. It would appear that, because these two studies departed from the main theme of generally high diagnostic accuracy, there is a methodological problem with the study. The study of Huits et al describes the Arkay Chikungunya antigen ICT, reporting “fair diagnostic sensitivity for ECSA genotype chikungunya, but low sensitivity for Asian genotype, and poor overall specificity.” The study of Blacksell et al describes a diagnostic accuracy study evaluating the Standard Diagnostics IgM ELISA using a series of acute-phase samples compared to a gold standard reference comparator, with the conclusion that the assay had low sensitivity, which is a reflection of the days of illness of the samples. In fact, Fig 4 indicates that for that particular study there was a low risk of bias or applicability concerns. Because of the lack of detail, I recommend that this sentence be removed from the manuscript, as the implications are unfair to the authors. -------------------- Editorial and Data Presentation Modifications? Use this section for editorial suggestions as well as relatively minor modifications of existing data that would enhance clarity. If the only modifications needed are minor and/or editorial, you may wish to recommend “Minor Revision” or “Accept”. 
Reviewer #1: L87: The number of CHIKV serological tests increase tremendously after the Indian Ocean outbreaks in 2004 - As in number of tests used or different tests commercially available? Line 126: We included only the optimized index test data in the analysis for studies developing serological tests using a different batch of antigens or antibodies. - What is optimized index test data? Line 169: For commercial tests (specific brand) with two or more reports, meta-analyses were done to determine their diagnostic accuracy. We included only samples collected after 7 days from the onset of clinical symptoms for this analysis. - Clarify that for 2+ studies describing accuracy of a commercially available test, a meta-analysis per manufacturer/brand was done. This sentence was not clear to me until I read the results. Clarification on the use of “acceptable reference standards” (line 347) is needed. Are the authors suggesting the reference standards they used (PCR, viral isolation or antigen detection) are not acceptable? Reviewer #2: (No Response) -------------------- Summary and General Comments Use this section to provide overall comments, discuss strengths/weaknesses of the study, novelty, significance, general execution and scholarship. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. If requesting major revision, please articulate the new experiments that are needed. Reviewer #1: The overall accuracy analysis seems to have limited applicability due to the compilation of studies with such different methodologies (e.g., grouping different detection methods – ELISA, RDT, IF; sample type, etc.). In that sense, the subgroup analysis is more informative and perhaps the most important aspect of this work. 
The risk of bias assessment needs a better theoretical background justifying the authors' choices in qualifying the information (i.e., why were studies that used a combination of reference tests classified as having a higher risk of bias?). Perhaps the novelty factor of a meta-analysis lies in the discussion of the heterogeneity of the data. As described, many aspects of this discussion were lacking. Reviewer #2: The authors have missed some of the main limitations, especially sample timing, reference comparators and sample composition. They are mentioned, but only in passing - more emphasis needs to be placed on these important potential bias factors to provide a balanced viewpoint. -------------------- PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No Figure Files: While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Data Requirements: Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. 
Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5. Reproducibility: To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols 6 Nov 2021 Submitted filename: Response to reviewers.doc 3 Dec 2021 Dear Prof. Dr. Tang, Thank you very much for submitting your manuscript "Diagnostic accuracy of serological tests for the diagnosis of Chikungunya virus infection: a systematic review and meta-analysis" for consideration at PLOS Neglected Tropical Diseases. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations. Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. When you are ready to resubmit, please upload the following: [1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. 
Please note that, while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. [2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file). Important additional instructions are given below your reviewer comments. Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments. Sincerely, Guilherme S. Ribeiro, M.D., M.Sc., Ph.D Associate Editor PLOS Neglected Tropical Diseases Emma Wise Deputy Editor PLOS Neglected Tropical Diseases *********************** Reviewer's Responses to Questions Key Review Criteria Required for Acceptance? As you describe the new analyses required for acceptance, please consider the following: Methods -Are the objectives of the study clearly articulated with a clear testable hypothesis stated? -Is the study design appropriate to address the stated objectives? -Is the population clearly described and appropriate for the hypothesis being tested? -Is the sample size sufficient to ensure adequate power to address the hypothesis being tested? -Were correct statistical analyses used to support conclusions? -Are there concerns about ethical or regulatory requirements being met? Reviewer #1: Objectives are clearly stated, and methods are appropriate. I appreciate that the authors justified some of their methodological choices so the readers can easily understand the rationale on which some assessments are based. 
The authors also justified that there was a low proportion of equivocal results, so the potential impact on the analysis is minimal (although there was not a clear statement of what that proportion was). Reviewer #2: The revisions made by the authors are acceptable. -------------------- Results -Does the analysis presented match the analysis plan? -Are the results clearly and completely presented? -Are the figures (Tables, Images) of sufficient quality for clarity? Reviewer #1: Results are clearly presented, although some figures might be combined for a more concise display of the information. It would still be useful to have a description of both the clinical setting in which the studies took place (outpatient, hospitalized, etc.) and their geographic location, to give an idea of the epidemiological scenario. A brief text summary (e.g., most (XX%) of studies occurred with outpatients retrospectively and XX% took place in endemic regions) would suffice. That gives the reader context to what these studies represent. Reviewer #2: The revisions made by the authors are acceptable. -------------------- Conclusions -Are the conclusions supported by the data presented? -Are the limitations of analysis clearly described? -Do the authors discuss how these data can be helpful to advance our understanding of the topic under study? -Is public health relevance addressed? Reviewer #1: Conclusions are supported by the data presented, although some phrasing needs to be revised. Some limitations were discussed as the results were presented (IgG RDT having higher accuracy) but need to be reinforced in the discussion as well. Additionally, although generally the statement “Detection of IgG alone indicates past infection, while IgM positive with or without IgG indicates recent infection” is correct, I would point to caution, as detection of IgG alone might not necessarily indicate past infection if a false negative in the IgM testing occurs. 
Lastly, the authors mention throughout the discussion/conclusion that IgM and IgG testing of the convalescent-phase sample is recommended to differentiate between recent and past infections. I think the authors might mean testing IgM/IgG in the acute-phase sample to help differentiate between recent and past infections; please revise. Although IgM sensitivity is low for acute-phase samples, testing only convalescent-phase samples would not differentiate between recent or past infection, since the patient would have had enough time to develop antibodies even in a primary infection. Reviewer #2: The revisions made by the authors are acceptable. -------------------- Editorial and Data Presentation Modifications? Use this section for editorial suggestions as well as relatively minor modifications of existing data that would enhance clarity. If the only modifications needed are minor and/or editorial, you may wish to recommend “Minor Revision” or “Accept”. Reviewer #1: Figs 2-7 seem to be somewhat redundant with Figure S1. - The stratification is relevant, but in the interest of efficiency I would suggest combining them into maybe Figure S1, with three additional columns to denote the test type (RDT, ELISA), whether or not it is a commercial test, and days of symptoms (> or < 7). Ag detection test: no difference between rapid test and ELISA. - Very large confidence interval for Sp heterogeneity; I would like the authors to discuss possible explanations in the discussion. - Include I² in the legend for Table 6. Line 251: In addition, the sensitivity of the rapid tests was statistically different from ELISA-based (93.4%; 95% CI 81.7 to 97.8; P=0.002) and IFA (99.3%; 95% CI 69.4 to 100; P=0.027). - I would suggest adding the RDTs' Se in the phrasing as well for easy reading: “In addition, the sensitivity of the rapid tests (42.3%, 95%CI…) …” Line 267: The forest plot (Fig 5) shows that the sensitivity for samples collected ≤7 days of symptoms onset mostly lies on the left side of the plot. 
Incoherence with this observation, our meta-analysis shows that the sensitivity of the acute samples was significantly lower than convalescent samples (Table 7). - It doesn’t seem incoherent. Fig 5 shows most sensitivities within the ≤7 days studies around 0-60% (left skewed) and most sensitivities within >7 days studies around 80-100% (right skewed), meaning lower sensitivity in acute-phase samples. Table S1: Clarify if CHIKjj Detect MAC-ELISA is IgM or IgG (both are available commercially); same for Anti-Chikungunya Virus, Abcam. Similarly for Table 9. Figure 9 could be submitted as a supplementary figure. Line 426: “As IgG antibodies generally persist for years, qualitative detection of both CHIKV IgG and IgM antibodies in the convalescent phase can help to differentiate recent and past infections.” - I think the authors might mean testing IgM/IgG in the acute-phase sample to help differentiate between recent and past infections; please revise. Although IgM sensitivity is low for acute-phase samples, testing only convalescent-phase samples would not differentiate between recent or past infection, since the patient would have had enough time to develop antibodies even in a primary infection. - Same comment for line 435: “In summary, IgM and IgG antibody detection tests need to be carried out simultaneously for a more accurate diagnosis of CHIKV infection in the convalescent phase of the infection.” - And line 537: “In contrast, IgM and IgG tests can be used for samples collected in the convalescent phase, whereby the combination of the tests can differentiate recent and past infections.” Line 429: “Rapid tests showed the highest diagnostic accuracy among test formats available to detect CHIKV IgG antibodies, followed by IFA and ELISA-based tests” - A follow-up phrase pointing out that there was no statistical difference between RDT, ELISA and IFA sensitivities, and that the confidence intervals for the RDT Se and Sp are very large (Table 8), is needed here. 
Reviewer #2: The revisions made by the authors are acceptable; however, the manuscript still requires some editing for English grammar and phrasing. -------------------- Summary and General Comments Use this section to provide overall comments, discuss strengths/weaknesses of the study, novelty, significance, general execution and scholarship. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. If requesting major revision, please articulate the new experiments that are needed. Reviewer #1: The manuscript improved significantly with changes in the methodology and text, and my main concerns were addressed. A few revisions are still necessary to assure some phrasings are not misinterpreted. Reviewer #2: The revisions made by the authors are acceptable. -------------------- PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No Figure Files: While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. 
Data Requirements: Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5. Reproducibility: To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols References Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice. 3 Jan 2022 Submitted filename: Response to reviewers.docx 6 Jan 2022 Dear Prof. Dr. Tang, We are pleased to inform you that your manuscript 'Diagnostic accuracy of serological tests for the diagnosis of Chikungunya virus infection: a systematic review and meta-analysis' has been provisionally accepted for publication in PLOS Neglected Tropical Diseases. 
Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated. IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript. Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS. Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Neglected Tropical Diseases. Best regards, Guilherme S. Ribeiro, M.D., M.Sc., Ph.D Associate Editor PLOS Neglected Tropical Diseases Emma Wise Deputy Editor PLOS Neglected Tropical Diseases *********************************************************** 21 Jan 2022 Dear Prof. Dr. Tang, We are delighted to inform you that your manuscript, "Diagnostic accuracy of serological tests for the diagnosis of Chikungunya virus infection: a systematic review and meta-analysis," has been formally accepted for publication in PLOS Neglected Tropical Diseases. We have now passed your article onto the PLOS Production Department who will complete the rest of the publication process. All authors will receive a confirmation email upon publication. The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. 
Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any scientific or typesetting errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript. Note: Proofs for Front Matter articles (Editorial, Viewpoint, Symposium, Review, etc.) are generated on a different schedule and may not be made available as quickly. Soon after your final files are uploaded, the early version of your manuscript will be published online unless you opted out of this process. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers. Thank you again for supporting open-access publishing; we are looking forward to publishing your work in PLOS Neglected Tropical Diseases. Best regards, Shaden Kamhawi co-Editor-in-Chief PLOS Neglected Tropical Diseases Paul Brindley co-Editor-in-Chief PLOS Neglected Tropical Diseases
Table 3

Characteristics of studies on IgG detection tests included in the meta-analysis.

Author | Year | Study design | Reference test | Index test format | Index test (Commercial/In-house) | Time of sample collection (day of post symptom onset) | Total number of samples | TP | FP | FN | TN | Ref
Bagno | 2020 | Partial cohort and case-control | Anti-chikungunya IgG ELISA kit (Euroimmun, Germany) | IgG Indirect ELISA | In-house | NA | 156 | 69 | 3 | 1 | 83 | [29]
De Salazar | 2017 | Partial cohort and case-control | In-house ELISA (CDC, Atlanta, United States) | GAC-ELISA | Commercial (InBios) | 15 to 90 | 36 | 13 | 2 | 1 | 20 | [50]
De Salazar | 2017 | Partial cohort and case-control | In-house ELISA (CDC, Atlanta, United States) | IgG Indirect ELISA | Commercial (Euroimmun) | 15 to 90 | 36 | 14 | 4 | 0 | 18 | [50]
De Salazar | 2017 | Partial cohort and case-control | In-house ELISA (CDC, Atlanta, United States) | IFA | Commercial (Euroimmun) | 15 to 90 | 36 | 14 | 2 | 0 | 20 | [50]
Fumagalli | 2018 | Cohort | Plaque reduction neutralization test | IgG Indirect ELISA | In-house | NA | 59 | 26 | 0 | 3 | 30 | [51]
Kowalzik | 2008 | Case-control | IFA | Rapid test | In-house | NA | 130 | 22 | 0 | 8 | 100 | [52]
Kumar | 2014 | Partial cohort and case-control | IgG IFA (Euroimmun) | GAC-ELISA | In-house | ≥ 9 | 141 | 83 | 5 | 17 | 36 | [53]
Lee | 2020 | Case-control | Euroimmun and InBios IgG ELISA | Rapid test | Commercial (Boditech Med Inc) | NA | 199 | 36 | 0 | 0 | 163 | [38]
Litzba | 2008 | Case-control | Indirect IgG ELISA or In-house IIFT | IFA | Commercial (Euroimmun) | NA | 207 | 83 | 0 | 4 | 120 | [39]
Mendoza | 2019 | Case-control | Plaque reduction neutralisation test and/or RT-PCR | IgG Indirect ELISA | Commercial (Euroimmun) | NA | 212 | 155 | 1 | 6 | 50 | [41]
Mendoza | 2019 | Case-control | Plaque reduction neutralisation test and/or RT-PCR | GAC-ELISA | Commercial (Abcam) | NA | 212 | 155 | 0 | 6 | 51 | [41]
Prat | 2014 | Partial cohort and case-control | In-house ELISA and PRNT | GAC-ELISA | Commercial (IBL International) | NA | 53 | 15 | 1 | 13 | 24 | [42]
Prat | 2014 | Partial cohort and case-control | In-house ELISA and PRNT | IgG Indirect ELISA | Commercial (Euroimmun) | NA | 47 | 22 | 3 | 3 | 19 | [42]
Verma | 2014 | Case-control | RT-PCR or IgM kit | IgG Indirect ELISA | In-house | 7 to 15 | 195 | 117 | 0 | 6 | 72 | [46]
Wang | 2019 | Partial cohort and case-control | ELISA kit (Euroimmun) | Rapid test | In-house | NA | 109 | 29 | 0 | 0 | 80 | [47]

Note: TP, true positive; FP, false positive; FN, false negative; TN, true negative; Ref, reference; NA, not available
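The TP/FP/FN/TN counts in Table 3 are the raw material for the sensitivity and specificity figures discussed throughout the reviews. A minimal sketch of the calculation (the paper's analyses were done in R, and its exact confidence-interval method may differ; the Wilson score interval used here is one common choice), taking the Bagno 2020 row as an example:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity (with CIs) from 2x2 counts."""
    sens = tp / (tp + fn)   # true positives among diseased
    spec = tn / (tn + fp)   # true negatives among non-diseased
    return sens, wilson_ci(tp, tp + fn), spec, wilson_ci(tn, tn + fp)

# Bagno 2020 (Table 3): TP=69, FP=3, FN=1, TN=83
sens, sens_ci, spec, spec_ci = accuracy(69, 3, 1, 83)
print(f"sensitivity = {sens:.3f} (95% CI {sens_ci[0]:.3f}-{sens_ci[1]:.3f})")
print(f"specificity = {spec:.3f} (95% CI {spec_ci[0]:.3f}-{spec_ci[1]:.3f})")
```

For this row, sensitivity is 69/70 ≈ 0.986 and specificity 83/86 ≈ 0.965, consistent with the high IgG-test accuracy reported in the abstract.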

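Reviewer #1's request to report I² alongside the pooled estimates can also be made concrete. A rough sketch of how a pooled sensitivity and I² are typically obtained (a DerSimonian-Laird random-effects model on the logit scale; this is an illustration, not the paper's exact R workflow), using TP/FN pairs from three IgG ELISA rows of Table 3 (Bagno, Fumagalli, Verma):

```python
from math import exp, log

def logit_pool(counts):
    """counts: list of (TP, FN) pairs; returns (pooled sensitivity, I² in %)."""
    # 0.5 continuity correction guards against zero cells
    ys = [log((tp + 0.5) / (fn + 0.5)) for tp, fn in counts]
    vs = [1 / (tp + 0.5) + 1 / (fn + 0.5) for tp, fn in counts]
    ws = [1 / v for v in vs]
    w_sum = sum(ws)
    y_fixed = sum(w * y for w, y in zip(ws, ys)) / w_sum
    q = sum(w * (y - y_fixed) ** 2 for w, y in zip(ws, ys))  # Cochran's Q
    df = len(ys) - 1
    c = w_sum - sum(w * w for w in ws) / w_sum
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0          # DL between-study variance
    ws_re = [1 / (v + tau2) for v in vs]
    y_re = sum(w * y for w, y in zip(ws_re, ys)) / sum(ws_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0      # heterogeneity statistic
    return exp(y_re) / (1 + exp(y_re)), i2

pooled_sens, i2 = logit_pool([(69, 1), (26, 3), (117, 6)])
print(f"pooled sensitivity ~ {pooled_sens:.3f}, I2 ~ {i2:.1f}%")
```

The same machinery applied to specificity (TN/FP pairs) yields the pooled specificities; a dedicated package (e.g. R's metafor, which the authors may well have used) additionally provides confidence intervals and bivariate models.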

Authors:  F Deeba; M S H Haider; A Ahmed; A Tazeen; M I Faizan; N Salam; T Hussain; S F Alamery; S Parveen
Journal:  Epidemiol Infect       Date:  2020-02-19       Impact factor: 2.451

8. (Review) Chikungunya virus: an update on the biology and pathogenesis of this emerging pathogen.

Authors:  Felicity J Burt; Weiqiang Chen; Jonathan J Miner; Deborah J Lenschow; Andres Merits; Esther Schnettler; Alain Kohl; Penny A Rudd; Adam Taylor; Lara J Herrero; Ali Zaid; Lisa F P Ng; Suresh Mahalingam
Journal:  Lancet Infect Dis       Date:  2017-02-01       Impact factor: 25.071

9.  Evaluation of Commercially Available Chikungunya Virus Immunoglobulin M Detection Assays.

Authors:  Barbara W Johnson; Christin H Goodman; Kimberly Holloway; P Martinez de Salazar; Anne M Valadere; Michael A Drebot
Journal:  Am J Trop Med Hyg       Date:  2016-03-14       Impact factor: 2.345

10.  Chikungunya E2 Protein Produced in E. coli and HEK293-T Cells-Comparison of Their Performances in ELISA.

Authors:  Flávia Fonseca Bagno; Lara Carvalho Godói; Maria Marta Figueiredo; Sarah Aparecida Rodrigues Sérgio; Thaís de Fátima Silva Moraes; Natália de Castro Salazar; Young Chan Kim; Arturo Reyes-Sandoval; Flávio Guimarães da Fonseca
Journal:  Viruses       Date:  2020-08-26       Impact factor: 5.048
