
A multicenter analytical performance evaluation of a multiplexed immunoarray for the simultaneous measurement of biomarkers of micronutrient deficiency, inflammation and malarial antigenemia.

Eleanor Brindle1, Lorraine Lillis2, Rebecca Barney2, Pooja Bansil2, Sonja Y Hess3, K Ryan Wessells3, Césaire T Ouédraogo3,4, Francisco Arredondo5, Mikaela K Barker6, Neal E Craft7, Christina Fischer8, James L Graham3, Peter J Havel3, Crystal D Karakochuk6, Mindy Zhang8, Ei-Xia Mussai6, Carine Mapango8, Jody M Randolph3, Katherine Wander9, Christine M Pfeiffer8, Eileen Murphy2, David S Boyle2.   

Abstract

A lack of comparative data across laboratories is often a barrier to the uptake and adoption of new technologies. Furthermore, data generated by different immunoassay methods may be incomparable due to a lack of harmonization. In this multicenter study, we describe validation experiments conducted in a single lab and cross-lab comparisons of assay results to assess the performance characteristics of the Q-plex™ 7-plex Human Micronutrient Array (7-plex), an immunoassay that simultaneously quantifies seven biomarkers associated with micronutrient (MN) deficiencies, inflammation and malarial antigenemia using plasma or serum: alpha-1-acid glycoprotein, C-reactive protein, ferritin, histidine-rich protein 2, retinol binding protein 4, soluble transferrin receptor, and thyroglobulin. Validations included repeated testing (n = 20 separately prepared experiments on 10 assay plates) in a single lab to assess precision and linearity. Seven independent laboratories tested 76 identical heparin plasma samples collected from a cohort of pregnant women in Niger using the same 7-plex assay to assess differences in results across laboratories. In the analytical validation experiments, intra- and inter-assay coefficients of variation were acceptable at <6% and <15%, respectively, and assay linearity was 96% to 99%, with the exception of ferritin, which had marginal performance in some tests. Cross-laboratory comparisons showed generally good agreement between laboratories in all analyte results for the panel of 76 plasma specimens, with Lin's concordance correlation coefficient values averaging ≥0.8 for all analytes. Excluding plates that would fail routine quality control (QC) standards, the inter-assay variation was acceptable for all analytes except sTfR, which had an average inter-assay coefficient of variation of ≥20%. 
This initial cross-laboratory study demonstrates that the 7-plex test protocol can be implemented by users with some experience in immunoassay methods; prior familiarity with the multiplexed protocol was not essential.

Year:  2021        PMID: 34735520      PMCID: PMC8568126          DOI: 10.1371/journal.pone.0259509

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Micronutrient (MN) deficiencies include iron, vitamin A, and iodine among other essential elements and vitamins [1, 2]. It is estimated that over 2 billion people worldwide are directly affected by a MN deficiency [3]. Children and pregnant women are particularly at risk due to an inadequate diet that fails to meet the greater micronutrient requirements necessary for fetal growth or childhood development [4]. Iron, iodine and vitamin A are three of the micronutrients of greatest public health concern [1]. MN deficiency can adversely affect the physiology of diverse organ systems, impairing, for example, ocular, immunologic, and neurological function, often causing irreversible damage [4]. As such, the quality of life of those affected by MN deficiency is significantly reduced, making it critical to accurately assess the prevalence of micronutrient deficiency to allow the targeted implementation of micronutrient intervention programs among high-risk populations and to assess intervention outcomes [5, 6]. Data harmonization for MN deficiency surveillance is challenged by the use of different survey biomarkers and methods by different labs. For example, vitamin A status is determined via serum retinol or retinol binding protein 4 (RBP4), which do not always correlate well with each other [7-9], while iodine status is measured using urinary iodine, thyroglobulin (Tg) or thyroid hormones [10-12]. Furthermore, the quantitative data generated by enzyme-linked immunosorbent assays (ELISAs) are affected by a variety of factors including the sample type (e.g. dried blood spot [DBS], serum or plasma from venous or capillary blood) [13-15], variations in the antibodies, buffers and protocols used by commercially available ELISA kits [16, 17], a lack of international reference materials for some key biomarkers (e.g. RBP4), and a lack of external quality assessment (EQA) materials to consistently qualify tests and user performance, resulting in variation in measurements across laboratories [18]. 
Finally, in some cases, routinely used immunoassays have not been fully validated by the manufacturer or by users to confirm acceptable assay performance for their intended use [19]. These challenges, either individually or in combination, result in poorer-quality datasets that make it difficult to accurately and consistently monitor MN deficiency distribution, prevalence, and severity, in particular across different surveys using similar but not identical analytical methods. Ideally, MN deficiency data collected in large surveys such as the Demographic and Health Surveys (DHS), which conduct surveillance in different populations globally and at multiple time points, should be uniform to allow constructive comparisons across countries and/or survey waves. A primary purpose of micronutrient status assessments is to understand which populations are most vulnerable and to assess the impact of interventions. For accurate comparison of MN deficiency surveys across regions or over a series of time points, harmonization of the absolute measurements generated by all of the analytical methods used is essential. This can be realized, in part, by using inter-laboratory performance studies and evaluations of these methods to identify technologies that are relatively easy to perform and have sufficient accuracy and reproducibility to generate comparable datasets irrespective of where the testing is carried out. We have reported previously on a multiplex assay method developed to simplify population surveillance of micronutrient status by combining relevant biomarkers into a single test. 
In this study we report results of a full formal validation of the Q-plex 7-plex Human Micronutrient Array (hereafter the 7-plex), examining the reproducibility observed with multiple users across seven different laboratories in order to characterize measurement variability for biomarkers pertinent to MN deficiency surveillance, namely inflammatory biomarkers alpha-1-acid glycoprotein (AGP) and C-reactive protein (CRP); thyroglobulin (Tg, iodine); serum ferritin and soluble transferrin receptor (sTfR, both iron); retinol binding protein 4 (RBP4, vitamin A); and histidine-rich protein 2 (HRP2, Plasmodium falciparum malaria) [9, 11, 20, 21]. We assessed the precision and performance of the 7-plex for use in population surveillance of MN deficiency [22-24].

Materials and methods

7-plex array procedure

The panel samples and controls were thawed on the bench top at room temperature on the day of the assay and processed following the assay protocol. First, the lyophilized competitor mix provided with the kit was reconstituted in the sample diluent volume recommended in the product insert to produce a 1X competitor mix. Next, the lyophilized calibrator was reconstituted with the competitor mix volume indicated in the kit insert, and a series of 7 threefold dilutions was prepared to create an eight-point standard curve. A 15 μL volume of each sample or quality control (QC) was combined with 135 μL of competitor mix to produce a final dilution of 1:10. A volume of 50 μL per well of the prepared standards, controls and samples was added to the plates in duplicate wells, and each plate was incubated at room temperature for 2 hours with shaking on a flatbed shaker at 500 revolutions per minute. All reactions were aspirated, and the wells washed 3 times with the wash buffer provided with the kit. Next, 50 μL of detection mix was added to each well; the plate was then incubated with shaking for 1 hour and washed again as described above. Labeling was performed by adding 50 μL of streptavidin-horseradish peroxidase solution to each well and shaking for 20 minutes. After washing 6 times, the chemiluminescent substrate, a mixture of equal volumes of parts A and B, was added at 50 μL per well. Each plate was then immediately imaged with a 270-second exposure using a Quansys Q-View™ Imager LS (Quansys Biosciences). Q-View Software (Quansys Biosciences) was used to overlay a plate map onto the locations of the analyte spots in each well and quantify the chemiluminescent signal from each spot in units of pixel intensity. The software applies the calibrator concentration values to the pixel intensities of each spot in the standard curve wells and was set to automatically fit optimal 5-parameter logistic (5PL) calibration curves for each analyte. 
The pixel intensities of the spots in each test well were then used to interpolate the concentration of each analyte from its calibration curve. Once the plate image was overlaid with the analysis grid, all curve fitting and data reduction steps were applied automatically by the software. The upper and lower limits of quantification determined by Quansys for each kit lot were applied to exclude values beyond the concentration ranges that yield precise concentration estimates.
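The interpolation step described above, a 5-parameter logistic (5PL) calibration curve inverted to map an observed signal back to a concentration, can be sketched as follows. The parameter values here are illustrative only, not Quansys' actual fitted constants.

```python
import math

def five_pl(x, a, b, c, d, g):
    """5PL model: signal as a function of concentration x.
    a = response at zero concentration, d = response at infinite
    concentration, c = mid-range concentration, b = slope, g = asymmetry."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def five_pl_inverse(y, a, b, c, d, g):
    """Interpolate a concentration from an observed signal y by
    algebraically inverting the fitted 5PL curve."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

# Illustrative parameters: low background (a), high plateau (d).
params = (100.0, 1.2, 1.0, 60000.0, 0.8)
signal = five_pl(0.35, *params)          # forward: concentration -> signal
conc = five_pl_inverse(signal, *params)  # inverse: signal -> concentration
```

Round-tripping a known concentration through `five_pl` and `five_pl_inverse` recovers the input, which is the property the software relies on when reporting test-well concentrations from pixel intensities.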

Validation of 7-plex performance in a single lab

The intra- and inter-assay performance ranges of the 7-plex reported in earlier publications, along with other components of assay validation, were originally generated by the manufacturer of the assay, Quansys Biosciences [23]. As a follow up to this, before the inter-laboratory evaluation was performed, a second validation of 7-plex performance was conducted independently in the PATH laboratory to confirm the original findings (see Fig 1).
Fig 1

A flow chart depicting the three primary workstreams followed to prepare and complete the interlaboratory assessment of the 7-plex assay.

This included the validation of two separate lots of both plate and calibrator reagents prior to sharing identical test kits with all partner laboratories; the construction and qualification of the blinded test panels; and finally the inter-laboratory assessment of the blinded test panels and analysis of the data. * This user ran 4 plates instead of just 2. LK, Liquichek standard; H, high; M, medium; L, low; SLK, spiked Liquichek standard; G and H, Quansys quality control standards; QC, quality control; UW, University of Washington.

Validation materials

The test panel used to qualify the 7-plex performance consisted of Liquichek (LK) Immunology Control Level 3 (Lot # 66363, Bio-Rad Laboratories Inc., Hercules, CA, USA), a pooled human serum-based matrix containing most of the analytes of interest. As the LK control has low concentrations of both sTfR and Tg and is negative for HRP2, a spiked version was also prepared by adding concentrated sTfR antigen (Fitzgerald, MA, USA), human Tg (BiosPacific Inc., Emeryville, CA, USA) and HRP2 (CTK Biotech, San Diego, CA, USA) to better reflect the quantitative range of the array. Additionally, seven human plasma samples previously determined to have the highest sTfR measurements (all HRP2 negative) were selected from our US donor panel for use in the validation experiments [22, 23].

Validation experiments

A flow chart in Fig 1 highlights the sequential processes used to validate the 7-plex assays, construct blinded test panels and finally carry out the inter-laboratory assessment. Ten 7-plex plates in total were employed to evaluate assay precision and linearity. Each plate was used for two identical experiments, with all controls and test sample dilutions prepared independently for each experiment and results derived from an independent standard curve for each half of a plate. Across the total of 20 replicate experiments, the LK and spiked Liquichek (SLK) were screened in triplicate as high (undiluted), medium (1:4 dilution) and low (1:10 dilution) concentrations. The absolute values derived from these samples were used to assess linearity (independence of results from dilution) and the precision of each assay across its linear dilution range. The seven human plasma specimens were run in duplicate at a 1:10 dilution. To assess real-use imprecision, including common potential sources of variability, the 20 experiments used two different 7-plex plate lots and two different lots of calibrator, with experiments performed by two users. All testing was performed at ambient temperature (approximately 22°C) following the 7-plex protocol as described in detail below. The ten plates generated 60 unique data points for each biomarker in the LK and SLK dilutions (run in triplicate in 20 experiments) and 40 data points per biomarker in the 7-member plasma panel (run in duplicate in 20 experiments).

Validation statistical methods

Validation experiment results were analyzed to estimate intra- and inter-assay imprecision and linearity. A variance components model was used to calculate intra- and inter-assay coefficients of variation (CV) for the LK, SLK and plasma sample results from the 20 independent experiment batches run on ten assay plates [25]. This method parses within-plate variance and between-plate variance to more accurately reflect the sources of variability in imprecision estimates. CVs were calculated with results grouped by plate lot, by calibrator lot, and by user, as well as in aggregate. CV is used to simplify interpretation of estimates of assay variability, but because it is a ratio of standard deviation to mean concentration, it tends to overstate variation at lower concentrations. Linearity was estimated using three dilutions of both the LK and SLK and was calculated by dividing the concentration of a diluted sample by the concentration of the next higher dilution, multiplying by the dilution factor, and expressing the result as a percentage.
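The two calculations above can be sketched in Python. The variance-components model cited in [25] may differ in detail; this is an illustrative one-way (balanced) ANOVA decomposition into within-plate and between-plate components, and the linearity function is a direct reading of the formula in the text.

```python
import math
from statistics import mean

def variance_components_cv(plate_results):
    """plate_results: one list of replicate measurements per plate,
    e.g. [[r1, r2, r3], [r1, r2, r3], ...] (balanced, >= 2 replicates).
    Returns (intra-assay %CV, inter-assay %CV) relative to the grand mean."""
    k = len(plate_results)                  # number of plates
    n = len(plate_results[0])               # replicates per plate
    grand = mean(v for plate in plate_results for v in plate)
    plate_means = [mean(p) for p in plate_results]
    # mean square within plates (pooled replicate variance)
    ms_within = sum(sum((v - m) ** 2 for v in p)
                    for p, m in zip(plate_results, plate_means)) / (k * (n - 1))
    # mean square between plates
    ms_between = n * sum((m - grand) ** 2 for m in plate_means) / (k - 1)
    var_within = ms_within
    var_between = max((ms_between - ms_within) / n, 0.0)
    intra_cv = 100.0 * math.sqrt(var_within) / grand
    inter_cv = 100.0 * math.sqrt(var_within + var_between) / grand
    return intra_cv, inter_cv

def linearity_pct(diluted_conc, next_higher_conc, dilution_factor):
    """Observed concentration of a dilution, scaled by the dilution factor,
    as a percentage of the next less-dilute result (100% = perfectly linear)."""
    return 100.0 * diluted_conc * dilution_factor / next_higher_conc
```

For example, a 1:4 dilution measuring 2.5 units against an undiluted result of 10.0 units gives a linearity of 100%, while replicate agreement within plates drives the intra-assay CV toward zero even when plate means differ.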

Inter-laboratory assessment

Donor panel

While the validation work was carried out with a mixture of serum and plasma samples, we have previously demonstrated comparable results between paired serum and heparinized plasma samples [22]; thus we were confident that plasma samples would be appropriate for the inter-laboratory assessment. Plasma samples from a study of micronutrient status among pregnant women in Niger were used to generate test panels for the inter-laboratory assessment [23]. Samples were collected as part of a cross-sectional study embedded into the Niger Maternal Nutrition (NiMaNu) Project, which was registered with the U.S. National Institutes of Health (www.ClinicalTrials.gov; NCT01832688) [26]. The National Ethical Committee (Niger) and the Institutional Review Board of the University of California Davis (UC Davis; USA) provided ethical approval for the study protocol and the consent procedure. The local implementation was under the responsibility of Helen Keller International (Niamey, Niger), who followed relevant national regulations and laws applying to project implementation and foreign researchers. Written informed consent was obtained from all study participants. A total of 18 rural health centers from 2 health districts in the Zinder Region were selected to participate in the NiMaNu project. In each community, pregnant women were randomly selected and invited to participate in the survey. They were eligible if they provided written informed consent, had resided in the village for at least six months, and had no plans to move within the coming two months. As part of the NiMaNu study, venous blood samples were collected and used to prepare heparinized plasma and DBS cards. PATH signed a material transfer agreement (MTA) with UC Davis, and each of the participating laboratories in this study then signed an MTA with PATH prior to receipt of test materials.

Construction of blinded test panels

The NiMaNu panel of 208 plasma samples was previously analyzed using the 7-plex [23]. As the 7-plex requires only 13.5 μL of plasma per test, multiple samples within this panel had a significant residual volume (>300 μL) of plasma. Using the original 7-plex data, 78 samples with concentrations representing the full range for each analyte were chosen for sub-aliquoting to create 16+ identical panels, each consisting of 78 separate 20 μL plasma samples, as follows: the frozen plasma was thawed on ice, spun briefly in a microfuge and pipetted into sterile screw-cap tubes, which were then stored at -80 °C (Fig 1). Two of these samples were randomly chosen from the panel and all 19 of the aliquots prepared from these samples were assessed by the 7-plex to test for tube-to-tube variability that might have been introduced during sub-aliquoting. Both samples had an intra-assay CV < 10% for each analyte (S1 Table), confirming analyte uniformity across sample tubes. The original specimen identifiers of the remaining samples were replaced with sequential numbering from 1–76, effectively blinding the labs previously involved in studies that used specimens from this panel. The samples were stored at -80 °C until shipment to the partner laboratories. In addition to the 76-member Niger heparin plasma panel, Quansys Biosciences prepared QC samples, named G and H, representing high and low analyte values to be run on each plate (Fig 1). These quality controls were used to evaluate whether each plate used during this study would meet acceptance criteria ideally applied in the routine use of the kit. The controls were prepared by spiking serum with purified biomarkers as needed to reach the desired concentration of each biomarker [23]. Prior to distribution, the G and H controls were quantified by Quansys via a series of twenty independent test runs using the 7-plex to determine the expected values of all 7 biomarkers (S2 Table).

Laboratories

Seven laboratories offered to be part of the inter-laboratory performance study, each providing data from at least one, and ideally two, laboratorians per facility. Laboratories at PATH, the University of Washington, Quansys, and UC Davis had previously collaborated to develop and verify the performance of the Human Micronutrient assay [22-24] (Fig 1). Other laboratories, including ones from the US Centers for Disease Control and Prevention (CDC, GA), Eurofins Craft Technologies, Inc. (NC), Binghamton University (SUNY), and the University of British Columbia, have also been independently evaluating the performance of the Human Micronutrient assay [27-30]. Once each laboratory had signed the MTA to access the samples, two complete sets of the 76 heparin plasma samples and two of the G and H quality control sets were shipped on dry ice via overnight courier. Recipients acknowledged the panels’ integrity (frozen with dry ice still in packaging) upon arrival and stored them at -80 °C until assay. The manufacturer of the assay, Quansys, was excluded from the study in order to limit bias, as their technical staff are most familiar with the platform and they manufacture and market the Human Micronutrient assay kit. Prior to performing testing, all laboratories were offered a training webinar hosted by an experienced Q-plex user (E. Brindle) to ensure that each study laboratorian was familiar with the test protocol and data analysis methods. All of the array kits used in the inter-laboratory assessment exercise were from the same manufacturing lot. Each plate image was saved and reviewed by an expert user (E. Brindle) to confirm consistency in the software settings used to fit calibration curves and report results (Fig 1).

Assessment of laboratory equipment and user capability to operate the Q-plex assay

To understand the effects of user skill and experience, and of laboratory equipment status, on results, a questionnaire was distributed prior to testing to collect details from each laboratory. Each operator completed a questionnaire to determine their level of previous experience with the 7-plex and with quantitative immunoassays generally (Fig 1). An inventory of equipment summarized maintenance histories for items necessary for use with the 7-plex and specified the plate washing method. Experience and equipment status questionnaire results were summarized by assigning a scale value to each element, scoring each factor as follows: lab operator experience (2 elements, 1 to 3 scale, with 3 as most experienced), Quansys software experience (0 to 1 scale, 1 is experienced), automated plate washer availability (0 to 1 scale, 1 is available), and recency of calibration (2 elements, 1 to 3 scale, with 3 as most recent). Scores were totaled to derive a summary score ranging from 0 (no experience, poor equipment status indicators) to 14 (extensive user experience, all equipment present and recently calibrated).
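The scoring scheme above can be sketched as a small tally. The element names below are illustrative (the paper does not give the exact questionnaire wording); only the scales and the 14-point maximum come from the text.

```python
# Scale (min, max) per scored element; keys are hypothetical labels for the
# two operator-experience elements, software experience, washer availability,
# and the two equipment-calibration-recency elements described in the text.
SCALES = {
    "immunoassay_experience": (1, 3),
    "multiplex_experience":   (1, 3),
    "qview_software":         (0, 1),
    "plate_washer":           (0, 1),
    "pipette_calibration":    (1, 3),
    "shaker_calibration":     (1, 3),
}

def summary_score(responses):
    """Clamp each response to its scale and sum into the summary score
    (maximum 14 = experienced user, all equipment recently calibrated)."""
    total = 0
    for key, (lo, hi) in SCALES.items():
        total += min(max(responses.get(key, lo), lo), hi)
    return total
```

A fully experienced, fully equipped lab scores 3+3+1+1+3+3 = 14, matching the maximum reported in the study.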

Inter-laboratory statistical methods

Values below the lower limit of quantification (LLOQ) for each analyte were excluded from analyses. Results of the quality control samples run on every plate were evaluated to determine whether the plates would meet acceptance criteria that, for the purposes of this study, were intentionally less stringent than would generally be permitted: at least one control result should have any 6 of the 7 analyte results falling within a 95% confidence interval calculated from all plates in the study. Because the intent of this exercise was to evaluate reproducibility, all plates were included in nearly all subsequent analyses. The effect of excluding data from any plates meeting this rejection criterion was considered separately. Inter-assay CVs were calculated to evaluate the performance between the 7 labs, and intra-assay CVs were calculated to evaluate the performance within each of the 7 labs. Intra-assay CVs for duplicate wells of the test samples were averaged for each analyte on each plate, and then plate averages were aggregated across analytes to summarize intra-assay CV averages by lab and by operator. Inter-assay CVs were calculated across all plates (n = 12) for each sample (n = 76); inter-assay CVs were then averaged to summarize the inter-assay CV for each analyte. Agreement between results across laboratories was assessed using Lin’s concordance correlation coefficient (CCC) [31]. Results from assays conducted in the PATH and UW labs by the three operators with the most experience using the 7-plex were averaged to create a comparison set that was compared to each of the nine remaining assay batches from five labs. Lin’s CCC was calculated using STATA version 15.1 (StataCorp, College Station, TX USA).
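Lin's CCC penalizes both poor correlation and systematic bias between two sets of results, which is why it suits cross-laboratory agreement better than Pearson's r. A minimal implementation of the standard formula (using population 1/n moments, as in Lin's original definition):

```python
from statistics import fmean

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired results.
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    with population (1/n) variances and covariance."""
    n = len(x)
    mx, my = fmean(x), fmean(y)
    sx2 = sum((v - mx) ** 2 for v in x) / n
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

Identical result sets give a CCC of exactly 1; a constant offset between laboratories lowers the CCC even though the Pearson correlation stays at 1, e.g. `lins_ccc([1, 2, 3], [2, 3, 4])` returns 4/7.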

Results

All test data can be publicly accessed at Dataverse (https://dataverse.harvard.edu/dataverse/micronutrient_immunoarray). The data derived from 20 independent replicate experiments run on ten 7-plex assay plates in the PATH lab were used to evaluate the precision (intra- and inter-assay, n = 13 samples) for each assay (see Fig 1). Table 1 provides a summary of results from the validation sample with a value closest to the relevant cutoff concentration for each analyte. The intra-assay CV for each analyte was less than 5%, with the exception of ferritin (5.8%), and all inter-assay CVs were less than 15% (Table 1). These are the accepted maximum CVs for ELISAs and comparable to CVs observed previously in the manufacturer’s evaluation of the 7-plex [23, 32]. S3 Table includes all results, including those outside assay limits of quantification; average intra-assay CV was below 5% for all analytes. One plasma sample gave an intra-assay CV of 15.7% with the ferritin assay; however, the concentration of this sample was near the LLOQ, where the generally accepted threshold is 20%, so this was still considered acceptable [32]. Average inter-assay CV was ≤15%, with the CV for most samples below 10% for each analyte.
Table 1

Analytical validation of 7-plex performance.

|                                                              | AGP (g/L)    | CRP (mg/L)  | Ferritin (μg/L) | HRP2 (μg/L)   | RBP4 (μmol/L) | sTfR (mg/L) | Tg (μg/L)** |
| Calibration range                                            | 0.001–0.37   | 0.028–20.5  | 0.156–114       | 0.001–1.04    | 0.001–1.04    | 0.163–119   | 0.019–13.7  |
| Limits of quantification (mean, n = 20 experiments, 10 plates) | 0.0016–0.354 | 0.0648–20.5 | 0.451–108.3     | 0.0016–0.8865 | 0.005–0.929   | 0.241–118.1 | 0.61–13.7   |
| Optimal cutoff value (1:10 dilution)*                        | 0.067        | 0.33        | 1.68            | 0.092         | 0.12          | 1.17        | 0.72        |
| Mean QC sample concentration                                 | 0.073        | 0.28        | 0.88            | 0.13          | 0.077         | 1.3         | 0.64        |
| Intra-assay %CV                                              | 2.5          | 2.1         | 5.8             | 1.9           | 1.5           | 3.1         | 2.1         |
| Inter-assay %CV                                              | 6.5          | 9.1         | 14.3            | 8.8           | 13.6          | 10.0        | 10.8        |
| Linearity (%, LK)                                            | 98           | 98          | 83              | N/A           | 99            | 96          | 98          |
| Linearity (%, SLK)                                           | 99           | 99          | 57***           | 99            | 99            | 98          | 98          |

Summary coefficients of variation (control result with mean concentration closest to the relevant cutoff value) and linearity for each analyte. See S3 Table for all results, including those outside the assay limits of quantification.

*Cutoff values estimated by ROC analysis using NiMaNu study classification as a gold standard; values are given as 1:10 dilution-adjusted values to show their relationship to the assay calibration and quantification ranges. CVs were calculated using a variance components model to separate within-plate and between-plate contributions to variation.

**Cutoff value estimation confounded by measurement of Tg in DBS in the NiMaNu study which served as the gold-standard for ROC analysis.

***SLK included concentrated Tg from whole blood, which interfered with the ferritin assay.

AGP, α-1-acid glycoprotein; CRP, C-reactive protein; HRP2, histidine rich protein 2; LK, Liquichek; N/A, not available; QC, Quality control; RBP4, retinol binding protein 4; SLK, spiked Liquichek; sTfR, soluble transferrin receptor; Tg, thyroglobulin.

Tests of assay linearity showed no evidence of systematic non-parallelism across dilutions for any analyte. All biomarkers apart from ferritin had a linearity of 96% to 99%. The ferritin assay gave poorer linearity of 57% with the SLK samples; however, the Tg added to the spiked Liquichek was derived and concentrated from whole blood, so spiking also raised the ferritin concentration above the limit of quantification in the least-diluted samples. In the unspiked Liquichek, ferritin linearity improved to 83%, though this was still substantially lower than the other linearity values observed. The pooled data presented in Table 1 and S3 Table demonstrate that different operators and/or plate lots did not affect performance. Overall, the results confirmed the previously reported findings, and the assay was considered suitable for the subsequent inter-lab study [23]. 
Eleven operators in seven labs tested the full set of 76 plasma samples with the 7-plex (Table 2). In most cases, each laboratorian tested the entire panel of 76 plasma samples only once (requiring two assay plates per operator). In one laboratory, a single operator assayed the entire panel twice (for a total of four assay plates). In four laboratories, two different users tested the complete panel. Overall, each specimen was tested in duplicate wells 12 times (i.e. 24 data points). All laboratories completed testing within 3 months of each other, and samples were kept frozen until the day of assay. While many of the partners had very limited experience with the Q-plex platform, their laboratorians had varying levels of experience in performing other immunoassays. All laboratories had the required equipment, including multichannel pipettors and rotating plate shakers. All labs but two had an automated plate washer; the remaining two used the manual plate washing protocol described by Quansys in the kit instructions for use. Each laboratory had a Q-View imager and the analysis software necessary for reading the 7-plex plates and for processing the raw data into concentration values for each analyte. Scores were tallied, with the overall score for each lab/laboratorian shown in Table 2. One lab had the maximum possible score of 14, indicating a highly experienced laboratorian with access to recently calibrated equipment, while the minimum observed score was 6, indicating a laboratorian who had limited experience with ELISAs, was not familiar with the Q-View software, and did not have access to a plate washer.
Table 2

Average intra-assay (within-plate) and inter-assay (between plate) %CVs by laboratory and operator.

Average intra-assay (within-plate) %CVs, all samples

| Laboratory identifier                                  | 1   | 2   | 3   | 4   | 5   | 6   | 7   |
| n valid results (all samples, plates, assays, operators) | 892 | 431 | 932 | 454 | 889 | 982 | 859 |
| Average intra-assay (within-plate) %CV                 | 3.3 | 4.7 | 4.4 | 3.1 | 4.4 | 4.3 | 7.3 |

| Operator ID (up to 2 per lab)                          | 1   | 2   | 3   | 4   | 5   | 6   | 7   | 8   | 9   | 10  | 11  |
| 7-plex assay plates used (n)                           | 2   | 2   | 2   | 2   | 2   | 2   | 4   | 2   | 2   | 2   | 2   |
| Experience and equipment score (0 to 14 scale; ideal = 14) | 13  | 12  | 13  | 11  | 13  | 14  | 13  | 7   | 6   | 12  | 8   |
| n valid results (all assays, plates, samples per operator) | 455 | 437 | 431 | 473 | 459 | 454 | 889 | 503 | 479 | 467 | 392 |
| Average intra-assay (within-plate) %CV per operator    | 3.2 | 3.3 | 4.7 | 4.5 | 4.4 | 3.1 | 4.4 | 4.3 | 4.3 | 5.7 | 8.8 |

Average inter-assay (between-plate) %CVs, quality control samples

QC sample G
| Analyte  | Mean conc.    | Valid results | Lab 1 | Lab 2   | Lab 3 | Lab 4   | Lab 5 | Lab 6 | Lab 7 |
| AGP      | 0.876 g/L     | 24            | 6.7   | 2.7     | 9.1   | 1.7     | 10.6  | 27.3  | 38.0  |
| CRP      | 10.94 mg/L    | 24            | 15.0  | 1.1     | 18.8  | 0.7     | 8.4   | 14.7  | 37.8  |
| Ferritin | 286.62 μg/L   | 24            | 8.9   | 4.2     | 5.5   | 13.9    | 5.9   | 6.8   | 8.1   |
| HRP2     | negative      | -             | -     | -       | -     | -       | -     | -     | -     |
| RBP4     | 1.875 μmol/L  | 21            | 2.9   | 14.1    | 40.3  | 5.9     | 13.4  | 13.2  | 8.4   |
| sTfR     | 11.215 mg/L   | 22            | 11.8  | no data | 8.1   | 5.0     | 16.4  | 16.9  | 45.4  |
| Tg       | 6.10 μg/L     | 24            | 5.3   | 5.5     | 11.1  | 3.7     | 12.0  | 14.8  | 26.2  |
| Inter-assay %CV average  |               | 8.4   | 5.5     | 18.1  | 5.1     | 11.1  | 16.6  | 27.3  |

QC sample H
| Analyte  | Mean conc.    | Valid results | Lab 1 | Lab 2   | Lab 3 | Lab 4   | Lab 5 | Lab 6 | Lab 7 |
| AGP      | 0.325 g/L     | 24            | 8.5   | 1.3     | 7.2   | 0.0     | 11.7  | 6.8   | 24.5  |
| CRP      | 1.105 mg/L    | 24            | 11.7  | 8.2     | 15.1  | 1.9     | 15.2  | 20.7  | 6.8   |
| Ferritin | 17.767 μg/L   | 23            | 2.5   | 0.3     | 6.9   | no data | 10.5  | 12.9  | 47.6  |
| HRP2     | 0.216 μg/L    | 22            | 1.3   | 0.0     | 10.9  | no data | 11.2  | 2.0   | 17.0  |
| RBP4     | 0.435 μmol/L  | 24            | 18.1  | 17.0    | 4.0   | 2.1     | 16.4  | 4.0   | 19.3  |
| sTfR     | 4.327 mg/L    | 17            | 2.4   | no data | 15.0  | 21.9    | 14.7  | 12.0  | 60.2  |
| Tg       | 0.96 μg/L     | 21            | 11.2  | 7.6     | 11.7  | no data | 10.0  | 19.4  | 26.8  |
| Inter-assay %CV average  |               | 8.0   | 5.7     | 10.1  | 6.5     | 12.8  | 11.1  | 28.9  |

| Mean inter-assay %CV (QC samples G and H) | 8.2 | 5.6 | 14.1 | 5.8 | 12.0 | 13.9 | 28.1 |

Intra-assay %CVs are calculated by averaging the well-to-well %CVs from all valid results, all plates run by each laboratory/operator. Inter-assay %CVs are mean values derived from pooling quality control outputs from samples G and H run in duplicate on every plate. The %CVs were calculated using all results that were within the assay limits of quantification. Numbers of possible valid results vary because laboratories ran different total numbers of plates. N is the number of valid results.

G and H quality control samples were included in duplicate on every plate (Fig 1), with results summarized in Table 2. On one plate, multiple analyte results for both controls fell outside the 95% confidence interval derived from all plates included in this study. Ordinarily this would constitute a QC failure and the test results would be rejected; however, because this experiment was intended to assess variability across users and laboratories, all plates were retained in the subsequent analyses summarizing intra- and inter-assay CVs, regardless of QC status. Summaries of the G and H quality control measures, intra-assay CVs for all samples, and inter-assay CVs for the two quality control samples are shown by laboratory and operator in Table 2. Values outside the limits of quantification were excluded. The maximum possible number of valid results for each lab and operator varied with the number of plates tested. Numbers of valid results also differed because some samples had concentrations near the limits of the quantification range, calculated from the means and standard deviations of replicate standard wells on each plate. Some specimens were within range on some plates and out of range on others; the number of out-of-range values is therefore reflected in the valid-result counts in Table 2.
For intra-assay CV, all well-to-well CVs were averaged regardless of analyte; the averages ranged from 3.1% to 8.8%, with the two results above the 5% threshold both coming from one laboratory. Inter-assay CVs calculated using the two quality control samples (G and H) also differed by laboratory. Table 3 shows inter-assay CV by analyte for the panel of 76 heparin plasma samples run across all seven laboratories. HRP2 had the highest inter-assay CV (31.1%), but it is the only analyte not intended to be interpreted quantitatively (i.e., it is a qualitative assay), and its results fell at the lower end of the calibration range, where the greatest variance is observed. Fig 2 shows the full distribution of results for each sample for every analyte, with results plotted by rank order of the mean concentration calculated from the plates tested in the PATH and UW laboratories (n = 3 plates per sample). The plots show the greatest scatter around these means at the lowest and highest concentrations. In general, inter-assay CVs were higher for samples at the extremes of the analyte calibration ranges (S1 Fig).
Table 3

Inter-assay CV calculated using 76 heparin plasma samples tested in 7 labs.

| Analyte | All plates: average inter-assay %CV | All plates: valid CVs (n) | All plates: valid results (n) | Excluding 2 QC-fail plates: average inter-assay %CV | Excluding 2 QC-fail plates: valid CVs (n) | Excluding 2 QC-fail plates: valid results (n) |
|---|---|---|---|---|---|---|
| AGP | 15.6 | 76 | 898 | 15.9 | 76 | 796 |
| CRP | 21.2 | 75 | 823 | 13.4 | 76 | 836 |
| Ferritin | 15.8 | 76 | 854 | 15.7 | 76 | 829 |
| HRP2 | 31.1 | 41 | 229 | 23.1 | 76 | 794 |
| RBP4 | 15.1 | 76 | 857 | 18.0 | 74 | 761 |
| sTfR | 25.4 | 76 | 869 | 27.9 | 46 | 219 |
| Tg | 15.8 | 76 | 901 | 14.3 | 76 | 813 |

Inter-assay CV calculated across all plates (n = 12) for each sample (n = 76); inter-assay CVs were then averaged for each analyte. CV calculations are shown with and without two plates with quality control specimen values outside the 95% confidence intervals (calculated from all plates included in this study) for multiple analytes.

Fig 2

Repeated measures across laboratories for a panel of 76 plasma specimens.

Open circles, measured mean concentration (or pixel intensity for HRP2) of duplicate wells from a single plate. Red closed circles, mean concentration of duplicate wells run on each of 3 plates (2 PATH, 1 UW) for every sample. Results are plotted on log10 y-axes to reveal proportional differences at lower concentrations and are sorted by rank order of the mean concentration from the 3 plates (2 PATH, 1 UW). Horizontal line, optimal 7-plex cutoff value; for HRP2, the line represents the approximate pixel intensity corresponding to the cutoff concentration. Cutoff values were determined by ROC curve analysis using results from the NiMaNu study as a gold standard and applying the cutoff thresholds used in that study [23]. Black hash marks on the y-axes indicate lot-specific upper and lower limits of quantification (see S4 Table); values have been adjusted to account for the 1:10 sample dilution used for all samples. Out-of-range results are plotted at the limit values noted in S4 Table. AGP, α-1-acid glycoprotein; CRP, C-reactive protein; HRP2, histidine rich protein 2; RBP4, retinol binding protein 4; sTfR, soluble transferrin receptor; Tg, thyroglobulin.
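A cutoff chosen by ROC analysis against a gold standard, as described for Fig 2, can be sketched as a Youden-index search. This is an illustrative stand-in, not the authors' analysis code: it assumes deficiency is flagged by values below the cutoff (as for RBP4), and the data are hypothetical.

```python
def youden_cutoff(values, deficient):
    """Scan observed values as candidate cutoffs (value < cutoff => test-positive)
    and return the one maximizing Youden's J = sensitivity + specificity - 1."""
    pos = [v for v, d in zip(values, deficient) if d]      # deficient by gold standard
    neg = [v for v, d in zip(values, deficient) if not d]  # replete by gold standard
    best_cut, best_j = None, float("-inf")
    for cut in sorted(set(values)):
        sensitivity = sum(v < cut for v in pos) / len(pos)
        specificity = sum(v >= cut for v in neg) / len(neg)
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut
```

For analytes where deficiency corresponds to high values, the comparison direction would simply be reversed.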

Mean and standard deviation (SD) results for each assay batch for the panel of 76 plasma samples, and a measure of agreement between the results across labs assessed using Lin's Concordance Correlation Coefficient, are shown in Table 4.
Rather than comparing each batch in a pairwise test against all other result sets for the sample panel, a predicate result set was derived by averaging the batches run by the three operators at the PATH and UW labs, who had the most experience with the 7-plex assay. Lin's rho was generally high, most often r ≥ 0.9, and averaged r > 0.8 for all analytes. The ferritin assay had the highest concordance of all the analytes (mean r = 0.958), while concordance was lowest for CRP (mean r = 0.820). Concordance was notably lower across several analytes for the two batches from lab 7; removing those batches increased the average Lin's rho for all analytes except ferritin. This result is consistent with the higher intra- and inter-assay CVs from that lab (Table 2), possibly reflecting the impact of imprecision on the concordance estimates.
Table 4

Mean, standard deviation, and Lin’s concordance correlation coefficient for 76 heparin plasma samples tested in 7 labs.

| | Batch 1 | Batch 2 | Batch 3 | Mean, batches 1–3 | Batch 4 | Batch 5 | Batch 6 | Batch 7 | Batch 8 | Batch 9 | Batch 10 | Batch 11 | Batch 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Lab ID | 1 | 1 | 2 | 1, 2 | 3 | 3 | 4 | 5 | 5 | 6 | 6 | 7 | 7 |
| Operator ID | 1 | 2 | 3 | – | 4 | 5 | 6 | 7 | 7 | 8 | 9 | 10 | 11 |
| AGP: n | 76 | 76 | 76 | 76 | 76 | 76 | 76 | 76 | 76 | 76 | 76 | 76 | 62 |
| AGP: mean (SD), g/L | 0.690 (0.223) | 0.732 (0.267) | 0.646 (0.223) | 0.689 (0.234) | 0.633 (0.237) | 0.667 (0.254) | 0.708 (0.249) | 0.683 (0.232) | 0.594 (0.215) | 0.745 (0.434) | 0.563 (0.212) | 0.607 (0.202) | 0.425 (0.192) |
| AGP: pairwise comparisons to mean 1, 2, 3 (n) | – | – | – | – | 76 | 76 | 76 | 76 | 76 | 76 | 76 | 76 | 62 |
| AGP: r_c | – | – | – | – | 0.914 | 0.931 | 0.952 | 0.966 | 0.794 | 0.724 | 0.777 | 0.876 | 0.444 |
| CRP: n | 69 | 69 | 66 | 73 | 72 | 71 | 71 | 69 | 68 | 71 | 71 | 65 | 61 |
| CRP: mean (SD), mg/L | 8.56 (12.55) | 10.15 (14.54) | 8.00 (9.72) | 9.99 (14.64) | 9.81 (14.64) | 9.67 (12.65) | 11.72 (16.29) | 10.69 (16.50) | 10.53 (14.60) | 11.27 (15.27) | 12.31 (15.71) | 10.80 (14.19) | 5.76 (8.01) |
| CRP: pairwise comparisons to mean 1, 2, 3 (n) | – | – | – | – | 71 | 70 | 71 | 68 | 67 | 71 | 69 | 65 | 60 |
| CRP: r_c | – | – | – | – | 0.897 | 0.943 | 0.977 | 0.887 | 0.959 | 0.914 | 0.962 | 0.986 | 0.463 |
| Ferritin: n | 72 | 65 | 72 | 75 | 76 | 76 | 67 | 71 | 71 | 75 | 73 | 70 | 67 |
| Ferritin: mean (SD), μg/L | 142.4 (143.2) | 116.4 (114.6) | 102.4 (116.7) | 131.4 (146.5) | 123.3 (122) | 127.6 (131.3) | 144.0 (158.4) | 129.3 (124.7) | 107.0 (106.1) | 135.3 (168.1) | 151.2 (201.3) | 92.7 (98) | 95.8 (105.2) |
| Ferritin: pairwise comparisons to mean 1, 2, 3 (n) | – | – | – | – | 75 | 75 | 67 | 71 | 70 | 74 | 73 | 69 | 66 |
| Ferritin: r_c | – | – | – | – | 0.935 | 0.973 | 0.942 | 0.949 | 0.995 | 0.972 | 0.902 | 0.981 | 0.976 |
| HRP2: n | 9 | 9 | 11 | 11 | 21 | 10 | 11 | 8 | 21 | 54 | 35 | 29 | 11 |
| HRP2: mean (SD), μg/L | 2.93 (2.46) | 2.50 (2.20) | 2.30 (2.47) | 2.25 (2.35) | 3.14 (2.52) | 1.8 (1.91) | 1.87 (2.24) | 1.89 (1.67) | 3.30 (2.88) | 0.34 (1.03) | 0.65 (1.48) | 0.64 (1.46) | 1.90 (2.02) |
| HRP2: pairwise comparisons to mean 1, 2, 3 (n) | – | – | – | – | 11 | 10 | 10 | 8 | 9 | 10 | 10 | 10 | 10 |
| HRP2: r_c | – | – | – | – | 0.949 | 0.965 | 0.832 | 0.928 | 0.952 | 0.934 | 0.888 | 0.921 | 0.913 |
| RBP4: n | 76 | 67 | 76 | 76 | 76 | 76 | 76 | 64 | 76 | 76 | 74 | 76 | 44 |
| RBP4: mean (SD), μmol/L | 1.08 (0.37) | 0.89 (0.31) | 1.01 (0.43) | 1.02 (0.39) | 0.81 (0.29) | 0.85 (0.37) | 0.95 (0.39) | 0.92 (0.38) | 0.90 (0.36) | 0.96 (0.57) | 0.90 (0.4) | 0.98 (0.36) | 0.79 (0.36) |
| RBP4: pairwise comparisons to mean 1, 2, 3 (n) | – | – | – | – | 76 | 76 | 76 | 64 | 76 | 76 | 74 | 76 | 44 |
| RBP4: r_c | – | – | – | – | 0.786 | 0.853 | 0.88 | 0.854 | 0.891 | 0.815 | 0.872 | 0.937 | 0.746 |
| sTfR: n | 76 | 76 | 52 | 76 | 76 | 76 | 76 | 58 | 76 | 76 | 76 | 76 | 75 |
| sTfR: mean (SD), mg/L | 22.2 (15.1) | 19.3 (14.7) | 20.6 (18.4) | 19.9 (15.1) | 22.2 (19.4) | 24.4 (21.5) | 17.1 (14.6) | 26.6 (24.6) | 22.4 (22.3) | 15.8 (16) | 19.7 (21.3) | 27.9 (26.7) | 24.5 (27.2) |
| sTfR: pairwise comparisons to mean 1, 2, 3 (n) | – | – | – | – | 76 | 76 | 76 | 58 | 76 | 76 | 76 | 76 | 75 |
| sTfR: r_c | – | – | – | – | 0.942 | 0.892 | 0.971 | 0.858 | 0.899 | 0.946 | 0.932 | 0.77 | 0.728 |
| Tg: n | 75 | 75 | 76 | 76 | 76 | 75 | 76 | 76 | 76 | 75 | 74 | 75 | 72 |
| Tg: mean (SD), μg/L | 20.7 (16.4) | 21 (17.4) | 19.6 (20) | 21.1 (19) | 20.4 (16.9) | 22.4 (21.3) | 21.8 (19.1) | 22.4 (22.3) | 18.4 (15.4) | 19.3 (18.5) | 20.3 (20.7) | 21.8 (20.1) | 21.6 (22.2) |
| Tg: pairwise comparisons to mean 1, 2, 3 (n) | – | – | – | – | 76 | 75 | 76 | 76 | 76 | 75 | 74 | 75 | 72 |
| Tg: r_c | – | – | – | – | 0.979 | 0.932 | 0.948 | 0.951 | 0.922 | 0.946 | 0.87 | 0.928 | 0.955 |

Mean and standard deviation for all plasma samples (n = 76) by lab and operator. Results outside the assay limits of detection are excluded. Lin’s Concordance Correlation Coefficient (r) measures agreement between a predicate result set (the mean of results for each sample across three assay batches from the most experienced users) and results from each of the remaining nine assay batches.
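Lin's coefficient used in Table 4 differs from Pearson's r in that it penalizes both location and scale shifts between two result sets, not just departures from a straight line. A minimal implementation (a sketch following Lin's 1989 definition, not the study's analysis code) is:

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired result sets.
    Equals 1 only for perfect agreement (y == x), unlike Pearson's r,
    which is 1 for any exact linear relationship."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n          # population variances
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sx + sy + (mx - my) ** 2)
```

For example, a batch that reads every sample a fixed amount high can still have Pearson's r = 1, but its concordance with the predicate set drops below 1, which is why the coefficient is suited to the agreement question asked here.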


Discussion and conclusions

This study evaluates the performance of a multiplex micronutrient surveillance tool that quantifies biomarkers of vitamin A, iron, and iodine deficiency, inflammation or infection, and malaria through validation experiments estimating precision and linearity within a single lab, along with an assessment of inter-laboratory reproducibility. Our within-lab validation experiments repeat and expand upon previously reported assay performance evaluations [23]. The validations described here were conducted independently in PATH's laboratory and represent an expert user's experience of assay performance across 20 repeated experiments. Intra-assay CVs were good in this validation, with only one biomarker, ferritin, slightly out of range. For most analytes and most samples, the inter-assay CV was under 15% (Table 1 and S3 Table), an accepted maximum inter-assay CV for ELISAs and comparable to CVs observed previously for the 7-plex assay. Because the specimens we used for replication (e.g., commercially available control specimens with and without spiking with sTfR and Tg) included values at the low and high ends of the assay range, where estimated values can be less precise, higher CVs were to be expected. It was not possible to produce more concentrated versions of these biomarkers without significantly diluting the other analytes. While the plasma specimens for the validation study were selected from our in-house panel based upon the highest sTfR measurements, concentrations of sTfR and Tg in each sample were still lower than their respective concentrations in the SLK. This indicates that although the range used in the validation studies was lower than the 7-plex assay's range of quantification, it still reflected the range found in most clinical samples. Tests of assay linearity showed no evidence of systematic non-parallelism across dilutions for any analyte.
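Linearity of dilution of the kind tested here is commonly summarized as percent recovery at each dilution, i.e., the observed concentration against the concentration expected from the neat result and the dilution factor. A hedged sketch, with hypothetical numbers rather than the study's data:

```python
def dilution_recovery(neat_conc, measured, dilution_factor):
    """Percent recovery for a diluted sample: observed concentration divided
    by the concentration expected from the neat result and dilution factor.
    Recoveries near 100% across a dilution series indicate linearity."""
    expected = neat_conc / dilution_factor
    return 100.0 * measured / expected

# Hypothetical CRP series: neat result 40 mg/L, re-measured at 1:2, 1:4, 1:8
series = {2: 19.6, 4: 10.3, 8: 4.9}
recoveries = {f: round(dilution_recovery(40.0, m, f), 1) for f, m in series.items()}
```

A systematic drift in recovery with increasing dilution (rather than random scatter around 100%) would be the signature of the non-parallelism that these tests ruled out.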
Inter-assay CVs for biomarker measurements in the panel of 76 plasma specimens measured by 11 operators in seven laboratories averaged 20.0%, with imprecision highest in the semi-quantitative HRP2 assay (31.1%) and above the generally accepted range of error for sTfR (25.4%) and CRP (21.2%). Lin's CCC was generally high (r ≥ 0.8 for 54 of 63 comparisons) but showed the same pattern observed in the CVs, with two low concordance results (≤0.5, for AGP and CRP) from one lab. Some of the imprecision is attributable to including specimens at the very low or very high ends of the working assay ranges. For CRP in particular, most of the imprecision is due to variability at very low concentrations, all below levels that indicate infection or inflammation (S1 Fig). Some individual laboratories showed variable results, with one laboratory generally having higher inter-assay CVs for quality control samples (averaging 28.1%) than the six other labs (range 5.6% to 14.1%). One significant difference between this laboratory and most others was the absence of an automated microtiter plate washer; it is possible that manual washing compromises assay precision, but further testing is needed to confirm that speculation. When these two plates and one other plate with quality control results outside a 95% confidence interval are excluded, the average inter-assay CV decreases from 20.0% to 17.2%, and to 16.2% when the semi-quantitative HRP2 assay is also excluded. Because the 7-plex assay was designed for use as a surveillance tool in low- and middle-income countries (LMICs) and academic research facilities, the needs and challenges for assay performance differ from those of clinical laboratories. Assay methods for these purposes have in the past been selected in an ad hoc fashion, as practical and financial constraints differ from those encountered in clinical laboratories.
Laboratories in these settings do not routinely test a steady number of samples as clinical laboratories do; instead, research laboratories are engaged sporadically to assay large numbers of specimens over a short time frame. Maintaining consistency under that sporadic workflow is a particular challenge, even within laboratories. For micronutrient status surveillance, monitoring across time and space is necessary to assess progress, but this presents laboratory challenges. A single method that measures key indicators of nutritional status as one tool, rather than a collection of assays from various sources assembled for individual surveys, offers an opportunity to greatly improve comparability across data sets. These benefits are realized only if the assay results are reproducible across laboratories. The results here suggest that the 7-plex can provide generally reproducible results across laboratories, with imprecision only slightly above the range acceptable for results generated within a single laboratory. They also highlight the need to include internal quality control specimens on every plate and, where precise estimates are needed at the physiological extremes, to repeat testing at adjusted dilutions for specimens with concentrations at the margins of the assay range. Work is ongoing to improve the performance of the 7-plex assay, including improvements to the ferritin, RBP4, and sTfR assays. A challenge inherent to multiplex assay methods is simultaneously optimizing performance for both high- and low-abundance proteins; forthcoming improvements to assay sensitivity for ferritin will allow a change in the recommended sample dilution from 1:10 to 1:40, which is expected to improve precision for RBP4 values by bringing them closer to the middle of the reportable assay range, a previous criticism of the 7-plex [27, 28]. Similarly, changes to the sTfR assay to improve precision are underway.
A new version of the 7-plex assay with these improvements is expected within the coming year. The project team also recognizes the need for evaluations comparable to those described here to be conducted in field studies in low- and middle-income countries (LMICs). In that setting, working protocols can be developed and validated with country partners to properly implement the 7-plex in support of nutrition research and interventions. Our multiplex tool has the potential to reduce the labor, supplies, and sample volumes traditionally required for MN screening. By demonstrating comparable performance across multiple laboratories and users, we believe this assay can be a key tool for identifying populations with key micronutrient deficiencies, as well as a monitoring tool for any subsequent interventions, generating high-quality, reproducible data. The Quansys system is a low-cost technology that could be easily implemented in LMIC laboratories where ELISA assays are routinely used, and it can be applied to multiple other biomarkers and sample types beyond the 7-plex described in this work [33-36].

Assay precision for two specimens across 19 replicate aliquots.

Within- and between-tube variation in measures from 19 aliquots of each of two samples selected at random from the aliquots prepared for distribution to partner labs. CVs were calculated using the Rodbard variance components model. The CV for HRP2 was not calculated because results were above the upper limit of detection. CV, coefficient of variation; AGP, α-1-acid glycoprotein; CRP, C-reactive protein; HRP2, histidine rich protein 2; N/A, not available; RBP4, retinol binding protein 4; sTfR, soluble transferrin receptor; Tg, thyroglobulin. (DOCX)
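The within- and between-tube decomposition described here can be approximated with a balanced one-way random-effects ANOVA (method of moments). This is a generic sketch, not necessarily the exact Rodbard variance components formulation the authors used, and the numbers in the test data are hypothetical.

```python
def tube_variance_cvs(tubes):
    """Within- and between-tube %CVs from replicate measures per tube.
    tubes: list of equal-length lists (e.g., duplicate wells per aliquot)."""
    k, n = len(tubes), len(tubes[0])                 # balanced design assumed
    grand = sum(sum(t) for t in tubes) / (k * n)
    means = [sum(t) / n for t in tubes]
    # Mean squares from a one-way ANOVA decomposition
    ms_within = sum(sum((v - m) ** 2 for v in t)
                    for t, m in zip(tubes, means)) / (k * (n - 1))
    ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    var_between = max((ms_between - ms_within) / n, 0.0)   # truncate at zero
    cv_within = 100.0 * ms_within ** 0.5 / grand
    cv_between = 100.0 * var_between ** 0.5 / grand
    return cv_within, cv_between
```

With 19 aliquots measured in duplicate, `tubes` would hold 19 two-element lists; the between-tube %CV then reflects aliquot-to-aliquot handling variation over and above well-to-well noise.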

The established values for the G and H controls developed for the 7-plex array.

The expected value for each biomarker is shown in addition to the acceptable upper and lower limits. AGP, α-1-acid glycoprotein; CRP, C-reactive protein; HRP2, histidine rich protein 2; N/A, not available; RBP4, retinol binding protein 4; sTfR, soluble transferrin receptor; Tg, thyroglobulin. (DOCX)

Assay precision and linearity for the 7-plex array.

AGP, α-1-acid glycoprotein; CRP, C-reactive protein; HRP2, histidine rich protein 2; N/A, not available; RBP4, retinol binding protein 4; sTfR, soluble transferrin receptor; Tg, thyroglobulin; LK, Liquichek LK; SLK, spiked Liquichek. After each analyte in parentheses are the LLOQ and ULOQ, respectively. (DOCX)

The upper- and lower limits of quantification for each biomarker using the 7-plex assay.

AGP, α-1-acid glycoprotein; CRP, C-reactive protein; HRP2, histidine rich protein 2; RBP4, retinol binding protein 4; sTfR, soluble transferrin receptor; Tg, thyroglobulin. (DOCX)

Average inter-assay CV plotted against average concentration (n = 76 heparinized plasma samples tested 12 times per sample).

Y-axes set to 100% for all analytes. One CRP result (31 mg/L, CV = 123%) and one HRP2 result (0.2 mg/mL, CV = 177%) are not shown. AGP, α-1-acid glycoprotein; CRP, C-reactive protein; HRP2, histidine rich protein 2; RBP4, retinol binding protein 4; sTfR, soluble transferrin receptor; Tg, thyroglobulin. (TIF)

Decision letter, 5 May 2021. PONE-D-21-06971: A multicenter analytical performance evaluation of a multiplexed immunoarray for the simultaneous measurement of biomarkers of micronutrient deficiency, inflammation and malarial antigenemia. PLOS ONE. Dear Dr. Boyle, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Jun 19 2021, including a rebuttal letter that responds to each point raised by the academic editor and reviewer(s), a marked-up copy of the manuscript that highlights the changes made, and an unmarked revised manuscript. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.
Kind regards, Guido Sebastiani, Academic Editor, PLOS ONE.

Journal Requirements: Please review your reference list to ensure that it is complete and correct, noting any changes in the rebuttal letter that accompanies your revised manuscript. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. 2. Thank you for stating the following in the Competing Interests section: "The authors have declared that no competing interests exist."
We note that one or more of the authors are employed by a commercial company: Craft Nutrition Consulting; PATH. 2.1. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the role of the funders in the study design, data collection and analysis, decision to publish, and preparation of the manuscript, and ensure the Author Contributions accurately reflect the roles of these authors. 2.2. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations (employment, consultancy, patents, products in development, or marketed products), and confirm that this affiliation does not alter your adherence to PLOS ONE policies on sharing data and materials; if there are restrictions on sharing of data and/or materials, please state them. Please include both updated statements in your cover letter; we will change the online submission form on your behalf. 3. We note that two NCT registries are linked in the ethics statement in the online submission form, but only one in the manuscript Methods. Please review and ensure that this is stated correctly in both the online submission form and the manuscript Methods, and that ethical information for all relevant studies is included. 4. During our internal checks, the in-house editorial staff noted that you conducted research or obtained samples in another country. Please check the relevant national regulations and laws applying to foreign researchers and state whether you obtained the required permits and approvals; please address this in your ethics statement in both the manuscript and the submission information.
In addition, please ensure that you have suitably acknowledged the contributions of any local collaborators involved in this work in your authorship list and/or Acknowledgements; authorship criteria are based on the ICMJE Uniform Requirements (https://journals.plos.org/plosone/s/authorship).

Reviewers' comments, Reviewer #1: 1. Is the manuscript technically sound, and do the data support the conclusions? Yes. 2. Has the statistical analysis been performed appropriately and rigorously? Yes. 3. Have the authors made all data underlying the findings in their manuscript fully available? Yes. 4. Is the manuscript presented in an intelligible fashion and written in standard English?
Yes.

5. Review Comments to the Author. Reviewer #1: The work by Brindle et al. is quite well written and extensively describes a 7-plex ELISA-based assay for the simultaneous detection of several circulating biomarkers of malaria, inflammation and malnutrition. However, some aspects of the paper could be clarified or improved:
1. In the abstract, the authors say that this immunoassay can be used on both plasma and serum; however, they subsequently describe only the plasma analyses performed. Did you also test serum? If yes, please show the results in a supplementary file to demonstrate that both biofluids can be used interchangeably, or explain why plasma was chosen (is it more reproducible? Is some molecule not detectable, or highly confounding, in serum measurements?). If serum was not evaluated, please remove it.
2. Since the workflow followed is quite long, I strongly suggest the authors describe it in a graphical workflow, to allow readers to catch the key points of the primary analysis and validation.
3. Neither patients nor controls have been described from a clinical point of view: are they comparable? Please provide a table containing at least the basic clinical characteristics of the analyzed subjects (could variability in molecule detection also derive from heterogeneity of the population?).
4. In the Materials and Methods section, the authors start with "validation". However, it is impossible to start with a validation; I suggest substituting this term with "Test" or "Setup".
5. The authors did not explain the choice of a 7-plex considering only plasma evaluation. Could analysis of other molecules (in other biofluids), such as urea nitrogen (evaluation of protein intake) or erythrocyte EPA (eicosapentaenoic acid, for omega-3 intake), render this assay more reliable and reproducible?
6. What is the clinical relevance of this study? It is very important to demonstrate the reproducibility of this test; however, to make the study more relevant from a clinical point of view, I strongly suggest the authors add, in the Discussion, some sentences about the clinical applicability and importance of this test. For example, this test will potentially be used mostly in countries with a high rate of malnutrition and a high prevalence of malaria, which are not very rich countries: what is the cost of this assay? Is it applicable in hard-to-access villages? What is the time to referral? Please describe advantages and disadvantages.
7. Please provide the spelling of some abbreviations and revise some English expressions.

6. Do you want your identity to be public for this peer review? Reviewer #1: No.
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Submitted filename: Review PLOSOne.docx

30 Sep 2021

June 11, 2021

We would like to note our sincere thanks to the reviewers for providing reasoned comments to improve the quality of our work. We greatly appreciate this opportunity to reply to the comments raised, and it is our intent to address all of them clearly and in sufficient detail. Our responses to each comment follow. We have included the line numbering for where we made revisions to the original draft.

This document concerns Manuscript Number PONE-D-21-06971, "A multicenter analytical performance evaluation of a multiplexed immunoarray for the simultaneous measurement of biomarkers of micronutrient deficiency, inflammation and malarial antigenemia".

The work by Brindle et al. is quite well written and extensively describes a 7-plex ELISA-based assay for the simultaneous detection of several circulating biomarkers for malaria, inflammation and malnutrition. However, some aspects of the paper could be clarified or improved:

1. In the abstract, the authors say that this immunoassay can be used on both plasma and serum; however, the authors subsequently describe only the plasma analyses they performed. Did you also test it in serum?
If yes, please show the results in a supplementary material file to demonstrate that either biofluid can be used interchangeably, or explain why you chose plasma (is it more reproducible? Is there some molecule that is not detectable, or highly confounding, in serum measurements?). If serum has not been evaluated, please remove it.

Response to 1: Yes, both serum and plasma were used in the study. For serum, we used a serum-based matrix, Liquichek Immunology Control (Bio-Rad), and some plasma-based clinical samples during the validation experiments, while plasma (collected from the NiMaNu study) was used during the inter-laboratory study. This was part of a panel used in a previous paper demonstrating the assay performance (Brindle, E. et al., 2017). In our original study (Brindle, E. et al., 2014) we demonstrated comparable results with paired serum and heparinized plasma samples, confirming that either can be used. We have added text at lines 176-178 to reflect that either sample type can be used.

2. Since the workflow followed is quite long, I strongly suggest that the authors describe it in a graphical workflow, to allow readers to catch the key points of the primary analysis and validation.

Response to 2: We have added Figure 1 (see lines 154-160), a flow chart depicting the three primary workstreams followed to prepare and complete the interlaboratory assessment of the 7-plex assay.

3. Neither patients nor controls have been described from a clinical point of view: are they comparable? Please provide a table containing at least the basic clinical characteristics of the analyzed subjects (could variability in the detection of molecules also derive from the heterogeneity of the population?).

Response to 3: This is a great point, and it highlights the importance of carrying out any validation with clinical samples, as well as repeat testing across multiple laboratories: while controls can be more homogeneous and show high reproducibility, the heterogeneity of clinical samples could affect results.
This was something we wanted to learn more about in our evaluations, both by running a sample multiple times on one plate and by having the sample tested across different labs by different users, and seeing how the assay performed. However, we do not have access to any of the clinical characteristics of the participants in the study and so cannot include them, other than to state that there is inherent variability when measuring a population.

4. In the Materials and Methods section the authors start with "validation". However, it is impossible to start with a validation. I suggest modifying this term and substituting it with "Test" or "Setup".

Response to 4: The reviewer is fully correct. We have moved the description of the assay setup to the beginning of the Materials and Methods section (lines 97-122), with the subsequent sections covering the validation and inter-laboratory assessment.

5. The authors did not explain why a 7-plex considering only plasma evaluation. Could the analysis of other molecules (in other biofluids), such as urea nitrogen (evaluation of protein intake) or erythrocyte EPA (eicosapentaenoic acid, for omega-3 intake), render this assay more reliable and reproducible?

Response to 5: Our assay is intended as a tool for surveying the most common types of micronutrient deficiency, and the literature shows that plasma/serum is the best sample type for indicating this; we have also demonstrated performance with eluates from dried blood spots. We have no doubt that other biofluids could be used but have not investigated any of them. The platform uses a multiplexed ELISA approach as an alternative to running multiple monoplex assays, with detection via antigens and/or antibodies, so in principle there is potential for adding an assay for eicosapentaenoic acid. We have demonstrated the expansion of the 7-plex to an 11-plex to support the detection of environmental enteric dysfunction, so it is feasible (Arndt M et al., 2020, PMID: 32997666).
In principle the platform can host up to 18 assays per well, so it may be possible to develop a different assay that targets other molecules and other sample types, as suggested by the reviewer. However, the inclusion of additional biomarkers depends on the availability of appropriate assay reagents, and these need to be highly compatible with the other assays already present on the array. We tried to add ELISAs to quantify vitamin D, vitamin B12 and folate, but we were unable to get them to work sufficiently well as a complete array. So, while it is feasible to add other biomarkers, it can also be very challenging. A further challenge is finding the funding resources to add more tests. On the commercial side, there needs to be strong market demand for each test that is added, which is why we chose the more common micronutrients that have biomarkers; as more tests are added, the cost goes up, and Quansys is already finding that some researchers do not want malaria or iodine assays on their array. For Quansys to earn revenue (and keep making the product) there is a need to limit the test menu. If a customer specifically wants more tests added, they could go directly to Quansys, who specialize in customized arrays. We have added text to the discussion at lines 462-464 to highlight that this technology can be applied to multiple other sample types.

6. What is the clinical relevance of this study? It is very important to demonstrate the reproducibility of this test; however, to make the study more relevant from a clinical point of view, I strongly suggest that the authors consider adding, in the discussion section, some sentences about the clinical applicability and importance of this test.
For example: this test will potentially be used mainly in countries with a high rate of malnutrition and a high prevalence of malaria, which are not very rich countries. What is the cost of this assay? Is it applicable to villages that are hard to access? What is the time to referral? Please describe the advantages and disadvantages.

Response to 6: We have added text to the discussion at lines 452-460 to address this. It should be noted that this is not a clinical diagnostic tool but rather a population surveillance tool; its purpose is to look for general trends within a population, as opposed to providing an individual assessment of each patient, for example assessing the effectiveness of an iron supplementation program in a region where iron deficiency is a concern.

7. Please provide the spelling of some abbreviations and revise some English expressions.

Response to 7: We have revised the text to spell out each abbreviation in full at its first use, to help the reader.

Once again we would like to thank you and the reviewers for your feedback. We hope the changes we have made in response to these comments have strengthened the overall quality of the manuscript and helped to clarify the primary points we want to convey. We look forward to hearing your response.

Submitted filename: Response to reviewers.docx

21 Oct 2021

A multicenter analytical performance evaluation of a multiplexed immunoarray for the simultaneous measurement of biomarkers of micronutrient deficiency, inflammation and malarial antigenemia

PONE-D-21-06971R1

Dear Dr. Boyle,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments.
When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Guido Sebastiani
Academic Editor
PLOS ONE

25 Oct 2021

PONE-D-21-06971R1

A multicenter analytical performance evaluation of a multiplexed immunoarray for the simultaneous measurement of biomarkers of micronutrient deficiency, inflammation and malarial antigenemia

Dear Dr. Boyle:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org.
Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Guido Sebastiani
Academic Editor
PLOS ONE
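The reproducibility questions debated in this correspondence (within-plate and between-plate variability, and agreement between laboratories running the same 76 specimens) come down to two statistics named in the abstract: the coefficient of variation (CV) and Lin's concordance correlation coefficient (CCC). A minimal sketch of both follows; the function names and sample values are illustrative assumptions, not taken from the study's analysis code.

```python
import statistics


def cv_percent(replicates):
    """Coefficient of variation: 100 * SD / mean of replicate measurements.

    Applied to replicates within one plate this gives the intra-assay CV;
    applied to per-plate means across repeated runs, the inter-assay CV.
    """
    return 100.0 * statistics.stdev(replicates) / statistics.fmean(replicates)


def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired results,
    e.g. one analyte measured on the same specimens in two laboratories.

    Unlike Pearson's r, the CCC penalizes location and scale shifts, so a
    lab with a constant bias scores below 1 even if perfectly correlated.
    """
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)


# Hypothetical ferritin results (ug/L) for the same 5 specimens in two labs.
lab_a = [12.1, 30.4, 55.0, 18.7, 80.2]
lab_b = [13.0, 28.9, 57.5, 19.5, 77.8]
print(round(lins_ccc(lab_a, lab_b), 3))
```

Because the CCC denominator includes the squared mean difference (mx - my)^2, a systematic offset between two laboratories lowers the coefficient even when their results track each other exactly, which is why it is preferred over simple correlation for cross-laboratory agreement.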

1.  Validation of a new multiplex assay against individual immunoassays for the quantification of reproductive, stress, and energetic metabolism biomarkers in urine specimens.

Authors:  Katrina G Salvante; Eleanor Brindle; Daniel McConnell; Kathleen O'Connor; Pablo A Nepomnaschy
Journal:  Am J Hum Biol       Date:  2011-11-28       Impact factor: 1.937

2.  Evidence-based interventions for improvement of maternal and child nutrition: what can be done and at what cost?

Authors:  Zulfiqar A Bhutta; Jai K Das; Arjumand Rizvi; Michelle F Gaffey; Neff Walker; Susan Horton; Patrick Webb; Anna Lartey; Robert E Black
Journal:  Lancet       Date:  2013-06-06       Impact factor: 79.321

3.  Evaluation of plasma retinol-binding protein as a surrogate measure for plasma retinol concentrations.

Authors:  J Almekinder; W Manda; D Soko; Y Lan; D R Hoover; R D Semba
Journal:  Scand J Clin Lab Invest       Date:  2000-05       Impact factor: 1.713

4.  Performance of commercially available enzyme immunoassays for detection of antibodies against herpes simplex virus type 2 in African populations.

Authors:  Eddy van Dyck; Anne Buvé; Helen A Weiss; Judith R Glynn; David W G Brown; Bénédicte De Deken; John Parry; Richard J Hayes
Journal:  J Clin Microbiol       Date:  2004-07       Impact factor: 5.948

5.  Iodine intake in an urban environment: a study of urine iodide excretion in Auckland.

Authors:  G J Cooper; M S Croxson; H K Ibbertson
Journal:  N Z Med J       Date:  1984-03-14

6.  Diagnosis of malaria by detection of Plasmodium falciparum HRP-2 antigen with a rapid dipstick antigen-capture assay.

Authors:  C Beadle; G W Long; W R Weiss; P D McElroy; S M Maret; A J Oloo; S L Hoffman
Journal:  Lancet       Date:  1994-03-05       Impact factor: 79.321

7.  The Quansys multiplex immunoassay for serum ferritin, C-reactive protein, and α-1-acid glycoprotein showed good comparability with reference-type assays but not for soluble transferrin receptor and retinol-binding protein.

Authors:  Razieh Esmaeili; Ming Zhang; Maya R Sternberg; Carine Mapango; Christine M Pfeiffer
Journal:  PLoS One       Date:  2019-04-29       Impact factor: 3.240

8.  Comparison of a New Multiplex Immunoassay for Measurement of Ferritin, Soluble Transferrin Receptor, Retinol-Binding Protein, C-Reactive Protein and α1-Acid-Glycoprotein Concentrations against a Widely-Used s-ELISA Method.

Authors:  Crystal D Karakochuk; Amanda M Henderson; Kaitlyn L I Samson; Abeer M Aljaadi; Angela M Devlin; Elodie Becquey; James P Wirth; Fabian Rohner
Journal:  Diagnostics (Basel)       Date:  2018-02-02

9.  Simultaneous assessment of iodine, iron, vitamin A, malarial antigenemia, and inflammation status biomarkers via a multiplex immunoassay method on a population of pregnant women from Niger.

Authors:  Eleanor Brindle; Lorraine Lillis; Rebecca Barney; Sonja Y Hess; K Ryan Wessells; Césaire T Ouédraogo; Sara Stinca; Michael Kalnoky; Roger Peck; Abby Tyler; Christopher Lyman; David S Boyle
Journal:  PLoS One       Date:  2017-10-05       Impact factor: 3.240

10.  Multiplex Human Malaria Array: Quantifying Antigens for Malaria Rapid Diagnostics.

Authors:  Ihn Kyung Jang; Abby Tyler; Chris Lyman; John C Rek; Emmanuel Arinaitwe; Harriet Adrama; Maxwell Murphy; Mallika Imwong; Stephane Proux; Warat Haohankhunnatham; Rebecca Barney; Andrew Rashid; Michael Kalnoky; Maria Kahn; Allison Golden; François Nosten; Bryan Greenhouse; Dionicia Gamboa; Gonzalo J Domingo
Journal:  Am J Trop Med Hyg       Date:  2020-06       Impact factor: 2.345

