Literature DB >> 35771801

Evaluation of Sibel's Advanced Neonatal Epidermal (ANNE) wireless continuous physiological monitor in Nairobi, Kenya.

Jesse Coleman1, Amy Sarah Ginsburg2, William Macharia3, Roseline Ochieng3, Dorothy Chomba3, Guohai Zhou4, Dustin Dunsmuir5, Shuai Xu6, J Mark Ansermino5.   

Abstract

BACKGROUND: Neonatal multiparameter continuous physiological monitoring (MCPM) technologies assist with early detection of preventable and treatable causes of neonatal mortality. Evaluating accuracy of novel MCPM technologies is critical for their appropriate use and adoption.
METHODS: We prospectively compared the accuracy of Sibel's Advanced Neonatal Epidermal (ANNE) technology with Masimo's Rad-97 pulse CO-oximeter with capnography and Spengler's Tempo Easy reference technologies during four evaluation rounds. We compared accuracy of heart rate (HR), respiratory rate (RR), oxygen saturation (SpO2), and skin temperature using Bland-Altman plots and root-mean-square deviation (RMSD) analyses. Sibel's ANNE algorithms were optimized between each round. We created Clarke error grids with zones of 20% to aid with clinical interpretation of HR and RR results.
RESULTS: Between November 2019 and August 2020, we collected 320 hours of data from 84 neonates. In the final round, Sibel's ANNE technology demonstrated a normalized bias of 0% for HR and 3.1% for RR, and a non-normalized bias of -0.3% for SpO2 and 0.2°C for temperature. The normalized spread between the 95% upper and lower limits-of-agreement (LOA) was 4.7% for HR and 29.3% for RR. RMSD was 1.9% for SpO2 and 1.5°C for temperature. Agreement between Sibel's ANNE technology and the reference technologies met the a priori-defined thresholds for 95% spread of LOA and RMSD. Clarke error grids showed that all HR and RR observations were within a 20% difference.
CONCLUSION: Our findings suggest acceptable agreement between Sibel's ANNE and reference technologies. Clinical effectiveness, feasibility, usability, acceptability, and cost-effectiveness investigations are necessary for large-scale implementation.


Year:  2022        PMID: 35771801      PMCID: PMC9246120          DOI: 10.1371/journal.pone.0267026

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Globally, neonatal mortality remains high with over 2.4 million deaths in 2019, the majority in resource-constrained settings [1]. In Sub-Saharan Africa, most neonatal mortality stems from largely preventable and treatable causes of death, including preterm birth, asphyxia, and infectious diseases [2]. Early detection and treatment of these life-threatening conditions using multiparameter continuous physiological monitoring (MCPM) technologies are critical to improving quality of care and averting deaths [3-6]. Currently, MCPM technologies are not commonly available at labor and delivery sites in resource-constrained settings, in part due to the high cost of equipment and lack of trained personnel [7]. The Evaluation of Technologies for Neonates in Africa (ETNA) is an African-based technology-testing platform established to optimize neonatal technologies and improve neonatal health outcomes in resource-constrained settings. ETNA endeavours to understand real-world clinical feasibility, performance, and accuracy of novel technologies. The current study analyzes the clinical accuracy of an investigational MCPM technology compared to verified reference technologies [8].

Methods

Study design and procedures

We conducted an iterative prospective study to assess agreement of heart rate (HR), respiratory rate (RR), peripheral oxygen saturation (SpO2), and chest skin surface temperature measurements from Sibel’s Advanced Neonatal Epidermal (ANNE) investigational technology (Sibel Inc., IL, USA) with those from reference technologies. We conducted the study at Aga Khan University, Nairobi (AKU-N), a tertiary healthcare facility in Kenya. Sibel’s ANNE vital signs monitoring platform includes two neonatal-sized, non-invasive, adhesive skin sensors attached directly to the skin surface that are capable of continuously measuring and recording HR, RR, SpO2, and skin surface temperature (S1 Fig). Up to 30 hours of data are stored locally within the sensor and wirelessly transmitted to a central database supported by customized software. We compared Sibel’s ANNE HR, RR, and SpO2 measurements with those from Masimo’s Rad-97 pulse CO-oximeter with capnography (Masimo Corporation, USA) as the reference technology. RR from the reference technology was measured by capnography using an infant/pediatric nasal cannula to collect the neonate’s exhaled carbon dioxide (CO2). We compared Sibel’s ANNE temperature measurements with those measured using Spengler’s Tempo Easy non-contact infrared thermometer (SPENGLER HOLTEX Group, Aix-en-Provence, France) as the reference. To identify agreement thresholds for comparison with Sibel’s ANNE technology, we assessed functionality and estimated within- and between-neonate variability while verifying Masimo’s Rad-97 technology [8]. We ran an initial round of open-label data collection from both Sibel’s ANNE technology and the reference technologies to pilot the accuracy testing methods. In the open-label round, reference data for HR, RR, and SpO2 were shared with Sibel before analysis.
These data included 1 hertz (Hz) trend data (including HR, RR, and SpO2 values), the raw plethysmograph waveform, signal quality data, and the capnography CO2 waveform from Masimo’s Rad-97 technology. We then conducted three rounds of closed-label testing and analyses. After each round of data analysis, Sibel was provided with all reference technology datasets so that they could improve their detection and measurement algorithms. The study’s primary outcome was agreement between the HR, RR, SpO2, and temperature measurements of Sibel’s ANNE technology and the reference technologies. We hypothesized that Sibel’s ANNE technology would show good agreement, within a priori-defined thresholds for each vital sign measurement, and minimal bias when compared to the reference technologies.

Trained study clinicians recruited eligible neonates from the neonatal intensive care unit (NICU), neonatal high dependency unit (NHDU), and postnatal and maternity wards at AKU-N, obtained informed consent, and enrolled them (Table 1). Neonates were simultaneously monitored by Sibel’s ANNE technology and the reference technologies for a minimum of 1 hour, with no upper limit on the duration of monitoring. For temperature, trained and experienced study nurses conducted spot checks with the reference technology once every 10 minutes for the first hour and once per hour of participation thereafter. Neonates exited the study upon discharge from the ward or following a caregiver request to discontinue monitoring.
Table 1

Eligibility criteria and study definitions.

Eligibility criteria
Inclusion criteria:
• Corrected age of < 28 days
• Caregivers willing and able to provide informed consent and available for follow-up for the duration of the study

Exclusion criteria:
• Receiving continuous positive airway pressure or mechanical ventilation
• Skin abnormalities in the nasopharynx and/or oropharynx
• Contraindication to skin sensor application
• Known arrhythmia
• Congenital abnormality requiring major surgical intervention
• Any medical or psychosocial condition or circumstance that would interfere with study conduct or for which study participation could put the neonate’s health at risk

Study definitions

Epoch: A 60-second period of time
Breath: One cycle of neonate-initiated inhalation and exhalation
Breath start: End of a waveform trough (low point) where the carbon dioxide level starts to ascend
Respiratory rate (RR) manual counting protocol: A breath was counted if the waveform peak reached either 15 millimeters of mercury (mmHg) or the average peak of the epoch, AND the waveform trough dipped below the average trough of the epoch plus 10 mmHg
• Each plot was counted by two independent reviewers and averaged
• If the difference in the counts was > 5, a third independent reviewer counted the plot
• If the third count was within 5 breaths of either previous count, the average of the two closest counts was used
RR median calculation: For each breath in an epoch, a RR was calculated by determining how many breaths would fit in the epoch, and the median of the RR values in an epoch was calculated
RR epoch exclusion criteria: An RR epoch was excluded if (a) the difference between the epoch count and median RR was > 10; (b) either value was < 15; (c) the capnogram contained a digital artifact; or (d) there was lack of inter-reviewer manual count agreement
Heart rate (HR) median calculation: Median of the instantaneous HR values in the epoch
Adequate signal quality:
• Sibel’s ANNE investigational technology: signal quality score > 0 for the duration of the epoch
• Masimo’s Rad-97 reference technology: plethysmograph quality index (PO-SQI) threshold > 150 for HR and SpO2, and capnography quality score (CO2-SQI) threshold ≥ 2 for at least 90% of the epoch for RR
• Spengler’s Tempo Easy: temperature measurement within normal skin temperature range (35.5–37°C)
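The manual-count reconciliation rules in Table 1 can be expressed compactly. The following is an illustrative Python sketch only (the function name and return conventions are hypothetical, not from the study's code); a return value of None marks an epoch excluded for lack of inter-reviewer agreement:

```python
def reconcile_breath_counts(count_a, count_b, third_count=None):
    """Reconcile independent manual breath counts for one epoch.

    Follows the Table 1 protocol: average the two counts when they agree
    to within 5 breaths; otherwise a third reviewer's count is required,
    and the average of the two closest counts is used."""
    if abs(count_a - count_b) <= 5:
        return (count_a + count_b) / 2
    if third_count is None:
        raise ValueError("counts differ by more than 5; third reviewer count required")
    # Of the first two counts, keep the one closest to the third reviewer's count.
    closest = min((count_a, count_b), key=lambda c: abs(c - third_count))
    if abs(closest - third_count) <= 5:
        return (closest + third_count) / 2
    return None  # no inter-reviewer agreement: exclude the epoch
```

For example, counts of 50 and 52 agree and average to 51, while counts of 40 and 60 require a third count to adjudicate.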

Data processing and selection

HR, RR, and SpO2 data were collected from Masimo’s Rad-97 technology in real time with a custom Android (Google LLC, Mountain View, USA) application. Temperature data were entered manually into a REDCap data collection application [9]. HR, RR, and SpO2 data were parsed in C to obtain plethysmograph waveform and plethysmograph quality index (PO-SQI) data at 62.5 Hz and capnography waveform data at approximately 20 Hz. Instantaneous HR was obtained from the timing of the PO-SQI, which was calculated by Masimo’s Rad-97 technology for each heartbeat. We completed analysis of CO2 waveform data using a breath detection algorithm developed in MATLAB (MathWorks, USA) based on adaptive pulse segmentation, which has been validated internally and on the CapnoBase database [10] and is accurate to within ±5% for a neonate breathing at 60 breaths/minute [11]. The breath detection timing allowed for a breath duration calculation, and an algorithm calculated the median RR for each epoch (Table 1). The custom MATLAB algorithm also provided a capnography quality index (CO2-SQI) based on capnography features. Values for SpO2 were provided by Masimo’s Rad-97 at 1 Hz.

We performed manual RR counting from capnography in the reference technology. Two trained observers independently reviewed plotted capnogram waveforms and counted all breaths within each epoch based on standardized rules. The independent counts were averaged; if the number of breaths counted varied by more than three breaths, a third trained observer also counted the breaths, and the two closest results were averaged. Measurements of HR, RR, and SpO2 from Sibel’s ANNE technology were sampled at between 128 and 512 Hz from the output signal and down-sampled to provide values at 1 Hz. Temperature measurements were conducted once every 10 minutes.
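The median-RR calculation defined in Table 1 (one instantaneous RR per breath, taken as the number of breaths of that duration that would fit in the 60-second epoch) can be sketched as follows. This is an illustrative Python sketch, not the study's MATLAB code, and the input is assumed to be the breath-start times detected from the capnogram:

```python
import statistics

def epoch_rr_median(breath_starts_s):
    """Median instantaneous respiratory rate for a 60-second epoch.

    breath_starts_s: breath-start times (in seconds) detected within the epoch.
    Each breath's instantaneous RR is the number of breaths of that duration
    that would fit in the 60-second epoch, i.e. 60 / duration."""
    durations = [t1 - t0 for t0, t1 in zip(breath_starts_s, breath_starts_s[1:])]
    if not durations:
        return None  # fewer than two breath starts: no durations to work with
    rates = [60.0 / d for d in durations]
    return statistics.median(rates)
```

A neonate breathing steadily every 1.2 seconds would yield a median RR of 50 breaths/minute.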
To evaluate agreement, we included randomly selected 60-second HR, RR, and SpO2 epochs with sufficient signal quality (S1 Table). All temperature measurements were included in the analysis. To calculate the sample size for each closed-label round, we estimated that 20 neonates with ten replications each would provide 95% upper and lower LOA between two methods of ±0.76 times the standard deviation (SD) of their differences. Tight confidence intervals (CI) require roughly 100 to 200 samples, which is generally sufficient for method comparison studies [12]. Mean HR, RR, and SpO2 values for the selected epochs were calculated (Table 2). Manual RR counting was performed for each epoch using capnograms for closed-label rounds one through three (S2 Fig).
Table 2

Results from Bland-Altman analysis for Sibel ANNE investigational technology versus reference technology.

Values are shown as: Open-label | Closed-label round one | Closed-label round two | Closed-label round three

Sibel ANNE HR compared to Masimo Rad-97 HR
Included neonates: 10 | 20 | 20 | 20
Mean Masimo Rad-97 HR: 136.8 | 137.6 | 137.2 | 131.0
Normalized bias (95% CI): 0.2% (0.0 to 0.5%) | 2.2% (1.5 to 3.1%) | 1.4% (0.8 to 2.0%) | 0% (-2.1 to 1.3%)
Normalized spread between 95% upper and lower LOA: 4.6% (4.0 to 5.3%) | 23.1% (20.3 to 25.9%) | 16.2% (14.2 to 18.1%) | 4.7% (4.2 to 5.3%)
Normalized root-mean-square deviation: 1.2% | 6.3% | 4.3% | 1.2%

Sibel ANNE RR compared to manual RR count
Included neonates: No manual RR count | 20 | 20 | 20
Mean manual RR count: n/a | 52.6 | 52.1 | 50.2
Normalized bias (95% CI): n/a | 1.1% (-2.5 to 2.8%) | -19.9% (-23.8 to -16.0%) | 3.1% (2.0 to 4.1%)
Normalized spread between 95% upper and lower LOA: n/a | 75.1% (66.0 to 84.2%) | 110.1% (96.8 to 123.4%) | 29.3% (25.7 to 37.8%)
Normalized root-mean-square deviation: n/a | 19.1% | 34.4% | 8.0%

Sibel ANNE RR compared to Masimo Rad-97 RR median
Included neonates: 9 | 20 | 20 | 20
Mean Masimo Rad-97 RR: 51.5 | 54.8 | 54.7 | 52.5
Normalized bias (95% CI): -14.3% (-19.3 to -9.2%) | -3.8% (-6.4 to -1.2%) | -23.6% (-27.4 to -19.7%) | -1.5% (-2.2 to -0.7%)
Normalized spread between 95% upper and lower LOA: 119.8% (102.3 to 137.2%) | 74.4% (65.3 to 83.4%) | 108.57% (95.4 to 121.7%) | 20.6% (18.1 to 23.1%)
Normalized root-mean-square deviation: 33.6% | 19.3% | 36.5% | 5.4%

Sibel ANNE SpO2 compared to Masimo Rad-97 SpO2
Included neonates: 7 | 20 | 20 | 20
Mean Masimo Rad-97 SpO2: 95.2% | 93.9% | 94.4% | 93.9%
Bias (95% CI): 0.7% (0.1 to 1.2%) | 0.1% (-0.3 to 0.5%) | 2.7% (2.2 to 3.1%) | -0.3% (-0.5 to 0%)
Spread between 95% upper and lower LOA: 9.8% (8.0 to 11.7%) | 11.3% (9.9 to 12.7%) | 13.9% (12.2 to 15.6%) | 7.7% (6.7 to 8.6%)
Root-mean-square deviation: 2.6% | 2.9% | 4.4% | 1.9%

Sibel ANNE skin surface temperature compared to Spengler Tempo Easy skin surface temperature
Included neonates: 10 | 20 | 20 | 20
Mean Spengler Tempo Easy skin surface temperature (°C): 35.8 | 35.9 | 35.9 | 35.6
Bias (95% CI) (°C): -0.1 (-0.3 to 0.1) | 0.3 (0.1 to 0.4) | 0.5 (0.3 to 0.7) | 0.2 (-0.1 to 0.6)
Spread between 95% upper and lower LOA (°C): 2.1 (1.4 to 2.8) | 2.5 (1.9 to 3.0) | 3.2 (2.6 to 3.8) | 5.9 (4.8 to 7.0)
Root-mean-square deviation (°C): 0.5 | 0.7 | 1.0 | 1.5

Comparisons include heart rate (HR), respiratory rate (RR), oxygen saturation (SpO2) and skin surface temperature measurements during specific rounds of testing.


Statistical analysis

To determine the normalized agreement between Sibel’s ANNE and the reference technologies, we calculated the normalized bias (95% CI) and spread between the 95% limits of agreement (LOA) by dividing the bias and the spread between the 95% LOA by the overall mean reference value [13]. Based on Masimo’s Rad-97 reference technology verification phase, an acceptable a priori-defined spread between the 95% upper and lower LOA of 30%, approximately equivalent to a root-mean-square deviation (RMSD) of 8, was selected for both RR and HR [8]. RMSD was calculated for each vital sign. We selected RMSD thresholds of ≤ 3.5% for SpO2 and ≤ 1.5°C for temperature, with a spread between the 95% upper and lower LOA of ≤ 4.5°C, based on a review of the literature and internal reference technology testing completed during the verification phase of the study [8]. Clarke error grids were constructed with zones of 20% discrepancy to improve clinical interpretability of RR and HR results. All analyses were conducted using R (version 3.6.2) with the following packages: readr (version 1.3.1), data.table (version 1.12.8), dplyr (version 0.8.5), stringr (version 1.4.0), and ggplot2 (version 3.3.0).

This study was conducted in accordance with the Guideline for Good Clinical Practice and International Organization for Standardization (ISO) 14155 to ensure accurate, reliable, and consistent data collection. The study protocol was approved by the Western Institutional Review Board (20191102), the Aga Khan University Nairobi Research Ethics Committee (2019/REC-02), and the Kenya Pharmacy and Poisons Board (19/05/02/2019(078)). Written informed consent from each neonate’s caregiver was obtained in English or Swahili by trained study staff according to a checklist that included ascertainment of caregiver comprehension.
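The normalized Bland-Altman quantities used in this study (bias, spread of the 95% LOA, and RMSD, each divided by the overall mean reference value) can be sketched as follows. This is an illustrative Python sketch, not the study's R code; it also treats observation pairs as independent, whereas the study's repeated-measures design would require accounting for within-neonate clustering:

```python
import math

def bland_altman_normalized(ref, test):
    """Normalized Bland-Altman summary statistics.

    ref, test: paired measurement lists (reference vs investigational).
    Returns bias, spread of the 95% LOA (upper minus lower), and RMSD,
    each expressed as a percentage of the overall mean reference value."""
    n = len(ref)
    diffs = [t - r for r, t in zip(ref, test)]
    mean_ref = sum(ref) / n
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    spread = 2 * 1.96 * sd  # (bias + 1.96*sd) - (bias - 1.96*sd)
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)
    return {
        "bias_pct": 100 * bias / mean_ref,
        "loa_spread_pct": 100 * spread / mean_ref,
        "rmsd_pct": 100 * rmsd / mean_ref,
    }
```

With symmetric differences of ±2 around a mean reference of 100, the normalized bias is 0% and the normalized RMSD is 2%.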

Results

Between November 7, 2019 and August 28, 2020, 137 neonates were enrolled and 84 were included for analysis (S3 Fig). Four neonates withdrew before the minimum data collection time. Data from 49 neonates were excluded from analysis due to poor quality data identified in Masimo’s Rad-97 reference device (31), insufficient length of recording for comparisons (15), or technology issues (3). Data from 40 (47.6%) female, 43 (51.2%) male, and one (1.2%) intersex neonates were included in the analysis. Neonates were recruited from the NHDU (69%), postnatal and maternity wards (29.7%), and NICU (1.2%). Median gestational age was 38 (range 25 to 42) weeks. Primary diagnoses upon admission were mutually exclusive and comprised prematurity (36.9%), healthy post-delivery (32.1%), respiratory distress syndrome (9.5%), jaundice (8.3%), hypoglycemia (6%), low birth weight (3.6%), asphyxia (2.4%), and transient tachypnea (1.2%). We collected 320 hours of data with a median length of recording of 240 (range 29 to 417) minutes per neonate.

In the open-label analysis round, 140 epochs were selected from nine neonates for RR, 153 epochs from 10 neonates for HR, 84 epochs from seven neonates for SpO2, and 28 measurements from 10 neonates for temperature. A total of 81.5% of the data from Sibel’s ANNE technology was considered of sufficient quality in the open-label round, compared with 75.7% of the data from the reference technology (S1 Table). During each closed-label round, 10 epochs were selected from a minimum of 20 neonates for HR, RR, SpO2, and temperature, resulting in 200 measurement pairs per vital sign per round. More data from Sibel’s ANNE technology were accepted as being of sufficient quality in each of the closed-label rounds than from the reference technology (round 1: ANNE = 78.4% vs 63.3%; round 2: ANNE = 56.5% vs 50.1%; round 3: ANNE = 84.0% vs 76.1%). No overlapping epochs were included in any of the analysis rounds.
Analysis of the HR data showed a small positive normalized average bias (range 0 to 2.2%) with a normalized spread of LOA meeting or surpassing the a priori-defined threshold in each round (Table 2; Fig 1). We observed a decrease in the normalized spread between 95% LOA (16.2 to 4.7%) and RMSD (4.3 to 1.2%) between closed-label rounds two and three. All Sibel’s ANNE HR measurements were within 20% of Masimo’s Rad-97 values (Fig 2A, region A, Clarke error grid).
Fig 1

Bland-Altman plots for heart rate (HR).

(A) Open-label round. (B) Closed-label round one. (C) Closed-label round two. (D) Closed-label round three. Colors indicate which enrolled neonate is associated with the measurement pair.

Fig 2

Clarke error grids for closed-label round three.

(A) Comparison of heart rate (HR) measurements. (B) Comparison of Sibel ANNE respiratory rate (RR) to Masimo Rad-97 RR manual count. Each dot represents a data pair, with the color intensity proportional to density of data pairs. Region A (in green) contains data pairs that are within 20% of the Masimo Rad-97 device value. Region B (in yellow) contains data pairs not within 20% that would not lead to unnecessary treatment. Regions C, D and E are in red. C includes data pairs leading to unnecessary treatment. D includes data pairs with a failure in detecting low or high HR/RR events and E includes data pairs where low and high HR/RR events are confused.
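The region A boundary described above reduces to a simple 20% relative-difference check against the reference value. As an illustrative sketch (the full grid would also need the clinical thresholds that separate regions B through E, which are not reproduced here):

```python
def within_zone_a(ref_value, test_value, tolerance=0.20):
    """True if a (reference, investigational) data pair lies in region A
    of a Clarke-style error grid, i.e. within 20% of the reference value."""
    return abs(test_value - ref_value) <= tolerance * ref_value
```

For a reference HR of 120 beats/minute, region A spans 96 to 144 beats/minute, so an investigational reading of 130 falls in region A while 150 does not.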

RR analyses showed a large variation in average bias across rounds for Sibel’s ANNE technology compared to Masimo’s Rad-97 for both manually counted RR (range -12.9 to 1.5 breaths/minute) and algorithm-derived RR median (range -13.0 to -0.7 breaths/minute) values (Table 2; Fig 3). The normalized spread between 95% LOA decreased between the second and third closed-label rounds for manual RR count (110.1 to 29.3%) and median RR (110.3 to 20.6%), thereby meeting the a priori-defined threshold for both methods of calculating RR. Absolute and normalized spreads of 95% LOA for median RR values were smaller than those for manually counted RR values in all rounds. All Sibel’s ANNE RR measurements were within 20% of Masimo’s Rad-97 values (Fig 2B, region A, Clarke error grid).
Fig 3

Bland-Altman plots for Rad-97 respiratory rate (RR) median.

(A) Open-label round. (B) Closed-label round one. (C) Closed-label round two. (D) Closed-label round three. Colors indicate which enrolled neonate is associated with the measurement pair.

SpO2 analysis showed minimal change in bias (range -0.3 to 2.7%) for Sibel’s ANNE technology compared to Masimo’s Rad-97, with the largest change occurring between the second and third closed-label rounds (2.7 to -0.3%; Table 2; Fig 4). The RMSD increased between the open-label and second closed-label rounds (2.6 to 4.4%), followed by a decrease to 1.9% between the second and third closed-label rounds, meeting the a priori-defined threshold.
Fig 4

Bland-Altman plots for oxygen saturation (SpO2).

(A) Open-label round. (B) Closed-label round one. (C) Closed-label round two. (D) Closed-label round three. Colors indicate which enrolled neonate is associated with the measurement pair.

Skin surface temperature analysis showed minimal bias and bias change (range -0.1 to 0.5°C) between rounds for Sibel’s ANNE technology compared to Spengler’s Tempo Easy reference technology (Table 2; Fig 5). The RMSD for temperature increased (0.5 to 1.5°C) between each round but met the a priori-defined accuracy threshold in each round.
Fig 5

Bland-Altman plots for skin surface temperature.

(A) Open-label round. (B) Closed-label round one. (C) Closed-label round two. (D) Closed-label round three. Colors indicate which enrolled neonate is associated with the measurement pair.


Discussion

The a priori-defined agreement thresholds for neonatal HR, RR, SpO2, and skin surface temperature measurements were met after completing three rounds of closed-label analyses comparing Sibel’s ANNE technology and the reference technologies. Between the open- and closed-label rounds, Sibel modified the HR-detection algorithm by adding edge-case handlers for segments of the ECG signal where significant motion artifact was detected. Between closed-label rounds two and three, Sibel’s ANNE chest sensor software algorithms were augmented to interrogate bio-impedance measurements for improved RR calculations, and a modified calibration factor was implemented for Sibel’s ANNE limb sensor. Following these modifications, the normalized spreads between 95% LOA for HR, RR, and SpO2 decreased, and bias was reduced for all vital signs. A normalized ±30% spread of 95% LOA for HR and RR was selected using real-world data obtained from neonates during Masimo’s Rad-97 reference technology verification phase [8]. A similar LOA threshold, proposed in 1999, has been widely accepted in cardiac output method comparison studies and used extensively in the field since [14]. For a neonate breathing at 60 breaths/minute with a within-neonate variation of 2 breaths/minute, a 30% spread of LOA would equate to 3.3% variation. The Clarke error grids suggest that it is unlikely that treatment decisions would have significantly changed based on the differences between simultaneous observations made by the two technologies. Capnography was chosen as the reference standard for measuring RR because it performs better at the higher RRs common in neonates [15]. Using a standardized protocol to carefully count breaths from capnograms allowed the manually counted values to be compared with the RR values provided by Sibel’s ANNE technology.
We found that the accuracy of RR comparisons depended on correct placement of Sibel’s ANNE sensors. The improved agreement seen in closed-label round three was likely due in part to a change in chest sensor location: instead of a horizontal placement across the central sternum, the chest sensor was placed at a 45-degree angle with one end on the xiphoid process and the other end on the abdomen. This change strengthened the bio-impedance signal used to derive RR in neonates, after which RR agreement improved sufficiently to meet the agreement threshold. Optimizing Sibel’s ANNE algorithm between closed-label rounds two and three also resulted in large improvements in SpO2 accuracy compared to the reference technology. These changes were introduced upon recognizing that the enrolled neonates had darker skin tones than those previously evaluated with Sibel’s ANNE technology; SpO2 accuracy improved after the photoplethysmography light emission was increased. Surface thermometers do not reflect core body temperature due to their physical distance from the core [16]. The results from the skin surface temperature comparison showed agreement steadily decreasing between analysis rounds. The large spread in the 95% LOA in closed-label round three might be due to three of the 84 (3.6%) temperature values lying outside the 95% upper and lower LOA by more than 5 degrees. These outliers may reflect non-compliance with measurement procedures rather than inaccuracy of the technology, but this cannot be verified. A strength of this study is the non-Sibel investigators’ independence in study design, data collection, and analyses. Further, we tested accuracy in the population where the technologies will be used, which led us to discover the impact of darker skin pigmentation.
Our findings are supported by the raw, high-resolution photoplethysmography and capnography data, the manual counting of breaths by two independent reviewers, and the randomized selection of comparison epochs. However, the AKU-N study site is relatively well-resourced, and Sibel’s ANNE technology may have performed differently in lower-resourced settings. Our recently completed clinical feasibility evaluation of Sibel’s ANNE technology at a publicly funded, high-volume maternity hospital is more typical of resource-constrained settings; usability, acceptability, accuracy, and agreement when identifying critical clinical events were also evaluated in that lower-resourced setting. Sibel’s ANNE technology is portable, lightweight, and non-invasive, and can be battery-powered, wireless, and worn during kangaroo mother care. Its only disposable component is the hydrogel adhesive. Of note, data from critically ill neonates with higher or irregular HR, RR, SpO2, or temperature readings could affect Sibel’s ANNE sensor performance and impact accuracy comparisons; future accuracy evaluation of Sibel’s ANNE technology in neonates in intensive or critical care will be necessary.

Limitations

There are a number of limitations to the results reported in this study. Approximately one-third (36.8%) of neonate recordings were excluded from the analysis, in part because some fragile neonates did not tolerate the nasal cannula of Masimo’s Rad-97 reference technology. Exclusion due to nasal cannula usage was not a concern with Sibel’s ANNE technology because RR is collected from the chest sensor. Electrical outages further affected data quality and duration, contributing to data loss. Furthermore, only epochs with the highest quality reference data were chosen for analysis in order to minimize uncertainty. Bias could have been introduced by the breath detection algorithm during the creation of the capnography quality index (CO2-SQI), which was essential since capnography signal quality was not provided by the reference device. No clinical correlations or outcomes were analyzed, as many of the neonates in this study were healthy or relatively healthy. The accuracy of Sibel’s ANNE non-invasive MCPM technology is promising; however, additional research is required prior to large-scale implementation, including investigations into clinical care process improvements, clinical outcomes, clinical feasibility, usability, acceptability, cost-effectiveness, and clinical effectiveness. The development of a neonatal MCPM technology suitable for use in resource-constrained settings that can accurately monitor HR, RR, SpO2, and skin surface temperature has promising implications for clinical practice.

A computer rendering of the Sibel Advanced Neonatal Epidermal (ANNE) system investigational vital signs monitoring platform.

The system consists of a chest sensor (L) and a limb sensor (R). The system can measure respiratory rate, oxygen saturation, and skin surface temperature.

Masimo Rad-97 reference technology 60-second capnogram demonstrating typical irregularity of respiratory rate.

The monitoring was conducted during a quiet period without external stimuli.

Study flow diagram.


Overview of Masimo Rad-97 reference and Sibel ANNE investigational technology data from enrolled neonates in the open-label (test) round and subsequent closed-label rounds.


Transfer Alert

This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present. 6 Jan 2022
PONE-D-21-28272
Evaluation of Sibel’s Advanced Neonatal Epidermal (ANNE) wireless continuous physiological monitor in Nairobi, Kenya PLOS ONE Dear Dr. Coleman, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Kindly address the points raised by both reviewers responding to each and providing references to the tracked changes in the revised manuscript. I am sorry it has taken as long as it did. It has been difficult to locate quickly the required number of reviewers during this difficult time. Please submit your revised manuscript by Feb 18 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. 
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Martin G Frasch
Academic Editor
PLOS ONE

Journal requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Competing Interests section: “I have read the journal's policy and the authors of this manuscript have the following competing interests: Shuai Xu is Founder and Chief Executive Officer at Sibel Health; all other authors declare no competing interests.” Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: “This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these.
Please note that we cannot proceed with consideration of your article until this information has been declared. Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

4. Please note that in order to use the direct billing option the corresponding author must be affiliated with the chosen institute. Please either amend your manuscript to change the affiliation or corresponding author, or email us at plosone@plos.org with a request to remove this option.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper presents a comparative study to evaluate the accuracy of a non-invasive neonatal vital sign sensor (Sibel) against measurements available via standard (albeit expensive and inaccessible in many parts of the world) clinical devices. The paper is well-written, and the methods are explained clearly. One suggestion to improve the paper is to organize the limitations (some of which are now acknowledged by the authors within the text) under a specific heading and/or group them together to give the reader a better sense.
In addition to the limitation on exclusion of the data, the rather small number of subjects in the open-label group is an important limitation, in my view.

Reviewer #2: The manuscript reports the results of a clinical study to validate the accuracy of the Sibel neonatal monitor against a reference device (Masimo) on a population of neonates in Kenya. The work is important for the adoption of new technology in sub-Saharan Africa, and relevant to the scientific community. The study design is sound. The analysis is good, but somewhat incomplete as some important aspects are missing, as detailed below.

[DATA PROCESSING & SELECTION]

1. The authors reference a breath detection algorithm developed in MATLAB based on ref [10]. The authors should include the expected accuracy of this algorithm, as reported in [10]. Note that, according to [10], that algorithm was only validated on a very small dataset (2 pediatric recordings in the CSL benchmark dataset). I do not expect this to impact the results of this study, but it’s relevant to mention it here so the reader is aware.

2. The analysis is performed on 60-second epochs with sufficient signal quality, randomly selected. The authors should provide more details on how signal quality is assessed, as this could be a possible source of bias in the analysis. Table 1 provides the thresholds applied for Sibel and Masimo to define “sufficient signal quality”. It’s unclear, however, what these SQI mean and how the thresholds have been selected.

3. Why did the authors decide to sample epochs from the signals, rather than using all epochs of a pre-defined acceptable quality for the analysis? This would have provided more precision on the LoA estimates. The authors are encouraged to repeat their statistical analysis using the entire data, except maybe for the case of RR (since it requires manual annotations that can be cumbersome on the entire data).

4.
It seems like the authors selected epochs independently for the different modalities in the open-label part of the study. Is that so, and if it is, why?

[STATISTICAL ANALYSIS]

5. The acceptable a priori-defined 95% LOA should be specified for SpO2 and temperature.

6. The acceptable a priori-defined RMSD thresholds for RR and HR should be specified.

7. How were RMSD thresholds set? 4.5°C seems very high for temperature, since that could be the difference between a neonate having a high fever vs. a healthy temperature.

8. Similarly, for RR, the target of 30% LoA spread seems very wide. Reference [12] justifies it by looking at variability between manual and automated annotations of HR and RR. There could be many reasons behind that variability - human error, poor algorithm performance - that are not directly related to the accuracy of the monitoring device studied in this paper. This goes beyond the scope of this manuscript and therefore of this review. I've assumed for this review that the 30% is accepted by the community. But again, this sounds like a very loose performance criterion, and it may be worth adding a comment or remark about it in this paper so that the reader is aware of that assumption and why it's made.

9. The rationale behind the number of epochs and participants (in each branch of the study: open-label, closed-label rounds 1-3) is missing. Was it based on LoA precision estimates done on previous data? What was the expected precision the authors were hoping to reach with that sample size? This should be added to the statistical analysis section.

[RESULTS / DISCUSSION]

10. The authors should report the percentage of data that was considered of sufficient quality for both Sibel and the reference systems. It is included in Table S1, but it should be included in the results section as well, e.g. as the percentage of data that was discarded through that process of selecting good quality data.
This is an important aspect of device performance as well, next to accuracy when the data is of good quality.

11. In the discussion section, it is stated that “The outlier values are more likely due to non-compliance with measurement procedures than with the accuracy of the technology.” If that’s indeed an issue with non-compliance with the measurement procedure, these should be labelled as such and removed from the analysis as part of the pre-processing and selection process. If it can’t be attributed to a non-compliance issue for sure, then they should indeed be kept in the analysis, and that comment should be rephrased.

12. Figure 1. How do you explain the large increase in LoA when going from open-label to closed-label round #1? The authors explain the reduction in spread between closed-label rounds by a modified calibration factor, but it’s unclear why there is such a big jump between the open-label and the closed-label.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements.
To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

7 Mar 2022

Response to Editorial Board and Reviewers

We thank the editorial board and reviewers for their time and effort in providing valuable feedback, and we are grateful for the insightful comments. We have incorporated changes to reflect the feedback provided. Our responses to specific points raised are below:

Reviewer #1

Point 1: The paper presents a comparative study to evaluate the accuracy of a non-invasive neonatal vital sign sensor (Sibel) against measurements available via standard (albeit expensive and inaccessible in many parts of the world) clinical devices. The paper is well-written, and the methods are explained clearly. One suggestion to improve the paper is to organize the limitations (some of which are now acknowledged by the authors within the text) under a specific heading and/or group them together to give the reader a better sense. In addition to the limitation on exclusion of the data, the rather small number of subjects in the open-label group is an important limitation, in my view.

Response: We thank you for your suggestion. We have moved the discussion of each of these points to a specific limitations section that addresses each of these concerns. Furthermore, we have added an explanation as to why the open-label group was necessarily smaller than the closed-label groups. The updated limitations section, on pages 22-23, reads as follows: “There are a number of limitations to the results reported in this study. Approximately one-third (36.8%) of neonate recordings were excluded from the analysis.
This was in part due to some fragile neonates not tolerating Masimo’s Rad-97 reference technology’s nasal cannula. Exclusion due to nasal cannula usage was not a concern with Sibel’s ANNE technology because RR is collected from the chest sensor. Electrical outages further affected data quality and duration, contributing to data loss. Furthermore, only epochs with the highest quality reference data were chosen for analysis in order to minimize uncertainty. Bias could have been introduced by the breath detection algorithm during the creation of the capnography quality index (CO2-SQI), which was essential since capnography signal quality was not provided by the reference device. No clinical correlations or outcomes were analyzed as many of the neonates in this study were healthy or relatively healthy.”

The updated methods section, which describes the open-label testing round in further detail on pages 5-6, now reads: “We ran an initial round of open-label data collection from both Sibel’s ANNE technology and the reference technologies to test the accuracy testing methods. In the open-label round, reference data for HR, RR, and SpO2 was shared with Sibel before analysis.”

Reviewer #2: The manuscript reports the results of a clinical study to validate the accuracy of the Sibel neonatal monitor against a reference device (Masimo) on a population of neonates in Kenya. The work is important for the adoption of new technology in sub-Saharan Africa, and relevant to the scientific community. The study design is sound. The analysis is good, but somewhat incomplete as some important aspects are missing, as detailed below.

[DATA PROCESSING & SELECTION]

Point 1: The authors reference a breath detection algorithm developed in MATLAB based on ref [10]. The authors should include the expected accuracy of this algorithm, as reported in [10]. Note that, according to [10], that algorithm was only validated on a very small dataset (2 pediatric recordings in the CSL benchmark dataset).
I do not expect this to impact the results of this study, but it’s relevant to mention it here so the reader is aware.

Response: Thank you for this feedback; we agree with the points raised. In response we have modified the text to include the following on page 8: “We completed analysis of CO2 waveform data using a breath detection algorithm developed in MATLAB (MathWorks, USA) based on adaptive pulse segmentation which has been validated internally and on the CapnoBase database [10] and is accurate to within ±5% for a neonate breathing at 60 breaths/minute.[11]”

Point 2: The analysis is performed on 60-second epochs with sufficient signal quality, randomly selected. The authors should provide more details on how signal quality is assessed, as this could be a possible source of bias in the analysis. Table 1 provides the thresholds applied for Sibel and Masimo to define “sufficient signal quality”. It’s unclear, however, what these SQI mean and how the thresholds have been selected.

Response: Thank you for that feedback. Monitoring for extended periods of time in a clinical environment will result in artifacts and other disturbances that will corrupt physiological monitoring performance. When comparing devices, it is important to ensure that the same disturbance rejection processes were followed to ensure compatibility of performance. In addition, disturbances may be present in only one device due to the different sensor locations. We used the quality indices developed by the device manufacturers when possible. We did need to develop an SQI for RR for the reference device as this was not provided. We agree, and have added that this may have introduced bias.
We have added this to the limitations on page 23: “...bias could have been introduced by the breath detection algorithm when it built the capnography quality index (CO2-SQI) which was essential since capnography signal quality was not provided by the reference device.”

Point 3: Why did the authors decide to sample epochs from the signals, rather than using all epochs of a pre-defined acceptable quality for the analysis? This would have provided more precision on the LoA estimates. The authors are encouraged to repeat their statistical analysis using the entire data, except maybe for the case of RR (since it requires manual annotations that can be cumbersome on the entire data).

Response: We agree that this could have been done. We decided a priori to perform a random sample of epochs based on the literature and our sample size estimation (which is now included). We also balanced samples across cases, with the same number for each case. We agree that the LOA might be reduced if we had a larger sample. However, we performed a validation phase based on the selected sampling procedure to determine the target thresholds. During the initial verification phase, we used many more epochs per case (and provided appropriate correction for this). The closed-label analysis, therefore, had a similar number of epochs but in fewer cases. Your suggested analysis would be interesting, but as there were many hundreds more epochs it would take significant effort and the result would not be comparable to the verification phase.

Point 4: It seems like the authors selected epochs independently for the different modalities in the open-label part of the study. Is that so, and if it is, why?

Response: The reason for selecting independent epochs for each variable (modality) is that each variable has its own data file that includes the signal quality index.
It would have been extremely complex and onerous to try to find an epoch with all variables (modalities) on both devices with appropriate signal quality at the exact same time. We hope this addresses this concern, as the question was somewhat unclear.

[STATISTICAL ANALYSIS]

Point 5: The acceptable a priori-defined 95% LOA should be specified for SpO2 and temperature.

Response: Thank you for this feedback. We believe that agreement (with 95% LOA) is a better method for these method comparison studies (see for example Abu-Arafeh A, Jordan H, Drummond G. Reporting of method comparison studies: a review of advice, an assessment of current practice, and specific suggestions for future reports. Br J Anaesth. 2016 Nov;117(5):569-575. doi: 10.1093/bja/aew320. PMID: 27799171). However, there are ISO standards that still use RMSD, such as ISO 80601-2-61. To allow for comparisons with other devices and studies (especially for SpO2), we chose to focus on RMSD for SpO2 and temperature. To clarify this issue, we have modified the text on page 10: “We selected RMSD thresholds of ≤ 3.5% for SpO2 and ≤ 1.5ºC for temperature, with a spread between the 95% upper and lower LOA of ≤ 4.5ºC, based on a review of the literature and internal reference device testing completed during the verification phase of the study.[14]”

Point 6: The acceptable a priori-defined RMSD thresholds for RR and HR should be specified.

Response: We appreciate this input. We have updated the text on page 10 to read: “Based on Masimo’s Rad-97 reference technology verification phase, the acceptable a priori-defined spread between the 95% upper and lower LOA of 30%, approximately equivalent to a root-mean-square deviation (RMSD) of 8, was selected for both RR and HR.[14]”

Point 7: How were RMSD thresholds set? 4.5°C seems very high for temperature, since that could be the difference between a neonate having a high fever vs. a healthy temperature.
Response: Thank you for your comment, as you have identified a mistake in the manuscript. The RMSD threshold for temperature should read ≤ 1.5ºC; it is the spread between the 95% upper and lower LOA that is ≤ 4.5ºC. To confirm, these RMSD thresholds were selected based on an extensive review of the literature and on the verification phase. To ensure clarity, we have corrected the mistake and updated the text on page 10 to read: “We selected RMSD thresholds of ≤ 3.5% for SpO2 and ≤ 1.5ºC for temperature, with a spread between the 95% upper and lower LOA of ≤ 4.5ºC, based on a review of the literature and internal reference technology testing completed during the verification phase of the study.[14]”

Point 8: Similarly, for RR, the target of 30% LoA spread seems very wide. Reference [12] justifies it by looking at variability between manual and automated annotations of HR and RR. There could be many reasons behind that variability - human error, poor algorithm performance - that are not directly related to the accuracy of the monitoring device studied in this paper. This goes beyond the scope of this manuscript and therefore of this review. I've assumed for this review that the 30% is accepted by the community. But again, this sounds like a very loose performance criterion, and it may be worth adding a comment or remark about it in this paper so that the reader is aware of that assumption and why it's made.

Response: Thank you for this comment. We believe that this is in keeping with other similar comparisons of physiological monitoring such as cardiac output (Critchley LA, Critchley JA. A meta-analysis of studies using bias and precision statistics to compare cardiac output measurement techniques. J Clin Monit Comput. 1999;15: 85–91. doi:10.1023/a:1009982611386). We found a similar spread during the verification phase using two reference devices. This should not be confused with a mean percentage difference.
We have added a justification to the text on page 20: “A similar LOA has been widely accepted in determining thresholds of agreement for a new method in cardiac output method comparison studies, and has been used extensively in the field since it was proposed in 1999.[14] For a neonate breathing at 60 breaths/minute with a within-neonate variation of 2 breaths/minute, a 30% spread of LOA would equate to 3.3% variation.”

Point 9: The rationale behind the number of epochs and participants (in each branch of the study: open-label, closed-label rounds 1-3) is missing. Was it based on LoA precision estimates done on previous data? What was the expected precision the authors were hoping to reach with that sample size? This should be added to the statistical analysis section.

Response: Thank you for this comment. The precision was calculated before the study was conducted and reported in previously published articles (including Coleman J, Ginsburg AS, Macharia WM et al. Identification of thresholds for accuracy comparisons of heart rate and respiratory rate in neonates [version 2]. Gates Open Res 2021, 5:93). We have added the precision overview to the text on page 9: “To calculate sample size for each closed-label round, we estimated that 20 neonates with ten replications each would provide a 95% upper and lower LOA between two methods of +/-0.76 times the standard deviation (SD) of their differences. Tight confidence intervals (CI) require sample sizes of roughly 100-200 samples, which is generally sufficient for method comparison studies.[12]”

[RESULTS / DISCUSSION]

Point 10: The authors should report the percentage of data that was considered of sufficient quality for both Sibel and the reference systems. It is included in Table S1, but it should be included in the results section as well, e.g. as the percentage of data that was discarded through that process of selecting good quality data.
This is an important aspect of device performance as well, next to accuracy when the data is of good quality.

Response: We agree with this recommendation. The following text has been added to the results section on pages 11 and 12: “In the open-label analysis round, 140 epochs were selected from nine neonates for RR, 153 epochs from 10 neonates for HR, 84 epochs from seven neonates for SpO2, and 28 measurements from 10 neonates for temperature. A total of 81.5% of the data from Sibel’s ANNE technology was considered sufficient quality in the open-label round, compared with 75.7% of the data from the reference technology (Table S1). During each closed-label round, 10 epochs were selected from a minimum of 20 neonates for HR, RR, SpO2, and temperature, resulting in 200 measurement pairs per vital sign per round being included. More data from Sibel’s ANNE technology were accepted as being sufficient quality in each of the closed-label rounds, compared with the data from the reference technology (round 1: ANNE = 78.4% vs 63.3%; round 2: ANNE = 56.5% vs 50.1%; round 3: ANNE = 84.0% vs 76.1%). No overlapping epochs were included in any of the analysis rounds.”

Point 11: In the discussion section, it is stated that “The outlier values are more likely due to non-compliance with measurement procedures than with the accuracy of the technology.” If that’s indeed an issue with non-compliance with the measurement procedure, these should be labelled as such and removed from the analysis as part of the pre-processing and selection process. If it can’t be attributed to a non-compliance issue for sure, then they should indeed be kept in the analysis, and that comment should be rephrased.

Response: Thank you for highlighting this concerning wording. We have rephrased the text on page 21 to read: “The outlier values may be due to non-compliance with measurement procedures rather than the accuracy of the technology, but this cannot be verified.”

Point 12: Figure 1.
How do you explain the large increase in LoA when going from open-label to closed-label round #1? The authors explain the reduction in spread between closed-label rounds by a modified calibration factor, but it’s unclear why there is such a big jump between the open-label and the closed-label.

Response: You are correct in identifying the large increase in spread of the 95% upper and lower LOA for HR between the open- and closed-label rounds. Sibel used the open-label round to optimize their device performance. To reflect this, we’ve added the following text on page 20: “Between the open and closed rounds, Sibel modified the HR-detection algorithm by adding edge case handlers in the ECG signal where significant motion artifact was detected.”

Submitted filename: SibelAccuracyManuscript-ResponsetoReviewers-18Feb2022.docx

1 Apr 2022

Evaluation of Sibel’s Advanced Neonatal Epidermal (ANNE) wireless continuous physiological monitor in Nairobi, Kenya
PONE-D-21-28272R1

Dear Dr. Coleman,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact.
If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Martin G Frasch
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository.
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes
Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper is organized well. The revision adequately addresses my comments on the earlier version of the manuscript.

Reviewer #2: All comments have been satisfactorily addressed by the authors. Regarding Point 3 of the initial review, the authors explained that it would be hard to redo the analysis with all data points, and provided a justification for their approach. This justification is ok, and the point is considered addressed.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: Yes: Soheil Ghiasi Reviewer #2: No 20 Jun 2022 PONE-D-21-28272R1 Evaluation of Sibel’s Advanced Neonatal Epidermal (ANNE) wireless continuous physiological monitor in Nairobi, Kenya Dear Dr. Coleman: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Martin G Frasch Section Editor PLOS ONE
References (13 in total; first 10 shown)

1.  Measuring agreement in method comparison studies.

Authors:  J M Bland; D G Altman
Journal:  Stat Methods Med Res       Date:  1999-06       Impact factor: 3.021

2.  A meta-analysis of studies using bias and precision statistics to compare cardiac output measurement techniques.

Authors:  L A Critchley; J A Critchley
Journal:  J Clin Monit Comput       Date:  1999-02       Impact factor: 2.502

3.  Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support.

Authors:  Paul A Harris; Robert Taylor; Robert Thielke; Jonathon Payne; Nathaniel Gonzalez; Jose G Conde
Journal:  J Biomed Inform       Date:  2008-09-30       Impact factor: 6.317

4.  Adaptive pulse segmentation and artifact detection in photoplethysmography for mobile applications.

Authors:  Walter Karlen; J Mark Ansermino; Guy Dumont
Journal:  Conf Proc IEEE Eng Med Biol Soc       Date:  2012

5.  Statistical methods for assessing agreement between two methods of clinical measurement.

Authors:  J M Bland; D G Altman
Journal:  Lancet       Date:  1986-02-08       Impact factor: 79.321

6.  Temperature-Adjusted Respiratory Rate for the Prediction of Childhood Pneumonia.

Authors:  Richard G Bachur; Kenneth A Michelson; Mark I Neuman; Michael C Monuteaux
Journal:  Acad Pediatr       Date:  2019-01-16       Impact factor: 3.107

7.  Evaluation of non-invasive continuous physiological monitoring devices for neonates in Nairobi, Kenya: a research protocol.

Authors:  Amy Sarah Ginsburg; Evangelyn Nkwopara; William Macharia; Roseline Ochieng; Mary Waiyego; Guohai Zhou; Roman Karasik; Shuai Xu; J Mark Ansermino
Journal:  BMJ Open       Date:  2020-04-12       Impact factor: 2.692

8.  Four million neonatal deaths: counting and attribution of cause of death.

Authors:  Joy E Lawn; David Osrin; Alma Adler; Simon Cousens
Journal:  Paediatr Perinat Epidemiol       Date:  2008-09       Impact factor: 3.980

9.  Continuous pulse oximetry and respiratory rate trends predict short-term respiratory and growth outcomes in premature infants.

Authors:  Venkatesh Sampath; Navin Kumar; Alyssa Warburton; Ranjan Monga
Journal:  Pediatr Res       Date:  2019-01-14       Impact factor: 3.756

10.  Identification of thresholds for accuracy comparisons of heart rate and respiratory rate in neonates.

Authors:  Jesse Coleman; Amy Sarah Ginsburg; William M Macharia; Roseline Ochieng; Guohai Zhou; Dustin Dunsmuir; Walter Karlen; J Mark Ansermino
Journal:  Gates Open Res       Date:  2021-10-08