BACKGROUND: Quantitative pupillometry is part of the multimodal neuroprognostication of comatose patients after out-of-hospital cardiac arrest (OHCA). However, the reproducibility, repeatability, and reliability of quantitative pupillometry in this setting have not been investigated. METHODS: In a prospective blinded validation study, we compared manual and quantitative measurements of pupil size. Observer and device variability for all available parameters are expressed as the mean difference (bias) and limits of agreement (LoA), and reliability as intraclass correlation coefficients (ICC) with 95% confidence intervals. RESULTS: Fifty-six unique quadruple sets of measurements derived from 14 sedated and comatose patients (mean age 70±12 years) were included. For manually measured pupil size, the inter-observer bias was -0.14±0.44 mm, with LoA of -1.00 to 0.71 mm and an ICC of 0.92 (0.86-0.95). For quantitative pupillometry, we found a bias of 0.03±0.17 mm, LoA of -0.31 to 0.36 mm, and ICCs of 0.99. Quantitative pupillometry also yielded lower bias, narrower LoA, and higher ICCs for intra-observer and inter-device measurements. The correlation between manual and automated pupillometry was better in larger pupils, and quantitative pupillometry had less variability and higher ICCs when assessing small pupils. Further, observers failed to detect 26% of the quantitatively estimated abnormal reactivity with manual assessment. We found ICCs >0.91 for all quantitative pupillary response parameters (except for latency, with ICC 0.81-0.91). CONCLUSION: Automated quantitative pupillometry has excellent reliability and twice the reproducibility and repeatability of manual pupillometry. This study further presents novel estimates of variability for all quantitative pupillary response parameters, with excellent reliability.
Critically ill patients admitted to the cardiac intensive care unit (cICU) commonly experience neurological complications [1,2]. In patients resuscitated from out-of-hospital cardiac arrest (OHCA) in particular, anoxic brain injury is the leading cause of death [3,4]. Identifying patients with poor neurological outcome early is challenging [5,6], and evaluation of pupil size and reactivity is of great prognostic importance [7,8]. When additional neurological evaluation (brain imaging, electroencephalography, and somatosensory evoked potentials) is needed before deciding on withdrawal of life-sustaining therapy, its timing can be guided by results from serial pupillometry as part of accurate multidisciplinary neuroprognostication [7-9]. High reliability of pupillometry is therefore essential.

In the clinical setting, pupillary assessments are often performed manually, using a penlight for reactivity and a pupil gauge for pupil size, even though several studies investigating reliability find automated quantitative pupillometry superior to manual pupillometry [10-17]. Current American and European resuscitation guidelines [5,6] recommend quantitative pupillometry as part of the multimodal prognostication of comatose patients resuscitated from OHCA. However, these guidelines recognize that the evidence is still limited, and the lack of knowledge on the variability of quantitative pupillometry may limit its clinical usefulness and the subsequent decision-making [18]. Furthermore, most studies predicting morbidity and mortality with quantitative pupillometry rely only on the overall pupillary reactivity (degree of contraction) [15,17]. Hence, the reliability of the individual quantitative pupillary response parameters (including contraction and dilation velocity) has not been thoroughly investigated [10-16].
However, a recent post-hoc analysis of a multicenter prospective observational study found that several individual parameters were associated with poor neurologic outcome [19].

To meet this demand for knowledge of variability, we investigated the observer and device reproducibility, repeatability, and reliability of quantitative pupillometry in a clinical setting at a specialized cICU and compared its performance to the standard manual assessment. Additionally, we aimed to present novel estimates of variability for all the individual quantitative pupillary response parameters of an automated pupillometer for future scientific and clinical use.
Methods
Study design and population
We conducted a single-center prospective double-blinded validation study in a cICU at a tertiary heart center. Throughout August and September 2020, all patients admitted to the cICU were considered eligible for inclusion and were otherwise treated according to guidelines [5]. This comprised hemodynamically unstable patients requiring specialized intensive cardiac care. No elective patients were included. No patients were excluded: none had a preexisting ophthalmic condition that would hinder examination of both eyes. The aim was to include a minimum of 20 quadruple measurements to cover the range of pupil sizes, a pragmatic sample size used in previous validation studies [20,21].
Standard manual and quantitative pupillometry
Manual pupillary assessments were performed using a penlight for reactivity and a pupil gauge for measuring pupil size. Pupil size was recorded on an ordinal scale from 1 to 10 millimeters, and pupil reactivity was graded as an absent, sluggish, normal, or hyperreactive response. No hyperreactive responses were registered, and this category was omitted from further analysis.

Quantitative pupillometry measurements were performed using two NPi®-200 pupillometers (NeurOptics®, Irvine, CA, USA). With a calibrated light stimulation of fixed intensity (1000 lux) and duration (3.2 s), the device uses an infrared camera to achieve a rapid measurement (0.05 mm resolution) of the pupil size and a series of dynamic pupillary variables. A single-use plastic guard (SmartGuard®) was used for every pupillary measurement, ensuring a uniform distance from the eye, with the pupil targeted on an LCD screen. The quantitative assessment of the pupillary light reflex (PLR) divides the reactivity into latency of constriction (LAT, seconds), percental change (%CH, %), average and maximum constriction velocity (CV/MCV, millimeters/second), and the subsequent dilation velocity (DV, millimeters/second), besides maximum and minimum pupil size (MAX/MIN, millimeters) [22]. As previously, abnormal pupil reactivity is defined as %CH <15% [14].

Reactivity can also be expressed as the Neurological Pupil index™ (NPi), a scalar value (between 0 and 5) derived from an algorithm combining quantitative parameters [23]. The higher the NPi score, the more reactive the response; conversely, the lower the NPi score, the more abnormal the response [23]. An NPi score <3 is considered abnormal (a sluggish response) and a score ≥3 normal (a brisk response). All parameters were measured automatically and collected in the SmartGuard® data storage device. The pupillometer measures human pupil sizes ranging from 1 mm to 9 mm, with an accuracy of 0.03 mm [22].
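As an illustration of the thresholds described above (this is not the proprietary NeurOptics algorithm, and the function names are ours), percental change can be derived from the maximum and minimum pupil diameters and checked against the stated cutoffs:

```python
# Illustrative sketch of the reactivity thresholds stated in the text:
# %CH < 15% is abnormal; an NPi score < 3 is abnormal (sluggish).
# Function names are hypothetical, for illustration only.

def percental_change(max_mm, min_mm):
    """Pupil constriction as a percentage of the resting (maximum) diameter."""
    return (max_mm - min_mm) / max_mm * 100

def reactivity_abnormal(max_mm, min_mm, npi=None):
    """Abnormal if %CH < 15%, or if a supplied NPi value is below 3."""
    if npi is not None and npi < 3:
        return True
    return percental_change(max_mm, min_mm) < 15

# A pupil of 2.2 mm constricting to 1.7 mm changes by ~22.7%: normal
print(round(percental_change(2.2, 1.7), 1))  # 22.7
print(reactivity_abnormal(2.2, 1.7))         # False
```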
Terminology of measurement error
In this study, we use several terms describing studies of measurement error. When measurements are made on the same subject, agreement quantifies how close the measurements are to each other, with the error given on the same scale as the measurements. It is independent of the population and refers to the methods used. Reliability, in contrast, relates the observed measurement errors to the "true" inherent variability between subjects and therefore depends on population heterogeneity. In this study, we evaluate reliability with the intraclass correlation coefficient (ICC) [20,24,25]. Repeatability refers to the variability of repeated measurements on the same subject under the same conditions, and reproducibility to the variation when conditions differ between measurements. Conventions differ according to the point of interest, as in this study, where measurements between different observers and devices are investigated [26].
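The distinction between agreement and reliability can be made concrete with a simulation (our own sketch, not part of the study's analysis): with identical measurement error, the ICC is higher in a heterogeneous population than in a homogeneous one, while the agreement (the error itself) is unchanged. A simple one-way variance-components ICC suffices for the illustration:

```python
# Minimal sketch: same measurement error, different population spread,
# different ICC. One-way random-effects ICC with k = 2 raters.
import random

def icc_one_way(pairs):
    """ICC estimated as between-subject variance over total variance."""
    n = len(pairs)
    means = [(a + b) / 2 for a, b in pairs]
    grand = sum(means) / n
    # within-subject (error) variance from paired differences
    ms_within = sum((a - b) ** 2 for a, b in pairs) / (2 * n)
    ms_between = 2 * sum((m - grand) ** 2 for m in means) / (n - 1)
    var_between = max((ms_between - ms_within) / 2, 0)
    return var_between / (var_between + ms_within)

random.seed(1)
def simulate(spread):
    # two raters measure each subject with the same error SD (0.2 mm)
    subjects = [random.gauss(3.0, spread) for _ in range(500)]
    return [(s + random.gauss(0, 0.2), s + random.gauss(0, 0.2)) for s in subjects]

icc_wide = icc_one_way(simulate(1.0))    # heterogeneous pupil sizes
icc_narrow = icc_one_way(simulate(0.2))  # homogeneous pupil sizes
print(icc_wide > icc_narrow)  # True: same error, different reliability
```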
Patient assessment procedure
To collect pupillometry data in the clinical setting, patients at the cICU were selected for additional study-related pupillary measurements in the period from admission, through the initiation and discontinuation of sedation, targeted temperature management (TTM) at 36 degrees Celsius, and treatment with vasoactive agents, to discharge from the cICU or withdrawal of life-sustaining treatment. Patients were subjected to multiple assessments during their admission.

During the inclusion period, pairs of two trained nurses (observers) from the cICU staff pool (n = 24) performed sets of quadruple measurement-pairs, each consisting of a manual and then a quantitative pupillometry measurement of both the left and right eye, in a pre-specified sequence, all completed within 5 minutes. Observer reproducibility (inter-observer variability) was addressed by the two nurses performing measurements one after the other (measurement-pairs #1 and #2). Observer repeatability (intra-observer variability) and device repeatability (intra-device variability, only for quantitative pupillometry) were investigated by comparing the first measurements from the first nurse with repeated measurements by the same nurse (measurement-pairs #1 and #3). Finally, device reproducibility (inter-device variability, only for quantitative pupillometry) was addressed by comparing the first measurements from the second nurse with repeated quantitative pupillometry measurements using a second pupillometer (measurement-pairs #2 and #4). The sequence of each patient's set of quadruple measurement-pairs is presented in Fig 1.
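The comparison scheme just described can be sketched as data (the labels and structure are ours, matching the description in the text):

```python
# Which measurement-pairs in a quadruple set are contrasted for each
# variability estimate, per the protocol described above.
COMPARISONS = {
    "inter-observer": (1, 2),  # nurse A vs. nurse B, same device
    "intra-observer": (1, 3),  # nurse A repeated, same device
    "inter-device":   (2, 4),  # nurse B repeated on a second pupillometer
}

def pairs_for(estimate):
    """Return the two measurement-pair numbers compared for an estimate."""
    return COMPARISONS[estimate]

print(pairs_for("inter-device"))  # (2, 4)
```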
Fig 1
Patient assessment procedure.
Flowchart of “Patient assessment procedure” depicting a complete set of quadruple measurement-pairs to the same patient within 5 minutes, with every measurement-pair comprising one manual, and one quantitative measurement, to both left and right eye, subsequently, blinded to the other observer.
Each observer used REDCap (Research Electronic Data Capture) [27,28] surveys to enter the manual measurements directly into a database immediately after an assessment, blinding the data to the other observer. Results from quantitative pupillometry were automatically stored in the pupillometer's SmartGuard, blinded to all observers. All data were anonymized and blinded to the outcome assessors analyzing the data.

The data collection was performed as part of the quality assessment of clinical procedures and equipment, approved by the hospital administration. Formal informed consent was therefore waived as per Danish legislation. The study conforms to the guiding principles of the Declaration of Helsinki, and all authors have read and approved the manuscript in its present form. The manuscript is not under consideration for publication elsewhere, and none of the results have previously been published. All authors meet the authorship criteria.
Statistical analysis
Continuous variables were tested for normality by visual inspection of histograms and are presented as mean ± standard deviation (SD). Categorical variables are presented as frequency and percentage. Absolute variation between measurements was analyzed with Bland-Altman (BA) statistics and presented in plots including the mean difference (bias) and 95% limits of agreement (LoA), calculated as ±1.96 SD [20]. Reliability was tested with the ICC, presented with its 95% confidence interval (ICC (95% CI)) [29]. ICC values below 0.50 were interpreted as poor, values between 0.50 and 0.75 as moderate, values between 0.75 and 0.90 as good, and values above 0.90 as excellent reliability [30]. As a sensitivity analysis, we stratified pupil sizes into two groups of "small" and "large" pupils (below and above the median pupil size of 2.2 mm) and into left and right eyes, and repeated the analyses. When investigating agreement between measurements of the two methods, we used BA statistics and tested the association between measurements of categorical and continuous variables with Spearman's rank correlation test. All statistical analyses were performed using R version 4.0.3 [31], and a p-value <0.05 was considered significant.
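The Bland-Altman quantities reported throughout the Results (bias and 95% LoA as bias ± 1.96 SD of the paired differences) can be sketched as follows; the example values are invented for illustration and are not study data (the study itself used R):

```python
# Sketch of the Bland-Altman statistics used in this study: the mean
# difference (bias) between paired measurements and the 95% limits of
# agreement, bias +/- 1.96 * SD of the differences.
from statistics import mean, stdev

def bland_altman(m1, m2):
    """Return (bias, (lower LoA, upper LoA)) for two paired series."""
    diffs = [a - b for a, b in zip(m1, m2)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

obs1 = [2.1, 2.4, 1.9, 3.0, 2.6]  # hypothetical pupil sizes, observer 1 (mm)
obs2 = [2.0, 2.5, 1.8, 3.1, 2.4]  # hypothetical pupil sizes, observer 2 (mm)
bias, (lower, upper) = bland_altman(obs1, obs2)
print(round(bias, 2))  # 0.04
```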
Results
Baseline characteristics
We included 14 patients, either sedated or comatose, with a mean age of 70±12 years. Patients were predominantly male (57%); 10 (71%) were resuscitated from OHCA, and 4 were other critically ill patients admitted to the cICU. We collected 56 sets of quadruple measurement-pairs for analysis, of which 14 (25%) were obtained while the patients underwent TTM at 36 degrees Celsius, 44 (79%) while patients were sedated, and 46 (82%) while patients received a vasoactive agent. All baseline characteristics are presented in Table 1.
Table 1
Baseline characteristics.
Patients (n = 14)a
  Demography
    Age (years)                                  70±12
    Male sex                                     8 (57%)
  Prior medical/surgical conditions
    Arrhythmia                                   3 (21%)
    Chronic obstructive pulmonary disease        3 (21%)
    Diabetes                                     2 (14%)
    Ischemic heart disease                       5 (36%)
    Heart failure                                3 (21%)
    Hypertension                                 8 (57%)
    Valvular heart disease                       1 (7%)
  Admission diagnosis
    Out-of-hospital cardiac arrest               10 (71%)
    Cardiogenic shock                            2 (14%)
    ST elevation myocardial infarction           1 (7%)
    Epidural hematoma                            1 (7%)
  Procedures performed
    Coronary angiography                         11 (79%)
    Percutaneous coronary intervention           7 (50%)
    Left ventricular ejection fraction by echocardiography, %   32±16

Patient assessments (n = 56)a
  Treatment during measurement
    Sedating agents                              44 (79%)
      Propofol                                   28 (50%)
      Fentanyl                                   30 (54%)
      Remifentanil                               14 (25%)
      Midazolam                                  2 (4%)
      Dexmedetomidine                            2 (4%)
      Other opioids                              8 (14%)
    Targeted temperature management              14 (25%)
    Vasoactive agents                            46 (82%)
      Norepinephrine                             44 (79%)
      Dopamine                                   6 (11%)
      Other                                      14 (25%)

aStatistics presented: mean±SD, n (%). SD = standard deviation.
Standard manual versus quantitative pupillometry—Pupil size
Intra- and inter-observer variability for maximum size using standard manual pupillometry had a bias of -0.02±0.31 mm with LoA of -0.58 to 0.63 mm, and a bias of -0.14±0.44 mm with LoA of -1.00 to 0.71 mm, respectively. Intra-observer variability for maximum pupil size measured with quantitative pupillometry was associated with a bias of 0.04±0.13 mm with LoA of -0.21 to 0.28 mm, and for minimum pupil size, a bias of 0.01±0.13 mm with LoA of -0.24 to 0.26 mm. Assessment of inter-observer variability resulted in a bias of 0.03±0.17 mm with LoA of -0.31 to 0.36 mm, and a bias of 0.03±0.18 mm with LoA of -0.33 to 0.38 mm, for maximum and minimum pupil size, respectively. Intra- and inter-observer manual measurements resulted in ICCs of 0.89 (0.83–0.93) and 0.92 (0.86–0.95), respectively. Quantitative pupillometry produced ICCs of 0.99 for all measurements. All individual results of pupil size for manual and quantitative pupillometry are presented in Table 2 with BA plots in Fig 2.
Table 2
Statistical analysis.
Bias, mean±SD
                                       Intra-observer   Inter-observer   Inter-device
Manual pupillometry
  Maximum diameter, mm                 0.02±0.31        -0.14±0.44       -
Quantitative pupillometry
  Maximum diameter, mm                 0.04±0.13        0.03±0.17        0.01±0.17
  Minimum diameter, mm                 0.01±0.13        0.03±0.18        -0.02±0.15
  Percental change, %                  0.84±3.27        -0.26±4.42       1.42±4.47
  Constriction velocity, mm/s          0.03±0.25        -0.06±0.33       0.06±0.32
  Maximum constriction velocity, mm/s  0.07±0.30        -0.08±0.43       0.14±0.39
  Dilation velocity, mm/s              0.01±0.10        <0.01±0.12       0.03±0.10
  Latency of constriction, s           <0.01±0.04       <0.01±0.02       <0.01±0.04
  Neurological Pupil index             0.04±0.21        -0.02±0.21       0.04±0.19

Mean value, mean±SD
                                       Intra-observer   Inter-observer   Inter-device
Manual pupillometry
  Maximum diameter, mm                 2.11±0.54        2.23±0.81        -
Quantitative pupillometry
  Maximum diameter, mm                 2.17±0.73        2.28±0.85        2.02±0.31
  Minimum diameter, mm                 1.74±0.43        1.78±0.48        1.66±0.28
  Percental change, %                  18.29±9.91       19.70±10.16      17.32±8.87
  Constriction velocity, mm/s          1.17±0.87        1.28±0.88        1.02±0.47
  Maximum constriction velocity, mm/s  1.66±1.25        1.85±1.38        1.45±0.77
  Dilation velocity, mm/s              0.41±0.31        0.46±0.35        0.35±0.23
  Latency of constriction, s           0.24±0.04        0.24±0.04        0.24±0.04
  Neurological Pupil index             4.42±0.40        4.47±0.36        4.45±0.41

Limits of agreement (lower, upper)
                                       Intra-observer   Inter-observer   Inter-device
Manual pupillometry
  Maximum diameter, mm                 -0.58, 0.63      -1.00, 0.71      -
Quantitative pupillometry
  Maximum diameter, mm                 -0.21, 0.28      -0.31, 0.36      -0.33, 0.35
  Minimum diameter, mm                 -0.24, 0.26      -0.33, 0.38      -0.32, 0.27
  Percental change, %                  -5.56, 7.25      -8.93, 8.41      -7.35, 10.19
  Constriction velocity, mm/s          -0.45, 0.52      -0.69, 0.58      -0.56, 0.69
  Maximum constriction velocity, mm/s  -0.52, 0.66      -0.93, 0.77      -0.63, 0.91
  Dilation velocity, mm/s              -0.19, 0.21      -0.24, 0.24      -0.18, 0.23
  Latency of constriction, s           -0.07, 0.07      -0.05, 0.05      -0.07, 0.08
  Neurological Pupil index             -0.36, 0.45      -0.43, 0.40      -0.34, 0.42

Intraclass correlation coefficients (95% CI)
                                       Intra-observer    Inter-observer    Inter-device
Manual pupillometry
  Maximum diameter, mm                 0.89 (0.83–0.93)  0.92 (0.86–0.95)  -
Quantitative pupillometry
  Maximum diameter, mm                 0.99 (0.99–1.00)  0.99 (0.98–0.99)  0.99 (0.98–0.99)
  Minimum diameter, mm                 0.98 (0.97–0.99)  0.96 (0.94–0.98)  0.97 (0.95–0.98)
  Percental change, %                  0.97 (0.96–0.98)  0.95 (0.92–0.97)  0.95 (0.92–0.97)
  Constriction velocity, mm/s          0.98 (0.97–0.99)  0.96 (0.94–0.98)  0.97 (0.95–0.98)
  Maximum constriction velocity, mm/s  0.99 (0.98–0.99)  0.97 (0.96–0.98)  0.98 (0.97–0.99)
  Dilation velocity, mm/s              0.98 (0.96–0.98)  0.97 (0.95–0.98)  0.98 (0.97–0.99)
  Latency of constriction, s           0.81 (0.70–0.88)  0.92 (0.88–0.95)  0.82 (0.72–0.89)
  Neurological Pupil index             0.93 (0.88–0.95)  0.91 (0.86–0.94)  0.94 (0.90–0.96)

SD = standard deviation, 95% CI = 95% confidence interval, mm = millimeters, s = seconds.
Fig 2
Standard manual versus quantitative pupillometry—Pupil size.
Bland Altman plots depicting intra-, and inter-observer variability of pupil size measured with manual and quantitative pupillometry.
SD = standard deviation, 95% CI = 95% confidence interval, mm = millimeters, s = seconds.

When comparing manual and quantitative assessment of pupil size, we found a bias of -0.02±0.45 mm with LoA of -0.91 to 0.87 mm, and an ICC of 0.90 (0.87–0.92). We found a significant correlation between manual and quantitative pupillometry for both intra-observer (Spearman's rho = 0.63, 95% CI: 0.36–0.80; p<0.001) and inter-observer (Spearman's rho = 0.71, 95% CI: 0.49–0.86; p<0.001) measurements of pupil size. For measurements stratified according to pupil size below and above 2.2 mm, quantitative pupillometry still presented higher ICCs, with narrower LoA, in both groups. Comparing manual and quantitative assessment in the two size groups, we found a correlation of methods for both small pupils (Spearman's rho = 0.65, 95% CI: 0.35–0.83; p<0.001 for intra-observer, and Spearman's rho = 0.53, 95% CI: 0.17–0.77; p = 0.003 for inter-observer) and large pupils (Spearman's rho = 0.72, 95% CI: 0.26–0.94; p<0.01 for intra-observer, and Spearman's rho = 0.85, 95% CI: 0.45–0.97; p<0.001 for inter-observer).

When addressing pupil size for left and right eyes separately, manual pupillometry presented intra-observer variability with LoA of -0.59 to 0.63 mm (for both left and right), and inter-observer variability with LoA of -1.10 to 0.80 mm (left) and -0.95 to 0.83 mm (right). Intra-observer variability for quantitative pupillometry resulted in LoA of -0.19 to 0.24 mm (left) and -0.23 to 0.32 mm (right), and inter-observer variability in LoA of -0.33 to 0.39 mm (left) and -0.30 to 0.34 mm (right). The ICC for manual pupillometry was 0.89 (95% CI: 0.80–0.94) for intra-observer measurements in both eyes. For inter-observer measurements, we found ICCs of 0.91 (95% CI: 0.82–0.95) and 0.92 (95% CI: 0.84–0.96) for the left and right eye, respectively.
Quantitative pupillometry yielded ICC >0.98 (95%CI: 0.98–0.99) for intra-, and inter-observer measures in both eyes.
Standard manual versus quantitative pupillometry–Pupil reactivity
When measuring pupil reactivity with manual assessments, the ICC was 0.89 (0.84–0.93) and 0.76 (0.63–0.85) for intra- and inter-observer measurements, respectively. Reactivity measured with quantitative pupillometry (%CH) produced intra- and inter-observer ICCs of 0.97 (0.96–0.98) and 0.95 (0.92–0.97), respectively. When comparing manual reactivity and %CH, we found a correlation for both intra-observer (Spearman's rho = 0.66, 95% CI: 0.43–0.80; p<0.001) and inter-observer measurements (Spearman's rho = 0.69, 95% CI: 0.47–0.84; p<0.001). In 26% of the cases in which quantitative pupillometry detected abnormal pupil reactivity (%CH <15%), a normal reactivity was noted with manual pupillometry. The mean %CH was 17% in measurements performed while patients were sedated and 25% when not sedated; although lower with sedation, this difference was not significant (p = 0.234). When testing reliability for measurements without sedation, we found ICCs of 0.98 (0.95–0.99) and 0.99 (0.96–0.99) for intra-observer and inter-observer measurements, respectively. For the sedated patients, we found ICCs of 0.96 (0.94–0.98) and 0.92 (0.86–0.95) for intra-observer and inter-observer measurements.
The statistical results of NPi and the additional quantitative pupillary response parameters are shown in Table 2 and presented with BA plots in Fig 3.
Fig 3

Bland Altman plots depicting intra- and inter-observer variability of the quantitative pupillary response parameters. %CH = percental change, CV = average constriction velocity, MCV = maximum constriction velocity, DV = dilation velocity, LAT = latency of constriction, NPi = Neurological Pupil index, LoA = limit of agreement.

The NPi value presented a bias of -0.05±0.21 with LoA of -0.36 to 0.45 for intra-observer variability, and of -0.02±0.21 with LoA of -0.43 to 0.40 for inter-observer variability. NPi presented ICCs of 0.93 (0.88–0.95) and 0.91 (0.86–0.94) for intra- and inter-observer measurements, respectively. When assessing the remaining quantitative parameters, we found low bias and narrow LoA relative to the mean, and the ICCs for all parameters were good to excellent, with values of 0.81 (0.70–0.88) and 0.92 (0.88–0.95) for LAT, and 0.95 to 0.99 for all other parameters.
Device reproducibility, repeatability and reliability
The bias of maximum and minimum pupil size between the two quantitative pupillometry devices (inter-device variability) was 0.01±0.17 mm with LoA of -0.33 to 0.35 mm and -0.02±0.15 mm with LoA of -0.32 to 0.28 mm, respectively. The ICCs were 0.99 (0.98–0.99). The inter-device measurements of NPi resulted in a bias of -0.04±0.20 with LoA of -0.34 to 0.42, combined with an ICC of 0.94 (0.90–0.96). The overall reliability of all the quantitative pupillary response parameters for inter-device measurements resulted in ICCs of 0.94–0.99 (except for LAT, with an ICC of 0.82 (0.72–0.89)). Intra-device (intra-observer) measurements are presented in the previous section and, collectively with inter-device measurements, are further shown in Table 2 with BA plots in Fig 4.
Fig 4
Device reproducibility and repeatability.
Bland Altman plots depicting intra-, and inter-device variability of quantitative pupillometry. MAX size = maximum pupillary diameter, MIN size = minimum pupillary diameter, %CH = percental change, CV = average constriction velocity, MCV = maximum constriction velocity, DV = dilation velocity, LAT = latency of constriction, NPi = Neurological Pupil index, LoA = limit of agreement.
Discussion
In our study of unconscious and critically ill patients, primarily with cardiac OHCA, we found twice the observer reproducibility and repeatability for quantitative measurements of pupil size, with better reliability for both size and reactivity, compared to the standard manual assessment.

When investigating the individual agreement (both intra- and inter-observer) of the two methods, we found that assessment of pupil size by the standard manual method, by experienced observers, resulted in LoA twice as wide as those of the automated pupillometer. Furthermore, the ICC values suggest that intra- and inter-observer reliability was lower for the manual assessments than for the automated ones. These results corroborate earlier data of greater variability in measurements by the standard manual method, confirming its inferiority compared to automated quantitative pupillometry [10-15]. As a consequence of this variability, Couret et al. [14] revealed that observers using standard manual examinations missed 50% of anisocoria (difference in pupil size between left and right eye, there defined as >1.0 mm) and falsely detected 16 cases compared to automated pupillometry. In concordance with this, we found that measurements of automated quantitative pupillometry have higher reproducibility and repeatability than manual assessment for both the left and right eye separately.

There is some disagreement in the literature regarding the correlation between the standard manual and automated pupillometer. Meeker et al. and Couret et al. [11,14,16] report a poor correlation of the two methods, in contrast to Yan et al. and, most recently, Smith et al. [32,33], who present a closer correlation. A fundamental problem with manual pupillometry is that the reaction and difference in size are relatively smaller in smaller pupils and thereby more difficult to identify.
In concordance, poor correlation was most pronounced when assessing small pupils for absolute size and reactivity (39% discordance for pupil size <2.0 mm versus 4% for pupil size >4 mm in Couret et al.), and Yan et al. and Smith et al. did not stratify for size [14,32,33]. This is supported by our results of better correlation between the methods in larger pupils, and of higher reliability and less variability for the quantitative assessment than for the manual when assessing small pupils.

Smith et al. [33] find it unlikely that the small mean difference (bias 0.15 with LoA of ±1.4) in pupil size found with manual assessment in their study would result in a clinically significant outcome. However, when the cutoff point used to score anisocoria is set to 1.0 mm (Couret et al.) or 0.5 mm (Taylor et al.), a variability (LoA) of ±1.4 mm (Couret et al.), or even ±0.9 mm as found in our study, could prove to be a significant margin of error in the clinical setting [14,34]. While automated quantitative pupil assessment outperforms the standard manual evaluation, even a LoA of ±0.3 mm, with a pupillometer accuracy of 0.03 mm [22], should be taken into account for a pupil with a mean size of 2.2 mm contracting to a minimum size of 1.7 mm.

The quantitative pupillometer yielded excellent reliability for measurement of reactivity (%CH), even under sedation, compared to a considerably lower value for the manual assessments, and when the two methods were compared, the reliability was only moderate. We found that observers failed to detect 26% of the quantitatively estimated abnormal PLR with the manual assessment, in concordance with an error rate of 18% in Couret et al. [14]. This lack of reliability and accuracy in estimating pupil size and reactivity with manual penlight assessment is problematic, as this still seems to be the standard regimen for evaluating pupil size and reactivity in the clinical setting.
If pupillometry is to be included in an accurate post-CA neuroprognostication protocol, this warrants the clinical application of automated quantitative pupillometry in favor of the standard manual evaluation.

A recent study concluded that measurements of pupil size, constriction, and latency were not always interchangeable between the two devices studied [10]. This could be a critical challenge, as several devices are often used interchangeably in the clinical setting and relied upon for multimodal prognostication. However, our study finds excellent inter-device reliability for all the individual quantitative pupillary response parameters, with overall identical results regarding size and reactivity as for intra- and inter-observer measurements. These results are consistent with the only other study prior to this [13]. For all the additional quantitative pupillary response parameters, recently presented with promising prognostic value by Tamura et al. [19], this study offers novel data on reproducibility and repeatability.

Overall, this study supports that automated quantitative pupillometry performs well in the clinical setting, even when several observers, multiple patients, and separate pupillometry devices are involved. However, further studies implementing the additional quantitative pupillary response parameters are needed.
Limitations
As stated, the 56 sets of quadruple pupil measurement-pairs were obtained from 14 patients; hence, the measurements are not entirely independent, which may underestimate the inherent variability between subjects. The prevalence of extreme pupil sizes (i.e., very small or very large initial pupil size) was low; however, we still cover the majority of pupil sizes in the current sample. The PLR can be affected by any interference with the balance of autonomic control of pupil size. Thus, any anatomical, physical, or pharmacological condition that challenges this balance can affect the pupillary size and reactivity pattern [18,35,36]. Our pupillary measurements were made by experienced observers in a clinical setting at the cICU, with or without sedation, TTM, and vasoactive agents. In this setting, no control could be exercised over the type or amount of anesthetic or inotropic agents given, or the exact timing of pupillometry measurements. However, individual measurements for comparison were made on the same patients within 5 minutes, keeping conditions similar regarding the state of sedation, opioid treatment, and ambient lighting. Other studies found no difference in PLR agreement for patients with or without sedation when assessed within a short time frame [14,36].
Conclusion
In this prospective blinded validation study, we found excellent reliability and twice the reproducibility and repeatability for automated quantitative pupillometry compared to manual pupillometry. We present novel estimates of variability for all quantitative pupillary response parameters with excellent reliability.
8 Feb 2022
PONE-D-22-00255
Observer and device reproducibility, repeatability, and reliability of automated quantitative pupillometry in critically ill cardiac patients
PLOS ONE
Dear Dr. Nyholm,
Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
Two expert reviewers in the field have evaluated your manuscript. I want to encourage you to address the minor points raised by them.
Please submit your revised manuscript by Mar 25 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.
We look forward to receiving your revised manuscript.
Kind regards,
Andreas Schäfer
Academic Editor
PLOS ONE
Journal Requirements:
When submitting your revision, we need you to address these additional requirements.
1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf
2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references.
Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.
3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.
Upon re-submitting your revised manuscript, please upload your study's minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.
Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.
We will update your Data Availability statement to reflect the information you provide in your cover letter.
4. Your ethics statement should only appear in the Methods section of your manuscript.
If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please ensure that your ethics statement is included in your manuscript, as the ethics statement entered into the online submission form will not be published alongside your manuscript.
Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes
2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes
3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: No
Reviewer #2: Yes
4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes
5. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)
Reviewer #1: I congratulate the authors for the submitted work, in which the evaluation of the pupils is examined by comparing the conventional manual measurement to the apparative measurement. Overall, I think it is a promising article. However, I have a few small comments.
1. I would suggest to choose a concise title that shows the focus of this work (comparison of classical manual measurement to quantitative pupil measurement).
2. Page 4, lines 44-48: Please change the sentence structure. Discontinuation of therapy and good or poor neurological outcome are not synonymous. Poor outcome does not automatically lead to discontinuation of therapy (think about CPC class 3 and 4 or Mod Rank score of 3-5).
3. Page 6, lines 71-72: Is the patient population studied OHCA patients, or does it also include electively intubated ventilated patients in the context of cardiogenic shock?
4. Page 7, line 88: Please explain PLR. Page 8, line 119: How many patients had withdrawal of life-sustaining treatment? Did the results of pupillometry lead to an extension of the diagnosis? Were there differences between classical and apparative measurement here?
5. Page 11, lines 168-72: Very misleading! Is the mean age of the 8 male patients reflected? You write that 71% were resuscitated from OHCA. 71% of 14 patients would result in 9.94 patients. This does not make sense. To what degree were the patients cooled? At what time (patient core body temperature) were the measurements taken?
6. In the tables, the values should be given as mean ±SD and the percentages in parentheses. Otherwise it seems very confusing.
7. Page 12, Table 1: Male sex with 8 of 14 corresponds to 57% and not 62%, and 7 of 14 equals 50% for percutaneous coronary intervention (the word intervention is missing here).
8. I have a question about the admission diagnosis in Table 1. Were all patients cardiopulmonary resuscitated?
It was stated that all patients had out-of-hospital cardiac arrest. According to this, how should cardiogenic shock or ST-segment elevation myocardial infarction and epidural hematoma be interpreted? (Does it make sense to compare a patient with epidural hematoma with patients with diagnoses from the cardiology field?)
9. Page 12, line 174: The heading of the section should start on a new page. Again, the question arises at what time and under what circumstances the measurements were taken (immediately after arrival? under hypothermia at 36 degrees? 32 degrees?).
10. Page 14, Table 2: Again, values should be expressed as mean plus minus SD.
11. Page 19, line 256: "... primarily with cardiac OHCA ..." Did all patients have OHCA or not? If necessary, only specialize on one group here to avoid bias.
12. Page 22, Limitations: This section is too long and partly redundant. For example, the content of page 21, lines 310-316 is already in the Discussion section.
Some grammatical reworking is necessary, e.g. in the title: "repeatability, and reliability". The comma before the "and" should be removed if this heading is to be retained.
- Page 2, line 18: "repeatability, and ..." as well as: Background: these when sitting should not be linked to a but.
- Page 5, line 64: "repeatability, ..." (see above)
- Page 8, line 115: Remove the colons from the heading
- Page 8, line 116: "... were,selected ..." (the comma must go)
- Page 17, line 239: Again: "repeatability, and ..."
(The comma character!)
Reviewer #2: The authors studied the validity of manual and automated pupillometry in 14 sedated and comatose patients and found that manual evaluation was less accurate in small pupils and performed poorly in the assessment of reactivity in 1 out of 4 evaluations. Although limited by the small sample size and the selected cohort, the study is of interest and useful for the broadened use of pupillometry in critical care patients. I have only minor comments to the authors:
Please provide data on absolute pupil size.
Was there a difference in reactivity among patients with and without opioid treatment?
Any difference among NPi >3 and <3 for the comparison with manual assessment?
6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: No
[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool.
If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
5 Apr 2022
Dear editor and reviewers,
Following your letter regarding our manuscript "Observer and device reproducibility, repeatability, and reliability of automated quantitative pupillometry in critically ill cardiac patients", we are sending this response letter explaining the changes made in the revised manuscript. First, we would like to thank you for taking your valuable time to evaluate our manuscript and for the insightful comments. Below is a point-by-point response to the comments of the reviewers. All page and line references refer to the manuscript before editing.
Academic Editor: When submitting your revision, we need you to address these additional requirements.
1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf
Response from authors: We have applied these style requirements as requested.
2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.
Response from authors: We have not cited any retracted papers; hence, no changes have been made.
3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found.
PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. Upon re-submitting your revised manuscript, please upload your study's minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter.
Response from authors: We have updated our Data Availability statement in the Editorial Manager accordingly.
4. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section.
Please ensure that your ethics statement is included in your manuscript, as the ethics statement entered into the online submission form will not be published alongside your manuscript.
Response from authors: Our ethics statement resides in the sub-section "Patient assessment procedure" in the "Methods" section.
Reviewer #1: I congratulate the authors for the submitted work, in which the evaluation of the pupils is examined by comparing the conventional manual measurement to the apparative measurement. Overall, I think it is a promising article. However, I have a few small comments.
1. I would suggest to choose a concise title that shows the focus of this work (comparison of classical manual measurement to quantitative pupil measurement).
Response from authors: The suggestion is much appreciated, and we recognize that the title must be changed to better emphasize the subject of the manuscript.
Changes in the revised manuscript: We have changed the full title (title page) to "Superior reproducibility and repeatability in automated quantitative pupillometry compared to standard manual assessment, and quantitative pupillary response parameters present high reliability in critically ill cardiac patients", and the short title from "Observer and device reproducibility, repeatability, and reliability of automated quantitative pupillometry" to "Superior reproducibility and repeatability in automated quantitative pupillometry compared to manual".
2. Page 4, lines 44-48: Please change the sentence structure. Discontinuation of therapy and good or poor neurological outcome are not synonymous.
Poor outcome does not automatically lead to discontinuation of therapy (think about CPC class 3 and 4 or Mod Rank score of 3-5).
Response from authors: We thank the reviewer for this valid point and have rephrased the sentence in question.
Changes in the revised manuscript: The sentence at page 4, lines 42-48 has been changed to the following: "It is challenging to identify these patients with poor neurological outcome early [7,8], and evaluation of pupil size and reactivity is of great prognostic importance [5,6]. When additional neurological evaluation (imaging of the brain, electroencephalography and somatosensory evoked potential) is needed before deciding about withdrawal of life-sustaining therapy, the timing can be guided by results from serial pupillometry as part of accurate multidisciplinary neuroprognostication [5,6,9]. Hence, great reliability for pupillometry is essential."
3. Page 6, lines 71-72: Is the patient population studied OHCA patients, or does it also include electively intubated ventilated patients in the context of cardiogenic shock?
Response from authors: We thank the reviewer for this relevant question. This issue has also been raised in questions #8 and #11. To collectively clarify this, we have addressed all three questions within the answer to question #11.
4. Page 7, line 88: Please explain PLR. Page 8, line 119: How many patients had withdrawal of life-sustaining treatment? Did the results of pupillometry lead to an extension of the diagnosis? Were there differences between classical and apparative measurement here?
Response from authors: Regarding the first issue, we thank you for pointing out this mistake, which has been corrected. Concerning the second issue, of the 14 patients included, 5 patients died (4 due to anoxic brain injury and 1 due to refractory cardiogenic shock) at the ICU, and all had withdrawal of life-sustaining treatment (WLST) prior to this.
In the cases of anoxic brain injury, the decision of WLST relied upon a multidisciplinary assessment of neuron-specific enolase, imaging of the brain, electroencephalography, and somatosensory evoked potential, combined with thorough neurological examination. The results of pupillometry did not lead to an extension of the diagnosis.
Changes in the revised manuscript: The "PLR" at page 7, line 88 has been replaced with "pupillary light reflex (PLR)".
5. Page 11, lines 168-72: Very misleading! Is the mean age of the 8 male patients reflected? You write that 71% were resuscitated from OHCA. 71% of 14 patients would result in 9.94 patients. This does not make sense. To what degree were the patients cooled? At what time (patient core body temperature) were the measurements taken?
Response from authors: We thank the reviewer for making us aware of this need for clarification. We understand that the phrasing of the baseline characteristics leads the reader to believe that the statements refer only to the 8 males, instead of the population as a whole. The issue of TTM and timing of measurement has been raised again in question #9, where we have elaborated on this matter.
Changes in the revised manuscript: We have rephrased the sentence at page 11, lines 168-172 to the following: "We included 14 patients, either sedated or comatose, with a mean age of 70±12 years. Patients were predominantly males (57%), 10 (71%) were resuscitated from OHCA, and 4 were other critically ill patients admitted to the cICU. We collected 56 sets of quadruple measurement-pairs for analysis, wherein 14 (25%) pairs were obtained while the patients underwent TTM to 36 degrees Celsius, 44 (79%) while patients were sedated, and 46 (82%) while patients received a vasoactive agent. All baseline characteristics are presented in Table 1."
6. In the tables, the values should be given as mean ±SD and the percentages in parentheses.
Otherwise it seems very confusing.
Response from authors: We agree that this setup can be confusing and thank the reviewer for emphasizing this point.
Changes in the revised manuscript: In Table 1 (page 12), we have replaced all parentheses containing mean values with "value±SD" instead and left the percentages in parentheses.
7. Page 12, Table 1: Male sex with 8 of 14 corresponds to 57% and not 62%, and 7 of 14 equals 50% for percutaneous coronary intervention (the word intervention is missing here).
Response from authors: These are of course miscalculations and an omission of a word on our part. We thank you for pointing out these errors.
Changes in the revised manuscript: In Table 1 (page 12), for "Male sex" we have changed the percentage from "62" to "57", replaced "Percutaneous coronary" with "Percutaneous coronary intervention", and changed the percentage from "47" to "50".
8. I have a question about the admission diagnosis in Table 1. Were all patients cardiopulmonary resuscitated? It was stated that all patients had out-of-hospital cardiac arrest. According to this, how should cardiogenic shock or ST-segment elevation myocardial infarction and epidural hematoma be interpreted? (Does it make sense to compare a patient with epidural hematoma with patients with diagnoses from the cardiology field?)
Response from authors: Again, we thank the reviewer for this question and refer to the answer to question #11, later in the text.
9. Page 12, line 174: The heading of the section should start on a new page. Again, the question arises at what time and under what circumstances the measurements were taken (immediately after arrival? under hypothermia at 36 degrees? 32 degrees?).
Response from authors: We thank the reviewer for emphasizing these points. We agree that the heading should be moved to a new page. Regarding the circumstances of the measurements in #5 and #9, we have added the temperature of our TTM regime by rewriting the first sentence of the results section.
The timing of the individual measurements is discussed in the "Methods" section of the manuscript. They were obtained "in the period from admission through the initiation and discontinuation of sedation, targeted temperature management (TTM), and treatment with vasoactive agents, to discharge from the cICU or withdrawal of life-sustaining treatment" (page 8, lines 117-119). For OHCA patients treated with TTM, some measurements were obtained during TTM and some after TTM as well. For non-OHCA patients, no measurements were obtained during TTM. In all, 14 (25%) of all measurements in this study were obtained during TTM (36 degrees Celsius), and 42 (75%) were obtained from a patient not treated with TTM at that moment.
Changes in the revised manuscript: At page 12, line 174, the heading "Standard manual versus quantitative pupillometry - Pupil size" has been moved to page 13, line 175. We have added "to 36 degrees Celsius" at page 8, line 118.
10. Page 14, Table 2: Again, values should be expressed as mean plus minus SD.
Response from authors: Again, we thank the reviewer for pointing out this issue.
Changes in the revised manuscript: As with Table 1, in Table 2 (page 14) we have also replaced all parentheses containing mean values with "value±SD" instead and left the percentages in parentheses.
11. Page 19, line 256: "... primarily with cardiac OHCA ..." Did all patients have OHCA or not? If necessary, only specialize on one group here to avoid bias.
Response from authors: In the context of questions #3, #8 and #11, we acknowledge that the inclusion criteria are not presented clearly regarding the patient population and thank the reviewer for emphasizing this.
Question #3: In this study we included all patients admitted to our cardiac ICU, primarily OHCA patients but also patients with cardiogenic shock and other hemodynamically unstable patients requiring specialized intensive care.
No elective patients were included.
Question #8: Regarding the admission diagnoses in Table 1, we admitted 10 patients resuscitated from OHCA, 2 patients with ongoing cardiogenic shock, 1 hemodynamically unstable patient with STEMI, and 1 hemodynamically unstable patient initially admitted with epidural hematoma. Hence, not all patients included were OHCA patients, but the vast majority were.
Answer to question #11: As the results state, we included 10 OHCA patients and 4 "other" patients according to the patients admitted to the cardiac ICU in the study period. The overall concern seems to be whether it makes sense to compare patients with different diagnoses (#8) and how to avoid subsequent bias if doing so (#11). This is a very valid point, and we thank the reviewer for bringing it up. In this study we focus on validating the measuring methods of standard manual and quantitative pupillometry in the clinical setting, i.e., whether they yield reproducible data between measurements, observers, and devices. Our setup does not examine pupillometry with regard to patient outcome. The concern would then be whether pupillometry yields the same reproducibility, repeatability, and reliability for OHCA as for other diagnoses, with the latter consequently diluting the signal from the primary group. However, several earlier studies have investigated the intra-/inter-observer and inter-device reliability separately for different diagnoses, yielding the same high reliability for quantitative pupillometry and superiority compared to manual investigation (Phillips et al., Neurocrit Care, 2019), and even studies using healthy controls (Couret et al., Critical Care, 2016). Thus, we had less concern for this bias in our validation study.
Nevertheless, we have compared the reliability of quantitatively assessed size for both intra- and inter-observer measurements for OHCA and non-OHCA patients. With intra-observer measurements, the OHCA group yielded an ICC (95%CI) of 0.99 (0.98-0.99) and the non-OHCA group 0.97 (0.86-0.99), with no significant difference between the means of the two groups (p=0.643). With inter-observer measurements, we found an ICC of 0.97 (0.96-0.98) for the OHCA group and 0.94 (0.72-0.99) for the non-OHCA group, again with no significant difference between the means of the two groups (p=0.324).
Changes in the revised manuscript: We have changed the sentence at page 6, lines 71-72 to the following: "Throughout August and September 2020, all patients admitted to the cICU were considered eligible for inclusion and otherwise treated in agreement with guidelines [7]. This comprised hemodynamically unstable patients requiring specialized intensive cardiac care. No elective patients were included."
12. Page 22, Limitations: This section is too long and partly redundant. For example, the content of page 21, lines 310-316 is already in the Discussion section.
Response from authors: We have looked through the "Limitations" section on page 22 and have some difficulty finding the redundancy within this section. If the reviewer feels that the redundancy is within this section, we must ask for a specific reference. However, the reviewer makes the point that the content of page 21, lines 310-316 is already in the "Discussion"; hence, we interpret the inquiry to concern this section. We thank the reviewer for bringing this to our attention and have revised the section accordingly.
Changes in the revised manuscript: We have rewritten the Discussion section with the following edits: In page 21, lines 310-316 we have corrected the sentence to the following: "For all the additional quantitative pupillary response parameters, recently presented with promising prognostic value by Tamura et al.
[19], this study offers novel data on reproducibility and repeatability." In page 22, lines 318-324 we have corrected the sentence to the following: "Overall, this study supports that automated quantitative pupillometry performs well in the clinical setting even when several observers, multiple patients and separate pupillometry devices are involved. However, further studies, implementing the additional quantitative pupillary response parameters, are needed."
Some grammatical reworking is necessary, e.g. in the title: "repeatability, and reliability". The comma before the "and" should be removed if this heading is to be retained.
- Page 2, line 18: "repeatability, and ..." as well as: Background: these when sitting should not be linked to a but.
- Page 5, line 64: "repeatability, ..." (see above)
- Page 8, line 115: Remove the colons from the heading
- Page 8, line 116: "... were,selected ..." (the comma must go)
- Page 17, line 239: Again: "repeatability, and ..." (the comma character!)
Response from authors: We recognize these grammatical errors and thank the reviewer for pointing them out.
Changes in the revised manuscript: We have made the following corrections in the manuscript: The title has been changed in the answer to question #1. At page 2, lines 17-19 we have rewritten the paragraph: "Quantitative pupillometry is part of multimodal neuroprognostication of comatose patients after out-of-hospital cardiac arrest (OHCA).
However, the reproducibility, repeatability, and reliability of quantitative pupillometry in this setting have not been investigated." We removed a "," at page 5, line 64, page 8, line 116, and page 17, line 239, and removed a ":" at page 8, line 115.
Reviewer #2: The authors studied the validity of manual and automated pupillometry in 14 sedated and comatose patients and found that manual evaluation was less accurate in small pupils and performed poorly in the assessment of reactivity in 1 out of 4 evaluations. Although limited by the small sample size and the selected cohort, the study is of interest and useful for the broadened use of pupillometry in critical care patients. I have only minor comments to the authors:
Please provide data on absolute pupil size.
Response from authors: In this study we refer to the absolute pupil size as the pupillometer parameter "maximum pupil size (MAX)", according to "NPI®-200 Pupillometer System - Instructions for Use" by NeurOptics®. Mean values of absolute pupil size are provided in Table 2 under the subheading "Maximum diameter, mm" for both "Manual Pupillometry" and "Quantitative Pupillometry". However, we fully acknowledge that the term "maximum pupil size" can be ambiguous and have made an adjustment to clarify this. We thank the reviewer for emphasizing this.
Changes in the revised manuscript: We have added "/absolute" to the sentence in page 7, line 91: "…velocity (DV, millimeters/seconds), besides maximum/absolute and minimum pupil size (MAX/MIN, millimeters) [22]."
Was there a difference in reactivity among patients with and without opioid treatment?
Response from authors: This is a very interesting and relevant point, and we thank the reviewer for bringing it up. We would expect the mean value of reactivity (quantitatively measured as %CH) to be lower in the sedated patients.
However, we performed analyses to investigate whether pupillometry was still reliable under sedation. Overall, we found a difference in mean values between sedated and unsedated patients; however, the difference was not significant, and although the inter-observer reliability was slightly lower for the sedated patients, all measurements had excellent reliability according to the usual criteria (Portney & Watkins, 2000).

Changes in the revised manuscript: We have added "In measurements performed while patients were sedated the mean %CH value was 17%, and 25% when not sedated; although lower with sedation, this difference was not significant (p = 0.234). When testing reliability for measurements without sedation, we found an ICC at 0.98 (0.95-0.99) and 0.99 (0.96-0.99) for intra-observer and inter-observer measurements, respectively. For the sedated we found ICC at 0.96 (0.94-0.98) and 0.92 (0.86-0.95) for intra-observer and inter-observer measurements." to page 16, line 222, and "even under sedation," to page 20, line 291.

Any difference among NPi > 3 and < 3 for the comparison with manual assessment?

Response from authors: Again, the reviewer raises a very interesting point. However, in this study the lowest NPi value measured was 3.3, hence we could not calculate any difference below this threshold. We ran tests for NPi > 4 and < 4 regarding the reliability of measurements between observers for pupil size. For quantitative pupillometry, mean values of pupil size were 2.23 mm for NPi > 4 and 2.00 mm for NPi < 4 (p-value = 0.029). For manually assessed pupil size, the mean pupil size was 2.29 mm for NPi > 4 and 1.87 mm for NPi < 4 (p-value < 0.001). When testing reliability for quantitative pupillometry for NPi > 4, we found an ICC at 0.99 (0.99-1.00) and 0.99 (0.98-0.99) for intra-observer and inter-observer measurements, respectively. For NPi < 4 we found ICC at 0.97 (0.91-0.99) and 0.97 (0.89-0.99) for intra-observer and inter-observer measurements, respectively.
The manual assessment for NPi > 4 yielded ICC at 0.94 (0.90-0.97) and 0.91 (0.85-0.95) for intra-observer and inter-observer measurements, respectively. For NPi < 4 we found ICC at 0.78 (0.28-0.9) and 0.80 (0.45-0.94) for intra-observer and inter-observer measurements, respectively. Overall, we observed worse ICC for measurements with NPi < 4, especially for manual pupillometry. This important tendency should be further investigated in a larger population, in future studies, for NPi values below and above 3.

Submitted filename: Response to Reviewers.docx

18 Jul 2022

Superior reproducibility and repeatability in automated quantitative pupillometry compared to standard manual assessment, and quantitative pupillary response parameters present high reliability in critically ill cardiac patients

PONE-D-22-00255R1

Dear Dr. Nyholm,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance.
Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Andreas Schäfer
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions
Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5.
Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I thank the authors for this revised work. All objections have been satisfactorily dealt with. I have no new objections.

Reviewer #2: All points have been adequately addressed in the revised version of the manuscript. I have no further comments.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Dr. Muharrem Akin

Reviewer #2: No

**********

20 Jul 2022

PONE-D-22-00255R1

Superior reproducibility and repeatability in automated quantitative pupillometry compared to standard manual assessment, and quantitative pupillary response parameters present high reliability in critically ill cardiac patients

Dear Dr. Nyholm:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations!
Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Prof. Dr. Andreas Schäfer
Academic Editor
PLOS ONE
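The agreement statistics that recur throughout the responses above — the Bland-Altman bias with 95% limits of agreement, and the two-way random-effects intraclass correlation coefficient ICC(2,1) — can be sketched as follows. This is a minimal illustration only, not the authors' analysis code; the paired pupil-size readings are made-up example values.

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired readings."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `data` is an (n subjects x k raters) array (Shrout & Fleiss conventions)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)  # per-subject means
    col_means = data.mean(axis=0)  # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subject MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-rater MS
    sse = ((data - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative paired pupil-size readings (mm) from two observers (invented data)
obs1 = [2.1, 3.4, 2.8, 4.0, 1.9, 3.1]
obs2 = [2.0, 3.5, 2.7, 4.2, 1.8, 3.0]
bias, loa = bland_altman(obs1, obs2)
icc = icc2_1(np.column_stack([obs1, obs2]))
```

With closely agreeing observers, as in this toy data, the bias is near zero, the limits of agreement are narrow, and the ICC falls in the "excellent" range (>0.90 by the Portney & Watkins criteria cited in the response).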
Authors: Rose Du; Michele Meeker; Peter Bacchetti; Merlin D Larson; Martin C Holland; Geoffrey T Manley Journal: Neurosurgery Date: 2005-07 Impact factor: 4.654
Authors: Lise Witten; Ryan Gardner; Mathias J Holmberg; Sebastian Wiberg; Ari Moskowitz; Shivani Mehta; Anne V Grossestreuer; Tuyen Yankama; Michael W Donnino; Katherine M Berg Journal: Resuscitation Date: 2019-01-30 Impact factor: 5.262
Authors: Michele Meeker; Rose Du; Peter Bacchetti; Claudio M Privitera; Merlin D Larson; Martin C Holland; Geoffrey Manley Journal: J Neurosci Nurs Date: 2005-02 Impact factor: 1.230