The evaluation of a web-based tool for measuring the uncorrected visual acuity and refractive error in keratoconus eyes: A method comparison study.

Marc B Muijzer1,2, Janneau L J Claessens1, Francesco Cassano2, Daniel A Godefrooij1, Yves F D M Prevoo2, Robert P L Wisse1,2.   

Abstract

PURPOSE: To evaluate the outcome of a web-based digital assessment of visual acuity and refractive error, compared to a conventional supervised assessment, in keratoconus patients with complex refractive errors.
MATERIAL AND METHODS: Keratoconus patients, aged 18 to 40 years, with a refractive error between -6 and +4 diopters were considered eligible. Uncorrected visual acuity and refractive error were assessed both web-based (index test) and by manifest refraction performed by an optometrist (reference test). Corrected visual acuity was assessed with the prescriptions derived from both the web-based tool and the manifest refraction. Non-inferiority was defined as the 95% limits of agreement (95% LoA) of the differences in spherical equivalent between the index and reference test not exceeding ±0.5 diopters. Agreement was assessed by Bland-Altman analysis.
RESULTS: A total of 100 eyes of 50 patients were examined. The overall mean difference in uncorrected visual acuity was -0.01 logMAR (95% LoA: -0.63 to 0.60). The variability of the differences decreased in the better uncorrected visual acuity subgroup (95% LoA: -0.25 to 0.55). The overall mean difference in spherical equivalent between the index and reference test exceeded the non-inferiority margin: -0.58 D (95% LoA: -4.49 to 3.33, P = 0.008). The mean differences for myopic and hyperopic subjects were 0.09 diopters (P = 0.675) and -2.06 diopters (P < 0.001), respectively. The corrected visual acuity attained with the web-based derived prescription was significantly worse (0.22±0.32 logMAR vs. -0.01±0.13 logMAR, P < 0.001).
CONCLUSIONS: The web-based tool shows promising results for remotely assessing visual acuity in keratoconus patients, particularly for subjects within a better visual acuity range. This could provide physicians with a quantifiable outcome to enhance teleconsultations, which is especially relevant when access to health care is limited. For the assessment of refractive error, however, the web-based tool was found to be inferior to the manifest refraction in keratoconus patients. This study underlines the importance of validating digital tools and could serve to increase the overall safety of web-based assessments through better identification of outlier cases.

Year:  2021        PMID: 34407131      PMCID: PMC8372909          DOI: 10.1371/journal.pone.0256087

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Globally, an estimated 1 billion people have a visual impairment that could have been prevented or remains undetected, and this number is expected to rise [1, 2]. The main causes of visual impairment are refractive errors, cataract, and chronic ophthalmic conditions (e.g. macular degeneration). In an aging population, the demand for eye care is increasing rapidly, which poses a challenge for providers of eye care in both developed and less-developed countries [1, 3]. The acutely reduced access to healthcare during the COVID-19 outbreak underlined the need for a paradigm shift in the delivery of eye care [4]. To become less reliant on hospital facilities and trained professionals for monitoring eye conditions, there is a growing interest in telemedicine and digital tools. In particular, promising advances have been made in the automatic assessment of retinal images [5]. Refractive error and visual acuity are considered important clinical parameters for diagnosing and monitoring eye conditions, and as such various tools to remotely assess them have been developed and clinically validated [6, 7]. Currently, the manifest refraction–consisting of the measurement of the refractive error and the visual acuity–by a trained professional is considered the gold standard [8]. Automated assessment of the refractive error using an autorefractor is considered non-inferior to the manifest refraction in healthy eyes [9, 10]. However, the reliability of the autorefractor decreases in eyes with complex refractive errors (a hallmark of keratoconus), and it does not measure visual acuity [11]. Moreover, both the manifest and automated techniques require (expensive) medical equipment and/or qualified personnel, making them unsuitable for home monitoring or use in less-developed countries.
Recently the authors published the outcomes of the "manifest versus online refraction evaluation" (MORE) trial, which reports the validation of a web-based refractive assessment in healthy adults [6]. The tool was found non-inferior to a manifest refraction performed by a trained optometrist and is accessible via https://easee.online. Notably, the tool does not require any specialized equipment–only a mobile phone and a computer screen–and can thus be performed in most home environments. However, the authors acknowledge that results in a healthy population are not necessarily representative of individuals with suboptimal visual performance. Therefore, the design of the MORE trial included a cohort of keratoconus patients to evaluate the outcome of the digital refraction tool in a population with an eye condition. Keratoconus was chosen because these patients are often still able to achieve an acceptable visual acuity with a proper prescription, despite their complex refractive errors, including irregular astigmatism [11, 12]. Keratoconus is an uncommon condition typically diagnosed in adolescents, in which gradual thinning of the cornea leads to progressive ectasia and subsequent irregular astigmatism [13, 14]. Because of its insidious onset, many early patients are unaware of the diagnosis, and keratoconus patients therefore pose a challenge for any assessment of refractive error [11]. Here, we present the results of the keratoconus cohort of the MORE trial, a validation study of a web-based tool for the assessment of visual acuity and refractive error.

Material and methods

Study design and recruitment

Data were prospectively collected in the single-center method comparison ‘manifest versus online refraction evaluation (MORE)’ trial, performed at the University Medical Center Utrecht in Utrecht, the Netherlands. This study consisted of two subgroups: healthy individuals (n = 100) and keratoconus patients (n = 50). The outcomes of the web-based tool in healthy individuals are reported elsewhere [6]. The current manuscript pertains to the keratoconus patients of the MORE trial. Participants enrolled into the study were patients who visited the keratoconus clinic at the University Medical Center Utrecht. Inclusion criteria were an established diagnosis of keratoconus (diagnosed by an ophthalmologist based on clinical signs and Scheimpflug corneal tomography, and graded using the Amsler-Krumeich classification), a clear central cornea, and an age between 18 and 40 years [13]. Subjects were excluded if their manifest refractive error, converted to spherical equivalent, was worse than -6 diopters (D; for myopia) or +4 D (for hyperopia), or if they had co-existing visual acuity-limiting conditions. The boundaries regarding both age and refractive error were determined by the technical and regulatory limits of the web-based tool. Furthermore, we excluded subjects who had undergone corneal crosslinking within 6 months prior to study participation (to diminish effects of corneal haze), who had diabetes, who were pregnant or lactating, or who were unable to perform the web-based refraction assessment. Consecutively presenting patients who matched our criteria were invited to participate in the study (Fig 1).
Fig 1

STARD flow diagram illustrating participant flow of the keratoconus population of the MORE-trial.

All included participants underwent the web-based (index test) and manifest assessments (reference test) of visual acuity and refractive error.

All procedures were performed in accordance with the Declaration of Helsinki, local and national laws regarding research (i.e. the Act on Scientific Research Involving Humans), European directives with respect to privacy (General Data Protection Regulation 2016/679) and medical devices (Medical Device Regulation 2017/745), and the 2015 Standards for Reporting Diagnostic Accuracy Studies [15]. The study protocol was approved by our institution’s Ethics Review Board (Medical Ethical Review Committee Utrecht; number: 17–524), and it was registered at clinicaltrials.gov (number: NCT03313921) and the CCMO (number: NL61478.041.17). All participants provided written informed consent and were enrolled in the study between January 31, 2018 and June 23, 2019. According to the MORE trial protocol, all subjects underwent three consecutive tests designed to determine the refractive state of both eyes, in the following order, and the subjects were blinded to the outcome of all tests. First, the refractive error was measured using autorefraction (Topcon RM-8800, Topcon Corporation, Japan) and corneal imaging was performed using Scheimpflug tomography (Pentacam HR, Oculus GmbH, Germany). Second, an optometrist performed the reference test (manifest subjective refraction). Finally, the subject performed the index test using the digital refraction tool. The digital refraction tool is web-based; it was a custom version of the commercially available Easee refractive assessment tool, built specifically for this clinical trial, and is identical to the second-generation algorithm we have described previously [6]. In short, a smartphone functions as a remote control by which the user submits input from a distance of 3 meters to a computer screen that displays the web-based assessment.
The user is presented with a sequence of optotypes and astigmatism dials. Any visual acuity below 1.0 (worse than 20/20) is considered to be caused by a refractive error. The web-based refraction assessment is classified as a Conformité Européenne class 1m medical device in accordance with European Union Medical Device Regulation 2017/745, and the software is classified as class A in accordance with International Electrotechnical Commission standard 62304:2014. The uncorrected distance visual acuity (UDVA) was recorded using an Early Treatment Diabetic Retinopathy Study (ETDRS) visual acuity wall chart and the web-based visual acuity test. Corrected distance visual acuity (CDVA) was measured using a correction based on the results of both the manifest and web-based refraction assessments. Visual acuity was tested in accordance with ISO 8596 with regard to optotypes and room illumination [16]. The projected optotypes were randomized to mitigate any possible test-retest effect. The study protocol did not cover the assessment of CDVA with the autorefraction result, as previous research has shown a low reliability of autorefraction measurements in keratoconus eyes [11]. The following data were recorded for each participant/eye: age, gender, laterality, (ophthalmic) medical history, Amsler-Krumeich stage (AK1: mild, AK2: moderate, AK3: severe), mean and maximum keratometry, previous prescription (if known), use of spectacles or contact lenses, UDVA, CDVA, and refractive outcome, including spherical and cylindrical power (in D) and axis (in degrees).
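As a note on the acuity notations used throughout, logMAR and Snellen decimal acuity are interconvertible via logMAR = -log10(decimal acuity). A minimal sketch of the conversion (illustrative only; not part of the study software):

```python
import math

def logmar_from_decimal(decimal_va: float) -> float:
    """Convert decimal (Snellen) visual acuity to logMAR.
    Decimal 1.0 (20/20) corresponds to logMAR 0.0; higher
    logMAR values indicate worse acuity."""
    return -math.log10(decimal_va)

def decimal_from_logmar(logmar: float) -> float:
    """Inverse conversion: logMAR back to decimal acuity."""
    return 10 ** (-logmar)
```

For example, a decimal acuity of 0.5 (20/40) corresponds to logMAR 0.3.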

Statistical analysis

The primary study outcome was the refractive error as measured using the web-based refractive assessment, compared with the subjective manifest refraction and autorefraction. The sign of the refractive error (+/-), spherical power, and cylindrical power were converted into a spherical equivalent. In concordance with the MORE trial protocol, the agreement of the spherical equivalent between the various methods was compared using a two-way mixed-effects intraclass correlation coefficient (ICC) of a single measurement. In addition, the difference between the measurements of the web-based and manifest assessments was compared using a Fourier analysis. Specifically, we analyzed the sign of the refractive error (+/-), spherical power, cylindrical power, and axis, which were converted into power vectors [10, 17, 18]. Subsequently, the difference between the power vectors of the various methods was calculated as a residual vector (i.e. a vector of the difference) [11]. The power vectors are non-linear in nature, which precludes statistical analysis of differences between power vectors. Secondary study outcomes included the UDVA and CDVA, the latter measured using the outcome of both the manifest and web-based refractive assessments. A post-hoc subgroup analysis was performed for high and low uncorrected visual acuity ranges (high: ≤0.5 logMAR, low: >0.5 logMAR; based on ETDRS chart outcomes), with the cut-off as defined by the World Health Organization [19]. The data were assessed for normality of the distribution. Differences with a P value <0.05 were considered statistically significant. The outcomes were stratified for myopia, hyperopia, and keratoconus stage. Groups were compared using the two-tailed paired Student’s t test. Non-inferiority was defined as the 95% limits of agreement of the difference in spherical equivalent between the web-based refractive assessment and the manifest refraction not exceeding ±0.5 D, assessed using a Bland-Altman analysis [20].
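The sphere/cylinder/axis-to-power-vector conversion referenced above (the Thibos convention underlying refs [10, 17, 18]) can be sketched as follows; function names are illustrative and not taken from the study software:

```python
import math

def power_vector(sphere: float, cyl: float, axis_deg: float):
    """Convert a sphero-cylindrical prescription (sphere and cylinder
    in diopters, axis in degrees) to the Thibos power-vector
    components (M, J0, J45)."""
    a = math.radians(2 * axis_deg)
    m = sphere + cyl / 2           # M: spherical equivalent (SEQ)
    j0 = -(cyl / 2) * math.cos(a)  # J0: 0/90-degree astigmatic component
    j45 = -(cyl / 2) * math.sin(a) # J45: oblique astigmatic component
    return m, j0, j45

def blur_strength(m: float, j0: float, j45: float) -> float:
    """Overall blur strength: the length of the power vector."""
    return math.sqrt(m * m + j0 * j0 + j45 * j45)
```

Differences between two assessments are then computed component-wise, yielding the residual vector described in the text.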
In addition, a multivariable generalized estimating equations (GEE) analysis of the difference between the power vectors was used to correct for bilaterality (both eyes of the same patient included), keratoconus severity, age, and sex. For keratoconus severity, Amsler-Krumeich stages 2 and 3 were combined due to the low number of cases. Missing cases were neither included in the analysis nor imputed. The power calculation of the MORE trial addressed the outcome in healthy subjects (n = 100), not in keratoconus subjects (n = 50). For methodological clarity, we performed a post-hoc sample size calculation to determine whether there was sufficient power to assess the non-inferiority limit of ±0.5 D. Here we assumed no difference between the web-based and manifest refraction assessments, an accepted difference of 0.5 D, and a SD of 1.64 D. Using an alpha of 0.05 and a power of 0.90, and based on a one-sided one-sample t test, 93 eyes are required. The actual study size of the keratoconus population (n = 100 eyes) was therefore considered sufficient to reject the null hypothesis. Data were analyzed using SPSS version 25.0 (IBM, Armonk, New York, USA) and R statistical software version 4.0.3 (CRAN, Vienna, Austria). For the Bland-Altman analysis the “BlandAltmanLeh” package (version 0.3.1) was used.
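The post-hoc sample size can be reproduced with a normal approximation to the one-sided one-sample test, under the stated assumptions (margin 0.5 D, SD 1.64 D, alpha 0.05, power 0.90); this is a sketch, not the calculation software actually used:

```python
import math
from statistics import NormalDist

def eyes_required(margin: float, sd: float,
                  alpha: float = 0.05, power: float = 0.90) -> int:
    """Normal-approximation sample size for a one-sided one-sample
    test of a mean difference against a non-inferiority margin."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # ~1.645 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)       # ~1.282 for power = 0.90
    return math.ceil(((z_alpha + z_beta) * sd / margin) ** 2)

print(eyes_required(0.5, 1.64))  # prints 93, the reported requirement
```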

Results

A total of 100 eyes from 50 keratoconus patients were included in the study; no subjects were excluded. The clinical characteristics of the study population are summarized in Table 1, with all relevant variables stratified for Amsler-Krumeich keratoconus stage [13]. The majority of the participants were male (78%, n = 39) and used visual aids to correct their refractive error (78%, n = 39). Both results were expected: keratoconus is more prevalent in males, and the hallmark cone shape in keratoconus induces myopia and (irregular) astigmatism [13]. A total of 23 subjects had undergone corneal crosslinking (n = 37 eyes) with >6 months of follow-up; all resolved without sequelae. A total of 9 subjects reported ocular complaints at the time of the measurement; all 9 reported blurred vision. No adverse events or complications were recorded during the trial. Refractive error and visual acuity data were missing in 13 subjects (11 myopic and 2 hyperopic) because of technical errors in the web-based refractive assessment.
Table 1

Clinical characteristics of the study population (100 eyes of 50 patients).

Age (years), mean ± SD: 25.6 ± 5.4
Sex (male), n (%): 39 (78)
Current use of visual aids, n (%): 38 (78)
    Spectacles, n (%): 28 (56)
    Contact lenses, n (%): 17 (34)
Ocular complaints, n (%): 9 (18)
Medication use, n (%): 10 (20)
Previous corneal crosslinking (CXL) treatment, n (%): 24 (48)

Distribution of refractive errors and keratoconus severity classification a

                                  Total        Mild         Moderate     Severe
                                  (n = 100)    AK stage 1   AK stage 2   AK stage 3
                                               (n = 84)     (n = 13)     (n = 3)
Mean keratometry, mean ± SD       45.66 ±2.90  45.27 ±2.79  47.46 ±1.75  48.63 ±5.37
Maximum keratometry, mean ± SD    52.55 ±6.39  52.08 ±6.30  54.50 ±4.48  57.33 ±13.68
Refractive error, n               100          84           13           3
    Hyperopia, n                  29           23           5            1
    Emmetropia, n                 5            5            0            0
    Mild myopia, n                44           40           4            0
    Severe myopia, n              22           16           4            2

Abbreviations: AK; Amsler-Krumeich stage, SD; standard deviation.

a Mild myopia was defined as a refractive error of –3 diopters or less; severe myopia was defined as a refractive error worse than –3 diopters. Refractive error was determined on the basis of the spherical equivalent of the manifest refraction value (reference test) and is reported for both eyes separately. The distribution of refractive errors was not significantly different between the classifications (P = 0.30).


Web-based visual acuity testing

UDVA was measured using the digital index test and with an ETDRS visual acuity wall chart as reference. Mean UDVA measured digitally was 0.57±0.39 logMAR (Snellen 0.38±0.33), and with the ETDRS wall chart 0.58±0.52 logMAR (Snellen 0.46±0.40). The mean difference between the measurements was small and non-significant (-0.01 logMAR, P = 0.76). There was considerable spread in the measurement differences (95% LoA: -0.63 to 0.60). In the better visual acuity subgroup (≤0.5 logMAR) this variability decreased (mean difference: 0.15 logMAR, 95% LoA: -0.25 to 0.55). In the low visual acuity subgroup the mean difference between the measurements was -0.20 logMAR (95% LoA: -0.82 to 0.41). Fig 2 depicts the agreement of the web-based visual acuity assessment with the ETDRS measurement in a Bland-Altman plot. The visual acuity assessments do not agree equally throughout the range of measurements. As can be observed in the high visual acuity subgroup, the majority of the measurements show little difference and, importantly, the mean difference is mostly attributable to outliers. In this visual acuity range, the web-based tool slightly underestimates visual acuity scores. In the lower visual acuity range, the measurements appear to be much more variable.
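The mean differences and 95% limits of agreement reported here follow the standard Bland-Altman computation. A minimal sketch (with illustrative data, not study data):

```python
from statistics import mean, stdev

def bland_altman(index, reference):
    """Return the bias (mean difference, index minus reference) and
    the 95% limits of agreement (bias +/- 1.96 SD of the paired
    differences)."""
    diffs = [i - r for i, r in zip(index, reference)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired logMAR measurements (index vs. reference):
bias, (lo, hi) = bland_altman([0.1, 0.2, 0.3], [0.0, 0.2, 0.4])
```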
Fig 2

A Bland-Altman plot displaying the differences in logarithmic minimum angle of resolution (LogMAR) between the web-based uncorrected distance visual acuity assessment (index test) and the ETDRS uncorrected distance visual acuity measurement (reference test).

The differences between the reference test and index test shown on the Y-axis are expressed as the difference of the web-based uncorrected distance visual acuity assessment outcome minus the ETDRS uncorrected distance visual acuity outcome. The x-axis shows the mean visual acuity in LogMAR of the two assessments, where a more negative value represents a higher visual acuity. The outcome is stratified for a ‘better visual acuity’ subgroup (uncorrected distance visual acuity ≤0.5 LogMAR) highlighted with a red circle.


Web-based refractive error assessment

Intraclass correlation coefficients

The concordance of the refractive error among the refraction assessments was assessed using the ICC of the spherical equivalent. The overall ICC of all three assessments was 0.32 (95% CI 0.19–0.46). The ICC for the manifest refraction versus the web-based refractive assessment was 0.36 overall (95% CI 0.22–0.53) and 0.48 (95% CI 0.29–0.64) in the mild keratoconus subgroup. All eyes were included in the ICC because of the asymmetrical manifestation of keratoconus; ICC calculations for all right or all left eyes separately did not lead to new insights.

Overall outcome of the web-based refraction assessment

Web-based and manifest assessments of refractive error are reported stratified for myopia and hyperopia (see Table 2). Detailed outcomes per keratoconus stage can be found in the S1 and S2 Tables. Overall, the spherical equivalent of the refractive error measured using the manifest refraction differed from the web-based refraction assessment by 0.09 D for myopic subjects (P = 0.675) and -2.06 D for hyperopic subjects (P < 0.001). Despite the relatively small and non-significant mean difference in the myopic group, the 95% LoA of the differences in spherical equivalent extended beyond the a priori non-inferiority limit of 0.5 D in both myopic subjects (95% LoA: -3.28 to 3.47) and hyperopic subjects (95% LoA: -5.52 to 1.39). When transposed into power vectors, these differences measure -1.08 D for myopic subjects and -0.69 D for hyperopic subjects (Table 2, top row).
Table 2

Measured refractive error and visual acuity.

Emmetropic and myopic subjects:

Refractive error and     Web-based         Manifest          Difference b,c  95% CI           P-value d
visual acuity            refraction a      refraction a
                         (n = 60)          (n = 71)
Power vector e,f (D)     2.34 ±1.02        2.69 ±1.74        -1.08           -1.41 to -0.74   N.A.
Power vector J0 f (X)    -0.08 ±0.54       -0.61 ±1.17       -1.16           -1.41 to -0.90   N.A.
Power vector J45 f (Y)   0.05 ±0.62        0.02 ±0.86        -0.72           -0.91 to -0.53   N.A.
SEQ (D)                  -1.96 ±1.20       -2.05 ±1.76       0.09            -0.35 to 0.54    0.675
Sphere                   -1.53 ±1.15       -0.87 ±1.79       -0.76           -1.23 to -0.28   N.A.
Cylinder                 -1.00 ±1.02       -2.44 ±1.84       1.58            1.06 to 2.12     N.A.
Axis                     83 ±45            92 ±45            -9              -20 to 1         N.A.
CDVA logMAR              0.23 ±0.35        0.00 ±0.12        0.26            0.15 to 0.37     <0.001
CDVA Snellen             0.72 ±0.32        1.03 ±0.26        -0.38           -0.51 to -0.27   N.A.

Hyperopic participants:

Refractive error and     Web-based         Manifest          Difference b,c  95% CI           P-value d
visual acuity            refraction a      refraction a
                         (n = 27)          (n = 29)
Power vector e,f (D)     1.35 ±0.68        2.02 ±1.04        -0.69           -0.92 to -0.46   N.A.
Power vector J0 f (X)    -0.08 ±0.26       -1.27 ±1.01       -1.47           -2.00 to -0.93   N.A.
Power vector J45 f (Y)   -0.04 ±0.23       -0.16 ±1.00       -0.98           -1.41 to -0.54   N.A.
SEQ (D)                  -1.02 ±1.40       1.04 ±0.83        -2.06           -2.76 to -1.37   <0.001
Sphere                   -0.59 ±1.39       2.68 ±1.45        -3.28           -4.22 to -2.33   N.A.
Cylinder                 -0.93 ±0.91       -3.14 ±1.86       2.21            1.39 to 3.03     N.A.
Axis                     81 ±64            93 ±20            -53             -143 to 37       N.A.
CDVA logMAR              0.19 ±0.27        0.01 ±0.13        0.17            0.08 to 0.27     0.001
CDVA Snellen             0.73 ±0.37        1.01 ±0.28        -0.27           -0.39 to -0.16   N.A.

Abbreviations: CDVA: corrected distance visual acuity, CI: confidence interval, D: diopter, N.A.: not assessed, SEQ: spherical equivalent, logMAR: logarithm of the minimum angle of resolution for visual acuity.

aUnless otherwise specified, reported as mean ±SD.

b Unless otherwise specified, reported as mean difference (web-based minus manifest assessment).

c Differences are based on the 54 and 26 cases with both manifest and digital refraction data available, leading to small deviations when subtracting the reported mean data in the table.

d Paired-sample Student t test was performed for predefined primary and secondary outcome parameters only.

e Spherical and cylindrical power and axes were translated into vectors using Fourier analysis and the difference is calculated as a power vector of the difference between the power vectors.

f The difference between power vectors and the vector specific parameters are calculated as a residual vector and is non-linear.

Fig 3 depicts the Bland-Altman plot of the web-based refraction assessment versus the reference test. The visualization shows a wide distribution of differences between the assessments overall. However, within the emmetropic/myopic refractive error group the agreement between the methods improves in the low refractive error ranges. Furthermore, it can be observed that the web-based refractive assessment yields a more hyperopic outcome in higher myopes (i.e. undercorrected) and a more myopic outcome in low myopes (i.e. overcorrected). When the outcomes of the refractive assessment are broken down per keratoconus stage, the more advanced keratoconus cases (AK stages 2 and 3) show an increased difference between the index and reference test (S1 and S2 Tables).
Fig 3

A Bland-Altman plot displaying the differences in refractive error between the web-based refractive assessment (index test) and the manifest refraction (reference test).

The difference between the reference and index test shown on the Y-axis is expressed as the difference of the web-based refractive assessment outcome compared to the manifest refraction. The x-axis shows the mean spherical equivalent of the two assessments. Myopia and hyperopia were based on the spherical equivalent of the manifest refraction.

We performed a multivariable generalized estimating equations (GEE) analysis to correct for the inclusion of two eyes of one patient, age, sex, and keratoconus severity (S3 Table). As expected, Amsler-Krumeich stage >1 had a significant effect on the power vector of the manifest refraction (B = 1.167, P = 0.027), indicating that the refractive error increases with keratoconus severity. The web-based refraction did not identify this increase in power vector. Stratification of the outcomes for myopia and hyperopia revealed no new insights. The algorithm of the digital refraction was not always able to correctly determine the sign of the participant’s refractive error as either myopia (-) or hyperopia (+). In a total of 21 cases (21%) the sign differed between the index and reference test, with an average difference in refractive error of -2.38±1.96 D. Determining the correct sign in hyperopic subjects proved challenging: in 20 of 27 (74%) hyperopic subjects the sign switched between the index and reference test. Strikingly, only one myopic case was incorrectly identified by the online test (98% success). Naturally, this has a profound effect on the corrected distance visual acuities attained with the web-based prescription.

Corrected visual acuity measurements using the web-based prescription

The overall corrected distance visual acuity achieved was significantly worse with the web-based derived prescription (0.22±0.32 logMAR) than with the reference test (-0.01±0.13 logMAR, P < 0.001). For myopic cases the mean difference in CDVA was 0.23 logMAR (95% CI -0.37 to -0.15; Snellen 0.31, 95% CI 0.27 to 0.51). For hyperopic cases these outcomes were comparable (-0.22 logMAR; 95% CI -0.27 to -0.08; Snellen 0.31, 95% CI 0.16 to 0.39). This underlines that the variation in refractive outcomes translates into corrected visual acuity outcomes that are, on average, 3 lines worse on a visual acuity chart. In 51 eyes (n = 26 subjects) the CDVA was not assessed with the prescription of the web-based refractive error assessment. In 13 eyes the web-based refraction assessment did not yield an outcome because of the previously mentioned technical errors. These outcomes were particularly missing in cases with higher refractive errors and lower visual acuities and were considered not missing at random. In 19 consecutively assessed patients the CDVA was not assessed due to an incorrect instruction by a member of the research team. We identified no other clinical associations for these 19 patients and considered these data missing at random.

Discussion

In this clinical method comparison study we compared a web-based tool for measuring refractive errors with the manifest refraction, the current gold standard, in keratoconus patients. The relevance of delivering remote eye care was illustrated during the COVID-19 outbreak [4]. Given the acute reduction in access to care during this period, the most relevant finding of our studies is that visual acuity can be assessed using a web-based exam, in healthy subjects as well as in individuals with a complex refractive error and an ophthalmic condition [6]. It should be noted that repeated visual acuity assessments always demonstrate variability due to measurement variation: cross-sectional studies on the repeatability of clinical logMAR wall charts revealed 95% limits of agreement of ±0.15 logMAR [21]. When looking at the visualized differences between the web-based test and the reference test, we consider the agreement clinically acceptable, particularly in the better visual acuity subgroup (i.e. with visual acuity scores ≤0.5 logMAR). Notwithstanding, the spread in the outcomes of the web-based refraction assessment exceeded the predefined non-inferiority margin (of <0.5 D), and the outcomes correlated poorly with the gold standard manifest refraction (for myopia 95% LoA: -3.28 to 3.47, and for hyperopia 95% LoA: -5.52 to 1.39; ICC: 0.36). As a result, the CDVA attained with the prescription of the web-based tool was significantly worse than with the traditional manifest refraction (0.23 vs. 0.00 logMAR, P < 0.001). It should be noted that the refractive assessments do not agree equally throughout the range of measurements. The agreement appears to improve in the low myopic refractive error range, suggesting a better performance of the web-based refractive assessment in this range. Furthermore, we observed that the web-based refractive assessment undercorrected higher myopes and overcorrected lower myopes.
Both effects might be mitigated by a judicious re-calibration of the algorithm of the web-based test. The performance of the web-based refractive assessment in this study population can be explained by the design of the algorithm. The algorithm translates the measured visual acuity into a refractive error. Next, the astigmatic refractive error is assessed and the spherical and cylindrical components are determined. The algorithm assumes that the loss of visual acuity is proportional to the increase in refractive error, and that all vision loss is caused by a refractive error. These two assumptions do not necessarily hold in eyes with an ophthalmic condition, such as keratoconus of different stages. The sign (either + or -) is assessed by a red/green test and by asking the participant questions about their experienced visual function (e.g. “do you have problems reading?”, “do you recognize faces from afar?”, etc.). These questions appear to suffice in a healthy population (98% success) [6], but are not considered reliable in this population (80% success): keratoconus patients often have problems with both near and far tasks, and the questions were not always discriminative. This effect is more pronounced in hyperopic refractive errors and could be mitigated by feeding the algorithm more data than was available in the clinical trial, in particular data on any previous prescriptions. This is the case in the commercially available exam: an optometrist assesses any existing prescription and validates the findings produced by the algorithm. The clinical trial algorithm employed here functioned completely independently. Several considerations of the study deserve attention. Firstly, a consideration should be made regarding the missing data. The primary outcome (refractive error) was missing in 13 eyes because of technical errors.
The number of technical errors was evidently higher than in the healthy study population previously described (13/100 vs. 6/200 eyes) [6]. Data were missing particularly in cases with higher refractive errors and lower visual acuities (i.e. the more severe cases), suggesting these cases are not missing at random; apparently, the complex refractive errors pose a challenge for the algorithm. This may have impacted the study power. The missing data are considered to have little impact on our conclusions, since we already concluded that the algorithm's performance is poor in this group. Furthermore, the test order was not randomized, which could have affected our results as subjects may tire during the assessments; however, because of the fixed test order, this should have affected all subjects similarly. A learning or training effect is considered negligible, since the three refractive assessment methods are very different and randomized optotypes were used for the visual acuity assessments. Observer bias cannot be excluded, as the observer had access to all test results; however, we consider the risk of bias low because the web-based tool is a self-assessment performed by the patient and the operator has little influence on the autorefractor outcome. Lastly, the participants were young (25.6 years on average) and presumably digital natives: the uptake of these novel tools in an elderly population warrants a design tailored to their needs and digital aptitude. The digital refraction fits the current trend of health care digitalization as health care demand increases [22]. Importantly, demand is expected to grow further because of an aging population, whereas health care budgets are capped and countries face a shrinking workforce [23-25]. Telemedicine has the potential to alleviate these urgent challenges our health care systems are currently facing.
Several studies have shown the potential of digital eye care tools for the diagnosis and monitoring of patients [7, 26–29]. In particular, the PEEK vision tool developed by Bastawrous et al. provided health care professionals with accurate and vital information on a person's ocular health status and improved eye health in a rural African community [7]. Notwithstanding, our results show that validation is of utmost importance before clinical implementation of these tools, and new iterations are needed to further improve the accuracy of the web-based assessment tool studied here.
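The refractive outcomes in this discussion are compared as spherical equivalents, and the review correspondence below refers to power-vector analyses. Both derive from a sphere/cylinder/axis prescription via the standard Thibos decomposition (M, J0, J45); a minimal sketch of the generic formulas, not code from the study:

```python
from math import cos, sin, radians

def power_vectors(sphere, cylinder, axis_deg):
    """Convert a sphere/cylinder/axis prescription (diopters, degrees)
    into power-vector components in Thibos notation: the spherical
    equivalent M and the Jackson cross-cylinder terms J0 and J45."""
    m = sphere + cylinder / 2.0                       # spherical equivalent
    j0 = -(cylinder / 2.0) * cos(radians(2 * axis_deg))
    j45 = -(cylinder / 2.0) * sin(radians(2 * axis_deg))
    return m, j0, j45

# Example: -2.00 DS / -1.00 DC x 90 gives a spherical equivalent
# of -2.50 D and a purely with/against-the-rule astigmatic component:
m, j0, j45 = power_vectors(-2.00, -1.00, 90)
```

Because M, J0 and J45 are orthogonal components, differences between two refractions can be compared term by term, which is what makes vector analysis preferable to comparing raw sphere/cylinder values.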

Conclusions

The web-based digital eye exam is a promising tool for obtaining visual acuity outcomes, assessed independently and remotely by the patient. The agreement with conventional ETDRS assessments is acceptable, particularly for subjects within a better visual acuity range. The web-based refraction assessment, however, is inferior to a subjective refraction in this keratoconus population. Contrary to the previously published results in healthy volunteers, the assessment of complex refractive errors posed too great a challenge for the digital algorithm and, consequently, its refraction resulted in significantly poorer visual performance. These data provide insight into the web-based exam's limitations and will aid better identification of outliers in remote assessments. A web-based exam should not be considered a replacement for a comprehensive manual examination by an eye care professional. Notwithstanding, the outcome of the digital eye exam can provide doctors with a quantifiable measurement of visual function that enhances a teleconsultation, which is especially important in times or areas with limited access to health care.

The Standards for Reporting of Diagnostic Accuracy checklist.

(DOCX)

The refractive error and visual acuity measured in emmetropic and myopic eyes stratified for keratoconus severity.

(DOCX)

The refractive error and visual acuity measured in hyperopic eyes stratified for keratoconus severity.

(DOCX)

Multivariate analyses to identify associations between independent variables and three refractive-error outcomes: the difference between the web-based refractive assessment and the manifest refraction, the web-based refractive assessment, and the manifest refraction.

(DOCX)

Decision letter (11 Mar 2021)

PONE-D-20-36881: The evaluation of a web-based tool for measuring the uncorrected visual acuity and refractive error in keratoconus eyes: a prospective open-label method comparison study. PLOS ONE. After careful consideration, the editors felt the manuscript has merit but did not fully meet PLOS ONE's publication criteria as it stood, and invited the authors to submit a revised version addressing the points raised during the review process by Apr 25 2021.
Kind regards, Timo Eppig, Academic Editor, PLOS ONE.
The journal further asked the authors to reconcile the 'Funding Information' and 'Financial Disclosure' sections and noted the provided Funding Statement: "This investigator initiated study was sponsored by a grant from Easee BV." As one or more authors are affiliated with the commercial funder Easee BV, an amended Funding Statement, a statement on the role of the funder, and an updated Competing Interests Statement declaring this affiliation were requested.
Reviewers' comments

Reviewer's responses to questions:

1. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Yes; Reviewer #2: Partly; Reviewer #3: Yes.
2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes; Reviewer #2: I don't know; Reviewer #3: Yes.
3. Have the authors made all data underlying the findings in their manuscript fully available? Reviewer #1: Yes; Reviewer #2: Yes; Reviewer #3: Yes.
4. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #1: Yes; Reviewer #2: Yes; Reviewer #3: Yes.

5. Review comments to the author

Reviewer #1: I will focus on methods and reporting. The abstract is clearly written and balanced. Overall, the methods appear appropriate, but more references need to be added for the less standard analyses (vectors etc.); some are already there, but a couple more need to be added where appropriate. The multivariable model is not clear (e.g. how were bilaterality, keratoconus severity, etc. modelled?). I am not convinced by the decision to use complete-case analysis. How do the authors know the missingness mechanism? Did they test for it? Also, a complete-case analysis tends to be more problematic than multiple imputation even if data are MNAR; see https://pubmed.ncbi.nlm.nih.gov/28068910/. A major concern is that the power calculation was for the whole trial (which is not clearly reported here, beyond the formula). This subgroup analysis is very likely underpowered; in non-inferiority trials this is especially crucial, since lower power biases the results towards non-inferiority. The authors either need to demonstrate they have enough power, or they need to move away from non-inferiority and present exploratory comparisons with strong caveats.

Reviewer #2: 1) You include keratoconus (KC) patients, but give no clear definition of the diagnostic method. Did you include only overt KC cases, diagnosed by signs in the cornea?
Who was responsible for diagnosing your cases, and by which criteria? You did not mention anything about clinical signs of your cases; how many of your patients had clinical signs of KC? 2) Your inclusion criteria omit patients outside a refractive error of +4 to -6 diopters of spherical equivalent (SE). Where do those numbers come from? It suggests that, based on some kind of preliminary analysis, you decided to omit higher-SE patients because you had found those patients are not responsive to your algorithms; if not, mention the exact cause of this decision. Many KC patients' refractions fall outside an SE of +4 to -6 diopters, so you cannot generalize your data to the majority of KC cases. You may have excluded many severe cases, and this is a source of bias. In this regard, you cannot use the general term KC in your title; you have to say 'KC cases with a certain refractive error'. 3) In your exclusion criteria you only mention diabetes as a cause of exclusion. What about other retinal or optic-nerve problems? Did you include glaucoma patients, or other disorders such as retinitis pigmentosa, macular hole, cataract, PCO, or media opacity? As we know, all those cases may have problems with contrast sensitivity and visual acuity. 4) You mention that 24 patients have a history of previous ocular surgery. Mention the type of surgeries for further clarification. 5) In the last paragraph of your results (line 275), you mention that you could not assess the CDVA of more than half of your patients with your web system: in 13 eyes the evaluation did not produce an outcome, and 19 eyes had a problem because of incorrect instruction. These cases are a large proportion of your sample size, and you cannot overlook such large numbers by saying 'missing at random'. You have to speak about these data separately, clearly defining their refractive error, severity, etc. in a separate table.
Reviewer #3: The authors tested the possibility of a web-based tool that measures visual acuity to translate the visual acuity results into a refractive correction in patients with various amounts of presbyopia. As the authors present original research that is of interest for the field, especially against the background of field testing of vision, the methods need to be detailed and the data need some more work regarding the interpretation. Abstract: the authors did not mention that they assessed variability but present data on it; how was it assessed? Line 66: I am not sure about the content of the statement the authors want to make. For sure, an autorefractor measures refractive errors, but from those data it is possible to make an assumption about the uncorrected visual acuity. Also, there might be autorefractors on the market that allow visual acuity (corrected as well as uncorrected) to be measured. Line 91: could patient acquisition have been biased by the use of already known patients of the clinic? This could also influence the (subjective) refractive measurements, as the operator might not have been blind to the medical history of each patient. The authors should also add some details on whether the operator was blind to the measurements of the refractive errors and whether the same operator did all the measurements. Line 102: why was the testing not randomized between subjects? What about a possible learning effect in visual acuity testing? Line 120: I guess that the correction was inserted in a trial frame and trial lenses were used? Was the refractive correction standardized for the same back vertex distance? Line 129: if subjects normally wore contact lenses, how long was the wash-out time before participants could participate in the study? Line 156: due to the mirror symmetry of both eyes, only one should be analyzed. Is the use of the multivariable generalized estimating equation a method to reduce potential bias?
Line 164: this statement should not be part of the Material/Methods section. Line 174: the web-based tool also measured myopia in several patients with hyperopic refractive errors. Is this a typical behavior also in subjects without any ocular disease? What would be the implication for a screening? Line 183: is this typical of the measurements that are missing? Line 190: what was the ICC for the autorefractor vs. web-based measurement and the autorefractor vs. manifest refraction? Line 202: the definition of the low and high visual acuity groups should be given in the methods section. Line 225: what was the significance value? Line 233: is this because the level of keratoconus is smaller in this group?

All three reviewers chose not to make their identity public for this peer review.
Rebuttal (17 May 2021)

Manuscript ID: PONE-D-20-36881. Subject: rebuttal for the manuscript "The evaluation of a web-based tool for measuring the uncorrected visual acuity and refractive error in keratoconus eyes: a prospective open-label method comparison study".

Reviewers' comments:

Reviewer #1: I will focus on methods and reporting. The abstract is clearly written and balanced. 1. Overall, the methods appear appropriate, but more references need to be added about the less standard analyses (vectors etc.). Some are already there, but a couple more need to be added where appropriate. Reply: We have included additional relevant literature on the subject of power vectors and changed where the literature is cited to be more appropriate to the information provided. 2. The multivariable model is not clear (e.g. how were bilaterality, keratoconus severity, etc. modelled?). Reply: In the study we corrected for bilaterality (two eyes of the same subject) using a generalized linear model (i.e. a generalized estimating equation). This model tests the independent association of each variable while correcting for two eyes of the same patient with an unknown correlation. The model is comparable to a linear mixed-model regression and is frequently used in ophthalmologic studies. Keratoconus severity, determined using the Amsler-Krumeich staging, was included as a covariate. We have clarified both in the manuscript (see lines 170-171). 3. I am not convinced by the decision to use complete-case analysis. How do the authors know the missingness mechanism? Did they test for it? Also, a complete-case analysis tends to be more problematic than multiple imputation even if data are MNAR; see https://pubmed.ncbi.nlm.nih.gov/28068910/. Reply: The reviewer raises an important point.
We are aware of the missingness mechanism, as we tested for associations of missing cases with other parameters. In 13 eyes the outcome was missing because of technical errors in the web-based tool, which were associated with a higher refractive error. High refractive errors appeared more challenging for the web-based tool, and these are more prevalent in severe cases of keratoconus. These missing data were therefore considered MNAR. In the remaining 19 cases the missing outcomes were associated with a timeframe during which one member of the study team did not take the CDVA with the online refraction in a trial frame. This behaviour was non-compliant with the study procedures, and the outcomes were not associated with relevant patient characteristics; they were therefore considered MAR. We agree with the reviewer that a complete-case analysis can be problematic and tried to impute these missing data. Unfortunately, because of the heterogeneity between cases and the extent of missing data, imputing the data proved impossible. Notwithstanding, we do not expect that our conclusions would materially differ between imputation and complete-case analysis: we conclude that the complex refractive errors of keratoconus patients appeared challenging for the algorithms of the web-based tool, and we do not intend to make claims on the validity of the web-based assessment of refractive error in this group of patients. Imputing data would not alter this conclusion. 4. A major concern is that the power calculation was for the whole trial (which is not clearly reported here, besides the formula). This subgroup analysis is very likely underpowered. In non-inferiority trials this is more crucial, since lower power biases the results towards non-inferiority. The authors either need to demonstrate they have enough power or they need to move away from non-inferiority and present exploratory comparisons, with strong caveats. Reply: We thank the reviewer for this comment.
The current study was indeed part of an overarching prospective trial with both healthy subjects and keratoconus patients, as can be seen at clinicaltrials.gov/NCT03313921. The results of the healthy peers are referred to extensively throughout this paper (Wisse et al. JMIR 2019), and the study was powered on this healthy subgroup. We expected upfront that this keratoconus group with complex refractive errors could pose a substantial challenge for the algorithms. Advancing methodological insights even made the non-inferiority margins stricter in the current manuscript (compared to the previous publication): we consider 95% limits of agreement rather than mean differences. We agree that the current methods section is confusing and deserves more explanation in this regard. We have rewritten the methods section to better introduce the two groups of the MORE trial, removed the power calculation formula pertaining to the healthy subgroup, and included a post-hoc power calculation for the keratoconus group to show that we have sufficient power to assess the non-inferiority limit of ±0.5 D. Reviewer #2: 1. You are including keratoconus (KC) patients, but you have no clear definition of the methods of such diagnosis. Did you include only overt KC cases, diagnosed by signs in the cornea? Who was responsible for diagnosing your cases and with which criteria? You did not mention anything about clinical signs of your cases. How many of your patients had clinical signs of KC? Reply: The diagnosis of keratoconus was established by an ophthalmologist specialized in corneal diseases, based on clinical signs and Scheimpflug corneal tomography. As our center is a tertiary expertise and referral center, all patients had clinical signs of keratoconus. We have clarified this section in the manuscript and added that the involved corneal specialist graded all cases accordingly. 2.
Your inclusion criteria omit patients outside a refractive error of +4 to -6 diopters of spherical equivalent (SE). Where do those numbers come from? It suggests that, based on some kind of preliminary analysis, you decided to omit higher-SE patients because you had found those patients are not responsive to your algorithms. If not, mention the exact cause of this decision. Reply: These inclusion criteria do not pertain directly to the keratoconus population, but to the design of the overall trial, which included healthy individuals. In the revised version of the manuscript we have made much clearer in the methods section that these data pertain to a keratoconus subgroup of the larger MORE trial. The outcomes of the web-based tool are referred to extensively in the manuscript (Wisse et al. JMIR 2019). When we designed the trial, we aimed to assess the validity of web-based refraction for people with relatively normal refractive errors. Owing to our status as an academic expert center for keratoconus care, we felt obliged to study its validity in this group with complex refractive errors. The +4/-6 D range is dictated by the algorithm, not by the clinical population; this is now better explained in the methods section. As such, there was no preliminary analysis, and the inclusion of keratoconus patients followed chronologically after the healthy subjects. We understand the reviewer's remark and are confident that a better structured methods section now prevents readers from reaching the same conclusion. 3. Many KC patients' refractions are outside an SE of +4 to -6 diopters, and you cannot generalize your data to the majority of KC cases. You may have excluded many severe cases, and this is a source of bias. In this regard, you cannot use the general term KC in your title; you have to say 'KC cases with a certain refractive error'.
Reply: We understand the reviewer's important point regarding the generalizability of our results to the complete KC population, but we do not agree that we should say 'KC cases with a certain refractive error'. The study population is a representative sample of the patient population in our tertiary expert center for keratoconus. The majority of our population had an Amsler-Krumeich classification of either 1 or 2, and their refractive error fell within the test limits of the algorithm. More importantly, the complex refractive errors seen in keratoconus patients appeared challenging for the web-based tool: high refractive errors led to missing data, the sign (+ vs -) was not robust (79% correct), and the correlation between the web-based assessment and the in-hospital manifest refraction was rather poor (ICC 0.32). Our conclusion is that a web-based assessment of refraction in keratoconus patients should not be done, and there is no reason to assume that this advice would differ for keratoconus cases with even higher refractive errors or Amsler-Krumeich grading. The visual acuity assessments proved more robust, and this was retained in the conclusion of the manuscript. 4. In your exclusion criteria you only mention diabetes as a cause of exclusion. What about other retinal or optic-nerve problems? Did you include glaucoma patients, or other disorders such as retinitis pigmentosa, macular hole, cataract, PCO, or media opacity? As we know, all those cases may have problems with contrast sensitivity and visual acuity. Reply: The inclusion criteria for this keratoconus study were commensurate with the larger MORE study in healthy subjects, where any ophthalmic condition or previous surgery led to study exclusion. For this keratoconus subgroup report, we failed to explicitly state that the subjects were devoid of pathologies other than keratoconus. None of the subjects had co-existing visual acuity-limiting conditions.
The methods section was extended in this regard. 5. You mention that 24 patients have a history of previous ocular surgery. Mention the type of surgeries for further clarification. Reply: We thank the reviewer for noticing this textual error. In the text, the procedure performed is now specified (in all cases corneal crosslinking with >6 months of follow-up). 6. In the last paragraph of your results (line 275), you mention that you could not assess the CDVA of more than half of your patients with your web system: in 13 eyes the evaluation did not produce an outcome, and 19 eyes had a problem because of incorrect instruction. These cases are a large proportion of your sample size, and you cannot overlook such large numbers by saying 'missing at random'. You have to speak about these data separately, clearly defining their refractive error, severity, etc. in a separate table. Reply: We agree that our formulation of this paragraph may give the false impression that we assume the technical errors to be missing at random. In 13 eyes the outcome was missing because of technical errors in the web-based tool, which were associated with a higher refractive error. High refractive errors appeared more challenging for the web-based tool, and these are more prevalent in severe cases of keratoconus. These missing data were therefore considered not missing at random. In the remaining 19 cases the missing outcomes were associated with a timeframe during which one member of the study team did not take the CDVA with the online refraction in a trial frame. This behaviour was non-compliant with the study procedures, and the outcomes were not associated with relevant patient characteristics; they were therefore considered missing at random. We have clarified this specific paragraph. Reviewer #3: The authors tested the possibility of a web-based tool that measures visual acuity to translate the visual acuity results into a refractive correction in patients with various amounts of presbyopia.
As the authors present original research that is of interest for the field, especially against the background of field testing of vision, the methods need to be detailed and the data need some more work regarding the interpretation. 1. Abstract: The authors did not mention that they assessed variability but present data on it – how was it assessed? Reply: The term "variability" refers to the 95%LoA, i.e., the "variability of the differences" in the Bland-Altman analysis. We have clarified this section of the abstract. 2. Line 66: I am not sure about the content of the statement the authors want to make. For sure, an autorefractor measures refractive errors, but from those data it is possible to make an assumption about the uncorrected visual acuity. Also, there might be autorefractors on the market that allow measurement of visual acuity (corrected as well as uncorrected). Reply: Previous research from our study group indicated that autorefractor measurements in irregular keratoconus corneas are not reliable (Soeters N, Muijzer MB, Molenaar J, Godefrooij DA, Wisse RPL. Autorefraction Versus Manifest Refraction in Patients With Keratoconus. J Refract Surg. 2018;34(1):30–4). Consequently, it is not possible to calculate or derive an exact visual acuity from these measurements (other than a rough estimation). To our knowledge there are no devices on the market that measure exact values of visual acuity, as a visual acuity measurement is a psychophysical test. The web-based tool presented here is a self-assessment and does not require specialized equipment or medically trained personnel. In this revision we made it clearer that this report concerns a subgroup of the larger MORE trial in healthy subjects, which is referred to extensively. In these healthy subjects, autorefraction measurements are reliable estimates, and we considered that a valid additional test to put the web-based outcomes in perspective. 
The study design for this keratoconus group is exactly the same, but we knew upfront that the autorefractor measurements would be of little value. For methodological clarity and completeness we chose to also report these outcomes. 3. Line 91: Could patient acquisition have been biased by the use of patients already known to the clinic? This could also influence the (subjective) refractive measurements, as the operator might not have been blind to the medical history of each patient. The authors should also add some details on whether the operator was blinded for the measurements of the refractive errors and whether the same operator did all the measurements. Reply: In practice, the corneal specialist and specialized optometrists invited their patients for the MORE trial and involved the executive researcher subsequently. These clinicians were aware of the disease status/refractive error/clinical data, and an inclusion bias can therefore not be excluded. Notwithstanding, the clinicians were not involved in the collection of research data or the web-based measurements. Neither the operator nor the patients could be blinded in this open-label study. Nevertheless, we consider the effect of bias negligible, because the three assessments of refractive error differ in nature. The operator has virtually no influence on the autorefractor measurements, and the web-based tool is a self-assessment that reports an objective outcome which was recorded without interpretation. We included this remark in the manuscript. 4. Line 102: Why was the testing not randomized between subjects? What about a possible learning effect in visual acuity testing? Reply: We considered taking the three tests in a random order. Instead, we chose to change the optotypes randomly for each visual acuity measurement, and therefore considered the learning effect negligible. As the order of the tests was the same for all patients, any learning effect would be similar for all patients. 5. 
Line 120: I guess that the correction was inserted in a trial frame and trial lenses were used? Was the refractive correction standardized for the same back vertex distance? Reply: Yes, the final measurement was a CDVA with the prescription derived from the web-based assessment. We thank the reviewer for this detailed comment. We did not standardize the refractive correction for the vertex distance. In practice, the trial frame was fitted by a trained optometrist with extensive knowledge of this particular matter, and no irregularities were recorded. So, technically, in some subjects with large refractive errors the vertex distance could have influenced their visual acuity outcome; however, we do not consider this effect clinically relevant, nor will it impact our conclusions. We have included this consideration in the discussion section. 6. Line 129: If subjects were normally wearing contact lenses, how long was the wash-out time before participants could participate in the study? Reply: All patients included in the study had a 4-week washout period, commensurate with our clinical protocol. 7. Line 156: Due to the mirror symmetry of both eyes, only one should be analyzed. Is the use of the multivariable generalized estimating equation a method to reduce potential bias? Reply: The reviewer makes an important remark on including both eyes of one patient in the statistical analysis. We corrected for bilaterality using a generalized estimating equation analysis, which distributes confounding factors at a within-person or within-group level. This has become common practice in ophthalmic studies and increases the power of a study, though not by a factor of 2. In keratoconus studies in particular, where the disease can be very asymmetric, there is good reason to include both eyes: here the within-person variation would be lost if only one eye were considered. We report the outcomes of this analysis extensively in supplementary table 3. 8. 
Line 164: This statement should not be part of the Material/Methods section. Reply: We agree with the reviewer and removed this sentence from the methods section. 9. Line 174: The web-based tool also measured myopia in several patients with hyperopic refractive errors. Is this a typical behavior also in subjects without any ocular diseases? What would be the implication for a screening? Reply: This is not typical behaviour in these patients, but a technical limitation of the presented algorithm. The refractive error is measured as an absolute value (without a plus/minus indication), and the sign is assigned based on a questionnaire and a duochrome test. Keratoconus patients have a range of symptoms that can be seen in both myopic and hyperopic patients. This method of assigning the sign proved especially challenging for hyperopic patients, who encounter visual acuity difficulties for both distance and near tasks. We have addressed this in the discussion section. For screening purposes this has no implications, as one would mostly be interested in detecting visual impairment (i.e., low visual acuity). 10. Line 183: Is this typical of the measurements that are missing? Reply: We do not completely understand the reviewer's question. In 13 eyes the outcome was missing because of technical errors in the web-based tool, and these errors were associated with a higher refractive error. High refractive errors proved more challenging for the web-based tool, and these are more prevalent in severe cases of keratoconus. These missing data were therefore considered not missing at random. This is now addressed in the discussion section. 11. Line 190: What was the ICC for the autorefractor vs. web-based measurement and the autorefractor vs. manifest refraction? Reply: As mentioned above, the study design for this keratoconus group is exactly the same as for the healthy individuals, and we knew upfront that the autorefractor measurements would be of little value. 
We deliberately decided to report the overall ICC and do not consider these other outcomes of interest for our study: first, because we did not study the autorefractor, and second, because previous research showed that autorefractor measurements in keratoconus eyes are inaccurate (Soeters N, Muijzer MB, Molenaar J, Godefrooij DA, Wisse RPL. Autorefraction Versus Manifest Refraction in Patients With Keratoconus. J Refract Surg. 2018;34(1):30–4). 12. Line 202: The definition of the low and high visual acuity groups should be given in the methods section. Reply: We have clarified this in the methods section. 13. Line 225: What was the significance value? Reply: The significance values have been included. 14. Line 233: Is this because the level of keratoconus is smaller in this group? Reply: The agreement between the methods improves in the low refractive error ranges. Uncorrected visual acuity assessments will be more accurate in this range, making it easier for the web-based algorithm to determine the refractive error. To answer the reviewer's question: in general, keratoconus severity is lower in the low refractive error ranges. Funding statement: This investigator-initiated study was sponsored by Easee BV. The funder provided support in the form of salaries for author [FC], but did not have any additional role in the data analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the 'author contributions' section. Author contributions: The following authors were involved in acquiring funding (RW), study design (RW, DG, YP), developing the algorithm (YP, FC), data collection (MM, FC), data analysis and interpretation (MM, JC, DG, RW), preparation of the manuscript and critical revision (MM, JC, DG, RW), and the decision to publish (MM, JC, RW). 
Competing interests: MM is a consultant for Easee BV, FC is an employee of Easee BV, YP is the CEO/founder and a shareholder of Easee BV, and RW is a consultant and shareholder of Easee BV. The competing interests of the authors do not alter our adherence to PLOS ONE policies on sharing data and materials. Submitted filename: Rebuttal manuscript PLOS-ONE.docx 10 Jun 2021 PONE-D-20-36881R1 The evaluation of a web-based tool for measuring the uncorrected visual acuity and refractive error in keratoconus eyes: a prospective open-label method comparison study PLOS ONE Dear Dr. Muijzer, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. There are some additional minor comments from Reviewer #3 which shall be considered. Please submit your revised manuscript by Jul 25 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. 
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Timo Eppig Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. 
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #2: All comments have been addressed Reviewer #3: All comments have been addressed ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #3: Line 36 306: visual acuity measurement is not objective. What is the typical intra-individual variation of the web-based test in these patients (0.1 logMAR, as with a standard acuity test)? As you discuss this in line 306 for VA measurements in subjects with no ocular diseases, do you have data on subjects with keratoconus? Line 38 and following: statement biased by the sponsor? I doubt that the algorithm is trainable with these data. General: when you only measure VA, how easy is it to diagnose keratoconus, especially in early stages? It's not, or? In case of “telemedicine” any drop in VA is most likely related to a refractive error, and no diagnosis can be made w/o corneal topography. Is the last section of the introduction still valid having that in mind? Line 284: please change the order and put the limitation on the number of available measurements first. This makes it easier for the reader to classify the results. Please also put the number of cases in brackets for available measurements. Line 318, please add: as such with keratoconus of different stages. Line 361: is this statement true for patients without diseases? 
Please be specific. Line 364: results not only resulted in poor visual performance (in terms of VA), but also in a correction that would not be acceptable (in terms of diopters). Line 368: Again: VA is subjective, not objective. While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 16 Jul 2021 Manuscript ID: PONE-D-20-36881R1 Subject: rebuttal for manuscript “The evaluation of a web-based tool for measuring the uncorrected visual acuity and refractive error in keratoconus eyes: a prospective open-label method comparison study” Reviewers' comments: Reviewer #3: All comments have been addressed Reviewer #2: All comments have been addressed Reviewer #3: Line 36 306: visual acuity measurement is not objective. What is the typical intra-individual variation of the web-based test in these patients (0.1 logMAR, as with a standard acuity test)? As you discuss this in line 306 for VA measurements in subjects with no ocular diseases, do you have data on subjects with keratoconus? 
Reply: We believe that a visual acuity assessment will enhance teleconsultations, but we agree with the reviewer that a visual acuity test is not an objective measurement. We therefore removed this word from the abstract and manuscript and replaced it with “quantifiable” (as it provides the health care professional with an exact value). We did not investigate the test-retest performance of the web-based assessment, so the intra-individual variation between measurements is unknown. The reported variation between visual acuity measurements that we refer to in the manuscript is derived from a study performed in an eye clinic in patients with various ocular diseases; however, no data specifically for keratoconus patients are available. Line 38 and following: statement biased by the sponsor? I doubt that the algorithm is trainable with these data. Reply: We thank the reviewer for this comment. It is indeed true that the algorithm cannot be trained with these inadequate data, so we agree that this statement needs some nuance. The study has provided us with valuable insights into the limitations of this web-based tool. We will use this knowledge to increase the overall safety of the web-based tool by better identification of outlier cases. We have adjusted this sentence in the abstract conclusion and the manuscript conclusion accordingly. General: when you only measure VA, how easy is it to diagnose keratoconus, especially in early stages? It's not, or? In case of “telemedicine” any drop in VA is most likely related to a refractive error, and no diagnosis can be made w/o corneal topography. Is the last section of the introduction still valid having that in mind? Reply: We agree that keratoconus cannot be diagnosed using visual acuity alone, and to our knowledge we do not claim or suggest this in the manuscript. The aim and focus of this study was to evaluate the performance of the algorithm in complex refractions. 
Keratoconus patients often/always have a complex refraction (e.g., irregular astigmatism) and were therefore a suitable study population for our research question. In the general population, complex refractive errors are also present with or without an underlying eye condition, and this study provided insight into the limitations and accuracy (or, in this case, inaccuracy) of the algorithm in these cases. Line 284: please change the order and put the limitation on the number of available measurements first. This makes it easier for the reader to classify the results. Please also put the number of cases in brackets for available measurements. Reply: We thank the reviewer for this comment. We have changed the order of the discussed limitations as suggested by the reviewer. We have added “13/100” in brackets to clarify the ratio between missing data and available measurements. Line 318, please add: as such with keratoconus of different stages. Reply: We have revised the sentence and included this suggestion. Line 361: is this statement true for patients without diseases? Please be specific. Reply: We revised the sentence to make it clear we are referring to our study outcomes. In addition, to answer the question: it is indeed true that visual acuity can be more reliably assessed in better visual acuity ranges, regardless of ocular conditions. Line 364: results not only resulted in poor visual performance (in terms of VA), but also in a correction that would not be acceptable (in terms of diopters). Reply: For clarification, we have included in the conclusion that the web-based refractive assessment was found to be inferior to the manifest assessment. However, we deem a prescription unacceptable based on visual performance rather than the exact dioptric values, as these can differ without directly affecting visual performance. Line 368: Again: VA is subjective, not objective. Reply: see comment 1. 
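As context for the agreement statistics discussed throughout this correspondence: a Bland-Altman analysis reduces to the mean of the paired differences and its 95% limits of agreement (mean ± 1.96 × SD of the differences), which is what the authors call the "variability of the differences". A minimal sketch, using only the Python standard library and hypothetical spherical-equivalent values (not data from the study):

```python
import statistics

def bland_altman_limits(index_vals, reference_vals, z=1.96):
    """Return the mean paired difference and the 95% limits of
    agreement (Bland-Altman) between an index and a reference test."""
    diffs = [i - r for i, r in zip(index_vals, reference_vals)]
    mean_diff = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return mean_diff, (mean_diff - z * sd, mean_diff + z * sd)

# Hypothetical paired spherical equivalents, in diopters
# (web-based index test vs. manifest reference refraction).
web = [-2.50, -1.75, 0.50, -3.00]
manifest = [-2.25, -2.00, 0.25, -2.75]
mean_diff, (lo, hi) = bland_altman_limits(web, manifest)
# A non-inferiority criterion like the study's would require the
# limits of agreement to stay within +/- 0.5 D of zero.
```

The non-inferiority check then amounts to comparing `lo` and `hi` against the prespecified margin; in this toy example the mean difference is zero yet the limits still exceed ±0.5 D, illustrating why the margin is tested on the limits of agreement rather than on the mean alone.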
Other changes: We replaced the “higher visual acuity” subgroup with “better visual acuity” in the legend and footnote of figure 2, so it is now consistent with the manuscript. Submitted filename: Rebuttal.docx 2 Aug 2021 The evaluation of a web-based tool for measuring the uncorrected visual acuity and refractive error in keratoconus eyes: a method comparison study PONE-D-20-36881R2 Dear Dr. Muijzer, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Timo Eppig Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. 
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #2: All comments have been addressed Reviewer #3: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2: Yes Reviewer #3: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #2: Yes Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #2: Yes Reviewer #3: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. 
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #2: Yes Reviewer #3: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #2: The manuscript is well written. Although it would be better to do further research in this area with a larger sample size, I personally agree that it should be published because of its novelty. Reviewer #3: (No Response) ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #2: No Reviewer #3: No 10 Aug 2021 PONE-D-20-36881R2 The evaluation of a web-based tool for measuring the uncorrected visual acuity and refractive error in keratoconus eyes: a method comparison study Dear Dr. Muijzer: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. 
If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Timo Eppig Academic Editor PLOS ONE
  23 in total

1.  Autorefraction Versus Manifest Refraction in Patients With Keratoconus.

Authors:  Nienke Soeters; Marc B Muijzer; Jurrian Molenaar; Daniel A Godefrooij; Robert P L Wisse
Journal:  J Refract Surg       Date:  2018-01-01       Impact factor: 3.573

2.  Preventing blindness and visual impairment in Europe: What do we have to do?

Authors:  János Németh; Gábor Tóth; Serge Resnikoff; Jan Tjeerd de Faber
Journal:  Eur J Ophthalmol       Date:  2018-12-20       Impact factor: 2.597

3.  Options for slowing the growth of health care costs.

Authors:  James J Mongan; Timothy G Ferris; Thomas H Lee
Journal:  N Engl J Med       Date:  2008-04-03       Impact factor: 91.245

4.  Keratoconus. (Review)

Authors:  Y S Rabinowitz
Journal:  Surv Ophthalmol       Date:  1998 Jan-Feb       Impact factor: 6.048

5.  Higher-order aberrations 1 year after corneal collagen crosslinking for keratoconus and their independent effect on visual acuity.

Authors:  Robert P L Wisse; Stijn Gadiot; Nienke Soeters; Daniel A Godefrooij; Saskia M Imhof; Allegonda van der Lelij
Journal:  J Cataract Refract Surg       Date:  2016-07       Impact factor: 3.351

6.  Age-specific Incidence and Prevalence of Keratoconus: A Nationwide Registration Study.

Authors:  Daniel A Godefrooij; G Ardine de Wit; Cuno S Uiterwaal; Saskia M Imhof; Robert P L Wisse
Journal:  Am J Ophthalmol       Date:  2016-12-28       Impact factor: 5.258

7.  Reproducibility and comparison of visual acuity obtained with SightBook mobile application to near card and Snellen chart.

Authors:  Lam Phung; Ninel Z Gregori; Angelica Ortiz; Wei Shi; Joyce C Schiffman
Journal:  Retina       Date:  2016-05       Impact factor: 4.256

8.  How will country-based mitigation measures influence the course of the COVID-19 epidemic?

Authors:  Roy M Anderson; Hans Heesterbeek; Don Klinkenberg; T Déirdre Hollingsworth
Journal:  Lancet       Date:  2020-03-09       Impact factor: 79.321

9.  Validation of an Independent Web-Based Tool for Measuring Visual Acuity and Refractive Error (the Manifest versus Online Refractive Evaluation Trial): Prospective Open-Label Noninferiority Clinical Trial.

Authors:  Robert P L Wisse; Marc B Muijzer; Francesco Cassano; Daniel A Godefrooij; Yves F D M Prevoo; Nienke Soeters
Journal:  J Med Internet Res       Date:  2019-11-08       Impact factor: 5.428

10.  Clinically applicable deep learning for diagnosis and referral in retinal disease.

Authors:  Jeffrey De Fauw; Joseph R Ledsam; Bernardino Romera-Paredes; Stanislav Nikolov; Nenad Tomasev; Sam Blackwell; Harry Askham; Xavier Glorot; Brendan O'Donoghue; Daniel Visentin; George van den Driessche; Balaji Lakshminarayanan; Clemens Meyer; Faith Mackinder; Simon Bouton; Kareem Ayoub; Reena Chopra; Dominic King; Alan Karthikesalingam; Cían O Hughes; Rosalind Raine; Julian Hughes; Dawn A Sim; Catherine Egan; Adnan Tufail; Hugh Montgomery; Demis Hassabis; Geraint Rees; Trevor Back; Peng T Khaw; Mustafa Suleyman; Julien Cornebise; Pearse A Keane; Olaf Ronneberger
Journal:  Nat Med       Date:  2018-08-13       Impact factor: 53.440

  2 in total

1.  Correction: The evaluation of a web-based tool for measuring the uncorrected visual acuity and refractive error in keratoconus eyes: A method comparison study.

Authors:  Marc B Muijzer; Janneau L J Claessens; Francesco Cassano; Daniel A Godefrooij; Yves F D M Prevoo; Robert P L Wisse
Journal:  PLoS One       Date:  2021-12-09       Impact factor: 3.240

2.  Correction: Correction: The evaluation of a web-based tool for measuring the uncorrected visual acuity and refractive error in keratoconus eyes: A method comparison study.

Authors: 
Journal:  PLoS One       Date:  2022-04-19       Impact factor: 3.240

