Literature DB >> 35202417

Concordance rate of radiologists and a commercialized deep-learning solution for chest X-ray: Real-world experience with a multicenter health screening cohort.

Eun Young Kim1, Young Jae Kim2, Won-Jun Choi3, Ji Soo Jeon2, Moon Young Kim4,5, Dong Hyun Oh6,7, Kwang Nam Jin4,5, Young Jun Cho6,7.   

Abstract

PURPOSE: Lunit INSIGHT CXR (Lunit) is a commercially available deep-learning algorithm-based decision support system for chest radiography (CXR). This retrospective study aimed to evaluate the concordance rate between radiologists and Lunit for thoracic abnormalities in a multicenter health screening cohort. METHODS AND MATERIALS: We retrospectively evaluated the radiology reports and Lunit results for CXRs obtained at several health screening centers in August 2020. Lunit had been adopted as a clinical decision support system (CDSS) in routine clinical practice, and radiologists completed their reports after reviewing the Lunit results. The deep-learning algorithm (DLA) result was provided as a color map with an abnormality score (%) for thoracic lesions when the score exceeded the predefined cutoff value of 15%. Concordance was achieved when (a) the radiology report was consistent with the DLA result ("accept"), or (b) the radiology report was partially consistent with the DLA result ("edit") or contained additional lesions ("add"). There was discordance when the DLA result was rejected in the radiology report. In addition, we compared the reading times before and after Lunit was introduced. Finally, we administered a system usability scale questionnaire to radiologists and physicians who had experience with Lunit.
RESULTS: Among 3,113 participants (1,157 men; mean age, 49 years), thoracic abnormalities were found in 343 (11.0%) based on the CXR radiology reports and in 621 (20.1%) based on the Lunit results. The concordance rate was 86.8% (accept: 85.3%, edit: 0.9%, and add: 0.6%), and the discordance rate was 13.2%. Excluding 479 cases (7.5%) for which reading time data were unavailable (n = 5) or unreliable (n = 474), the median reading time increased after the clinical integration of Lunit (median, 19 s vs. 14 s, P < 0.001).
CONCLUSION: The real-world multicenter health screening cohort showed high concordance between the chest X-ray reports and the Lunit results under the clinical integration of the deep-learning solution. The reading time slightly increased with Lunit assistance.


Year:  2022        PMID: 35202417      PMCID: PMC8870572          DOI: 10.1371/journal.pone.0264383

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

The data-intensive nature of medicine makes it one of the most promising fields for the application of artificial intelligence (AI) and machine learning algorithms [1]. Health care centers have become increasingly interested in implementing AI-enabled clinical decision support systems (CDSSs) to improve efficiency and patient outcomes [2]. Such systems may improve physicians' diagnostic accuracy, reduce inter-reader variability, and extend medical care to resource-constrained environments where healthcare experts are not available. However, there are currently few examples of successful implementation of AI techniques in clinical practice, and it is not clear how AI tools can be effectively integrated with human decision-making. Lunit INSIGHT CXR (Lunit) is a commercially available deep-learning algorithm-based CDSS for the automatic detection of thoracic abnormalities on chest X-ray (CXR). Recent studies have reported that AI systems using deep learning techniques can detect various diseases on CXRs with performance comparable to that of expert radiologists [3-9]. In previous studies, Lunit showed excellent diagnostic performance, similar to that of expert radiologists, and improved physicians' performance in diagnosing pneumonia, lung cancer, tuberculosis, and multiple abnormal findings [6, 10, 11]. Based on this evidence, Lunit was approved by the Korean Ministry of Food and Drug Safety, and several hospitals have adopted it in routine clinical practice as a decision support system for radiology. However, to the best of our knowledge, no study has evaluated the extent to which radiologists accept the Lunit results in real-world clinical practice. Accordingly, the purpose of this study was to evaluate the concordance of radiology reports and Lunit results for thoracic abnormalities on CXR using a multicenter health screening cohort.
In addition, we wanted to compare the reading times before and after the clinical integration of the AI system.

Materials and methods

This retrospective cohort study was approved by the institutional review boards of three participating institutions (approval number: GBIRB2020-413 for Gil Medical Center, 10-2020-227 for Boramae Medical Center, 2020-10-015-001 for Konyang University Hospital). All the data were de-identified, and the requirement for written informed consent was waived. In the health screening centers of three institutions, Lunit has been adopted in clinical practice since March 2020. We present the following article based on the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting checklist (S1 Appendix).

Study population

The data of 3,113 consecutive participants, who visited the health screening center of three institutions and underwent CXR in August 2020, were retrieved from the radiology database and medical records system and retrospectively analyzed. The data of the participants in the control group (n = 3,284), who visited the health screening center the previous year (November 2019) before the clinical integration of the Lunit assistance, were also collected and used to compare the reading times for CXR. Data on age, sex, and smoking history (pack-years) were retrospectively collected. Based on age and smoking history, the cohort was classified as having a high-risk of lung cancer (aged 55–74 years with ≥30 pack-years of smoking history). Fig 1 shows a flowchart of the study population.
Fig 1

Flow chart of the study population.

CXR = chest radiography.


Radiology report for chest radiographs

The original clinical radiology reports by three board-certified radiologists from the three health screening centers (one per institution; C.S.Y., K.R.H., and K.S., with 11, 7, and 20 years of experience in radiology, respectively) were retrospectively analyzed. In practice, the radiologists evaluated the CXR images and reviewed the Lunit results, which were shown as secondary captured images in the reading workstation (picture archiving and communication system [PACS], INFINITT Healthcare), to complete the radiology report. Using the original radiology reports, one adjudicator (K.E.Y., with 12 years of experience in thoracic imaging) retrospectively re-categorized the cases as normal, inactive lesion, insignificant abnormal lesion, or significant abnormal lesion, based on the following semantic descriptions in the radiology reports. Inactive lesions were described as "calcified", "adhesion", "sequelae", "linear", "pleural thickening", "bulla", and "s/p pneumonectomy". Insignificant abnormal lesions included "bronchiectasis", "interstitial opacity", "interstitial lung disease", "tiny nodule", and "emphysema". Finally, "focal increased opacity", "nodule", and "consolidation" were allocated to the significant abnormal lesions. A normal CXR (CXR category 0) was categorized as "CXR negative", and the inactive lesions, insignificant abnormal lesions, and significant abnormal lesions (CXR categories 1-3) were categorized as "CXR positive". Descriptions unrelated to lung lesions ("elevation of the diaphragm", "scoliosis", "kyphosis", "bone island", "old rib fracture", "bone cement", "cardiomegaly", "situs inversus", "right-side aortic arch", "prominent pericardial fat pad", and "nipple shadow") were not counted as positive findings in the radiology report.
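The keyword-based re-categorization described above can be sketched as follows. This is a minimal illustration only: the keyword lists come from the text, but matching a free-text report against them with "most significant category wins" is our assumption, not the adjudicator's documented procedure.

```python
# Sketch of the keyword-based CXR report re-categorization (assumed logic).
# Keyword lists are taken from the text; the precedence rule is illustrative.

INACTIVE = ["calcified", "adhesion", "sequelae", "linear",
            "pleural thickening", "bulla", "s/p pneumonectomy"]
INSIGNIFICANT = ["bronchiectasis", "interstitial opacity",
                 "interstitial lung disease", "tiny nodule", "emphysema"]
SIGNIFICANT = ["focal increased opacity", "nodule", "consolidation"]

def categorize_report(report: str) -> int:
    """Return CXR category 0-3 (normal, inactive, insignificant, significant)."""
    text = report.lower()
    # "tiny nodule" (insignificant) contains "nodule" (significant),
    # so mask it before testing the significant keywords.
    sig_text = text.replace("tiny nodule", "")
    if any(k in sig_text for k in SIGNIFICANT):
        return 3
    if any(k in text for k in INSIGNIFICANT):
        return 2
    if any(k in text for k in INACTIVE):
        return 1
    return 0  # category 0 -> "CXR negative"; categories 1-3 -> "CXR positive"
```

A report such as "Tiny nodule in RUL" maps to category 2 rather than 3, matching the keyword lists above.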

Lunit for chest radiographs

We used a commercially available deep learning algorithm (Lunit INSIGHT for Chest Radiography, version 2.5.7.4; Lunit, Seoul, South Korea). This version was developed for the detection of three major radiologic findings (nodule/mass, consolidation, and pneumothorax) using a deep convolutional neural network [6]. The raw pixel map of the DICOM images was passed through 34 convolutional layers (ResNet-34-based architecture) that served as feature extractors projecting the CXR into a representation space. This was followed by one-by-one convolution heads that create a color map for each finding. Pixel-wise binary cross-entropy loss and image-level binary cross-entropy loss were used during training of the model. AI-detected thoracic lesions were marked on a heatmap with an abnormality score (%). The abnormality score indicated the probability (0-100%) that the CXR contained a malignant nodule/mass, consolidation, or pneumothorax. Using a predefined cut-off value of 15%, which showed high sensitivity (95%) in internal verification studies [11], lesions with an abnormality score of 15% or more were categorized as "Lunit positive". The Lunit results were integrated into PACS as separate images alongside the patient's original CXR images. To complete the radiology report, radiologists reviewed the original CXR image and checked the Lunit results integrated as a secondary image.

Concordance rate for radiology report and Lunit

We determined whether the Lunit results were described in the radiology reports. CXR negative/Lunit negative cases were designated as "accept", and CXR positive/Lunit negative or CXR negative/Lunit positive cases were designated as "reject". For CXR positive/Lunit positive cases, the designation was based on the lesions as follows: "accept" when the lesions described in the CXR report and the Lunit result agreed; "edit" when the lesions in the radiology report were in partial agreement with those detected by Lunit; and "add" when the radiology report contained additional lesions beyond those detected by Lunit. When the CXR lesion was entirely different from the Lunit lesion, the case was designated as "reject". The "accept", "edit", and "add" designations represented concordance, and "reject" represented discordance. We evaluated the concordance rate by CXR lesion category.

Reading time before and after the clinical integration of Lunit

The reading time per case was extracted from the PACS log record and calculated as the interval between the opening and closing times of the radiology report. To exclude cases that remained open for long periods because of unexpected interruptions, reading times longer than 120 s were considered unreliable and were excluded from the analysis.
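The extraction rule above amounts to a simple filter over the PACS log. The sketch below assumes ISO-formatted timestamp strings from the log; the actual log field formats are not described in the text.

```python
# Sketch of the reading-time rule: close time minus open time, with entries
# over 120 s (likely interruptions) or missing entries treated as excluded.
# Timestamp format is an assumption; the PACS log format is not specified.

from datetime import datetime
from typing import Optional

MAX_RELIABLE_S = 120.0

def reading_time_s(opened: Optional[str], closed: Optional[str]) -> Optional[float]:
    """Return the reading time in seconds, or None if unavailable/unreliable."""
    try:
        t0 = datetime.fromisoformat(opened)
        t1 = datetime.fromisoformat(closed)
    except (TypeError, ValueError):
        return None                      # unavailable log entry
    dt = (t1 - t0).total_seconds()
    if dt <= 0 or dt > MAX_RELIABLE_S:
        return None                      # unreliable (interrupted) reading
    return dt
```

Cases returning None correspond to the 479 excluded readings in the Results.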

System usability scale

Usability refers to the ease of use of software technology and its user interface; commonly described attributes include learnability, efficiency, effectiveness, usefulness, accessibility, and user satisfaction [12]. The system usability scale (SUS) is a tool for measuring both the usability and learnability of practically any kind of system. The SUS scores calculated from the individual questionnaires represent the system usability, yielding a single composite number for the overall usability of the system under study. The SUS is a Likert-scale questionnaire with 10 items. A total of 23 radiologists and physicians (n = 14 radiologists, n = 3 radiology residents, n = 6 physicians) who had experience with Lunit were asked to rate each item from 1 to 5 according to their level of agreement (5, agree completely; 1, disagree strongly). Scoring involves subtracting 1 from each odd-numbered item response and subtracting each even-numbered item response from 5, which scales each item from 0 to 4. The total is multiplied by 2.5 to give a score out of 100, which is interpreted as a percentile ranking, not a percentage [13, 14]. According to validation studies, an acceptable SUS score is one above the industry standard of 68 [13, 15].
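The standard SUS scoring described above is straightforward to compute:

```python
# Standard SUS scoring: odd items contribute (response - 1), even items
# contribute (5 - response); the sum of the ten contributions times 2.5
# gives a 0-100 score.

def sus_score(responses: list) -> float:
    """responses: ten Likert ratings (1-5), item 1 first."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

A respondent who agrees completely with every positively worded (odd) item and disagrees strongly with every negatively worded (even) item scores 100; neutral answers throughout score 50.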

Statistical analysis

The descriptive statistics were calculated using SPSS (ver. 20) and are presented as percentages for categorical variables and as means (± standard deviation) or medians (interquartile range) for continuous variables. The continuous variables were compared using the Student t-test or Mann-Whitney U test, and the categorical variables were analyzed using the two-sided Pearson chi-squared test. For multiple testing, pairwise comparisons and post-hoc analyses were performed, and the P-values were corrected using Bonferroni’s method. Statistical significance was set at P < 0.05. The concordance rate was defined as the percentage of the cases designated as “accept”, “edit”, and “add”. The reading time and CXR lesion categories were compared before and after the clinical integration of Lunit. The reading times were compared using a generalized linear model with gamma distribution. A subgroup analysis was used for reading time comparisons for the different CXR lesion categories.

Results

Baseline characteristics

Table 1 shows the demographic features of the study participants. Compared with the control group, the experimental group was significantly younger (mean age, 49 ± 15 years vs. 52 ± 15 years, P < 0.001) and included more women (62.8% vs. 59.3%, P = 0.003). However, the frequency of participants at high risk of lung cancer was not significantly different (4.1% vs. 4.9%, P = 0.161). The CXR radiology reports showed that a total of 343 (11.0%) participants had abnormalities, including inactive (n = 226, 7.3%), insignificant abnormal (n = 27, 0.9%), and significant abnormal (n = 90, 2.9%) lesions; further studies were recommended in 37 (1.2%). Normal CXRs were less frequent after than before the adoption of Lunit (89.0% vs. 91.3%, adjusted P = 0.015).
Table 1

Demographic information.

                                  After adoption of Lunit    Before adoption of Lunit    P value
                                  (n = 3,113)                (n = 3,284)
Sex, men                          1,157 (37.2%)              1,338 (40.7%)               0.003
Age (years)                       49 ± 15                    52 ± 15                     < 0.001
High-risk of lung cancer†         129 (4.1%)                 160 (4.9%)                  0.161
CXR, normal                       2,770 (89.0%)              2,998 (91.3%)               0.017**
CXR, inactive                     226 (7.3%)                 186 (5.7%)
CXR, insignificant abnormal       27 (0.9%)                  19 (0.6%)
CXR, significant abnormal         90 (2.9%)                  81 (2.5%)
Further study recommendation      37 (1.2%)                  49 (1.5%)                   0.292
Reading time, median (IQR)††      19 s (36 s)                14 s (23 s)                 < 0.001*

Note: Except where indicated, data are the mean (± SD) or number (%). SD = standard deviation. IQR = interquartile range. Comparisons of means and proportions of the two groups for demographic information were performed using Student’s t-test (*Mann-Whitney U test) and chi-squared tests.

†High-risk lung cancer patients: age: 55–74 years and a smoking history of 30 pack-years or more.

††Except for missing/unreliable reading time information (n = 479, 7.5%).

**On multiple testing, normal CXR was less frequent in the Lunit group than in the control group (89.0% vs. 91.3%; adjusted P = 0.015).

Among the participants (n = 3,113), the radiology reports were positive in 343 (11.0%), and the Lunit results were positive in 621 (20.1%). The concordance rate was 86.8% (accept: 85.3%, edit: 0.9%, and add: 0.6%), and the discordance rate was 13.2% (Table 2) (Fig 2).
Table 2

Concordance according to radiology report and Lunit for chest radiograph (CXR).

                            CXR positive (11.0%)        CXR negative (89.0%)
Lunit positive (20.1%)      n = 284 (9.1%)*             Reject (n = 343, 11.0%)
Lunit negative (79.9%)      Reject (n = 59, 1.9%)       Accept (n = 2,427, 78.0%)

*Sub-classified as accept (n = 227, 7.3%), edit (n = 28, 0.9%), add (n = 19, 0.6%), and reject (n = 10, 0.3%).

*"Accept": the lesions described in the CXR report and the Lunit result coincided; "edit": the lesions in the radiology report were in partial agreement with those detected by Lunit; "add": the radiology report contained additional lesions beyond those detected by Lunit. When the lesion described in the CXR report was entirely different from the Lunit lesion, the case was designated as "reject".

Fig 2

Sunburst chart for concordance and discordance between Lunit and radiology reports.

Concordance and discordance were based on the agreement of the Lunit results and the radiology report. Concordance was achieved when the radiology reports were consistent with the DLA results (“accept”), the radiology reports were partially consistent with the DLA findings (“edit”) or they had additional lesions compared with the DLA findings (“add”). There was discordance when the DLA results were rejected in the radiology report.

The distribution of the discordant cases (n = 412) was as follows: normal (83.3%), inactive lesions (9.7%), insignificant abnormality (1.5%), and significant abnormality (5.6%). The concordance rate was higher for normal (87.6%) than for significantly abnormal cases (74.4%, adjusted P = 0.003) (Table 3) (Fig 3).
Table 3

Concordance and discordance of chest radiograph (CXR) report and Lunit result stratified by the CXR lesion category.

                                              Concordance (n = 2,701)    Discordance (n = 412)    P value
CXR, normal (n = 2,770; 89.0%)                2,427 (89.9%)              343 (83.3%)              < .001*
CXR, inactive (n = 226; 7.3%)                 186 (6.9%)                 40 (9.7%)
CXR, insignificant abnormal (n = 27; 0.9%)    21 (0.8%)                  6 (1.5%)
CXR, significant abnormal (n = 90; 2.9%)      67 (2.5%)                  23 (5.6%)

*On multiple testing, concordance was more frequent for normal than for significantly abnormal CXRs (87.6% vs. 74.4%; Bonferroni-adjusted P = 0.003).

Fig 3

Concordance rate according to chest radiograph (CXR) lesion categories.

Multiple testing showed that the concordance rate was significantly higher for normal (87.6%) than for significantly abnormal (74.4%, Bonferroni-adjusted P = 0.003) cases.


Reading time for the radiology report

Of all cases, reading time data were unavailable (n = 5) or unreliable (n = 474) for 479 (7.5%), and these were excluded from the analysis. The median reading time increased after the clinical integration of Lunit (19 s for AI-supported vs. 14 s for unaided readings, P < 0.001). In the generalized linear model, three factors (Lunit support, radiologist, and CXR lesion category) influenced the reading time; the average reading time was higher after the clinical integration of Lunit (P < 0.001), even after adjustment for radiologist and CXR lesion category (Table 4). In the subgroup analysis, the average reading time per case increased by 0.2 s with AI support for normal CXRs. Conversely, the reading time per case decreased by 0.2 s with AI support for non-normal CXR examinations (inactive, insignificant abnormal, and significant abnormal lesions).
Table 4

Reading times stratified by the reading condition (before vs. after the clinical integration of Lunit assist), radiologists, and chest radiograph (CXR) lesion categories.

For all cases (n = 5,918)
Parameter                       Estimate    95% confidence interval    P value
Intercept                       3.48        3.36, 3.60                 < .001
Lunit aided                     0.19        0.15, 0.22                 < .001
Lunit unaided                   0.00        (reference)
Radiologist 1                   1.09        1.04, 1.14                 < .001
Radiologist 2                   -0.39       -0.44, -0.34               < .001
Radiologist 3                   0.00        (reference)
CXR, normal                     -1.13       -1.24, -1.01               < .001
CXR, inactive                   -0.50       -0.63, -0.36               < .001
CXR, insignificant abnormal     -0.12       -0.35, 0.12                0.334
CXR, significant abnormal       0.00        (reference)

For normal CXR (n = 5,362)
Intercept                       2.31        2.27, 2.36                 < .001
Lunit aided                     0.20        0.16, 0.23                 < .001
Lunit unaided                   0.00        (reference)
Radiologist 1                   1.17        1.12, 1.22                 < .001
Radiologist 2                   -0.43       -0.48, -0.37               < .001
Radiologist 3                   0.00        (reference)

For non-normal CXR (n = 556)
Intercept                       3.91        3.80, 4.03                 < .001
Lunit aided                     -0.16       -0.25, -0.07               < .001
Lunit unaided                   0.00        (reference)
Radiologist 1                   0.18        0.08, 0.28                 < .001
Radiologist 2                   0.21        0.08, 0.34                 < .001
Radiologist 3                   0.00        (reference)
CXR, inactive                   -0.58       -0.68, -0.48               < .001
CXR, insignificant abnormal     -0.23       -0.40, -0.04               0.010
CXR, significant abnormal       0.00        (reference)
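As a worked reading of Table 4: if the gamma generalized linear model used a log link (the link function is not stated in the text, so the multiplicative interpretation here is our assumption), each coefficient multiplies the expected reading time by exp(coefficient):

```python
# Worked example of interpreting Table 4 under an ASSUMED log link for the
# gamma GLM: exp(coefficient) is a multiplicative effect on reading time.

import math

intercept = 3.48     # reference cell: Lunit unaided, radiologist 3, significant abnormal CXR
lunit_aided = 0.19   # coefficient for Lunit-aided reading (all cases)

unaided_s = math.exp(intercept)                 # expected reading time, seconds
aided_s = math.exp(intercept + lunit_aided)
ratio = math.exp(lunit_aided)                   # multiplicative effect of Lunit support

print(f"unaided ~{unaided_s:.1f} s, aided ~{aided_s:.1f} s, "
      f"about {100 * (ratio - 1):.0f}% longer with Lunit")
```

Under this assumption, Lunit support corresponds to roughly a 21% longer expected reading time for the reference cell, consistent with the direction of the median reading-time comparison (19 s vs. 14 s).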
In the SUS questionnaire, the average SUS score was 77.8 (75.7 for radiologists, 81.7 for radiology residents, and 80.8 for physicians), which is generally considered an acceptable score for system usability (Table 5).
Table 5

System usability scale (SUS) for Lunit.

Group                          SUS score*
All (n = 23)                   77.8 ± 11.9
Radiologists (n = 14)          75.7 ± 13.8
Radiology residents (n = 3)    81.7 ± 8.0
Physicians (n = 6)             80.8 ± 8.5

*data are the mean (± standard deviation).


Discussion

Advancements in computer vision and AI have the potential to make significant contributions to health care, particularly in diagnostic specialties such as radiology. However, the perspectives of practicing clinicians and diagnosticians on the integration of AI into medical practice are poorly understood. This study evaluated the concordance rate between radiology reports and an AI implementation in real-world clinical practice using a multicenter health screening cohort. In this cohort, the concordance rate was high (86.8%). Of the discordant cases (n = 412, 13.2%), most were Lunit-positive with a normal CXR (83.3%), followed by Lunit-negative cases in the CXR lesion categories of inactive (9.7%), significant abnormality (5.6%), and insignificant abnormality (1.5%). In addition, the reading time slightly increased after the integration of Lunit assistance into the clinical radiology workflow, compared with the previous year (before the clinical integration of Lunit). With Lunit assistance, radiologists must review the Lunit results in addition to interpreting the original CXR image to complete the radiology report, so the reading time inevitably increased. Interestingly, for the CXR-positive cases, the reading time decreased slightly with AI support. This may indicate that AI can facilitate the detection of abnormalities for radiologists. Recently, Lunit was effectively integrated into the PACS workstation, allowing the abnormality score to be directly visible on the PACS worklist screen without opening the resulting images. This enables radiologists to triage CXR images as AI-normal or AI-abnormal and to read the required images first. This approach is expected to reduce reading time and enhance radiologists' work efficiency.
In the questionnaire, Lunit also reached a reasonable level of general usability and learnability (average SUS score, 77.8) across different levels of experience with CXR and CDSSs. The SUS has been tried and tested throughout almost 30 years of use and has proven to be a reliable method of evaluating the usability of systems against industry standards. Given the high concordance rate and reasonable usability scores, Lunit could be implemented as an assistant system for CXR interpretation in health screening centers, and it might be useful as a training tool and CDSS for inexperienced radiology trainees and physicians. Based on evidence that the diagnostic performance of deep learning algorithms is comparable to that of expert radiologists in health care centers [16] and that they can enhance physicians' performance in diagnosing lung cancer, tuberculosis, and multiple abnormal findings [6, 10, 11], Lunit was introduced into routine clinical work as an assistance tool for the radiology department. Although there is evidence of good diagnostic performance in various clinical settings, a physician's experience and attitude toward an assistance tool can influence how much they embrace its results. Herein, we aimed to evaluate the real-world situation after the adoption of Lunit in health screening centers. We used the health screening centers of three institutions for this study: two tertiary academic hospitals (located in Incheon and Daejeon, Korea) and one secondary general hospital (in Seoul, the capital city of Korea). In these institutions, Lunit was introduced and has been used as a CDSS in clinical radiology practice since January 2020. CXR is widely used as a component of periodic health examinations for asymptomatic outpatients and the general population because of its advantages, including easy accessibility, low cost, and negligible radiation exposure.
In Korea, the National Health Service offers free CXR screening biennially to all residents aged 40 years or older [17]. Furthermore, CXR has been widely performed for pre-employment and pre-military service medical screening. The interpretation of CXR is important in the health screening setting for the diagnosis of thoracic diseases such as tuberculosis or lung cancer in asymptomatic subjects. Our study has several limitations. First, we did not evaluate the diagnostic performance of Lunit or the radiology reports, since the primary endpoint of this study was the concordance rate between the radiologist's report and the Lunit result after its integration into real-world medical practice. To evaluate the diagnostic performance of Lunit or the radiology reports, a reference standard (ground truth, GT) would have to be established based on chest CT or consensus reading by expert radiologists. If we had used chest CT as the GT, we could not have avoided selection bias, since most of the participants who visited the health clinics did not undergo chest CT examination. Using consensus reading as the GT would have required additional expert radiologists' time and cost. Since the follow-up data were not sufficient for the participants, we could not use clinical follow-up data either. However, we wanted to evaluate a large number of cases to reflect the real-world situation for this new AI application for CXR, rather than to focus on diagnostic accuracy itself. Second, this study used a specific version of a commercial product with a predefined cut-off value set for high sensitivity. Therefore, careful interpretation is required when applying the results to other deep learning products or other clinical settings. Third, the results of our study are limited to one country, so their generalizability across racial differences in other countries is uncertain. Finally, the concordance with AI was evaluated with only three radiologists, which might limit generalizability.
However, this also reflects the real clinical environment, in which only a few radiologists are solely in charge of CXR at health screening centers. In conclusion, the radiology reports demonstrated high concordance with the results of Lunit, the commercialized AI solution for CXR, in a real-world multicenter health screening cohort. The reading time slightly increased after the clinical integration of Lunit support.

STROBE statement—a checklist of items that should be included in reports of observational studies.

(DOC)

The concordance rate dataset.

(XLSX)
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Summary: Artificial intelligence (AI) has great potential to fundamentally alter healthcare delivery; however, to date most clinical evaluations focus on the algorithmic elements and ultimately neglect the human in the loop. The success of AI in diagnostic imaging has fueled a growing debate over whether the comprehensive diagnostic interpretive skillset of the radiologist can be replicated by algorithms. This is a nice study that gives a timely answer to the important question of the role of AI in real-world clinical practice. From this study, we understand that humans and AI should be partners rather than competitors. The results of the study revealed high agreement between humans and AI in a real-world multicenter health screening cohort. Limitations: --There is no follow-up outcome or expert consensus reference to evaluate the diagnostic performance of the AI. Although the authors argue that the study aimed to evaluate the application of AI in a real-world situation rather than its diagnostic accuracy, accuracy remains of first importance for a new AI tool in clinical practice. If a new tool is merely consistent with experts and brings no additional benefit or improvement (such as accuracy, sensitivity, or time savings) to radiologists, we do not know why we should use it. How useful this AI tool is to radiologists’ interpretation therefore remains unknown. Please clarify. --The AI was compared with only three radiologists, which is not representative of a real-world clinical setting. The interpretations of only three readers may have high variance, potentially impacting the results of the study. Reviewer #2: 1. The manuscript shares 18% similarity with a previously published study that was not cited in the references: https://doi.org/10.1371/journal.pone.0246472 2. Proper citation of the previous work is required, and it must be clarified whether the data used in this study overlap with the published one. 3. In the "Introduction", lines 4-5 from the bottom: "no study has evaluated the extent to which radiologists accept the Lunit results in real-world clinical practice". The gap identified here was not addressed in the current study. Different tools must be used for this (e.g., System Usability Scale, UTAUT, etc.). 4. According to the PLOS Data policy, authors must make all data underlying the findings described in their manuscript fully available without restriction, with rare exceptions (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians, and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. No link to the raw data was provided in the manuscript. 5. The high concordance result of Lunit is indirectly related to the accuracy of its performance as published in the previous manuscript. It is unclear how this study can contribute more insight into the performance of Lunit.
The advantages of incorporating Lunit into the existing workflow are also not clearly specified (e.g., could it be used in health screening centers without the assistance of a radiologist, or perhaps as a training tool for future or inexperienced radiologists?). ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Yu-Dong Zhang Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
19 Nov 2021 Dear Alfredo Vellido, Academic Editor, PLOS ONE: Thank you very much for the opportunity to revise our original article entitled “Concordance rate of radiologists and a commercialized deep-learning solution for chest X-ray: Real-world experience with a multicenter health screening cohort (PONE-D-21-19659).” After carefully reading the reviewers’ and editor’s comments, we have tried to improve the quality and legibility of the manuscript according to the points raised. In the revised version, changes are indicated by highlighting. Individual responses (E = editor’s comment; R-#1 = point 1 made by a reviewer) are indicated in red. Editor’s comment: PONE-D-21-19659 Concordance rate of radiologists and a commercialized deep-learning solution for chest X-ray: Real-world experience with a multicenter health screening cohort PLOS ONE Dear Dr. Jin, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Dec 18 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. • A marked-up copy of your manuscript that highlights changes made to the original version.
You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Alfredo Vellido Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf. → Yes, we did. 2. Thank you for stating the following in the Competing Interests/Financial Disclosure* (delete as necessary) section: “KNJ has received research grant funding from Lunit Inc., outside the present study. This does not alter our adherence to PLOS ONE policies on sharing data and materials.
Other authors have no potential conflicts of interest to disclose.” We note that one or more of the authors are employed by a commercial company: name of commercial company. → No authors are employed by a commercial company. a. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form. Please also include the following statement within your amended Funding Statement. “The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.” If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement. → In the cover letter, we have added the amended Funding Statement as follows: Updated Funding Statement: This work was supported by a grant from the Korea Health Industry Development Institute to YJC (Grant number: HI19C0847). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. b.
Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc. Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf. → In the cover letter, we have added the Competing Interests statement as follows: KNJ has received research grant funding from Lunit Inc. for activities not related to the present article. This does not alter our adherence to PLOS ONE policies on sharing data and materials. Other authors have no potential conflicts of interest to disclose. 3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.
"Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter. → In the cover letter, we have added the Data Availability statement as follows: All relevant data are within the manuscript and its Supporting Information files. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Partly ________________________________________ 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ________________________________________ 3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: No → Thank you for your comment. We have added a Supporting Information file. ________________________________________ 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ________________________________________ 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Summary: Artificial intelligence (AI) has great potential to fundamentally alter healthcare delivery; however, to date most clinical evaluations focus on the algorithmic elements and ultimately neglect the human in the loop. The success of AI in diagnostic imaging has fueled a growing debate over whether the comprehensive diagnostic interpretive skillset of the radiologist can be replicated by algorithms.
This is a nice study that gives a timely answer to the important question of the role of AI in real-world clinical practice. From this study, we understand that humans and AI should be partners rather than competitors. The results of the study revealed high agreement between humans and AI in a real-world multicenter health screening cohort. → Thank you for your comments. Limitations: 1) There is no follow-up outcome or expert consensus reference to evaluate the diagnostic performance of the AI. Although the authors argue that the study aimed to evaluate the application of AI in a real-world situation rather than its diagnostic accuracy, accuracy remains of first importance for a new AI tool in clinical practice. If a new tool is merely consistent with experts and brings no additional benefit or improvement (such as accuracy, sensitivity, or time savings) to radiologists, we do not know why we should use it. How useful this AI tool is to radiologists’ interpretation therefore remains unknown. Please clarify. → Thank you for your comments. Much published data already demonstrates the comparable diagnostic accuracy of AI; we wanted to show the clinical value of applying AI in a real-world setting. In a real-world setting, diagnostic accuracy is hard to measure because 1) it is difficult to obtain additional consensus readings for so many X-rays; 2) if we include only cases that underwent CT (regarded as the gold-standard test), selection bias is unavoidable and the real-world situation is not reflected (in the real world, fewer than 10% underwent CT); and 3) the follow-up period was not long enough to show patients' prognoses. Nevertheless, we wanted to show how confidently the main users (radiologists) accept the results of the AI tool in a real-world setting. In addition, we evaluated the time consumed for CXR reading before and after the adoption of AI. Finally, we added the System Usability Scale, as indicated by Reviewer #2, along with some discussion.
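For readers unfamiliar with the instrument the authors refer to, the System Usability Scale has a fixed scoring rule in Brooke's original 10-item form: odd-numbered (positively worded) items contribute their rating minus one, even-numbered (negatively worded) items contribute five minus their rating, and the sum is scaled by 2.5 onto a 0-100 range. A minimal sketch of that standard calculation (illustrative only, not the study's actual questionnaire data):

```python
def sus_score(responses):
    """Compute a standard System Usability Scale score (0-100) from ten 1-5 ratings.

    Odd-numbered items are positively worded (contribute rating - 1);
    even-numbered items are negatively worded (contribute 5 - rating).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
                for i, r in enumerate(responses))
    return total * 2.5

# A neutral respondent (all 3s) scores 50 by construction:
print(sus_score([3] * 10))  # → 50.0
```

By convention, SUS scores around 68 are considered average usability, which is how per-respondent scores such as those collected here are usually interpreted.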
2) The AI was compared with only three radiologists, which is not representative of a real-world clinical setting. The interpretations of only three readers may have high variance, potentially impacting the results of the study. → Thank you for your comments. We have added this point to the limitations section. Reviewer #2: 1. The manuscript shares 18% similarity with a previously published study that was not cited in the references: https://doi.org/10.1371/journal.pone.0246472 → Thank you for your comments. We have added the paper to the references. 2. Proper citation of the previous work is required, and it must be clarified whether the data used in this study overlap with the published one. → Thank you for pointing out our mistake. We have added the paper to the references (as indicated in R2-#1). However, the study cohort is entirely different, and the data used in this study do not overlap at all with the data in the previous work. 3. In the "Introduction", lines 4-5 from the bottom: "no study has evaluated the extent to which radiologists accept the Lunit results in real-world clinical practice". The gap identified here was not addressed in the current study. Different tools must be used for this (e.g., System Usability Scale, UTAUT, etc.). → Thank you for your comments. As you indicated, we evaluated how confidently radiologists accept the results of the AI tool in a real-world setting, rather than the general usability of the AI. We have added a System Usability Scale survey of the radiologists, radiology residents, and non-radiology physicians who were interested in our study. 4. According to the PLOS Data policy, authors must make all data underlying the findings described in their manuscript fully available without restriction, with rare exceptions (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository.
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. No link to the raw data was provided in the manuscript. → Thank you for your comment. We have added Supporting Information files. 5. The high concordance result of Lunit is indirectly related to the accuracy of its performance as published in the previous manuscript. It is unclear how this study can contribute more insight into the performance of Lunit. The advantages of incorporating Lunit into the existing workflow are also not clearly specified (e.g., could it be used in health screening centers without the assistance of a radiologist, or perhaps as a training tool for future or inexperienced radiologists?). → Thank you for your comment. We have added a description in the discussion section. ________________________________________ 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Yu-Dong Zhang Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/.
PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. Thank you again for the honor of resubmitting our manuscript to this prestigious journal. We have done our best to respond to all points indicated by the reviewers, and we hope you find the revised manuscript acceptable for publication in PLOS ONE. The authors confirm that the manuscript has not been published previously, that it will not be submitted for publication elsewhere, and that the authors have no conflicts of interest to declare. With best regards, the Authors. Uploaded: -Revised and highlighted versions of the revised manuscript (‘Manuscript’ file and ‘Revised Manuscript with Track Changes’ file) -Rebuttal letter (‘Responses to Reviewers’ file) Submitted filename: Response to Reviewers_211108.docx 4 Jan 2022
PONE-D-21-19659R1
Concordance rate of radiologists and a commercialized deep-learning solution for chest X-ray: Real-world experience with a multicenter health screening cohort
PLOS ONE Dear Dr. Jin, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
 
Please submit your revised manuscript by Feb 18 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Alfredo Vellido Academic Editor PLOS ONE [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. 
Reviewer #1: (No Response) Reviewer #2: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: No ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. 
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In my initial comments on LIMITATION, I mentioned that only three radiologists were recruited for review, which is not clinically representative. The response, which merely adds a clarification to the limitations, is not satisfactory. A more appropriate answer would be a head-to-head comparison between different readers, e.g., a team of juniors vs. a team of seniors.

Reviewer #2:
- Missing data in S2 Appendix; additional data are required for the concordance/discordance rate calculation.
- Typographic errors: 1) In "Abstract", misspelling of questionnaire: "Finally, we evaluated... questionair...". 2) In "System Usability Scale", misspelling of total: "A toal of 24 radiologists and physicians...".
- Please check for other typographic errors throughout the manuscript prior to the final submission.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
18 Jan 2022

Dear Alfredo Vellido, Academic Editor, PLOS ONE,

Happy New Year! Thank you very much for the opportunity to revise our original article entitled “Concordance rate of radiologists and a commercialized deep-learning solution for chest X-ray: Real-world experience with a multicenter health screening cohort (PONE-D-21-19659).” After carefully reading the reviewers' and editor's comments, we have tried to improve the quality and legibility of the manuscript according to the points raised. In the revised version, changes are indicated by highlighting. Individual points (E = editor's comment; R-#1 = point 1 made by the reviewer) are indicated in red.

PONE-D-21-19659R1
Concordance rate of radiologists and a commercialized deep-learning solution for chest X-ray: Real-world experience with a multicenter health screening cohort
PLOS ONE

Dear Dr. Jin,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Feb 18 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
• A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
• A marked-up copy of your manuscript that highlights changes made to the original version.
You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
• An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Alfredo Vellido
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)
Reviewer #2: (No Response)

________________________________________

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

________________________________________

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

________________________________________

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No

________________________________________

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

________________________________________

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In my initial comments on LIMITATION, I mentioned that only three radiologists were recruited for review, which is not clinically representative. The response, which merely adds a clarification to the limitations, is not satisfactory. A more appropriate answer would be a head-to-head comparison between different readers, e.g., a team of juniors vs. a team of seniors.

→ Thank you for your comments. We totally agree with you. As you indicated, we evaluated the concordance of the AI results with the CXR readings made by only three radiologists, which might not yield representative results. However, this also reflects the real clinical environment, in which only a few radiologists were solely in charge of CXR for the health screening centers. In fact, there was only one staff radiologist in charge of CXR interpretation in each of the three health care centers. As this was a retrospective study, the original clinical radiology reports were made by three board-certified radiologists from the three health screening centers (one per institution; C.S.Y., K.R.H., and K.S., with 11, 7, and 20 years of experience in radiology, respectively), and we retrospectively analyzed the concordance of the AI results and the original radiology reports.

Reviewer #2:
- Missing data in S2 Appendix; additional data are required for the concordance/discordance rate calculation.

→ We checked the data in S2 Appendix. The AI result and concordance are not available for the control group (group 2, the group without AI application).

- Typographic errors: 1) In "Abstract", misspelling of questionnaire: "Finally, we evaluated... questionair...". 2) In "System Usability Scale", misspelling of total: "A toal of 24 radiologists and physicians...".
- Please check for other typographic errors throughout the manuscript prior to the final submission.

→ Thank you very much. We have amended the typographic errors throughout the manuscript.

________________________________________

7.
PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Thank you again for the great honor of resubmitting our manuscript to this prestigious journal. We have done our best to respond to all of the points indicated by the reviewers. We hope you find the revised manuscript acceptable for publication in PLOS ONE. The authors confirm that the manuscript has not been published previously, that it will not be submitted for publication elsewhere, and that the authors have no conflict of interest to declare.
With best regards,
From the Authors

Uploaded:
- Revised and highlighted versions of the revised manuscript ('Manuscript' file and 'Revised Manuscript with Track Changes' file)
- Rebuttal letter ('Responses to Reviewers' file)

Submitted filename: Response to Reviewers_220118.docx

10 Feb 2022

Concordance rate of radiologists and a commercialized deep-learning solution for chest X-ray: Real-world experience with a multicenter health screening cohort
PONE-D-21-19659R2

Dear Dr. Jin,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Alfredo Vellido
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: As in R2, since the authors cannot perform a head-to-head comparison between different readers (e.g., a team of juniors vs. a team of seniors) due to the limited number of readers, I would like to leave this question to the editor for judgement.

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

14 Feb 2022

PONE-D-21-19659R2
Concordance rate of radiologists and a commercialized deep-learning solution for chest X-ray: Real-world experience with a multicenter health screening cohort

Dear Dr. Jin:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.
If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Alfredo Vellido
Academic Editor
PLOS ONE
