Literature DB >> 35900979

Explainable emphysema detection on chest radiographs with deep learning.

Erdi Çallı, Keelin Murphy, Ernst T Scholten, Steven Schalekamp, Bram van Ginneken.

Abstract

We propose a deep learning system to automatically detect four explainable emphysema signs on frontal and lateral chest radiographs. Frontal and lateral chest radiographs from 3000 studies were retrospectively collected. Two radiologists annotated these with 4 radiological signs of pulmonary emphysema identified from the literature. A patient with ≥2 of these signs present is considered emphysema positive. Using separate deep learning systems for frontal and lateral images we predict the presence of each of the four visual signs and use these to determine emphysema positivity. The ROC and AUC results on a set of 422 held-out cases, labeled by both radiologists, are reported. Comparison with a black-box model which predicts emphysema without the use of explainable visual features is made on the annotations from both radiologists, as well as the subset that they agreed on. DeLong's test is used to compare with the black-box model ROC and McNemar's test to compare with radiologist performance. In 422 test cases, emphysema positivity was predicted with AUCs of 0.924 and 0.946 using the reference standard from each radiologist separately. Setting model sensitivity equivalent to that of the second radiologist, our model has a comparable specificity (p = 0.880 and p = 0.143 for each radiologist respectively). Our method is comparable with the black-box model with AUCs of 0.915 (p = 0.407) and 0.935 (p = 0.291), respectively. On the 370 cases where both radiologists agreed (53 positives), our model achieves an AUC of 0.981, again comparable to the black-box model AUC of 0.972 (p = 0.289). Our proposed method can predict emphysema positivity on chest radiographs as well as a radiologist or a comparable black-box method. It additionally produces labels for four visual signs to ensure the explainability of the result. The dataset is publicly available at https://doi.org/10.5281/zenodo.6373392.

Entities:  

Mesh:

Year:  2022        PMID: 35900979      PMCID: PMC9333227          DOI: 10.1371/journal.pone.0267539

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Emphysema is a leading form of Chronic Obstructive Pulmonary Disease (COPD), which affects approximately 4.6% of the US population [1]. The chest radiograph (CXR) is typically the first and most common imaging examination for patients presenting with respiratory symptoms. In COVID-19 patients, for example, emphysema detection is crucial for patient management because it is associated with significantly higher intensive care unit admission rates, greater respiratory support requirements, and more frequent invasive mechanical ventilation [2]. This underlines the importance of detecting signs of emphysema on CXRs.

The automated diagnosis of emphysema on CXR has received relatively little attention to date. Coppini et al. (2007) [3], Coppini et al. (2013) [4], and Miniati et al. [5] used lung shapes to detect emphysema, reporting an accuracy of 0.90 and areas under the receiver operating characteristic curve (AUC) of 0.954 and 0.955, respectively. These three studies used handcrafted features with neural networks on small datasets. More recently, in a small group of 80 patients, Wanchaitanawong et al. (2021) [6] proposed that AI-based emphysema scores from CXRs could be used for patients who cannot perform spirometry, achieving similar results in diagnosing COPD. Campo et al. (2018) [7] created chest radiograph projections from CT and achieved an AUC of 0.907 in predicting CT-based emphysema scores from these.

Some studies used other modalities that resemble a CXR, or derivatives of the CXR, to detect emphysema. For example, scout images acquired as part of the CT scanning process were used to evaluate emphysema severity in [8]; that study showed that a deep learning method can predict emphysema severity from scout images with results consistent with CT quantification. Dark-field radiographs were also used to predict emphysema in [9], using CT-based scores as the reference standard; an AUC of 0.79 was obtained for detecting mild emphysema in a study involving 83 patients.

Up to 31 deep learning studies [11] use the ChestXray14 dataset [10], which provides an emphysema label among 13 other disease labels. However, most of these studies use automatically extracted labels, which are noisy and unsuited to evaluation [12], for example including subcutaneous emphysema under the emphysema label [13]. The most recent such work [14] proposes an attention-based extension of DenseNet121 [15] and achieves an AUC of 0.933 in detecting emphysema, but the known issues with emphysema labeling in that dataset [13] make this result difficult to interpret. A few studies collect radiologist annotations for various diseases and evaluate model performance on these data. Li et al. (2021) [16] used annotations from 3 radiologists on 10,738 CXRs from hospital archives and reported an AUC of 0.942 for predicting emphysema as part of the ChestXray14 disease labels. Lin et al. (2020) [17] collected localized features based on the ChestXray14 disease labels as well as 4 viral pneumonia labels for 310 CXRs, but did not report model performance on individual diseases.

In this work, we describe a deep learning system that is trained and evaluated on radiologist-labeled frontal and lateral CXRs. It is designed to provide an explainable emphysema score, including the prediction of four visual signs of emphysema defined in the literature [18]. Sutinen et al. [18] proposed that flattening of the diaphragms in the frontal and lateral CXR, irregular radiolucency on the frontal CXR, and an abnormally large retrosternal space in the lateral CXR are key signs for detecting emphysema. This was confirmed by Miniati et al. [19] using a group of 458 patients and 5 readers, achieving 90% sensitivity and 98% specificity. Images depicting these 4 visual signs are provided in Fig 1.
Fig 1

The 4 radiological signs.

The top row shows the radiological signs on frontal chest radiographs. The left frontal chest radiograph shows the flattening of the diaphragm, and the right shows irregular radiolucency. The bottom two chest radiographs show the radiological signs on the lateral chest radiograph. The left lateral chest radiograph clearly shows the flattening of the diaphragmatic contours while the right demonstrates an abnormal retrosternal space.

In addition to being accurate and reliable, a clinically relevant deep learning method needs to produce explainable results in order to gain trust and acceptance from end users. Although many studies provide visual cues indicating which pixels or regions contribute most to a prediction [20-23], this information may not be aligned with the expertise of the radiologist [24, 25] and can potentially lead to confusing explanations, hindering acceptance of the method. Recently, some studies have worked to create links between radiologically understood concepts and what a deep learning model predicts [26, 27]. In this work, we use labels from an established radiological protocol to predict emphysema, ensuring that end users can connect the outcome with their expert domain knowledge.

To build and evaluate our explainable deep learning system, we collected frontal and lateral chest radiographs for 3000 studies. These were annotated by two radiologists for the presence of each of the 4 described visual signs. The descriptions provided to the radiologists are included in Table 1. This annotated data was used to train and evaluate deep learning models for the prediction of each visual sign and a final emphysema score. Performance was compared with each of the two experienced radiologists and additionally with a black-box method which provides an emphysema label without explainable visual signs. We show that neither the radiologists nor the black-box method outperform the proposed explainable model in detecting emphysema. With 3000 studies, this is the largest study to date using the four visual signs to detect emphysema on CXR, and the first to use deep learning for this task, providing radiologically explainable results.
Table 1

The four signs of emphysema as described by Sutinen et al. [18].

Sign: Description
Frontal—Flattening of the diaphragm: Depression and flattening of the diaphragm with blunting of costophrenic angles. The actual level of the diaphragm is not as significant as the contour. The body build of the individual should also be considered. For example, in a short, stocky individual, emphysema might be diagnosable even if the diaphragm were at the level of the tenth rib posteriorly.
Frontal—Irregular radiolucency: Irregular radiolucency of the lung fields. This manifestation is the result of the irregularity in distribution of the emphysematous tissue destruction. It is sometimes more clearly recognizable in laminagrams.
Lateral—Flattening of the diaphragm: Flattening or even concavity of the diaphragmatic contour. A useful index of this change is the presence of a 90-degree or larger sternodiaphragmatic angle. In most patients with emphysema, this junction is more readily seen than in subjects with normal chests.
Lateral—Abnormal retrosternal space: Abnormal retrosternal space. This is defined as a space showing increased radiolucency and measuring 2.5 cm or more from the sternum to the most anterior margin of the ascending aorta.
Additional information: Emphysema is considered to be present if the chest radiographs reveal any two or more of the above criteria. Sometimes it may not be clear whether a particular diaphragmatic contour is flat. A useful way of resolving this in the posteroanterior radiograph is to draw a straight line from the costophrenic junction to the vertebrophrenic junction on each side. If the highest level of the diaphragmatic contour is <1.5 cm above this line, the diaphragm may be recorded as flat. The same dimension can be used in the lateral radiograph, measuring from a line connecting the costophrenic junction posteriorly to the sternophrenic junction anteriorly. Flattening of the diaphragmatic contours with blunting of the costophrenic and sternophrenic angles is seldom, if ever, seen under conditions of acute lung hyperinflation. In addition, areas of irregular radiolucency of the lung fields are absent in such conditions.

The four signs of emphysema exactly as described by Sutinen et al. [18]. If a patient shows two or more of these signs, Sutinen et al. [18] consider the patient emphysema positive.

Materials and methods

Data acquisition

This study was approved by the Institutional Review Board of Radboud University Medical Center (Nijmegen, The Netherlands) (Case number: 2017–3952, Case code: 5PCL). Informed written consent was waived, and data collection and storage were carried out in accordance with local guidelines. The data is made publicly available at https://doi.org/10.5281/zenodo.6373392. For this retrospective study, we collected 281,000 CXR studies from our hospital archive (2006 to 2019). Only those with a frontal (posteroanterior) and a lateral CXR were retained, resulting in 97,000 studies. Studies where the radiology report mentioned ‘emphysema’ without the words ‘interstitial’ or ‘subcutaneous’ (16,000) were selected as potential emphysema studies. The dataset was obtained by randomly selecting 2000 studies from these potential emphysema studies and 1000 studies from the remaining 81,000 studies. This process is illustrated in Fig 2, and patient statistics are provided in Table 2.
Fig 2

Data acquisition diagram.

Data labeling

We created two random groups of 1750 studies with 500 studies common to both. Each of these groups was annotated by a chest radiologist, one with over 30 years of experience (ETS) (R1) and the other with over 7 years (SS) (R2). Fig 2 illustrates the division of the dataset, with the 500 studies annotated by both radiologists forming the test set. Reader studies on grand-challenge.org [28] were used to annotate the images. The radiologists were asked to indicate (yes/no) whether the visual signs of emphysema described by Sutinen et al. [18] were present on the images they viewed. The descriptions of these four signs provided to them are reproduced in Table 1. They worked independently, scored frontal and lateral images separately, and could not link the frontal and lateral image of a subject with each other. Subjects who had ≥2 of the 4 visual signs indicated as present by the radiologist were considered emphysema positive, as proposed by Sutinen et al. [18]. For the cases that were annotated by both radiologists (test set), we report the number of positives and negatives for each radiologist, as well as Cohen’s kappa and the confusion matrices. For the remaining cases (training set), each annotated by a single radiologist, we report the number of positives and negatives.
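As a concrete illustration of this labeling rule and the agreement statistics, a minimal Python sketch is given below; the sign names, the per-study arrays, and the use of scikit-learn are illustrative assumptions rather than the actual annotation tooling.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score, confusion_matrix

    SIGNS = ["frontal_flat_diaphragm", "frontal_irregular_radiolucency",
             "lateral_flat_diaphragm", "lateral_retrosternal_space"]

    def emphysema_positive(annotation):
        # annotation: dict mapping each sign name to a 0/1 (absent/present) label.
        # A study is emphysema positive when >= 2 of the 4 signs are present.
        return int(sum(annotation[s] for s in SIGNS) >= 2)

    # Hypothetical per-study emphysema labels derived for two readers (not real data).
    r1 = np.array([1, 0, 1, 0, 1, 0])
    r2 = np.array([1, 0, 0, 0, 1, 1])
    print(confusion_matrix(r1, r2))
    print("Cohen's kappa:", cohen_kappa_score(r1, r2))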

Emphysema sign detection models

To detect the four visual emphysema signs, we trained two ResNet-18 [29] models, one for the frontal and one for the lateral CXRs. ResNet-18 was chosen for this study because it is one of the most frequently used models for deep learning on CXR [11] and, as a relatively shallow model, it is suitable for training with a smaller dataset. Each model outputs 2 probabilities indicating scores for the two visual signs of emphysema in the input image. To calculate a final emphysema score for the subject, we average the four probabilities from the two models. The construction and combination of these “sign” models are visualized in Fig 3.
Fig 3

Illustration of how the emphysema score is created from the sign models.

For each view, we train a separate model predicting the two relevant sign probabilities. In the end, we average these probabilities to calculate a combined emphysema score. The indicated scores are calculated using the CXRs shown in the figure for this specific subject.

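As a rough sketch of this design (assuming a PyTorch/torchvision implementation, which is not spelled out in the text), the two sign models and the score averaging could look like the following; preprocessing and training code are omitted.

    import torch
    import torchvision

    def make_sign_model():
        # ResNet-18 backbone with a 2-output head, one logit per visual sign.
        model = torchvision.models.resnet18()
        model.fc = torch.nn.Linear(model.fc.in_features, 2)
        return model

    frontal_model = make_sign_model()  # flattened diaphragm, irregular radiolucency
    lateral_model = make_sign_model()  # flattened diaphragm, abnormal retrosternal space

    def emphysema_score(frontal_img, lateral_img):
        # Average the four sign probabilities into a single emphysema score.
        with torch.no_grad():
            p_frontal = torch.sigmoid(frontal_model(frontal_img))  # shape (1, 2)
            p_lateral = torch.sigmoid(lateral_model(lateral_img))  # shape (1, 2)
        return torch.cat([p_frontal, p_lateral], dim=1).mean().item()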

Preprocessing and data augmentation

The preprocessing and data augmentation steps are provided in the Supporting Information—see S1 Table in S1 File. Steps 1 to 4 are applied to all images as preprocessing. The data augmentation steps (5–10) are repeated with randomization every time an image is loaded for training (and omitted during validation and testing). The final steps of histogram equalization and resizing (11–12) are applied to all images loaded. These steps were selected heuristically based on initial experimentation with validation data.
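The concrete operations live in S1 Table; purely as an illustration of the structure described here (fixed preprocessing, augmentation randomized at every training load, then histogram equalization and resizing for all images), a hypothetical torchvision-style pipeline might be organized as follows. The specific augmentation operations and the 512-pixel size are assumptions, not values from S1 Table.

    import torchvision.transforms as T

    # Fixed preprocessing applied to every image (stand-in for steps 1-4).
    preprocess = T.Compose([
        T.Grayscale(num_output_channels=3),
    ])

    # Augmentations re-sampled each time a training image is loaded (steps 5-10);
    # omitted for validation and test images. The operations below are assumed.
    augment = T.Compose([
        T.RandomRotation(degrees=5),
        T.RandomResizedCrop(size=512, scale=(0.9, 1.0)),
    ])

    # Final steps applied to all loaded images (steps 11-12).
    finalize = T.Compose([
        T.RandomEqualize(p=1.0),  # histogram equalization, applied deterministically
        T.Resize((512, 512)),
    ])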

Training

The training settings of the two sign models are provided in the Supporting Information—see S2 Table in S1 File. These were selected heuristically based on experimentation with validation data. We used a multi-stage training procedure, reducing the learning rate at each stage and training until the stopping condition was met. At each stage, we loaded the best model from the previous stage, reduced the learning rate, and commenced retraining. The final models are those that achieved the best validation set losses among all stages. The validation set, consisting of 256 samples, is randomly selected with an equal distribution across labels and annotators and is not used for training.
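A schematic of this multi-stage schedule is sketched below; train_stage and validation_loss are assumed callables standing in for the actual training loop, and the learning rates are placeholders, not the values from S2 Table.

    import copy

    def train_multistage(model, train_stage, validation_loss,
                         learning_rates=(1e-3, 1e-4, 1e-5)):
        # Each stage restarts from the best checkpoint so far, lowers the learning
        # rate, and trains until its stopping condition; the model with the best
        # validation loss across all stages is returned.
        best_model = copy.deepcopy(model)
        best_loss = validation_loss(best_model)
        for lr in learning_rates:
            candidate = copy.deepcopy(best_model)
            train_stage(candidate, lr)
            loss = validation_loss(candidate)
            if loss < best_loss:
                best_model, best_loss = copy.deepcopy(candidate), loss
        return best_model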

Ensembling

Each model is trained 30 times and the resulting probabilities are ensembled using the geometric mean. To increase the variance between models, we used a different held-out validation dataset for each model in the ensemble.
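For reference, geometric-mean ensembling of the per-model probabilities can be written in a few lines of numpy; the array layout is an assumption.

    import numpy as np

    def ensemble_geometric_mean(probs, eps=1e-12):
        # probs: array of shape (n_models, n_cases), e.g. 30 models in this work.
        # Geometric mean across the model axis; eps avoids log(0).
        return np.exp(np.mean(np.log(np.clip(probs, eps, 1.0)), axis=0))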

Black-box model

We trained an ensemble of models that predict emphysema from a frontal and lateral CXR pair without the use of the four visual signs. Similar to the sign models, we used the ResNet-18 architecture with the same training settings and ensembling process described previously. The frontal and lateral CXR models were combined by concatenating the features from their last feature layers to predict emphysema, as shown in Fig 4.
Fig 4

Illustration of the black-box model.

For each view, we define a model with 512 outputs. The outputs from the two models are concatenated and the probability of emphysema is predicted from this final layer. The indicated score is calculated using the CXRs shown in the figure for this specific subject.

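Under the same PyTorch assumptions as the sign-model sketch above, this feature-concatenation design could be expressed as follows; the 512-dimensional features correspond to the ResNet-18 penultimate layer.

    import torch
    import torchvision

    class BlackBoxEmphysemaModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            # One ResNet-18 feature extractor per view, classifier layer removed.
            self.frontal = torchvision.models.resnet18()
            self.lateral = torchvision.models.resnet18()
            self.frontal.fc = torch.nn.Identity()
            self.lateral.fc = torch.nn.Identity()
            # Single emphysema logit from the concatenated 1024-d feature vector.
            self.head = torch.nn.Linear(512 + 512, 1)

        def forward(self, frontal_img, lateral_img):
            features = torch.cat([self.frontal(frontal_img),
                                  self.lateral(lateral_img)], dim=1)
            return torch.sigmoid(self.head(features))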

Model comparison to radiologists

We evaluate the performance of our models using each radiologist separately as the reference standard. Model performance is additionally evaluated on the subset of cases where both radiologists agree (both indicate <2 signs present or both indicate ≥2 signs present for emphysema positivity). Using the annotations of a single radiologist as the reference standard, the sensitivity and specificity of the other radiologist are calculated. This calculation is repeated for each sign and for emphysema detection by the sign models and by the black-box model. To compare the performance of the radiologist with the models, each model was fixed at the sensitivity of the compared radiologist and the McNemar test [30] was applied to determine a p-value for the performance difference. Statistical significance is inferred if p < 0.05. ROC curves with 95% confidence intervals and the radiologist's sensitivity-specificity point with error bars are used for comparison. This analysis is repeated using the second radiologist as the reference standard, in order to illustrate any model biases that may have been introduced during the annotation and training processes.
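One way to implement this comparison is sketched below (Python, numpy arrays): the model threshold is chosen so that its sensitivity matches the radiologist's, and McNemar's test is then applied to the paired correct/incorrect outcomes on the negative cases, i.e. a specificity comparison. The exact test variant and pairing used in the paper are not specified, so treat this as an assumption-laden illustration.

    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    def compare_specificity(y_ref, y_rad, model_scores):
        # y_ref: reference-standard labels (0/1), y_rad: other radiologist's labels,
        # model_scores: continuous emphysema scores from the model.
        pos_scores = np.sort(model_scores[y_ref == 1])[::-1]
        n_true_pos = int(y_rad[y_ref == 1].sum())        # radiologist true positives
        threshold = pos_scores[max(n_true_pos - 1, 0)]   # matches that sensitivity
        y_model = (model_scores >= threshold).astype(int)

        negatives = y_ref == 0
        model_correct = y_model[negatives] == 0
        rad_correct = y_rad[negatives] == 0
        table = [[np.sum(model_correct & rad_correct), np.sum(model_correct & ~rad_correct)],
                 [np.sum(~model_correct & rad_correct), np.sum(~model_correct & ~rad_correct)]]
        return mcnemar(table, exact=True).pvalue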

Model comparison to black-box model

We use DeLong’s test [31] to evaluate the significance of the performance difference between the black-box model and the emphysema signs model for detecting emphysema. The ROC curves' 95% confidence intervals, obtained using bootstrapping, are also provided for comparison. We report the significance of the performance difference for each experiment. As previously, results are provided using each radiologist separately as the reference standard, as well as using only the cases where both radiologists agreed (both indicate <2 signs present or both indicate ≥2 signs present for emphysema positivity).
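The bootstrapped confidence intervals can be reproduced with a simple percentile bootstrap like the one below (numpy arrays and scikit-learn's roc_auc_score; the number of resamples is an assumption). DeLong's test itself is not reimplemented here.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
        # Percentile bootstrap of the AUC: resample cases with replacement,
        # recompute the AUC, and take the (alpha/2, 1 - alpha/2) percentiles.
        rng = np.random.default_rng(seed)
        n, aucs = len(y_true), []
        while len(aucs) < n_boot:
            idx = rng.integers(0, n, n)
            if len(np.unique(y_true[idx])) < 2:   # resample must contain both classes
                continue
            aucs.append(roc_auc_score(y_true[idx], scores[idx]))
        lower, upper = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return roc_auc_score(y_true, scores), (lower, upper)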

Results

Statistics about the 3000 studies are provided in Table 2. During the annotation process, 12 studies were removed because one or both DICOM files did not contain valid images, and 96 were removed because either the frontal or the lateral image was excluded by a radiologist. Exclusion reasons included issues that preclude assessment of the four emphysema signs, such as the diaphragmatic contours not being captured in the chest radiograph, conditions that hide the signs, such as pleural fluid obscuring the diaphragmatic contours, or major anatomical changes such as lobectomy.
Table 2

Patient statistics of the dataset of 3,000 initially selected studies.

Gender    Patient count    Patient age       Age range
Female    1268             65.45 ± 12.58     [22, 99]
Male      1732             65.53 ± 12.23     [20, 96]
After exclusions, 2882 studies remained, of which 2418 were annotated by one of the two radiologists (training set), and the remaining 464 (test set) were annotated by both radiologists. Of these 464 studies, 42 were removed to ensure that there was no patient overlap between the training and test datasets, leaving 422 studies that were set aside as the held-out test set. The number of positive and negative annotations per dataset is provided in Table 3. There were 370 cases (53 positive, 317 negative) in the test set where the radiologists agreed on the emphysema label (both indicate <2 signs present or both indicate ≥2 signs present).
Table 3

Annotation results for the training and test datasets.

Training dataset

Patient statistics
                                         Male            Female
Count                                    1384            1032
Age                                      65.7 ± 12.6     65.7 ± 12.2
Age range                                [24, 99]        [20, 95]

Annotation results
                                         Positive        Negative
Frontal—Flattening of the diaphragm      329             2089
Frontal—Irregular radiolucency           318             2100
Lateral—Flattening of the diaphragm      506             1912
Lateral—Abnormal retrosternal space      485             1933
Emphysema (≥2 signs)                     425             1993

Test dataset

Patient statistics
                                         Male            Female
Count                                    260             162
Age                                      64.4 ± 12.6     64.5 ± 12.0
Age range                                [26, 96]        [37, 93]

Test dataset—R1 annotation results
                                         Positive        Negative
Frontal—Flattening of the diaphragm      40              382
Frontal—Irregular radiolucency           50              372
Lateral—Flattening of the diaphragm      75              347
Lateral—Abnormal retrosternal space      92              330
Emphysema (≥2 signs)                     68              354

Test dataset—R2 annotation results
                                         Positive        Negative
Frontal—Flattening of the diaphragm      75              347
Frontal—Irregular radiolucency           63              359
Lateral—Flattening of the diaphragm      88              334
Lateral—Abnormal retrosternal space      95              327
Emphysema (≥2 signs)                     90              332

Note that the test set was annotated by both radiologists independently, while the training set was split between them.

Cohen’s kappa scores and confusion matrices for the radiologist agreement on the test dataset are provided in Table 4. All kappa values lie between 0.503 and 0.672, indicating moderate to substantial agreement between the radiologists on all tasks.
Table 4

Inter-observer variability.

Sign (with Cohen's kappa) and confusion matrix (rows: R1, columns: R2)

Frontal—Flattening of the diaphragm (kappa: 0.503)
              R2 Neg    R2 Pos
    R1 Neg    340       42
    R1 Pos    7         33
Frontal—Irregular radiolucency (kappa: 0.558)
              R2 Neg    R2 Pos
    R1 Neg    343       29
    R1 Pos    16        34
Lateral—Flattening of the diaphragm (kappa: 0.654)
              R2 Neg    R2 Pos
    R1 Neg    316       31
    R1 Pos    18        57
Lateral—Abnormal retrosternal space (kappa: 0.672)
              R2 Neg    R2 Pos
    R1 Neg    305       25
    R1 Pos    22        70
Emphysema positive (≥2 positive signs) (kappa: 0.596)
              R2 Neg    R2 Pos
    R1 Neg    317       37
    R1 Pos    15        53

Inter-observer variability and confusion matrices of the radiologist annotations for the 4 signs and the emphysema positive result (≥2 positive signs) on 422 cases.


Model performance

The performance of the four individual sign models is presented in terms of AUC in Table 5, and ROC curves are provided in S3-S5 Figs in S1 File. For brevity, the remaining text in this section discusses only the performance of the combined signs model and the black-box model.
Table 5

Comparison of the radiologists and the models.

Task                                     R sens.    R spec.    Model spec. at R sens.    p-value    Model AUC

Reference standard R1, compared to R2
Frontal—Irregular radiolucency           0.680      0.922      0.830                     <0.001     0.855
Frontal—Flattening of the diaphragm      0.825      0.890      0.939                     0.004      0.949
Lateral—Abnormal retrosternal space      0.760      0.924      0.845                     0.001      0.893
Lateral—Flattening of the diaphragm      0.760      0.910      0.927                     0.470      0.948
Sign models—Emphysema positive           0.779      0.895      0.901                     0.880      0.924
Black-box—Emphysema positive             0.779      0.895      0.870                     0.262      0.915

Reference standard R2, compared to R1
Frontal—Irregular radiolucency           0.540      0.955      0.894                     0.006      0.818
Frontal—Flattening of the diaphragm      0.440      0.980      0.983                     1.000      0.955
Lateral—Abnormal retrosternal space      0.737      0.933      0.917                     0.635      0.923
Lateral—Flattening of the diaphragm      0.648      0.946      0.955                     0.761      0.931
Sign models—Emphysema positive           0.589      0.955      0.985                     0.143      0.946
Black-box—Emphysema positive             0.589      0.955      0.979                     0.291      0.935

Cases for which R1 and R2 agree (Model AUC only)
Frontal—Irregular radiolucency           0.894
Frontal—Flattening of the diaphragm      0.987
Lateral—Abnormal retrosternal space      0.941
Lateral—Flattening of the diaphragm      0.975
Sign models—Emphysema positive           0.981
Black-box—Emphysema positive             0.972

Each radiologist is taken as the reference standard, and the performance of the other radiologist and of the model are evaluated. The p-values are obtained using the McNemar test [30]. Radiologist sensitivity (R sens.), radiologist specificity (R spec.), the model specificity at radiologist sensitivity (Model spec. at R sens.), and the model AUC are provided; p-values below 0.05 indicate a significant specificity difference for the given comparison. The last section of the table provides AUC values for the models evaluated on the subset of cases where R1 and R2 agreed on emphysema positivity.

Model comparison to radiologists

In Fig 5 the ROC curves for the combined signs model and the black-box model are shown using the reference standard from each individual radiologist and on the subset of data for which both radiologists agreed. The signs model achieves an AUC of 0.924 or 0.946, depending on the radiologist chosen as the reference standard, and 0.981 on the cases where the radiologists agree. Similarly, the black-box model has AUCs of 0.915 and 0.935 against the reference standard from single radiologists, and 0.972 on the agreement cases. The performance of these models is also provided in Table 5 along with p-values for comparison. In the comparison with a second-reader radiologist, we find that neither the signs model nor the black-box model has a performance that is significantly different from an expert radiological reader (Table 5). R2 achieves a sensitivity of 0.779 and specificity of 0.895 (with R1 as the reference standard), while the signs model and the black-box model at the same sensitivity have specificities of 0.901 (p = 0.880) and 0.870 (p = 0.262), respectively. Similarly, using R2 as the reference standard, R1 obtains a sensitivity of 0.589 with a specificity of 0.955. At this sensitivity level the signs and black-box models have specificities of 0.985 (p = 0.143) and 0.979 (p = 0.291), respectively.
Fig 5

ROC curve comparison of the combined emphysema signs model (detecting emphysema as ≥2 signs present), the black-box emphysema model, and the radiologist's sensitivity-specificity point.

ROC curves are drawn for two different reference standards, R1 (left) and R2 (centre), and finally for only those cases where the radiologists agreed on the emphysema label (right). The 95% confidence intervals and error bars are calculated by bootstrapping.

Model comparison to black-box model

For direct comparison of the signs model and the black-box model, the ROC curves with 95% confidence intervals and AUC values are shown in Fig 5. DeLong’s p-values are calculated to compare them. When R1 annotations are taken as the reference standard, there is no significant difference between the AUC values of the signs model and the black-box model (AUCs of 0.924 and 0.915, respectively) with p = 0.408. Similarly, when R2 annotations are taken as the reference standard, there is no significant performance difference (AUC of 0.946 for the signs model and 0.935 for the black-box model) with a p-value of 0.345. Performance on the subset of cases where the radiologists agreed also shows no significant difference between the emphysema signs model (0.981 AUC) and the black-box emphysema model (0.972 AUC) with p = 0.289.

Discussion

In this retrospective study, we used deep learning on frontal and lateral chest radiographs to detect emphysema using 4 explainable visual signs. Our proposed method, based on these 4 signs of emphysema, performs at the same level as a radiologist (p = 0.880 against R2 and p = 0.143 against R1) in detecting emphysema on CXRs and achieves an AUC of 0.924 or 0.946 against R1 or R2 respectively, or 0.981 on the subset of cases where R1 and R2 agree. We additionally compared our method to a black-box model that did not use explainable visual signs to detect emphysema. Against R1 and R2 this model achieved AUCs of 0.915 and 0.935, while an AUC of 0.972 was obtained on cases where the radiologists agreed. No significant difference was found between the performance of the black-box model and our signs model, which has the substantial advantage of providing explainable radiological information.

Emphysema is a condition associated with COPD, which affects 4.6% of the US population [1]. It is relatively difficult to diagnose emphysema conclusively on CXR imaging, as evidenced by the moderate kappa scores of the two expert observers involved in this work (Cohen’s kappa = 0.596 for ≥2 signs present). Previous studies have shown varying sensitivities for the detection of emphysema on CXR: Sanders et al. [32] report a sensitivity of 0.80, while Thurlbeck and Simon [33] report a sensitivity as low as 0.24. In this work, radiologist R2 had a sensitivity of 0.779 compared to R1. Despite the difficulty of the task, the radiologist is frequently required to identify signs of emphysema on CXR for subjects with suggestive symptoms and history. It is therefore important to be able to consistently and accurately identify such signs to direct patient care appropriately. To our knowledge, this work presents the first deep-learning system focused on emphysema detection in CXR, and is one of very few deep learning systems focusing on explainability of findings in medical image analysis tasks.

The automated diagnosis of emphysema on CXR has received relatively little attention to date. In early work, Coppini et al. [3] used lung boundaries drawn by a physician and specified hand-crafted shape features of these lung boundaries on frontal and lateral CXR. They fed these descriptors into various shallow neural networks to detect emphysema and used the 4 signs from Sutinen et al. [18] to label their dataset. Their study used a dataset of just 320 studies with 60 emphysema positives and obtained an accuracy of 0.90 using 10-fold cross-validation. In their follow-up work, Miniati et al. [5] similarly collected frontal and lateral lung segmentations of 225 studies from a physician. They had 92 emphysema subjects and split their dataset into training (118) and validation (107) sets. This work used CT-confirmed emphysema labels and, again, hand-crafted features that describe the lung shapes. Using a shallow neural network to obtain an emphysema classification, they achieved an AUC of 0.955. In another follow-up study, Coppini et al. [4] automated the segmentation of lung boundaries and achieved an AUC of 0.954 for the same task on the same dataset. These works use small datasets and appear to optimize their experimental results on the test sets, meaning that they are unlikely to generalize to large unseen datasets.

More recently, Campo et al. [7] simulated chest radiographs from CT scans. From these CT scans, they automatically computed the percentage of low-attenuation lung areas (%LLA, Müller et al. [34]) to determine the ratio of emphysematous tissue volume. Using this reference standard, they experimented with various %LLA thresholds to define emphysema and trained CNNs consisting of 11 layers (4 convolutional). Using a 10% LLA threshold, on a dataset of 2666 training and 4671 test samples, they achieved an AUC of 0.907 in classifying emphysema in simulated CXRs. This work demonstrates the potential to detect emphysema automatically on CXR, but it does not use any real CXR images and the results cannot be considered generalizable to that domain. Li et al. [16] had 10,738 studies annotated by 3 radiologists for the 14 disease labels of ChestXray14 and reported an AUC of 0.943 in predicting emphysema. One drawback of this approach is that requiring the agreement of 3 radiologists is likely to favor more severe cases of emphysema as positives.

Many studies use the ChestXray14 dataset [10], which contains an emphysema label obtained by automatically parsing radiology reports. One recent example, Wang et al. (2021) [14], proposed an extension of DenseNet121 [15] using various attention modules and achieved an AUC of 0.933 in detecting emphysema, which is comparable to our method. However, we note that the emphysema labels in this dataset are unreliable: in our previous work [12] we found that 39 of 90 randomly selected emphysema samples had incorrect labels. Oakden-Rayner [13] demonstrated the same issue, finding that 86% of visually examined emphysema cases from that dataset were, in fact, cases of subcutaneous emphysema rather than pulmonary emphysema. This issue casts doubt on the emphysema classification performance of the many similar studies that use the ChestXray14 dataset labels for evaluation [11], and so we omit any direct comparison with these works.

The work of Miniati et al. [19] demonstrated that the number of positive signs on the CXR image correlated with the severity of emphysema on CT. While we are unable to demonstrate such a finding without reference-standard severity scores, it seems likely that the number of positive signs, or indeed the scores assigned by the deep learning systems, would correlate with disease severity. This is an interesting avenue for future research.

One limitation of this work is the lack of a reference standard for the emphysema label based on CT imaging or a confirmed emphysema diagnosis. The models presented here are trained only to emulate the performance of a radiologist identifying emphysema on a chest X-ray, and are evaluated in that context also. This does not provide any indication of how well CXR-based analysis compares with more accurate reference standards such as quantitative CT and/or clinical diagnosis. Future studies should endeavor to obtain such data for a more accurate analysis of performance. The inclusion of additional expert reader opinions and data from a different institution would also improve the analysis in future work. Finally, a more systematic search of the parameter space may identify improved settings for the deep learning systems used in this work.

Conclusion

This work presents the first fully automatic and explainable deep-learning system for the detection of emphysema on CXR. Using a large and manually-labeled dataset with held-out test data from 422 studies, we demonstrate that the proposed method has a performance equivalent to an expert radiologist and to a black-box system that provides no explainable features. This work demonstrates the feasibility of providing explainable features through deep-learning systems as well as a potentially useful tool for emphysema detection.

Supporting information file.

Collection of all supporting tables and figures, including S1 and S2 Tables and S3-S5 Figs. (PDF)
PONE-D-21-36550
Explainable emphysema detection on chest radiographs with deep learning
PLOS ONE Dear Dr. Çallı, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
Please revise your article according to each of the suggestions given by the reviewers especially in the design of experiments. 
Please submit your revised manuscript by Mar 25 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Yan Chai Hum Academic Editor PLOS ONE Journal Requirements: 1. When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at 3.  We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide. 4. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. 5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. 
Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Yes Reviewer #3: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: No Reviewer #2: Yes Reviewer #3: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: In this research, authors apply DL algos on emphysema detection. The research is interesting, however, the proposed methodology existed in the literature and thus the novelty of the research is a concern to be published in PLOS ONE. So I suggest to give the author a chance to submit after doing additional research to improve the contributions of the research work. Authors may also consider the following comments for the revision work: 1. For the literature review, authors should refer to more recent research, i.e. year 2020-2021. Currently there is no references for year 2021. Authors need to refer to more ISI/Scopus research work instead of the online/conference resources. Currently, there is only 21 references which can be still improve for a ISI level research paper. 2. Authors need to identify the research gap and highlight the contribution(s) of the research work in the manuscript to shows the novelty of the research. 3. Justification is needed for all the selected parameters, i.e. why the technique is being selected instead of other existing techniques? 4. More scientific reasoning should be added in the experimental results' explanations. 5. The format of the manuscript needs to be improved. Some sections need to be combined and restructured. 6. The results of the proposed methods should be compare with the state-of-the-arts methods in order to shows the novelty / contributions of the research work. 7. Only a self-annotated dataset is not convincing enough. Authors should include an open source dataset to prove that the model is general enough and the results and discussion would then be more convincing. 
Reviewer #2: Summary The authors propose to diagnose lung emphysema from a pair of frontal-lateral chest x-rays. The main contribution is in departing from direct classification of the image pair in favor of detecting the presence of four radiological markers, and deriving an "emphysema score" from marker predictions. The experiments demonstrate the feasibility of this approach, but the difference between a direct "black box" approach and the proposed method is not statistically significant. Minor weakness The design of the experimental evaluation makes it very difficult to interpret the results. Each of the test radiographs received annotations from two radiologists in terms of presence of four markers. For three out of these four markers, the doctors disagreed almost as often as (or more often than) they agreed that the marker is present (Tab. 4). The authors trained a deep net on equal number of annotations from both radiologists, but computed test scores using annotations from one of them only. I find it difficult to interpret these scores. More precisely, a hypothetical upper bound on performance of a system trained and tested on annotations from the same distribution is determined by the variance of the annotations. In case of noise-free annotations, a "perfectly accurate" system could operate with precision and recall =1. But labels produced by two doctors likely come from two different distributions (as suggested by Tab 4). A "perfectly accurate" deep network, trained on such mixture of labels, would learn to fit the mixture of the two distributions. When evaluated against labels originating from one of the distributions, even the "perfectly accurate" network should not be expected to attain maximum test scores. This is exactly the case of the presented experiments. I am unable to interpret the "specificity" of a network trained to fit the mixture of two distributions in reproducing samples of one of its components (Tab. 5). Moreover, the comparison to the other doctor (R2) cannot be interpreted in terms of "comparison to human performance", as implied by the authors. Such a claim would only make sense if both the network and the doctor were tasked with predicting results of some objective test, like the lung function test, or the clinical outcome. If the system was trained on annotations performed by the first doctor (R1), then the authors could at least compare the disagreement between the system and R1 to the disagreement between R1 and R2. In the current setup the numerical results are difficult to interpret, because the system is not trained to agree with R1, but to interpolate between R1 and R2. For example, it is not clear what it means that the proposed method is "worse than R2" on detecting two of the markers (Tab. 5). How far is it from the mixture of the distributions of R1 and R2, that it was trained to approximate? To address this difficulty, I suggest that the authors extend the experimental evaluation, and the associated description, in one of two ways: 1. Either add an evaluation limited to the test scans on which the two radiologists agreed, 2. or train the network on annotations of one of the radiologists, then compare the disagreement between that radiologist and the system to the disagreement between the two radiologists. Justification of the rating The work appears to be methodologically correct. 
I suggest extending the description of the experimental evaluation according to my detailed comment above, to facilitate interpretation of numerical performance of the proposed system. It should be straightforward for the authors to add this additional result to the final version of the manuscript, which does not require further review. Editorial comment Please use consistent terminology across the manuscript: either "test set" or "evaluation set". Around lines 84 and 89, please state explicitly that the training set is split into the "actual" training set and a small validation set. Reviewer #3: In this study a deep learning system is proposed to detect emphysema on chest radiographs. It is shown that the proposed method is able to predict emphysema positivity with a performance that is comparable to that of a radiologist. The paper is well written and there are only minor issues to be addressed so that it could be recommended for publication. My only main critical note to the study is an issue that the authors themselves hint to in the Discussion section: the labeling of the 4 visual signs related to emphysema by a single radiologist are taken as ground truth and this can potentially cause some bias to the models introduced and the results obtained with them. The confusion matrices of radiologist annotations (Table 4) and kappa values indicate that there are a significant number of cases, where the two radiologist do not agree on positivity compared to the number of those where both of them signaled positivity for a given visual sign. There are a couple of questions here that in my opinion the authors would need to address: - What is the confusion matrix for overall emphysema positivity (taken after the rule of at least 2 positive signs)? This has been not indicated in the paper. - Given the moderate disagreement between the radiologists, why only R1-annotated radiographs were taken as ground truth and no evaluation has been done considering R2-annotated radiographs as ground truth and/or taking only those graphs as such where there were an agreement between R1 and R2? Such an analysis would shed more light to any potential bias present in the models. - What about availability of actually diagnosed emphysema with corresponding radiographs? The only truly unbiased assessment can be made only in the case when ground truth for diagnosis is taken independently from the visual signs on radiographs. Other minor points: - Fig. 3 and 4: In the captions please indicate that the displayed probabilities correspond to a specific study case. - Table 4: In the upper row the legend of 'R1 Neg' is missing to the confusion matrix ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No Reviewer #3: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] 
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 20 Mar 2022 Dear Editor, Thank you for taking the time to arrange the review of our manuscript. In the following document, we address all the reviewer comments. Thanks to the feedback, we are confident that our manuscript has improved. For each reviewer, we define a section (such as Reviewer #1) and for each comment of each reviewer, we define a subsection (such as R1-C1 for the first comment of the first reviewer). We define a feedback subsection for not actionable feedback (such as R1-Feedback). Under each of these subsections, we provide the reviewer's comment as well as our response. We are looking forward to hearing your decision. Kind regards, Erdi Calli Reviewer #1 R1-Feedback Reviewer: In this research, authors apply DL algos on emphysema detection. The research is interesting, however, the proposed methodology existed in the literature and thus the novelty of the research is a concern to be published in PLOS ONE. So I suggest giving the author a chance to submit after doing additional research to improve the contributions of the research work. Authors may also consider the following comments for the revision work: Response from authors: Dear Reviewer, thank you very much for your insights and comments on our research. Following your suggestions, we have greatly changed our study. Please see the 7 responses that we provided corresponding to the 7 comments that you have written. R1-C1 Reviewer: For the literature review, authors should refer to more recent research, i.e. year 2020-2021. Currently, there are no references for the year 2021. Authors need to refer to more ISI/Scopus research work instead of the online/conference resources. Currently, there are only 21 references that can be still improved for an ISI-level research paper. Response from authors: Based on this feedback we performed a search on the peer-reviewed publications using the following query (chest and (“x-ray” or “radiograph”) and emphysema and (“machine learning” or “deep learning” or “artificial intelligence”). This search led us to add 3 new studies into our publication that have either been published after we submitted the paper, or published after we conducted our initial search. Unfortunately, emphysema detection with deep learning is a field that has not yet attracted a lot of attention. Also, the publicly available label data for this disease is extremely noisy as we explain in lines 271-282. These properties make it difficult to find studies for comparison since the emphysema labels used in most works are extremely unreliable. We mentioned these details in our introduction and discussion in detail (lines 27-40, 271-282). We have also noted other locations throughout the text where citations could be included to improve the manuscript during the review process and have added these increasing the total number of references from 21 to 36. 
In the following list, you can see which reference is added as a response to which comment (such as R1-C1 for the current comment), the authors with the year (as Huang et al. (2017)), followed by the lines in which this citation is used. R1-C6: Huang et al. (2017), 31-35, 272-276 R1-C2: Kashyap et al. (2020), 50-59 R1-C2: Lee et al. (2021), 50-59 R1-C1: Li et al. (2021), 35-37, 267-270 R1-C1: Lin et al. (2020), 37-40 R1-C2: Nguyen et al. (2019), 50-59 R1-C2: Pasa et al. (2019), 50-59 R1-C1: Pu et al. (2021), 21-24 R1-C2: Rajpurkar et al. (2017), 50-59 R1-C2: Ras et al. (2021), 50-59 R1-C2: Sogancioglu et al. (2020), 50-59 R1-C1: Urban et al. (2022), 24-26 R1-C1: Wanchaitanawong et al. (2021),15-17 R1-C6: Wang et al. (2021), 31-34, 272-276 R1-C2: Xie et al. (2020), 50-59 R1-C2 Reviewer: Authors need to identify the research gap and highlight the contribution(s) of the research work in the manuscript to show the novelty of the research. Response from authors: We agree that it is important to further highlight our contribution with this work. To identify the research gap, highlight our contribution, and provide the novelty of our application, we have added a paragraph in the introductions (lines 50-59). We mentioned the need for an explainable system that can be integrated into the radiology workflow and that is easily interpreted by the users in this domain. R1-C3 Reviewer: Justification is needed for all the selected parameters, i.e. why the technique is being selected instead of other existing techniques? Response from authors: We thank the reviewer for this suggestion. The existing text mentions that our training parameters were selected based on initial experimentation on validation data. We have added a further explanation on why we choose Resnet18 as the most optimal model and additionally explained that other parameters were chosen heuristically based on initial experiments.(lines 105-107, 117-118). R1-C4 Reviewer: More scientific reasoning should be added in the experimental results' explanations. Response from authors: We have expanded our discussion section with a more thorough explanation of results, referring to the p-values for scientific reasoning. (lines 221-228). R1-C5 Reviewer: The format of the manuscript needs to be improved. Some sections need to be combined and restructured. Response from authors: We identified some restructuring steps that would improve the format of the manuscript and we thank the reviewer for this suggestion. We have divided the methods section into more subsections and added the “Model comparison to radiologists” and “Model comparison to black-box model” subsections. Each subsection explains one portion of the work. These subsections are reflected with the same titles in the Results section. Please see lines 139-163, and 186-217 for these changes. We also tried to increase consistency in various places such as figures, figure captions and improved the text in various places. Since these changes are widespread throughout the manuscript we do not include every change in this document as it is not straightforward to follow them in this way. R1-C6 Reviewer: The results of the proposed methods should be compared with the state-of-the-art methods in order to show the novelty/contributions of the research work. 
Response from authors: We agree with the reviewer that comparison with the state of the art is very important. While we have done our best to draw comparisons, this is difficult given that no other work to date has used the same set of data or aimed at precisely the same task, and that many works use public datasets with inaccurate labels. In our Discussion section, we mention various related studies and compare the task, AUC, and data used (see lines 243-282). We have added an extra comparison based on the literature findings of R1-C1 and compared our method to Li et al. (2021) (lines 267-270). We have additionally added specific mention of a recent study which, to our knowledge, has achieved one of the best results so far: "Triple attention learning for classification of 14 thoracic diseases using chest radiography", Wang et al. (2021). We mention the result of this study (AUC 0.933 for emphysema detection) in our discussion and describe how it compares to our method (lines 272-274). However, we must observe, as with most previous works, that this study uses public data with automatically extracted emphysema labels which cannot be considered reliable. We detail this in the Discussion section (lines 275-282).

R1-C7

Reviewer: Only a self-annotated dataset is not convincing enough. Authors should include an open-source dataset to prove that the model is general enough and the results and discussion would then be more convincing.

Response from authors: We appreciate this suggestion from the reviewer and we certainly agree that an additional dataset from an external institution or an open dataset would be a great contribution to the paper. This was discussed during the work for this study, but unfortunately we came to the conclusion that this data should be prepared for a follow-up study because of the difficulties of collecting such a dataset. To do this task correctly, we need to collect a hand-labeled dataset, which requires a great deal of time from one or more radiologists. This is necessary because it is known that emphysema labels in the public CXR datasets (which are extracted automatically from radiology reports and contain many inaccuracies) are unsuitable for immediate use (see lines 27-31, 275-282 of our introduction and discussion). To obtain a meaningful number of positive samples, a large amount of data has to be reviewed and labeled by radiologists, and unfortunately it is very difficult for our radiologists to spare time for these tasks. Labeling the 3000 images included in this study took approximately one year, since the readers were frequently unable to attend to the labeling due to their workload. Radiologist time is very costly and scarce, but we believe it is essential for the creation of a high-quality dataset. Furthermore, we note that most public CXR datasets do not include lateral images, which are required for this work, and correctly extracting and checking frontal and lateral pairs and cleaning the dataset prior to labeling is also a substantial amount of work. We regret that we are unable to include additional datasets at this time, but we have identified this suggestion of the reviewer as a priority task for future development and validation of our method. We note that this is also described in the limitations of the study (lines 295-297) as shown below.

R1-Summary

Authors: Again, thanks a lot for your well-thought-out and detailed instructions on the necessary changes. We believe that answering your comments improved our paper.
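As a rough illustration of the model choice discussed in R1-C3, the sketch below shows how a ResNet18 backbone can be adapted to predict the four visual signs of emphysema. This is a minimal, hypothetical example assuming a PyTorch/torchvision setup with grayscale radiographs replicated to three channels; it is not the implementation used in the study.

```python
import torch
import torch.nn as nn
import torchvision.models as models

N_SIGNS = 4  # the four visual signs of emphysema predicted per radiograph

# ResNet18 backbone with its final fully connected layer replaced so that it
# outputs one logit per visual sign (multi-label setting).
net = models.resnet18()
net.fc = nn.Linear(net.fc.in_features, N_SIGNS)

# Dummy batch standing in for preprocessed radiographs (assumption: grayscale
# images replicated to 3 channels and resized to 224x224).
x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    sign_probs = torch.sigmoid(net(x))  # per-sign probabilities, shape (2, 4)
print(sign_probs)
```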
Reviewer #2

R2-Feedback

Reviewer: The authors propose to diagnose lung emphysema from a pair of frontal-lateral chest x-rays. The main contribution is in departing from direct classification of the image pair in favor of detecting the presence of four radiological markers, and deriving an "emphysema score" from marker predictions. The experiments demonstrate the feasibility of this approach, but the difference between a direct "black box" approach and the proposed method is not statistically significant.

Response from authors: Dear reviewer, thank you very much for reading our manuscript and providing valuable feedback.

R2-C1

Reviewer: Minor weakness. The design of the experimental evaluation makes it very difficult to interpret the results. Each of the test radiographs received annotations from two radiologists in terms of the presence of four markers. For three out of these four markers, the doctors disagreed almost as often as (or more often than) they agreed that the marker is present (Tab. 4). The authors trained a deep net on an equal number of annotations from both radiologists, but computed test scores using annotations from one of them only. I find it difficult to interpret these scores.

More precisely, a hypothetical upper bound on the performance of a system trained and tested on annotations from the same distribution is determined by the variance of the annotations. In the case of noise-free annotations, a "perfectly accurate" system could operate with precision and recall = 1. But labels produced by two doctors likely come from two different distributions (as suggested by Tab. 4). A "perfectly accurate" deep network, trained on such a mixture of labels, would learn to fit the mixture of the two distributions. When evaluated against labels originating from one of the distributions, even the "perfectly accurate" network should not be expected to attain maximum test scores. This is exactly the case of the presented experiments. I am unable to interpret the "specificity" of a network trained to fit the mixture of two distributions in reproducing samples of one of its components (Tab. 5). Moreover, the comparison to the other doctor (R2) cannot be interpreted in terms of "comparison to human performance", as implied by the authors. Such a claim would only make sense if both the network and the doctor were tasked with predicting the results of some objective test, like the lung function test, or the clinical outcome. If the system was trained on annotations performed by the first doctor (R1), then the authors could at least compare the disagreement between the system and R1 to the disagreement between R1 and R2. In the current setup the numerical results are difficult to interpret because the system is not trained to agree with R1, but to interpolate between R1 and R2. For example, it is not clear what it means that the proposed method is "worse than R2" on detecting two of the markers (Tab. 5). How far is it from the mixture of the distributions of R1 and R2 that it was trained to approximate?

To address this difficulty, I suggest that the authors extend the experimental evaluation, and the associated description, in one of two ways: either add an evaluation limited to the test scans on which the two radiologists agreed, or train the network on annotations of one of the radiologists, then compare the disagreement between that radiologist and the system to the disagreement between the two radiologists.

Response from authors: Thank you very much for your detailed explanation.
We understand and agree with the difficulty of interpreting the results. These two are excellent suggestions. Doing the second item would halve the number of training samples and have a large impact on the results, so we opted for the first item. Based on your feedback, we have extended our experiment design and included two new sets of results. Firstly, we included the results where R2 is taken as the ground truth. This change can be seen in Table 5, Figure 5, and Supplementary information S4 Fig, as well as S5 Fig. The need for extra ROC curves led us to move some figures into the supplementary materials for better readability and to add the AUC values for those in Table 5 for completeness within the main text. Secondly, we included results for the agreed-upon annotations (subjects where both radiologists agreed on the emphysema diagnosis) to give a sense of the performance of the method. (A brief illustrative sketch of this evaluation appears at the end of this reviewer section.)

R2-Feedback

Reviewer: Justification of the rating. The work appears to be methodologically correct. I suggest extending the description of the experimental evaluation according to my detailed comment above, to facilitate interpretation of numerical performance of the proposed system. It should be straightforward for the authors to add this additional result to the final version of the manuscript, which does not require further review.

Response from authors: Thank you very much for sharing your observations and very detailed feedback.

R2-Q2

Reviewer: Editorial comment. Please use consistent terminology across the manuscript: either "test set" or "evaluation set".

Response from authors: We have made the necessary changes throughout the manuscript for consistency.

R2-Q3

Reviewer: Around lines 84 and 89, please state explicitly that the training set is split into the "actual" training set and a small validation set.

Response from authors: Our experiment design did not include a pre-specified held-out validation dataset. However, every time we trained our models, we made sure to hold out a random part of the training dataset for validation. This was mentioned in the Ensembling section. We moved this definition under the training section (lines 126-127) and added a mention of the held-out split in line 127.

R2-Summary

Authors: We would like to thank you again for your detailed feedback. We think that the manuscript has improved greatly based on it.
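The evaluation extension described in R2-C1 (reporting results against each radiologist's reference standard separately and against the subset of cases on which the two readers agree) can be pictured with the following sketch. It uses randomly generated placeholder arrays rather than the study data, and is only meant to illustrate the protocol.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder inputs (not the study data): one emphysema score per test case
# and a binary emphysema label from each of the two radiologists.
rng = np.random.default_rng(0)
n_cases = 422
scores = rng.random(n_cases)             # model emphysema scores in [0, 1]
labels_r1 = rng.integers(0, 2, n_cases)  # reference standard from radiologist 1
labels_r2 = rng.integers(0, 2, n_cases)  # reference standard from radiologist 2

# AUC against each radiologist's reference standard separately.
auc_r1 = roc_auc_score(labels_r1, scores)
auc_r2 = roc_auc_score(labels_r2, scores)

# AUC restricted to the cases on which the two radiologists agree.
agreed = labels_r1 == labels_r2
auc_agreed = roc_auc_score(labels_r1[agreed], scores[agreed])

print(f"AUC vs R1: {auc_r1:.3f}  vs R2: {auc_r2:.3f}  agreed subset: {auc_agreed:.3f}")
```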
Reviewer #3

R3-Feedback

Reviewer: In this study, a deep learning system is proposed to detect emphysema on chest radiographs. It is shown that the proposed method is able to predict emphysema positivity with a performance that is comparable to that of a radiologist. The paper is well written and there are only minor issues to be addressed so that it could be recommended for publication.

Response from authors: Dear reviewer, thank you very much for reading our manuscript and providing valuable feedback.

R3-Feedback

Reviewer: My only main critical note to the study is an issue that the authors themselves hint at in the Discussion section: the labeling of the 4 visual signs related to emphysema by a single radiologist is taken as ground truth and this can potentially cause some bias to the models introduced and the results obtained with them. The confusion matrices of radiologist annotations (Table 4) and kappa values indicate that there are a significant number of cases where the two radiologists do not agree on positivity, compared to the number of those where both of them signaled positivity for a given visual sign. There are a couple of questions here that, in my opinion, the authors would need to address:

Response from authors: We agree on the necessity of clarifying these questions and believe they have been addressed in the subsequent responses to the reviewer (R3-C1, R3-Q2, and R3-Q3).

R3-C1

Reviewer: What is the confusion matrix for overall emphysema positivity (taken after the rule of at least 2 positive signs)? This has not been indicated in the paper.

Response from authors: The confusion matrix for overall emphysema positivity is provided in Table 4; please see the final row of the table for the emphysema confusion matrix, as shown in the following image. (A brief illustrative sketch of this positivity rule and the inter-reader agreement appears at the end of this reviewer section.)

R3-Q2

Reviewer: Given the moderate disagreement between the radiologists, why only R1-annotated radiographs were taken as ground truth and no evaluation has been done considering R2-annotated radiographs as ground truth and/or taking only those graphs as such where there was an agreement between R1 and R2? Such an analysis would shed more light on any potential bias present in the models.

Response from authors: We understand and agree with the difficulty of interpreting the results. Another reviewer also made the same point about using the annotations from the other radiologist. Based on these comments, we have extended our experiment design. We included the results where R2 is taken as the ground truth, as well as results for the agreed-upon annotations, to give a better sense of the performance of the method. Please refer to the response we provided in section R2-C1 of this document for further information on this item.

R3-Q3

Reviewer: What about the availability of actually diagnosed emphysema with corresponding radiographs? The only truly unbiased assessment can be made only in the case when ground truth for diagnosis is taken independently from the visual signs on radiographs.

Response from authors: We thank the reviewer for raising this point and we agree that this is an important issue. It would be ideal if the data included a CT or clinical diagnosis as the reference standard. Unfortunately, since this data was not available to us, we aimed to produce a model which could perform at a similar level to a radiologist viewing a chest X-ray, and we evaluate our model only in that context. To determine whether viewing the chest X-ray is sufficient to provide an accurate emphysema diagnosis, we would certainly need to include additional radiological or clinical data. We tried to address this in the discussion section (lines 290-294), and have now extended this text for clarity.

R3-Q4

Reviewer: Other minor points: Fig. 3 and 4: In the captions please indicate that the displayed probabilities correspond to a specific study case.

Response from authors: Fig. 3 and 4 captions are changed to include this information.

R3-Q5

Reviewer: Table 4: In the upper row the legend of 'R1 Neg' is missing from the confusion matrix.

Response from authors: We have added the missing "R1 Neg" in Table 4.

R3-Summary

Authors: We would like to thank you for your feedback. We think that the manuscript has improved greatly based on your feedback.
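The positivity rule and inter-reader agreement referenced in R3-C1 (a case is considered emphysema positive when at least 2 of the 4 visual signs are marked present; agreement between readers is summarized with a confusion matrix and kappa) can be sketched as follows. The arrays are randomly generated placeholders, not the study annotations.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# Placeholder per-sign annotations (not the study data): rows are cases,
# columns are the four visual signs (1 = sign marked present).
rng = np.random.default_rng(1)
signs_r1 = rng.integers(0, 2, size=(422, 4))
signs_r2 = rng.integers(0, 2, size=(422, 4))

# A case counts as emphysema positive when at least 2 of the 4 signs are present.
pos_r1 = (signs_r1.sum(axis=1) >= 2).astype(int)
pos_r2 = (signs_r2.sum(axis=1) >= 2).astype(int)

# Inter-reader confusion matrix (rows: R1 Neg/Pos, columns: R2 Neg/Pos)
# and Cohen's kappa for overall emphysema positivity.
cm = confusion_matrix(pos_r1, pos_r2, labels=[0, 1])
kappa = cohen_kappa_score(pos_r1, pos_r2)
print(cm)
print(f"Cohen's kappa: {kappa:.3f}")
```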
Journal Requirements

Requirement-1

Editor: When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Response from authors: The article meets the PLOS style requirements.

Requirement-2

Editor: Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at

Response from authors: The article uses the PLOS template.

Requirement-3

Editor: We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

Response from authors: Upon acceptance, the data will be made publicly available and the link will be added to the manuscript.

Requirement-4

Editor: Please include your full ethics statement in the 'Methods' section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

Response from authors: The full ethics statement (The study has been reviewed by the ethics committee on the basis of the Dutch Code of conduct for health research, the Dutch Code of conduct for responsible use, the Dutch Personal Data Protection Act and the Medical Treatment Agreement Act (CMO: 2017-3952).) is added; please see lines 74-77.

Requirement-5

Editor: Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Response from authors: Captions for supporting information were included in the initial manuscript. For new supporting information, captions are added.

Submitted filename: emphysema response to reviewers.pdf

12 Apr 2022

Explainable emphysema detection on chest radiographs with deep learning

PONE-D-21-36550R1

Dear Dr. Çallı,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact.
If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Yan Chai Hum
Academic Editor
PLOS ONE

Additional Editor Comments (optional): All concerns have been addressed.

22 Jun 2022

PONE-D-21-36550R1

Explainable emphysema detection on chest radiographs with deep learning

Dear Dr. Çallı:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Yan Chai Hum
Academic Editor
PLOS ONE
References (20 in total)

1.  Roentgenologic criteria for the recognition of nonsymptomatic pulmonary emphysema. Correlation between roentgenologic findings and pulmonary pathology.

Authors:  S Sutinen; A J Christoforidis; G A Klugh; P C Pratt
Journal:  Am Rev Respir Dis       Date:  1965-01

2.  Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach.

Authors:  E R DeLong; D M DeLong; D L Clarke-Pearson
Journal:  Biometrics       Date:  1988-09       Impact factor: 2.571

3.  Triple attention learning for classification of 14 thoracic diseases using chest radiography.

Authors:  Hongyu Wang; Shanshan Wang; Zibo Qin; Yanning Zhang; Ruijiang Li; Yong Xia
Journal:  Med Image Anal       Date:  2020-10-16       Impact factor: 8.545

4.  Exploring Large-scale Public Medical Image Datasets.

Authors:  Luke Oakden-Rayner
Journal:  Acad Radiol       Date:  2019-11-06       Impact factor: 3.173

5.  Qualitative and Quantitative Assessment of Emphysema Using Dark-Field Chest Radiography.

Authors:  Theresa Urban; Florian T Gassert; Manuela Frank; Konstantin Willer; Wolfgang Noichl; Philipp Buchberger; Rafael C Schick; Thomas Koehler; Jannis H Bodden; Alexander A Fingerle; Andreas P Sauter; Marcus R Makowski; Franz Pfeiffer; Daniela Pfeiffer
Journal:  Radiology       Date:  2022-01-11       Impact factor: 11.105

6.  Emphysema quantification on simulated X-rays through deep learning techniques.

Authors:  Mónica Iturrioz Campo; Javier Pascau; Raúl San José Estépar
Journal:  Proc IEEE Int Symp Biomed Imaging       Date:  2018-05-24

7.  Radiographic appearance of the chest in emphysema.

Authors:  W M Thurlbeck; G Simon
Journal:  AJR Am J Roentgenol       Date:  1978-03       Impact factor: 3.959

8.  Value of chest radiography in phenotyping chronic obstructive pulmonary disease.

Authors:  M Miniati; S Monti; J Stolk; G Mirarchi; F Falaschi; R Rabinovich; C Canapini; J Roca; K F Rabe
Journal:  Eur Respir J       Date:  2007-12-05       Impact factor: 16.671

9.  Impact of Chronic Obstructive Pulmonary Disease and Emphysema on Outcomes of Hospitalized Patients with Coronavirus Disease 2019 Pneumonia.

Authors:  Robert M Marron; Matthew Zheng; Gustavo Fernandez Romero; Huaqing Zhao; Raj Patel; Ian Leopold; Ashanth Thomas; Taylor Standiford; Maruti Kumaran; Nicole Patlakh; Jeffrey Stewart; Gerard J Criner
Journal:  Chronic Obstr Pulm Dis       Date:  2021-04-27
