
Automated Lung Segmentation on Chest Computed Tomography Images with Extensive Lung Parenchymal Abnormalities Using a Deep Neural Network.

Seung Jin Yoo1, Soon Ho Yoon2, Jong Hyuk Lee3, Ki Hwan Kim4, Hyoung In Choi5, Sang Joon Park3,6, Jin Mo Goo3.   

Abstract

OBJECTIVE: We aimed to develop a deep neural network for segmenting lung parenchyma with extensive pathological conditions on non-contrast chest computed tomography (CT) images.
MATERIALS AND METHODS: Thin-section non-contrast chest CT images from 203 patients (115 males, 88 females; age range, 31-89 years) between January 2017 and May 2017 were included in the study, of which 150 cases had extensive lung parenchymal disease involving more than 40% of the parenchymal area. Parenchymal diseases included interstitial lung disease (ILD), emphysema, nontuberculous mycobacterial lung disease, tuberculous destroyed lung, pneumonia, lung cancer, and other diseases. Five experienced radiologists manually drew the margin of the lungs, slice by slice, on CT images. The dataset used to develop the network consisted of 157 cases for training, 20 cases for development, and 26 cases for internal validation. Two-dimensional (2D) U-Net and three-dimensional (3D) U-Net models were used for the task. The network was trained to segment the lung parenchyma as a whole and segment the right and left lung separately. The University Hospitals of Geneva ILD dataset, which contained high-resolution CT images of ILD, was used for external validation.
RESULTS: The Dice similarity coefficients for internal validation were 99.6 ± 0.3% (2D U-Net whole lung model), 99.5 ± 0.3% (2D U-Net separate lung model), 99.4 ± 0.5% (3D U-Net whole lung model), and 99.4 ± 0.5% (3D U-Net separate lung model). The Dice similarity coefficients for the external validation dataset were 98.4 ± 1.0% (2D U-Net whole lung model) and 98.4 ± 1.0% (2D U-Net separate lung model). In 31 cases, where the extent of ILD was larger than 75% of the lung parenchymal area, the Dice similarity coefficients were 97.9 ± 1.3% (2D U-Net whole lung model) and 98.0 ± 1.2% (2D U-Net separate lung model).
CONCLUSION: The deep neural network achieved excellent performance in automatically delineating the boundaries of lung parenchyma with extensive pathological conditions on non-contrast chest CT images.
Copyright © 2021 The Korean Society of Radiology.

Keywords:  Artificial intelligence; Computed tomography; Deep learning; Interstitial lung diseases; Lung

Year:  2020        PMID: 33169549      PMCID: PMC7909864          DOI: 10.3348/kjr.2020.0318

Source DB:  PubMed          Journal:  Korean J Radiol        ISSN: 1229-6929            Impact factor:   3.500


INTRODUCTION

Computer-aided diagnosis (CAD) for chest computed tomography (CT) images is widely used to detect and analyze various lung parenchymal diseases, including lung nodules (1, 2), interstitial lung disease (ILD) (3), and emphysema (4, 5). Precise segmentation of lung parenchymal areas from CT images is a prerequisite for automatically quantifying such diseases (6). Conventional automated lung segmentation, using methods such as thresholding and region growing, is effective for normal chest CT images owing to the distinct difference in CT attenuation between the lung parenchyma and the chest wall (7). However, automated lung segmentation is challenging on chest CT images showing extensive lung parenchymal disease, particularly subpleural lung pathologies (7).

Non-contrast chest CT, especially low-dose chest CT (LDCT), is one of the most commonly used CT protocols, as it provides sufficient image quality for radiologists and clinicians to evaluate lung parenchymal abnormalities at relatively low radiation exposure. LDCT is currently used in lung cancer screening and routine clinical practice, particularly for patients who require repeated follow-up CT studies for chronic lung diseases (8). However, automated lung segmentation for chronic lung diseases is potentially challenging on LDCT because of its inherently high level of image noise.

The purpose of this study was to develop a deep neural network for segmenting lung parenchyma with extensive pathological conditions on non-contrast chest CT images, primarily LDCT.

MATERIALS AND METHODS

This retrospective study was approved by the Institutional Review Board and the requirement for patient consent was waived (IRB No. 1902-103-101).

CT Datasets

We retrospectively collected 193 LDCT scans from patients who had visited respiratory physicians and had undergone thin-section chest CT at a single institution between January 2017 and May 2017, including 53 LDCT scans without diffuse lung parenchymal disease and 140 LDCT scans with diffuse lung parenchymal disease. Because LDCT was preferentially performed for lung cancer screening and serial follow-up of chronic, indolent lung diseases in outpatient settings, we included 10 additional standard-dose chest CT scans from the emergency department to enrich the dataset with acute severe lung diseases, such as acute respiratory distress syndrome, acute exacerbation of ILD, extensive lung malignancy, and atelectasis. Extensive lung parenchymal disease was defined as lung disease involving more than 40% of the lung parenchymal area on chest CT images (9, 10), including ILD (49 cases), emphysema (36 cases), nontuberculous mycobacterial lung disease (23 cases), tuberculous destroyed lung (15 cases), pneumonia (9 cases), lung cancer (4 cases), and other diseases (14 cases) (Fig. 1).
Fig. 1

Distribution of cases for the development of deep neural networks.

CT = computed tomography, NTM = nontuberculous mycobacterium, TB = tuberculosis

CT Acquisition

All 203 CT scans were acquired with one of the following multi-detector CT scanners: Somatom Definition and Somatom Force (Siemens Healthineers); Brilliance 64, IQon spectral CT, Brilliance iCT Elite, and Ingenuity (Philips Healthcare); Aquilion One (Canon Medical Systems); and Discovery CT 750 HD (GE Healthcare). All CT examinations were performed with tube voltage of 70–150 kVp and tube current of 25–185 mAs with a volume CT dose index of 0.52–2.92 mGy (low-dose) and 2.83–14.2 mGy (standard dose). Axial images were reconstructed with sharp reconstruction kernel at 1.0-mm slice thickness. Details are provided in Supplementary Table 1.

Manual Lung Segmentation

For each scan, one of five experienced board-certified radiologists (15, 10, 7, 6, and 5 years of clinical experience in chest CT interpretation, respectively) prepared the reference mask. After uploading the CT images to a commercially available software program (MEDIP, version 1.3.2.0, Medical IP), lung parenchymal areas were initially segmented using a threshold below −400 to −500 Hounsfield units. The radiologists then reviewed the initial segmentation and corrected the mask boundary by free-drawing regions of interest on every axial CT slice. If needed, they additionally reviewed the coronal and sagittal images to capture the exact boundary of the lung mask, particularly in the apical and basal lung areas, and modified the CT window settings as needed. The lateral boundary of the lung mask was the outermost edge of the subpleural lung, and the medial boundary was the innermost edge of the lung parenchyma abutting the mediastinum. The radiologists included the lobar to subsegmental bronchi, arteries, and veins in the lung mask, while excluding the bilateral main bronchi, main pulmonary arteries, and veins. Lung parenchymal pathologies, including subpleural lesions such as honeycombing or fibrotic lesions in idiopathic pulmonary fibrosis and pneumonic consolidations, were included in the lung mask. Pleural pathologies, including pleural calcifications, thickening, and pleural effusions, were excluded. A final review of the lung mask, with any modifications, was performed by two of the five radiologists in consensus.
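The initial thresholding step described above can be sketched as follows. This is a minimal illustration on a toy array, not the MEDIP implementation; the −400 HU cutoff is one end of the −400 to −500 HU range mentioned in the text.

```python
import numpy as np

def initial_lung_mask(ct_hu, threshold=-400):
    """Candidate lung mask: voxels whose attenuation falls below the
    Hounsfield-unit threshold (aerated lung is far less dense than the
    chest wall). Radiologists then correct this mask manually."""
    return ct_hu < threshold

# Toy axial slice: air (-1000 HU), aerated lung (-800 HU), soft tissue (40 HU)
slice_hu = np.array([[-1000, -800, 40],
                     [ -800,   40, 40]])
mask = initial_lung_mask(slice_hu)  # True where attenuation < -400 HU
```

In practice this crude mask over-includes airways and under-includes dense subpleural disease, which is exactly why the manual correction step above is needed.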

Deep Learning-Based Training and Validation

Altogether, the 203 CT scans were randomly assigned to one of three data sets: a training set (157 cases), a tuning set (20 cases), and an internal validation set (26 cases). Data were normalized with the lung window setting, and two-dimensional (2D) and three-dimensional (3D) U-Net models were used for the task. In total, 42,306 axial slices containing lung areas were selected, along with 8,609 negative samples consisting of randomly selected slices without lung areas. Our 2D U-Net received an input of size 512 × 512 × 1 and consisted of an initial convolution block, four encoders, four decoders, and a final convolution. Except for the final convolution, which was a 1 × 1 convolution, every convolutional layer consisted of a 3 × 3 convolution followed by batch normalization (11) and the rectified linear unit (ReLU) activation function (12). In the decoders, up-sampling with bilinear interpolation was used, followed by concatenation with the corresponding encoder features to preserve information from before down-sampling (Fig. 2).
Fig. 2

2D U-Net architecture used for lung segmentation.

Our 2D U-Net consists of four down-sampling and four up-sampling steps. Every step, except for the final convolution, consists of two consecutive 3 × 3 convolutions, each followed by batch normalization and a ReLU activation function; a 1 × 1 convolution with softmax activation is applied at the final convolution. 2D = two-dimensional, ReLU = rectified linear unit
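The "lung window" normalization mentioned above can be sketched as clipping to a display window and rescaling to [0, 1]. The window center and width below (−600 HU, 1500 HU) are assumed typical lung-window values for illustration; the paper does not state the exact numbers used.

```python
import numpy as np

# Assumed lung-window values for illustration; the paper states only
# that data were "normalized with the lung window setting".
WINDOW_CENTER = -600.0  # HU (assumption)
WINDOW_WIDTH = 1500.0   # HU (assumption)

def window_normalize(ct_hu, center=WINDOW_CENTER, width=WINDOW_WIDTH):
    """Clip HU values to [center - width/2, center + width/2], then
    linearly rescale them to the [0, 1] range fed to the network."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(ct_hu, lo, hi) - lo) / (hi - lo)

x = window_normalize(np.array([-1350.0, -600.0, 150.0, 500.0]))
```

Values outside the window saturate at 0 or 1, which discards mediastinal soft-tissue contrast while preserving the full dynamic range of aerated lung.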

Our 3D U-Net model received an input of size 512 × 512 × 8 and used three encoders and three decoders. Except for the final convolution, which was a 1 × 1 × 1 convolution, every convolutional layer consisted of a 3 × 3 × 3 convolution followed by ReLU and group normalization (13). The first encoder used 1 × 2 × 2 max pooling to preserve resolution along the z-axis, whereas the second and third encoders used 2 × 2 × 2 max pooling. In the decoders, up-sampling with trilinear interpolation was used. The He et al. (14) initialization method was used for weight initialization. Both models used the softmax function in the final layer and were trained using the stochastic gradient descent algorithm and the cross-entropy loss function. After completion of training, the tuning dataset was used to choose the best weights, which were saved after each epoch. We applied two types of training for the deep neural networks. First, the reference masks of the whole lung were used as the training target (whole lung model). Second, we separated each whole-lung reference mask into right and left lung masks (separate lung model). The right lung masks were then horizontally flipped onto the left side; the left lung masks and the flipped right lung masks were used to train the deep neural network to extract the left lung, and vice versa.
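The separate-lung training trick, mirroring right-lung masks so that all training targets share left-lung orientation, amounts to a horizontal flip. A sketch with a toy mask (assuming axis 1 is the left-right axis of an axial slice):

```python
import numpy as np

def flip_right_to_left(mask):
    """Mirror a right-lung mask along the left-right axis of an axial
    slice so it can be pooled with left-lung masks during training."""
    return np.fliplr(mask)

right_mask = np.array([[1, 1, 0, 0],
                       [1, 0, 0, 0]])  # toy right-lung mask
flipped = flip_right_to_left(right_mask)  # now has left-lung orientation
```

Pooling mirrored right-lung masks with left-lung masks effectively doubles the training examples for a single lung orientation.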

External Validation

For external validation, we used the University Hospitals of Geneva (HUG)-ILD dataset (15), which consists of 109 annotated CT scans of patients with advanced ILD. Chest CT scans in the HUG-ILD dataset generally have a slice thickness of 1–2 mm with a 10–15 mm slice interval, corresponding to a high-resolution CT protocol. In some scans, the lung masks provided as the ground truth were inaccurate. We excluded cases in which the CT images had profound respiratory motion artifacts (n = 3) or incomplete ground-truth lung segmentation (n = 4), leaving 102 scans. In three of these cases, the CT scan was divided into two separate series; these six series were merged into three scans. In total, 99 cases were included for external validation. Since the ground-truth masks of the HUG-ILD dataset include the trachea and the main and lobar bronchi in the lung mask, a technician generated an airway mask using the region growing method, and this airway mask was combined with our U-Net-driven lung mask to assess the accuracy of lung segmentation.
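The airway mask was generated with region growing; a minimal 2D sketch of the technique (4-connected growth from a seed over low-attenuation pixels) is shown below. The seed point and the −950 HU criterion are illustrative assumptions, not the technician's actual settings.

```python
from collections import deque

import numpy as np

def region_grow(image_hu, seed, threshold=-950):
    """Grow a region from `seed` across 4-connected pixels whose
    attenuation is below `threshold` (air-filled airway lumen)."""
    h, w = image_hu.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if image_hu[r, c] >= threshold:
            continue  # stop at soft tissue (airway wall)
        mask[r, c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

# Toy slice: a connected track of air (-1000 HU) inside soft tissue (40 HU)
img = np.array([[-1000, -1000,    40],
                [   40, -1000,    40],
                [   40, -1000, -1000]])
airway = region_grow(img, seed=(0, 0))
```

In 3D the same idea applies with 6-connected voxels, typically seeded in the tracheal lumen.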

Statistical Analysis

We compared the lung mask obtained through manual segmentation with that generated by deep-learning-based automated segmentation using the Dice similarity coefficient (DSC), sensitivity, positive predictive value (PPV), and the Hausdorff distance (16). The DSC measures segmentation performance by accounting for correctly segmented areas, incorrectly segmented areas, and missed target areas. The PPV reflects the proportion of correctly predicted areas among all predicted areas. The Hausdorff distance is the largest of all distances from a point in one set to the closest point in the other set. Measuring the Hausdorff distance over all pixels of the lung mask required substantial time and computing resources; accordingly, we calculated it on a random sample of 1% of the pixels in each lung mask. Because the sampled pixel locations differ between masks, the sampled Hausdorff distance between two identical masks is not zero, inevitably introducing a certain degree of error. To correct for this, we calculated the sampled Hausdorff distance between the same mask and itself and subtracted it from the sampled distance between the manual and deep-learning-driven masks. DSC, sensitivity, PPV, and Hausdorff distance were analyzed for the internal and external validation datasets stratified by the extent of pathological lung involvement on CT (disease severity) into two categories (underlying lung disease involving ≤ 40% or > 40% of the lung parenchymal area), three categories (≤ 25%, > 25% but ≤ 75%, and > 75% of the lung parenchymal area), and by disease category. Differences in mask volume between the manual and U-Net segmentation masks were compared using two-way analysis of variance with Bonferroni correction for multiple comparisons.
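The overlap metrics above can be computed directly from the binary masks. The sketch below (NumPy, toy 1D masks) follows the standard voxel-wise definitions rather than the authors' actual evaluation code.

```python
import numpy as np

def overlap_metrics(gt, pred):
    """DSC, sensitivity, and PPV between two binary masks.

    DSC = 2TP / (2TP + FP + FN), sensitivity = TP / (TP + FN),
    PPV = TP / (TP + FP), with TP/FP/FN counted voxel-wise."""
    gt, pred = np.asarray(gt, bool), np.asarray(pred, bool)
    tp = np.sum(gt & pred)
    fp = np.sum(~gt & pred)
    fn = np.sum(gt & ~pred)
    dsc = 2 * tp / (2 * tp + fp + fn)
    return dsc, tp / (tp + fn), tp / (tp + fp)

gt = [1, 1, 1, 0, 0]    # toy ground-truth mask
pred = [1, 1, 0, 1, 0]  # toy prediction: one miss, one false positive
dsc, sensitivity, ppv = overlap_metrics(gt, pred)
```

With one missed voxel and one false positive on three true voxels, all three metrics come out to 2/3.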
Intraclass correlation coefficients (ICCs) were calculated between the manual segmentation masks and the 2D and 3D U-Net segmentation masks. Bland-Altman plots were used to evaluate the differences in mask volumes between the manual segmentation masks and the 2D and 3D U-Net segmentation masks. SPSS version 25 (IBM Corp.) was used for all statistical analyses.
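The Bland-Altman analysis above reduces to the mean percentage volume difference and its 95% limits of agreement (mean ± 1.96 SD). A sketch with hypothetical volumes, expressing the difference relative to the manual reference as in the Results; the volumes themselves are invented for illustration.

```python
import numpy as np

def bland_altman_limits(reference_ml, measured_ml):
    """Mean percentage volume difference (relative to the reference)
    and 95% limits of agreement (mean +/- 1.96 x sample SD)."""
    diff_pct = 100.0 * (measured_ml - reference_ml) / reference_ml
    mean = float(diff_pct.mean())
    sd = float(diff_pct.std(ddof=1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

# Hypothetical lung volumes (mL), for illustration only
manual = np.array([4000.0, 5000.0, 4500.0, 5200.0])
unet = np.array([4040.0, 4950.0, 4500.0, 5252.0])
mean_diff, lower, upper = bland_altman_limits(manual, unet)
```

The limits are symmetric about the mean difference; a mean near zero with narrow limits indicates close volumetric agreement.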

RESULTS

The basic characteristics of the internal and external validation datasets are shown in Table 1.
Table 1

Demographics and Clinico-Radiologic Characteristics of the Datasets

Characteristics | Training Dataset | Tuning Dataset | Internal Validation Dataset | External Validation Dataset
Number of subjects* | 157 | 20 | 26 | 99
Age (years)† | 69.64 ± 9.60 | 68.60 ± 11.36 | 68.73 ± 8.71 | 58.35 ± 20.15
Sex (M:F) | 90:67 | 8:12 | 17:9 | 61:38
CT parameters
 Tube voltage (kVp) | 70–150 | 100–120 | 100–120 | N/A
 Effective mAs (mAs) | 15–185 | 15–150 | 15–150 | N/A
 Slice thickness (mm) | 1–1.25 | 1–1.25 | 1 | 1–2
 Reconstruction interval (mm) | 1–1.25 | 1–1.25 | 1 | 10–15
 CTDIvol (mGy) | 0.52–14.2 | 0.52–2.92 | 0.52–2.17 | N/A
Underlying extensive lung disease (> 40%)* | 114 | 16 | 20 | 65
 ILD | 38 | 6 | 5 | 51
 Emphysema | 25 | 3 | 8 | 0
 NTM lung disease | 21 | 1 | 1 | 0
 TB destroyed lung | 10 | 3 | 2 | 0
 Pneumonia | 8 | 1 | 0 | 3
 Lung cancer | 3 | 0 | 1 | 0
 Others | 9 | 2 | 3 | 11
Without extensive lung disease (≤ 40%)* | 43 | 4 | 6 | 34
 ILD | 1 | 0 | 1 | 17
 Emphysema | 3 | 0 | 0 | 0
 NTM lung disease | 3 | 0 | 0 | 0
 TB destroyed lung | 4 | 1 | 0 | 0
 Pneumonia | 0 | 0 | 0 | 1
 Lung cancer | 0 | 0 | 0 | 0
 Others | 32 | 3 | 5 | 16
Pleural effusion
 None* | 147 | 18 | 26 | 85
 Unilateral* | 5 | 1 | 0 | 5
 Bilateral | 5 | 1 | 0 | 9
Anterior junctional line thickness (mm)*
 ≤ 2 | 38 | 6 | 9 | 36
 > 2 | 119 | 14 | 17 | 63

*Data are number of subjects, †Data are mean ± standard deviation. ILD = interstitial lung disease, NTM = nontuberculous mycobacterium, TB = tuberculosis

Regarding the 2D U-Net model, DSC, sensitivity, PPV, and Hausdorff distance of the internal validation set were 99.6 ± 0.3%, 99.5 ± 0.3%, 99.6 ± 0.3%, and 17.70 ± 6.62 pixels for the whole lung model and 99.5 ± 0.3%, 99.5 ± 0.3%, 99.5 ± 0.4%, and 18.29 ± 6.51 pixels for the separate lung model, respectively (Table 2). Regarding the 3D U-Net model, DSC, sensitivity, PPV, and Hausdorff distance of the internal validation dataset were 99.4 ± 0.5%, 99.1 ± 0.9%, 99.7 ± 0.2%, and 18.75 ± 7.48 pixels for the whole lung model, and 99.4 ± 0.5%, 99.1 ± 0.8%, 99.6 ± 0.3%, and 18.16 ± 7.48 pixels for the separate lung model, respectively (Table 2).
Table 2

Dice Score, Sensitivity, PPV, and Hausdorff Distance of 2D and 3D U-Net Whole Lung and Separate Lung Training Model in Internal Validation Set

                            | 2D U-Net Whole Lung Model | 2D U-Net Separate Lung Model | 3D U-Net Whole Lung Model | 3D U-Net Separate Lung Model
DSC (%)                     | 99.6 ± 0.3 | 99.5 ± 0.3 | 99.4 ± 0.5 | 99.4 ± 0.5
Sensitivity (%)             | 99.5 ± 0.3 | 99.5 ± 0.3 | 99.1 ± 0.9 | 99.1 ± 0.8
PPV (%)                     | 99.6 ± 0.3 | 99.5 ± 0.4 | 99.7 ± 0.2 | 99.6 ± 0.3
Hausdorff distance (pixels) | 17.70 ± 6.62 | 18.29 ± 6.51 | 18.75 ± 7.48 | 18.16 ± 7.48

Data are mean ± standard deviation. DSC = Dice similarity coefficient, PPV = positive predictive value, 2D = two-dimensional, 3D = three-dimensional

Regarding the external validation using the HUG-ILD dataset, the 2D U-Net model showed DSC, sensitivity, PPV, and Hausdorff distance values of 98.4 ± 1.0%, 98.7 ± 1.3%, 98.1 ± 1.5%, and 7.66 ± 3.93 pixels for the whole lung model and 98.4 ± 1.0%, 98.7 ± 1.1%, 98.0 ± 1.6%, and 7.59 ± 3.69 pixels for the separate lung model, respectively (Table 3). The 3D U-Net models showed DSC, sensitivity, PPV, and Hausdorff distance values of 95.3 ± 3.1%, 98.0 ± 1.9%, 92.8 ± 4.6%, and 15.58 ± 5.60 pixels for the whole lung model and 96.1 ± 2.2%, 98.1 ± 1.9%, 94.3 ± 3.5%, and 11.67 ± 4.84 pixels for the separate lung model, respectively (Table 3).
Table 3

Dice Score, Sensitivity, PPV and Hausdorff Distance of 2D and 3D U-Net Whole Lung and Separate Lung Training Model in HUG-ILD External Validation Set

                            | 2D U-Net Whole Lung Model | 2D U-Net Separate Lung Model | 3D U-Net Whole Lung Model | 3D U-Net Separate Lung Model
DSC (%)                     | 98.4 ± 1.0 | 98.4 ± 1.0 | 95.3 ± 3.1 | 96.1 ± 2.2
Sensitivity (%)             | 98.7 ± 1.3 | 98.7 ± 1.1 | 98.0 ± 1.9 | 98.1 ± 1.9
PPV (%)                     | 98.1 ± 1.5 | 98.0 ± 1.6 | 92.8 ± 4.6 | 94.3 ± 3.5
Hausdorff distance (pixels) | 7.66 ± 3.93 | 7.59 ± 3.69 | 15.58 ± 5.60 | 11.67 ± 4.84

Data are mean ± standard deviation. HUG = University Hospitals of Geneva

Subgroup analyses of the internal and external validation datasets are summarized in Supplementary Tables 2 and 3. The mean DSC of the 2D U-Net whole and separate lung models was high in cases with underlying lung disease involving ≤ 25% of the lung parenchymal area in the internal (99.7% and 99.7%, respectively) and external datasets (98.9% and 98.9%, respectively), and in cases with underlying lung disease occupying more than 75% of the lung parenchymal area in the internal (99.3% and 99.4%, respectively) and external validation sets (97.9% and 98.0%, respectively) (Supplementary Tables 2, 3). The mean DSC of the 3D U-Net whole and separate lung models in cases with underlying lung disease occupying > 75% of the lung parenchymal area was lower in the external validation dataset (93.7% and 94.8%, respectively) than in the internal validation dataset (99.2% and 99.2%, respectively) (Supplementary Tables 2, 3). The mean DSC of the internal validation dataset divided into seven disease categories was over 98.8% in all models (Supplementary Table 2). In the external validation dataset, the performance of the 2D U-Net model was excellent in all categories, with mean DSC over 96.8%; however, the performance of the 3D U-Net model was lower, with mean DSC over 92.8% (Supplementary Table 3). Two-way analysis of variance of lung volumes among the 2D and 3D whole and separate lung models showed no significant difference in either the internal validation (p = 0.997) or the external validation dataset (p = 0.784). ICCs of lung volumes between the manually segmented masks and each set of deep-learning-driven masks are shown in Supplementary Table 4.
The percentage difference and limits of agreement of volumes between the manually segmented (ground truth) masks and the 2D whole lung, 2D separate lung, 3D whole lung, and 3D separate lung models were 0.1% (−0.4, 0.6), 0.0% (−0.6, 0.6), 0.6% (−1.1, 2.3), and 0.5% (−0.9, 1.9), respectively, in the internal validation set, and −0.6% (−4.2, 3.0), −0.7% (−4.4, 2.9), −5.7% (−9.3, −2.1), and −4.0% (−7.7, −0.4), respectively, in the external validation set. The 2D U-Net model showed better agreement in both the internal and external datasets. Bland-Altman plots showing differences between the volumes of the manually segmented lung masks and each set of automatically segmented masks are presented in Figures 3 and 4.
Fig. 3

Bland-Altman plots of volumes of the 2D U-Net whole lung model (A), 2D U-Net separate lung model (B), 3D U-Net whole lung model (C), and 3D U-Net separate lung model (D) applied to the internal validation dataset.

The solid line represents the mean of the volume percentage differences and the dashed lines represent the limits of agreement (± 1.96 SD). The percentage difference and limits of agreement of volumes between the manually segmented (ground truth) masks and the 2D whole lung, 2D separate lung, 3D whole lung, and 3D separate lung models were 0.1% (−0.4, 0.6), 0.0% (−0.6, 0.6), 0.6% (−1.1, 2.3), and 0.5% (−0.9, 1.9), respectively, suggesting high performance of the U-Net. SD = standard deviation, 3D = three-dimensional

Fig. 4

Bland-Altman plots of volumes of the 2D U-Net whole lung model (A), 2D U-Net separate lung model (B), 3D U-Net whole lung model (C), and 3D U-Net separate lung model (D) applied to the external validation dataset.

The solid line represents the mean of the volume percentage differences and the dashed lines represent the limits of agreement (± 1.96 SD). The percentage difference and limits of agreement of volumes between the provided ground truth and the 2D whole lung, 2D separate lung, 3D whole lung, and 3D separate lung models were −0.6% (−4.2, 3.0), −0.7% (−4.4, 2.9), −5.7% (−9.3, −2.1), and −4.0% (−7.7, −0.4), respectively.

Regarding separation along the anterior junctional line in cases with a line thickness of less than 2 mm, the 2D separate and whole lung models completely separated the two lungs across the full scan range in seven of the nine such cases (77.8%) in the internal validation dataset. In the remaining two cases, the anterior junctional line was incompletely demarcated on several axial scans. With the 3D separate lung model, three of the nine cases were completely demarcated; with the 3D whole lung model, anterior junctional line segmentation was partially incomplete in all nine cases. In the external dataset, 36 cases had an anterior junctional line thickness of less than 2 mm; when the 2D separate lung model was applied, the anterior junctional line was completely demarcated in 28 of these cases (77.8%).

DISCUSSION

Our study analyzed 203 non-contrast chest CT scans, of which 193 were LDCT scans, performed on CT machines from various vendors. One hundred and fifty cases had extensive underlying lung disease involving more than 40% of the lung parenchymal area. Manual lung segmentation to build the ground truth was performed by board-certified radiologists; although time-consuming, this enabled precise establishment of the training datasets. We used 2D and 3D deep learning algorithms that were trained in two different ways (whole lung training and separate lung training). As a result, the DSC was 99.4–99.6% for the internal validation dataset and 95.3–98.4% for the external dataset. Our model thus achieved high performance in both internal and external validation.

Demand for automatic detection and analysis of pulmonary disease on chest CT images has increased as medical technology has improved. Automatic segmentation of the lung field in CT images has been applied to the analysis of various diffuse pulmonary diseases, including emphysema (4, 5), ILD (3), and infectious diseases such as Coronavirus Disease 2019 (17). This CAD process is based on two steps: 1) extraction of the lung field and 2) identification of lung disease from CT images (6). Precise lung-field segmentation with automated algorithms is therefore a prerequisite for deriving further quantitative values from CT images, such as total lung volume and the extent of pathological lung. Consequently, classification of the severity of the underlying lung disease or determination of the normal lung parenchymal volume (18) may become possible, which can be useful for clinicians.

Accurate segmentation of lung regions in the presence of severe pathologies is challenging. Pulagam et al. (19) applied a thresholding-based algorithm with a modified convexity algorithm to 60 high-resolution CT scans with underlying honeycombing, reticular pattern, ground-glass opacities, pleural plaques, and emphysema, achieving a mean DSC of 98.6%. Harrison et al. (20) applied a fully convolutional network (FCN)-based deep-learning algorithm to chest CT scans with infections, ILD, and chronic obstructive pulmonary disease, obtaining a mean DSC of 98.5 ± 1.1%. Alves et al. (21) applied an FCN-based deep-learning algorithm to the HUG-ILD dataset and obtained a DSC of 98.7 ± 0.9%. Our model achieved generally higher DSCs in internal validation (99.4–99.6%), even in scans with extensive underlying lung disease involving more than 40% of the lung field (DSC 99.3–99.5%) (Fig. 5).
Fig. 5

Representative images of a 41-year-old female with systemic sclerosis-associated interstitial lung disease in the internal validation dataset.

Chest CT image showing peripheral reticular and ground-glass opacities manifesting as a nonspecific interstitial pneumonia pattern (A). Manual lung mask (C) and segmented lung mask by 2D U-Net separate lung model (D) match almost perfectly on subtracted mask of manual and 2D U-Net (B). The Dice similarity coefficient between the masks was 99.7%.

For the whole lung model, DSC, sensitivity, and PPV were higher than those reported in previous studies with a similar framework. Nevertheless, we found that separation along the anterior junctional line was unsatisfactory in the whole lung model. The anterior junctional line is a landmark separating the right from the left lung in the anteromedial aspect, formed by apposition of the visceral and parietal pleura with a small amount of intervening fat (22). In patients with extensive emphysema, the anterior junctional line becomes very thin owing to hyperinflation of the lung. A thin anterior junctional line is a well-known cause of failure to automatically separate the right from the left lung (1). We developed the separate training model to overcome this weakness of the whole lung training model. The quantitative results of the two training models were not significantly different; however, separation of the right from the left lung along the anterior junctional line was more satisfactory with the separate training model on case-by-case visual review (Fig. 6). Compared with the 3D U-Net model, the 2D U-Net model was superior in demarcating thin anterior junctional lines.
Fig. 6

Representative images of a 68-year-old male patient with emphysema in the internal validation dataset.

Chest CT image showing a very thin anterior junctional line due to hyperinflation (A). The segmented lung mask of the 2D U-Net whole lung model (B) includes the anterior junctional line within the mask, whereas the 2D U-Net separate lung model (D) demarcates the anterior junctional line and separates the right from the left lung, as in the ground truth (C).

The external validation results were slightly inferior to the internal validation results. In the HUG-ILD dataset, the trachea and main bronchi are included in the ground-truth mask, in contrast to our models, which were trained to exclude them. To enable a comparison, we added an airway mask to our 2D U-Net lung mask. However, in most scans from the HUG-ILD dataset, mediastinal fat around the trachea was also included in the lung mask; discrepancies were therefore inevitable regardless of the accuracy of lung segmentation, which led to underperformance of our model. We also found that lung segmentations obtained using our model tended to be slightly inaccurate in HUG-ILD cases with pleural effusion (Fig. 7), as the number of cases with pleural effusion in the training dataset was small (only 10 of 157 cases). In some cases, our model produced more accurate lung segmentation than the ground truth of the HUG-ILD dataset, especially in discriminating the anterior junctional line and lung parenchyma with subpleural pathologies (Fig. 7, Supplementary Fig. 1).
Fig. 7

Representative images of an 81-year-old male with suspected pneumonia superimposed on pulmonary fibrosis in the external validation dataset.

Chest CT image showing multifocal patchy ground-glass opacities and consolidations with underlying bronchiectasis (A). Compared with the ground truth (C), the lung segmentation by the 2D U-Net separate training model (D) included the pleural effusion in the left hemithorax as lung; however, our model (D) better discriminated the anterior junctional line. Mismatch is observed in the trachea and large bronchi on the subtracted mask (B). The Dice similarity coefficient, sensitivity, positive predictive value, and Hausdorff distance were 95.4%, 98.7%, 92.2%, and 8.00 pixels, respectively.

Our study had several limitations. First, segmentation was insufficient in some cases with dense subpleural consolidations (Supplementary Fig. 2). However, in those cases, accurate lung segmentation was difficult even for experienced radiologists, because the attenuation of collapsed or consolidative lung and thickened pleura is indistinct on LDCT without contrast enhancement. Second, during manual lung segmentation, the radiologists may have subjectively drawn the border of the hilar structures. Finally, the performance of the 3D U-Net model in the external validation set was unsatisfactory compared with the 2D U-Net model; we therefore assume that the 3D U-Net model may have limited applicability to CT scans with thick image slices.

In conclusion, we present a deep neural network for automated lung segmentation in non-contrast chest CT scans with underlying extensive lung disease. DSC, sensitivity, and PPV were higher than those reported in previous relevant publications for segmentation of CT scans of patients with various extensive lung diseases, even in LDCT scans performed on machines from various vendors. This highly applicable method of automated lung segmentation on CT images using a deep neural network can form the basis for advanced computer-aided lung analysis in the future.
REFERENCES (13 in total)

1.  An efficient algorithm for calculating the exact Hausdorff distance.

Authors:  Abdel Aziz Taha; Allan Hanbury
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2015-11       Impact factor: 6.226

Review 2.  Lines and stripes: where did they go?--From conventional radiography to CT.

Authors:  Jerry M Gibbs; Chitra A Chandrasekhar; Emma C Ferguson; Sandra A A Oldham
Journal:  Radiographics       Date:  2007 Jan-Feb       Impact factor: 5.333

3.  Computer-aided segmentation and volumetry of artificial ground-glass nodules at chest CT.

Authors:  Ernst Th Scholten; Colin Jacobs; Bram van Ginneken; Martin J Willemink; Jan-Martin Kuhnigk; Peter M A van Ooijen; Matthijs Oudkerk; Willem P Th M Mali; Pim A de Jong
Journal:  AJR Am J Roentgenol       Date:  2013-08       Impact factor: 3.959

Review 4.  Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends.

Authors:  Awais Mansoor; Ulas Bagci; Brent Foster; Ziyue Xu; Georgios Z Papadakis; Les R Folio; Jayaram K Udupa; Daniel J Mollura
Journal:  Radiographics       Date:  2015 Jul-Aug       Impact factor: 5.333

5.  Automated Lung Segmentation from HRCT Scans with Diffuse Parenchymal Lung Diseases.

Authors:  Ammi Reddy Pulagam; Giri Babu Kande; Venkata Krishna Rao Ede; Ramesh Babu Inampudi
Journal:  J Digit Imaging       Date:  2016-08       Impact factor: 4.056

6.  Idiopathic interstitial pneumonias: do HRCT criteria established by ATS/ERS/JRS/ALAT in 2011 predict disease progression and prognosis?

Authors:  Chiara Romei; Laura Tavanti; Paola Sbragia; Annalisa De Liperi; Laura Carrozzi; Ferruccio Aquilini; Antonio Palla; Fabio Falaschi
Journal:  Radiol Med       Date:  2015-03-06       Impact factor: 3.469

7.  Automatic lung segmentation in CT images with accurate handling of the hilar region.

Authors:  Giorgio De Nunzio; Eleonora Tommasi; Antonella Agrusti; Rosella Cataldo; Ivan De Mitri; Marco Favetta; Silvio Maglio; Andrea Massafra; Maurizio Quarta; Massimo Torsello; Ilaria Zecca; Roberto Bellotti; Sabina Tangaro; Piero Calvini; Niccolò Camarlinghi; Fabio Falaschi; Piergiorgio Cerello; Piernicola Oliva
Journal:  J Digit Imaging       Date:  2009-10-14       Impact factor: 4.056

8.  Chronic obstructive pulmonary disease exacerbations in the COPDGene study: associated radiologic phenotypes.

Authors:  Meilan K Han; Ella A Kazerooni; David A Lynch; Lyrica X Liu; Susan Murray; Jeffrey L Curtis; Gerard J Criner; Victor Kim; Russell P Bowler; Nicola A Hanania; Antonio R Anzueto; Barry J Make; John E Hokanson; James D Crapo; Edwin K Silverman; Fernando J Martinez; George R Washko
Journal:  Radiology       Date:  2011-07-25       Impact factor: 11.105

9.  Well-aerated Lung on Admitting Chest CT to Predict Adverse Outcome in COVID-19 Pneumonia.

Authors:  Davide Colombi; Flavio C Bodini; Marcello Petrini; Gabriele Maffi; Nicola Morelli; Gianluca Milanese; Mario Silva; Nicola Sverzellati; Emanuele Michieletti
Journal:  Radiology       Date:  2020-04-17       Impact factor: 11.105

10.  Radiological Report of Pilot Study for the Korean Lung Cancer Screening (K-LUCAS) Project: Feasibility of Implementing Lung Imaging Reporting and Data System.

Authors:  Ji Won Lee; Hyae Young Kim; Jin Mo Goo; Eun Young Kim; Soo Jung Lee; Tae Jung Kim; Yeol Kim; Juntae Lim
Journal:  Korean J Radiol       Date:  2018-06-14       Impact factor: 3.500

