Literature DB >> 34898850

COLI-Net: Deep learning-assisted fully automated COVID-19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images.

Isaac Shiri1, Hossein Arabi1, Yazdan Salimi1, Amirhossein Sanaat1, Azadeh Akhavanallaf1, Ghasem Hajianfar2, Dariush Askari3, Shakiba Moradi4, Zahra Mansouri1, Masoumeh Pakbin5, Saleh Sandoughdaran6, Hamid Abdollahi7, Amir Reza Radmard8, Kiara Rezaei-Kalantari2, Mostafa Ghelich Oghli4,9, Habib Zaidi1,10,11,12.   

Abstract

We present COLI-Net, a deep learning (DL)-based method for fully automated whole-lung and COVID-19 pneumonia infectious lesion detection and segmentation from chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentations of lungs and lesions, respectively. All images were cropped and resized, and their intensity values clipped and normalized. A residual network with a non-square Dice loss function built on TensorFlow was employed. The accuracy of lung and COVID-19 lesion segmentation was evaluated on an external reverse transcription-polymerase chain reaction (RT-PCR)-positive COVID-19 dataset (7,333 2D slices) collected at five different centers. Segmentation performance was assessed with several quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98-0.99) and 0.91 ± 0.038 (95% CI, 0.90-0.91) for lung and lesion segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, -0.12 to 0.18) and -0.18 ± 3.4% (95% CI, -0.8 to 0.44) for lungs and lesions, respectively. The relative volume differences were 0.38 ± 1.2% (95% CI, 0.16-0.59) for lungs and 0.81 ± 6.6% (95% CI, -0.39 to 2) for lesions. Most radiomic features had a mean relative error of less than 5%, with the largest mean relative errors observed for the range first-order feature in the lung (-6.95%) and the least axis length shape feature in lesions (8.68%). The developed automated DL-guided three-dimensional whole-lung and infected-region segmentation in COVID-19 patients provides a fast, consistent, robust, and human-error-immune framework for lung and pneumonia lesion detection and quantification.
© 2021 The Authors. International Journal of Imaging Systems and Technology published by Wiley Periodicals LLC.


Keywords:  COVID‐19; X‐ray CT; deep learning; pneumonia; segmentation

Year:  2021        PMID: 34898850      PMCID: PMC8652855          DOI: 10.1002/ima.22672

Source DB:  PubMed          Journal:  Int J Imaging Syst Technol        ISSN: 0899-9457            Impact factor:   2.177


INTRODUCTION

The recent pandemic of severe acute respiratory syndrome coronavirus 2 disease (COVID‐19) poses great health concerns globally. The COVID‐19 pandemic has resulted in loss of life, health burdens, and economic disruption. Although a large number of trials have been conducted to produce vaccines and/or treatments for COVID‐19, a specific vaccine or therapy is still lacking. For the diagnosis of COVID‐19, reverse transcription‐polymerase chain reaction (RT‐PCR) is a highly sensitive molecular test but bears a number of inherent limitations. Furthermore, previous studies have indicated that thoracic computed tomography (CT) is a fast and highly sensitive approach for COVID‐19 detection and management, and dedicated ultralow‐dose CT scanning protocols were recently devised. In connection with the use of CT in COVID‐19 management, a wide range of qualitative and quantitative studies have been carried out for diagnosis, prognosis, and longitudinal follow‐up of patients. In these studies, whole lungs or infectious lesions were analyzed, and several patterns and features were found to have high diagnostic and prognostic value. However, accurate segmentation of lungs and infectious pneumonia lesions remains challenging; hence, segmentation is the main issue affecting the outcome of both qualitative and quantitative studies. Although several segmentation approaches, including manual delineation and semiautomated and fully automated techniques, have been applied to CT images for COVID‐19 management, they still face serious challenges in producing robust and dependable outcomes. In medical image segmentation, particularly whole three‐dimensional (3D) volume definition and big data analysis, manual delineation requires experienced trained radiologists, is time consuming and labor‐intensive, and suffers from interobserver and intraobserver variability.
Whole-lung segmentation is a pivotal step for further analysis, including extraction of the percentage of infection and the well-aerated portion of the lung, and for enabling radiomics and deep learning (DL) analysis of COVID‐19 patients. Conventional algorithms, including rule‐based and atlas‐based approaches, perform relatively well on normal and mildly diseased chest CT but may fail at lung segmentation in COVID‐19 patients, in whom the disease presents at different stages with different levels of severity. Furthermore, a fully automatic tool for segmenting the lungs and COVID‐19 pneumonia lesions is highly desirable owing to rapid changes in appearance and manifestation at different stages of the disease. Artificial intelligence (AI) algorithms, particularly the two major subcategories of machine learning (ML) and DL, have been widely used for medical image analysis and, more recently, for the segmentation of lungs and pneumonia infectious lesions from chest CT images of COVID‐19 patients. These studies reported that AI improved the accuracy of lesion detection/segmentation and reduced the bias associated with conventional approaches. In a study by Zheng and coworkers, a weakly supervised DL algorithm was applied to chest CT images for automatic COVID‐19 detection. Fan et al. presented a COVID‐19 lung infection segmentation deep network (Inf‐Net) based on semisupervised learning. Furthermore, a number of DL architectures, namely UNet, UNet++, V‐Net, Attention‐UNet, Gated‐UNet, and Dense‐UNet, have been used for COVID‐19 lesion detection and segmentation from chest CT images. CT images are commonly acquired on various scanner models using different imaging protocols; the resulting datasets are therefore heterogeneous, which might lead to inaccuracy in the developed models. Training a robust and generalizable DL model requires a large, clean, annotated dataset. Owing to the relatively recent outbreak of the COVID‐19 pandemic, producing a large labeled COVID‐19 image dataset is impractical.
Transfer learning (TL) has received attention as a means to address the lack of large datasets for the implementation of ML/DL‐based algorithms. Various TL‐based strategies have been used to transfer knowledge across domains, including from natural to medical images, to develop more robust and generalizable models. In the present study, we developed COLI‐Net, a DL‐based method for automated detection and segmentation of lungs and COVID‐19 pneumonia infectious lesions from chest CT images. Large lung and COVID‐19 lesion datasets and TL were used to train a residual network (ResNet) for lung and pneumonia infectious lesion segmentation.

MATERIALS AND METHODS

Clinical studies

For lung and COVID‐19 lesion segmentation, we prepared 2368 (347,259 2D slices) and 190 (17,341 2D slices) multicentric and multivendor volumetric CT exams with corresponding lung and COVID‐19 lesion segmentations, respectively.

Lung datasets

For lung segmentation training, we used 2298 chest CT exams (328,205 2D slices) with different pathologies from different centers, including 800 exams of normal subjects without any lung abnormalities from Iran Center#1 (81,347 2D slices); 400 exams of non‐small cell lung carcinoma patients from The Cancer Imaging Archive (TCIA) (48,568 2D slices); 200 non‐COVID‐19 pneumonia exams (49,465 2D slices); and 898 exams (148,825 2D slices) of RT‐PCR‐positive COVID‐19 patients from Iran Center#2. All lung segmentations were performed using a region‐growing algorithm followed by manual verification and amendment by an experienced radiologist.

COVID‐19 lesions datasets

For COVID‐19 lesion segmentation training, we used 120 RT‐PCR‐positive image datasets (9557 2D slices), including 90 datasets (8338 2D slices) from three different centers in Iran (Centers#1, #2, and #3), in which the infectious lesions were manually segmented by experienced radiologists, and 30 CT exams (1250 2D slices) from Russia. Lesions were segmented manually for the local datasets, whereas segmentations for the external dataset were provided by the data providers.

Image preprocessing

Prior to network training, all images were cropped without losing important information (parts of the lungs) and resized to a 296 × 216 matrix. In the first step of intensity normalization, voxel intensities in the entire dataset were clipped to between −1024 and 300 HU to reduce the dynamic range of the CT images. This range of HU covers air, lung tissue, fat, soft tissue, and calcifications in the lung; only bony structures, which are irrelevant to lung and lesion segmentation, are suppressed. Hence, we found this range of HU optimal for ML‐based lung and lesion segmentation. To further reduce the dynamic range of voxel intensities, CT images were then divided by an empiric factor of 1000, preserving the relative intensity distribution while placing the bulk of CT intensities roughly within the range of 0–1.
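The intensity steps above can be sketched as follows (a minimal NumPy sketch: the cropping step is assumed to have been applied already, and the nearest-neighbor resizing used here is a simplified stand-in, since the resizing method is not specified in the text):

```python
import numpy as np

def preprocess_slice(hu_slice, target_shape=(296, 216)):
    """Clip and normalize a 2D CT slice of Hounsfield units (HU),
    assumed to be already cropped around the lungs."""
    # Clip to [-1024, 300] HU: keeps air, lung tissue, fat, soft tissue,
    # and lung calcifications while suppressing dense bone.
    clipped = np.clip(hu_slice, -1024.0, 300.0)

    # Resize to the 296 x 216 matrix used for training
    # (nearest-neighbor for simplicity).
    rows = np.linspace(0, clipped.shape[0] - 1, target_shape[0]).round().astype(int)
    cols = np.linspace(0, clipped.shape[1] - 1, target_shape[1]).round().astype(int)
    resized = clipped[np.ix_(rows, cols)]

    # Divide by the empiric factor of 1000, mapping intensities into
    # roughly [-1.024, 0.3] while preserving their relative distribution.
    return resized / 1000.0
```

After this step, lung parenchyma (around −700 HU) sits near −0.7 and soft tissue near 0, a convenient scale for network training.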

Residual neural network

The ResNet proposed by Li et al., built on TensorFlow, was used for lung and COVID‐19 lesion segmentation. The network comprises 20 convolutional layers in which different dilation factors serve different levels of feature extraction (no dilation for low‐level, a dilation factor of 2 for medium‐level, and a dilation factor of 4 for high‐level features). Every two layers are linked together by residual connections (Figure 1). The non‐square Dice was used as the loss function, and Figure 1 provides a detailed description of the ResNet.
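The dilated residual design can be sketched in Keras as follows. This is a sketch only: the filter count, the number of blocks per stage, and the exact layer ordering are our assumptions, chosen so that the network totals 20 convolutional layers, with dilation rates of 1, 2, and 4 standing in for the low-, medium-, and high-level stages:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters, dilation):
    """Two dilated 3x3 convolutions joined by a residual connection."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(x)
    y = layers.LeakyReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(y)
    y = layers.LeakyReLU()(y)
    return layers.Add()([shortcut, y])

def build_segmentation_net(input_shape=(296, 216, 1), n_classes=2, filters=16):
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(filters, 3, padding="same")(inputs)
    x = layers.LeakyReLU()(x)
    # Three stages of three residual blocks each, with growing dilation
    # for low-, medium-, and high-level feature extraction.
    for dilation in (1, 2, 4):
        for _ in range(3):
            x = residual_block(x, filters, dilation)
    # A final 1x1 convolution with softmax produces the per-pixel class map.
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```

Because all convolutions use "same" padding and dilation (no pooling), the spatial resolution of the input is preserved end to end, which suits dense per-pixel segmentation.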
FIGURE 1

Architecture of the deep residual neural network (ResNet) along with details of the associated layers. Conv, convolutional kernel; LReLu, leaky rectified linear unit; SoftMax, Softmax function; Residual, residual connection


Training and evaluation

Lung and COVID‐19 lesion training was performed on 2D slices owing to the wide variability in slice thickness across the datasets from the different centers. We used the following hyperparameters for model training: loss function = non‐square Dice, learning rate = 0.001, optimizer = Adam, decay = 0.0001, batch size = 32, weight regularization = L2 norm, dropout = 0.5, and number of epochs = 300. For lung segmentation training, we used 2178 3D CT exams (347,259 2D slices). For COVID‐19 lesion segmentation, we used the pretrained lung segmentation network as initialization, followed by fine‐tuning on 120 3D CT exams (9557 2D slices). A body fine‐tuning approach (fine‐tuning all layers) was used for TL, in which all pretrained lung segmentation weights served as initial weights for lesion segmentation. The quantitative assessment of segmentations was performed independently on RT‐PCR‐positive COVID‐19 datasets from different centers, including 20 CT exams (2214 2D slices) from Center#1 (Iran 1); 10 exams (2552 2D slices) from Center#2 (Iran 2); 20 exams (1250 2D slices) from Center#3 (Russia); 10 exams (939 2D slices) from Center#4 (China); and 10 exams (829 2D slices) from Center#5 (Italy). Training datasets were split into training (80%) and validation (20%) sets. Overall, the evaluation was performed on 7333 2D slices from different centers. Splitting into training and test sets was performed at the patient (3D exam) level, with no overlap between sets. All evaluations were performed in 3D.
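The non-square Dice loss named in the hyperparameters can be written as below. This NumPy sketch assumes the common "non-square" formulation, in which the denominator sums the probabilities directly rather than their squares; the paper does not reproduce the formula, so this interpretation is an assumption:

```python
import numpy as np

def non_square_dice_loss(pred, target, eps=1e-7):
    """Dice loss with a plain-sum ('non-square') denominator.

    `pred` holds per-voxel foreground probabilities in [0, 1] and
    `target` the binary ground-truth mask; both are flattened.
    """
    pred = np.ravel(pred).astype(float)
    target = np.ravel(target).astype(float)
    intersection = np.sum(pred * target)
    # A squared-denominator Dice would use sum(pred**2) + sum(target**2);
    # the non-square variant sums the values directly.
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction drives the loss to 0, while fully disjoint masks drive it toward 1; minimizing it therefore maximizes the Dice overlap reported in the Results.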

Evaluation

To evaluate the performance of image segmentation, we calculated the Dice similarity coefficient (Equation (1)), Jaccard index (Equation (2)), false negative rate (Equation (3)), false positive rate (Equation (4)), mean surface distance (Equation (5)), and mean Hausdorff distance (Equation (6)). In addition, different volume indices were used to quantify the portion of infection, including the relative volume difference (%), the relative volume difference of the lesion/lung relative volume (%) (Equation (7)), the absolute relative volume difference (%) (Equation (8)), and the absolute relative volume difference of the lesion/lung relative volume (%). The relative and absolute relative differences (%) of the mean Hounsfield unit were calculated for lungs and COVID‐19 lesions from the different segmentations of the CT images. In addition, we evaluated the impact of the segmentation on 17 first‐order and 10 shape radiomic features in both lungs and COVID‐19 lesions; the list of radiomic features is presented in Supplemental Table 1. In the surface‐distance metrics, d(p, G) denotes the distance between a point p on the surface of the predicted 3D volume (P) and the closest point on the surface of the ground truth (G).
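For binary masks, the overlap and volume metrics above reduce to simple voxel counts, as in the sketch below. The surface-distance metrics are omitted here, and the normalization of the false negative and false positive rates by the ground-truth volume is our assumption, since Equations (1)–(8) are not reproduced in this text:

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, Jaccard, FN/FP rates, and relative volume difference (%)
    for binary segmentation masks `pred` and `gt`."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()
    return {
        "dice": 2.0 * tp / (pred.sum() + gt.sum()),
        "jaccard": tp / np.logical_or(pred, gt).sum(),
        # Fractions of ground-truth voxels missed / spuriously added.
        "false_negative": np.logical_and(~pred, gt).sum() / gt.sum(),
        "false_positive": np.logical_and(pred, ~gt).sum() / gt.sum(),
        # Signed relative volume difference in percent.
        "relative_volume_diff_pct": 100.0 * (pred.sum() - gt.sum()) / gt.sum(),
    }
```

A perfect segmentation yields Dice = Jaccard = 1 with zero error rates; oversegmentation shows up as a positive relative volume difference.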

RESULTS

Figures 2 and 3 visually compare, in 2D and 3D views, lungs and lesions delineated manually by experienced radiologists and automatically by the DL model for different external validation sets. Additional results from the external validation sets are provided in Supplemental Figures 1–13 (2D views) and 14–17 (3D views). Overall, there is good agreement between manual and predicted lung and infectious lesion segmentations in the different datasets. Despite the variability of the subjects among the different centers, COLI‐Net performed consistently well in this multicentric and multiscanner setting. Notably, COLI‐Net detects and segments infectious regions (within lesion segmentation) while excluding arteries and the trachea from lung segmentation.
FIGURE 2

Representative manual and predicted segmentation (2D views) of lungs and COVID‐19 lesions for five different cases from different datasets

FIGURE 3

Representative manual and predicted segmentation (3D views) of lungs and COVID‐19 lesions for three different cases from different datasets

Table 1 summarizes the segmentation quantification metrics for lungs and COVID‐19 lesions. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98–0.99) and 0.91 ± 0.038 (95% CI, 0.90–0.91) for lung and lesion segmentation, respectively. The mean Jaccard indices were 0.97 ± 0.022 (95% CI, 0.97–0.97) and 0.83 ± 0.062 (95% CI, 0.82–0.84) for lung and COVID‐19 lesion segmentation, respectively. Lung segmentation in the Russian dataset exhibited better results than the other centers/datasets, which might be attributed to the homogeneity and mild severity of the lesions in this dataset. Supplemental Tables 2–7 summarize lung and lesion segmentation quantification metrics for the different external validation sets.
TABLE 1

Descriptive statistics of quantitative metrics for lung and COVID‐19 lesions in the different datasets

Region    Metric                        Min      Max     Mean ± SD        95% CI
Lung      Dice                          0.92     0.99    0.98 ± 0.011     0.98–0.99
          Jaccard                       0.86     0.99    0.97 ± 0.022     0.97–0.97
          False negative                0.003    0.086   0.013 ± 0.011    0.011–0.015
          False positive                0.002    0.073   0.017 ± 0.014    0.014–0.019
          Average Hausdorff distance    0.005    0.14    0.022 ± 0.026    0.018–0.027
          Mean surface distance         0.005    0.17    0.026 ± 0.028    0.021–0.031
Lesions   Dice                          0.8      0.98    0.91 ± 0.038     0.9–0.91
          Jaccard                       0.66     0.96    0.83 ± 0.062     0.82–0.84
          False negative                0.015    0.23    0.086 ± 0.044    0.078–0.094
          False positive                0.024    0.32    0.098 ± 0.055    0.088–0.11
          Average Hausdorff distance    0.043    5.6     0.42 ± 0.73      0.29–0.55
          Mean surface distance         0.046    6.1     0.45 ± 0.79      0.31–0.59
Table 2 summarizes the impact of the lung and lesion segmentations on mean Hounsfield unit and volume calculations. Mean relative HU differences (%) of 0.03 ± 0.84 (95% CI, −0.12 to 0.18) and −0.18 ± 3.4 (95% CI, −0.8 to 0.44) were achieved for lungs and lesions, respectively. The relative volume difference was 0.38 ± 1.2 (95% CI, 0.16–0.59) for the lung and 0.81 ± 6.6 (95% CI, −0.39 to 2) for lesions. Results of the mean Hounsfield unit and volume calculations for lungs and infectious lesions for the different external validation sets are presented in Supplemental Tables 8–11.
TABLE 2

Descriptive statistics of volume index for lung and COVID‐19 lesions in the different datasets

Region    Metric                                 Min      Max    Mean ± SD       95% CI
Lung      Relative mean HU diff (%)              −4.2     3.9    0.03 ± 0.84     −0.12 to 0.18
          Absolute relative mean HU diff (%)     0.006    4.2    0.52 ± 0.66     0.4–0.64
          Relative volume diff (%)               −3.1     6.4    0.38 ± 1.2      0.16–0.59
          Absolute relative volume diff (%)      0.004    6.4    0.89 ± 0.88     0.73–1
Lesions   Relative mean HU diff (%)              −9.8     10     −0.18 ± 3.4     −0.8 to 0.44
          Absolute relative mean HU diff (%)     0.026    10     2.4 ± 2.5       1.9–2.8
          Relative volume diff (%)               −14      21     0.81 ± 6.6      −0.39 to 2
          Absolute relative volume diff (%)      0.018    21     4.8 ± 4.6       4–5.6
Figures 4 and 5 depict box plots of the Dice similarity index, Jaccard index, mean Hounsfield unit difference, and volume difference for lung and lesion segmentation, respectively. Supplemental Figures 18 and 19 show box plots of the Hounsfield unit absolute relative difference (%), absolute relative volume difference (%), false negative, false positive, average Hausdorff distance, and mean surface distance for lungs and lesions.
FIGURE 4

Box plots comparing various quantitative imaging metrics for lung segmentation, including Dice coefficient, Jaccard index, Hounsfield units (mean) relative difference (%), and relative volume difference (%)

FIGURE 5

Box plots comparing various quantitative imaging metrics for COVID‐19 lesions segmentation, including Dice coefficient, Jaccard index, Hounsfield units (mean) relative difference (%), and relative volume difference (%)

Descriptive statistics of the relative volume (lesion/lung) indices are presented in Table 3. A relative error of 0.22 ± 6.3 (95% CI, −0.95 to 1.4) and an absolute relative error of 4.7 ± 4.2 (95% CI, 3.9–5.5) were achieved for the relative volume (lesion/lung). Supplemental Tables 12 and 13 summarize the relative volume (lesion/lung) results for the different external validation sets. Figure 6 depicts box plots of manual and predicted relative volume (lesion/lung) differences (%) and of the absolute/relative errors of the lesion/lung relative volume (%) for the different external validation sets.
TABLE 3

Descriptive statistics of relative volume index

Metric                                                  Min      Max    Mean ± SD      95% CI
Manual segmentation relative volume (lesion/lung)       0.001    0.82   0.13 ± 0.19    0.095–0.16
Predicted segmentation relative volume (lesion/lung)    0.001    0.84   0.13 ± 0.19    0.094–0.16
RE volume diff lesion/lung (%)                          −14      16     0.22 ± 6.3     −0.95 to 1.4
ARE volume diff lesion/lung (%)                         0.004    16     4.7 ± 4.2      3.9–5.5

Abbreviations: ARE, absolute relative error; RE, relative error.
FIGURE 6

Box plots comparing various quantitative imaging metrics for relative volume, including manual segmentation relative volume lesion/lung, predicted segmentation relative volume lesion/lung, relative error of lesion/lung relative volume (%), and absolute relative error of lesion/lung relative volume (%)

Figure 7 presents a heatmap of the mean relative error of the first‐order and shape radiomic features in the lungs and lesions for the different validation sets. Most radiomic features exhibited a mean relative error of less than 5%, with the largest mean relative errors observed for the range first‐order feature in the lung (−6.95%) and the least axis length shape feature in lesions (8.68%). The heatmap of the mean absolute relative error is depicted in Supplemental Figure 20.
FIGURE 7

Mean relative error of different first‐order and shape radiomic features for different datasets in lung and infection regions


DISCUSSION

Chest CT imaging has emerged as a complementary tool for early diagnosis and longitudinal follow‐up of COVID‐19. However, a number of challenges still need to be addressed for the accurate diagnosis of COVID‐19 and its differentiation from other lung diseases, such as viral and bacterial pneumonia and other respiratory diseases. In this regard, several AI‐based solutions exhibiting different levels of accuracy and robustness have been proposed and evaluated. Another challenging problem in the quantitative analysis of CT images in clinical practice is lung and pneumonia infectious lesion segmentation. Infectious lesions show complex manifestations (appearance, size, location, boundaries, and contrast), including consolidation, reticulation, and ground‐glass opacity, at different stages of the disease (with longitudinal changes in the same patients). Furthermore, providing ground truth for infectious lesion segmentation is challenging owing to interobserver/intraobserver variability, noisy annotations, and long processing times. Previously developed atlas‐based, rule‐based, and hybrid (atlas and rule) algorithms for lung segmentation have shown acceptable performance on normal lungs and in the presence of mild (low‐density) pathologies, such as emphysema, but limited performance in severe (high‐density) conditions, including pleural effusion, atelectasis, consolidation, fibrosis, and pneumonia. Recent developments in ML have led to renewed interest in automatic lung segmentation. However, most seminal works in this area used limited training datasets, predominantly containing normal cases or focusing on a single class of pathology, which could impact generalizability to unseen/undiagnosed test datasets.
In the present study, we applied DL algorithms and TL to CT images obtained from different imaging centers to detect and segment the whole lung and pneumonia‐infected regions in COVID‐19 patients. A number of previous works attempted to develop automated segmentation algorithms for lungs and infectious lesions in COVID‐19 CT images. Hofmanninger et al. developed models for lung segmentation and reported a Dice coefficient of 0.98 ± 0.01 across different pathological states (atelectasis, fibrosis, mass, pneumothorax, and trauma), concluding that diversity in the training dataset matters more than the choice of DL algorithm. Müller et al. implemented a 3D U‐Net with data augmentation, generating image patches during training for lung and lesion segmentation on 20 annotated CT volumes, and achieved Dice coefficients of 0.950 and 0.761 for lungs and lesions, respectively. A modified 3D U‐Net (with feature variation and progressive atrous spatial pyramid pooling blocks) proposed by Yan et al. for lung and infectious lesion segmentation on 861 patients reported Dice similarity indices of 0.987 for lungs and 0.726 for lesions. Comparisons were also performed with a dense fully convolutional network (lung: 0.865, lesions: 0.659), U‐Net (lung: 0.987, lesions: 0.688), V‐Net (lung: 0.983, lesions: 0.625), and U‐Net++ (lung: 0.986, lesions: 0.681). The mean Dice coefficients for lung and lesion segmentation across the different external validation sets used in our work were 0.98 ± 0.011 and 0.91 ± 0.038, respectively. Chen et al. used a residual attention U‐Net for multi‐class segmentation of CT images, achieving a Dice coefficient of 0.94 for infectious lesion segmentation. Zhou et al. modified a U‐Net with spatial and channel attention mechanisms and a focal Tversky loss during training to improve small‐lesion segmentation; evaluated on 427 slices, it achieved a Dice coefficient of 0.83.
Elharrouss et al. adopted an encoder‐decoder for infectious lesion segmentation using 20 clinical studies from the Italian Society of Medical and Interventional Radiology, reporting a Dice coefficient of 0.786; they compared the results with U‐Net (Dice: 0.439), Attention‐UNet (Dice: 0.583), Gated‐UNet (Dice: 0.623), Dense‐UNet (Dice: 0.515), U‐Net++ (Dice: 0.422), and Inf‐Net (Dice: 0.739). Wang et al. proposed COPLE‐Net, a robust algorithm for COVID‐19 infectious lesion segmentation from CT images designed to learn from noisily labeled data. The algorithm relies on a noise‐robust Dice loss, a generalization of the Dice and mean absolute error losses, for robust segmentation of noisy datasets, and on a modified U‐Net to better handle infectious lesion segmentation with various manifestations and scales. The best results achieved by COPLE‐Net were a Dice coefficient of 0.807 ± 0.099 and a relative volume error (RVE) of 0.160 ± 0.171%. Wang et al. evaluated different DL algorithms, including a modified 3D U‐Net (3D New‐Net U‐Net, Dice: 0.704 ± 0.187, RVE: 25.41 ± 24.73%), a modified 2D U‐Net (2D New‐Net U‐Net, Dice: 0.791 ± 0.129, RVE: 18.37 ± 17.43%), a spatial attention gate U‐Net (Attention U‐Net, Dice: 0.772 ± 0.123, RVE: 19.77 ± 18.41%), spatial and channel "squeeze and excitation" blocks with U‐Net (ScSE U‐Net, Dice: 0.780 ± 0.125, RVE: 18.85 ± 16.69%), and a lightweight, power‐efficient, general‐purpose CNN (ESPNetv2, Dice: 0.698 ± 0.148, RVE: 23.69 ± 20.26%). Our proposed COLI‐Net showed good performance compared with previous studies, with a Dice coefficient of 0.91 ± 0.038 (95% CI: 0.90–0.91) and an RVE of 0.38 ± 1.2% (95% CI: 0.16–0.59) for pneumonia infectious lesions. A large labeled dataset is required to build a robust and generalizable model while avoiding overfitting. Previous studies attempted to transfer knowledge from the natural to the medical imaging domain, improving accuracy by addressing the issue of limited datasets.
TL was recently applied for the detection and classification of COVID‐19 using chest x‐ray and CT images. More recently, Wang et al. applied four TL methods to COVID‐19 CT images for the segmentation of infectious lesions using a 3D U‐Net, transferring knowledge from cancer and pleural effusion data to COVID‐19 lesion segmentation. The Dice coefficient increased from 0.673 ± 0.22 to 0.703 ± 0.20 after TL, and they concluded that transferring non‐COVID‐19 data improved COVID‐19 lesion segmentation toward building a robust segmentation model. In our study, we exploited TL from a large, multicentric, labeled lung dataset with various pathologies to overcome the shortcomings of infectious lesion segmentation. Li et al. used thick‐section chest CT images of 531 COVID‐19 patients for automatic segmentation of lesions using a 2.5D U‐Net, achieving Dice coefficients of 0.74 ± 0.28 and 0.76 ± 0.29 with respect to manual delineations performed by two radiologists; the interobserver variability between the two radiologists, measured by the Dice metric, was 0.79 ± 0.25. They calculated two imaging biomarkers, the percentage of infection and the average infectious HU, for severity and progression assessment, resulting in an AUC of 0.97. Thick‐section CT imaging has been recommended with high‐pitch scans to decrease acquisition time and motion artifacts (due to breath holding) and to reduce radiation doses to patients. Our dataset included various slice thicknesses (1–8 mm) to train a network robust to this parameter, which strongly affects image appearance. The relative errors of the volume difference for the percentage of infection (lesion/lung) and of the relative mean HU difference (%) were 0.22 ± 6.3% (95% CI: −0.95 to 1.4%) and −0.18 ± 3.4% (95% CI: −0.8 to 0.44%), respectively, demonstrating the high accuracy of COLI‐Net for biomarker generation.
Potential applications are not limited to detection and segmentation: the framework could also provide diagnostic and prognostic parameters computed from the lung and infection segmentations, estimate the percentage of infection, and enable advanced image processing in COVID‐19 patients. The existing body of research on pneumonia suggests that the pneumonia severity index (PSI) can potentially be used as a severity marker, and a recent study classified COVID‐19 patients into severe and nonsevere groups based on a PSI calculated from CT images. Different DL algorithms and radiomics analysis approaches using CT images have recently been examined for developing diagnostic (discriminating COVID‐19 from bacterial/viral pneumonia) and prognostic (survival, hospital stay, intensive care unit [ICU] admission, risk of outcome) models, all of which require lung and lesion segmentation. Moreover, calculation of the percentage of infected and well‐aerated regions in the lung is frequently performed through visual assessment or by simply calculating HU values in the lungs, which is not only time‐consuming but also lacks accuracy. Established models can exhibit noticeable performance variation across COVID‐19 patients collected from different countries and centers, with different patient backgrounds and stages of the disease. Since the quality of CT images depends directly on the scanner model, imaging protocol (tube voltage, tube current, pitch factor, etc.), and reconstruction algorithm, we employed various datasets from different centers to cover a large variability. Although the proposed algorithm was evaluated using a multicenter, multiscanner, multinational dataset of patients with diverse backgrounds and stages of the disease, full‐scale adoption of this model requires further clinical investigation and fine‐tuning to the specific image acquisition parameters of a given center.
This framework provides multiple imaging biomarkers for COVID‐19 patients to facilitate the assessment of their clinical relevance in diagnostic (discriminating COVID‐19 from bacterial/viral pneumonia) and prognostic (survival, hospital stay, ICU admission, risk of outcome) applications. Further development should involve lung lobe segmentation to calculate all potential imaging biomarkers at the lobe level. In this study, we used only the ResNet architecture for model evaluation; further evaluations should compare different models, including UNet, VNet, and GAN architectures.

CONCLUSION

We set out to develop an automated algorithm capable of segmenting the 3D whole lung and infected regions in COVID‐19 patients from chest CT images using DL techniques, providing a fast, consistent, robust, and human‐error‐immune framework for lung and pneumonia lesion detection and delineation. Owing to the complex nature of the problem and the high variability in lesion manifestation, TL from whole lungs to pneumonia infection lesions was proposed and implemented to enrich the identification of specific COVID‐19 pneumonia features from clinical studies. Moreover, a multicentric and multiscanner dataset was collected for the development of the DL model to establish an automated and generalizable platform for efficient management of COVID‐19 patients. The developed AI model was evaluated on a wide range of COVID‐19 patients from diverse populations with different stages of the disease from multiple centers around the world, enabling big data analysis of COVID‐19 for automated progression/regression assessment of pneumonia lesions in follow‐up studies, providing diagnostic and prognostic metrics, and enabling further advanced image processing.

CONFLICT OF INTEREST

The authors declare no conflicts of interest.

AUTHOR CONTRIBUTIONS

Isaac Shiri and Hossein Arabi are the co‐first authors of this paper. Mostafa Ghelich Oghli and Habib Zaidi contributed to the study conception and design. Isaac Shiri and Hossein Arabi designed, implemented, and evaluated the image segmentation and ML framework. Yazdan Salimi, Amirhossein Sanaat, Azadeh Akhavanallaf, Ghasem Hajianfar, Dariush Askari, Shakiba Moradi, Zahra Mansouri, Masoumeh Pakbin, Saleh Sandoughdaran, Hamid Abdollahi, Amir Reza Radmard, and Kiara Rezaei‐Kalantari collated the datasets. Habib Zaidi contributed to the study conception and design, the initial draft of the manuscript, and supervision and funding of this work. All authors contributed to the data preparation and revision of the manuscript for important content.
