
Predicting the invasiveness of lung adenocarcinomas appearing as ground-glass nodule on CT scan using multi-task learning and deep radiomics.

Xiang Wang1, Qingchu Li1, Jiali Cai1, Wei Wang1, Peng Xu2, Yiqian Zhang2, Qu Fang2, Chicheng Fu2, Li Fan1, Yi Xiao1, Shiyuan Liu1.   

Abstract

BACKGROUND: Because the different subtypes of lung adenocarcinoma appearing as ground-glass nodules (GGNs) on computed tomography (CT) differ in treatment and prognosis, it is important to distinguish invasive adenocarcinomas from non-invasive adenocarcinomas. The purpose of this paper is to build deep learning networks and evaluate their performance in differentiating the invasiveness of lung adenocarcinomas appearing as GGNs.
METHODS: This retrospective study included 886 GGNs from 794 pathologically confirmed patients with lung adenocarcinoma for training and testing the proposed networks. Three deep learning networks were built: XimaNet (a deep learning-based classification model), XimaSharp (a joint classification and nodule segmentation model), and Deep-RadNet (a combined deep learning and radiomics classification model, i.e., deep radiomics). Three classification tasks were conducted to evaluate model performance: task 1, classification of AAH/AIS versus MIA; task 2, classification of MIA versus IAC; and task 3, classification of non-invasive versus invasive adenocarcinomas (AAH/AIS&MIA vs. IAC). The Z-test was used to compare model performance.
RESULTS: The AUCs for classification of AAH/AIS versus MIA were 0.891, 0.841 and 0.779 for Deep-RadNet, XimaNet and XimaSharp, respectively. The AUCs for classification of MIA versus IAC were 0.889, 0.785 and 0.778 for the three networks, and the AUCs for classification of AAH/AIS&MIA versus IAC were 0.941, 0.892 and 0.827, respectively. Deep-RadNet performed better than the other two models by the Z-test (P<0.05).
CONCLUSIONS: Deep-RadNet with the visual heat map could evaluate the invasiveness of GGNs accurately and intuitively, providing a theoretical basis for individualized and accurate medical treatment of patients with GGNs.

Keywords:  Deep learning; computed tomography (CT); ground glass opacity; pulmonary adenocarcinomas; radiomics; tumor invasiveness

Year:  2020        PMID: 32953512      PMCID: PMC7481614          DOI: 10.21037/tlcr-20-370

Source DB:  PubMed          Journal:  Transl Lung Cancer Res        ISSN: 2218-6751


Introduction

Lung cancer is the most common cancer and the leading cause of cancer-related death (1). With the popularization of lung cancer screening with CT, the detection rate of pulmonary nodules is rising (2). Most early-stage lung cancers appear as ground-glass nodules (GGNs) on thin-section CT. GGNs are classified into two categories according to the presence or absence of a solid component: pure ground-glass nodules (pGGNs) and mixed ground-glass nodules (mGGNs) (3). According to the 2011 multidisciplinary classification of lung adenocarcinoma by the International Association for the Study of Lung Cancer, the American Thoracic Society and the European Respiratory Society (IASLC/ATS/ERS) (4), lung adenocarcinoma can be pathologically classified into four subtypes: atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA) and invasive adenocarcinoma (IAC). AAH and AIS are regarded as pre-invasive lesions. The subtypes of adenocarcinoma differ in clinical management strategies, survival rates, surgical approaches and postoperative therapeutic protocols. Pre-invasive lesions and MIAs often show good biological behavior, remaining unchanged or growing slowly over long periods, and can be followed up clinically to select the best surgical timepoint, reducing overtreatment (5,6). In contrast, IAC requires timely surgical treatment. The disease-free survival rate after surgery for patients with AAH/AIS/MIA can approach 100%, significantly higher than that of IAC (38–86%, P<0.001), which depends mainly on the IAC subtype (7,8). Because of the high survival rate of MIA, pre-invasive lesions and MIA together are defined as 'non-invasive' adenocarcinoma. For 'non-invasive' adenocarcinoma, sub-lobar resection is usually sufficient to achieve a radical effect while retaining more lung function, reducing postoperative complications and shortening recovery time (9).
For patients with IAC, on the other hand, lobectomy and mediastinal lymph node dissection are performed; moreover, postoperative adjuvant treatment may improve the survival rate, which is of great significance for individualized treatment (10,11). Therefore, evaluating the invasiveness of lung nodules is important for selecting the appropriate clinical strategy. Although intraoperative frozen section analysis plays a great role in evaluating the invasiveness of lung nodules, it is limited for a variety of reasons, such as uneven diagnostic levels, the clinical experience of pathologists, frozen materials and technical conditions during surgery, inaccurate localization of the lesions, inaccurate material selection, very small lesions, and complications (12,13). It is therefore necessary to judge the invasiveness of GGNs before surgery, and CT, as a non-invasive method, plays a great role in the preoperative evaluation of GGN invasiveness. Unfortunately, distinguishing different degrees of histological invasiveness with traditional CT morphological findings has been a great challenge. Currently, morphological findings such as the proportion of solid component volume, non-smooth margin, lobulation and nodule size are most commonly used to differentiate invasive from non-invasive adenocarcinoma in clinical work. However, considerable overlap in CT morphological features among the histological subtypes has been reported (14,15), so it is difficult to differentiate histological invasiveness with morphological features alone. With the arrival of the era of big data, many studies have focused on radiomics and the pathological subtypes of lung adenocarcinoma. Although the accuracy of radiomics is higher than that of traditional morphological signs (16), conventional radiomic methods are tedious and time-consuming. In recent years, artificial intelligence (AI) has become an emerging field of computer science research.
Deep neural networks (DNNs) have achieved broad application in many domains, especially in the analysis of medical images, including images of skin lesions, pneumonia, and clinical pathological images (17,18). Deep learning performs better in many clinical tasks than traditional qualitative analysis tools. As yet, few studies have assessed the performance of a model combining a deep convolutional neural network and a hand-crafted radiomics signature to differentiate the histological invasiveness of lung adenocarcinomas manifesting as GGNs. This study aims to build three deep learning models and compare their performance on the invasiveness classification of GGNs based on chest CT. In addition, the performance of our models was compared with quantitative analysis of maximum nodule diameters. The authors have completed the STROBE reporting checklist (available at http://dx.doi.org/10.21037/tlcr-20-370).

Methods

Study population

From January 2012 to March 2018, 794 patients with lung adenocarcinoma showing GGNs were enrolled in this retrospective study. The inclusion criteria were: (I) no previous therapy before CT examination; (II) lung adenocarcinoma confirmed by surgical resection and histopathological diagnosis; (III) tumor less than 3 cm in diameter on thin-slice (0.625–1 mm) CT images. The exclusion criteria were as follows: (I) marked artifacts on CT images; (II) history of preoperative treatment; (III) incomplete clinical information or DICOM images; (IV) history of other malignant tumors; (V) lung cancer associated with cystic airspaces. The patients were divided into three categories of pre-invasive lesions, MIA and IAC according to the pathological examination results; a binary division into 'non-invasive' adenocarcinoma and IAC was also made. Three different classification tasks were considered: (I) classification of AAH/AIS versus MIA; (II) classification of MIA versus IAC; (III) classification of AAH/AIS&MIA versus IAC. All procedures performed in this study were in accordance with the Declaration of Helsinki (as revised in 2013) and approved by the Ethics Committee of Changzheng Hospital, Second Military Medical University (No. 2018SL049). Because of the retrospective nature of the research, the requirement for informed consent was waived. All patients underwent non-enhanced CT scanning with one of the five scanners in our hospital, as described in our previous study (19). All patients took the supine position, and the whole lung was scanned at the end of inspiration. Multi-planar reconstruction (MPR) was used for image reconstruction with thin-slice (≤1 mm) images. Demographic data including age and gender were derived from medical records. All patients were diagnosed in the same manner by histopathological diagnosis after surgical resection.
The pathological subtype of each GGN was categorized according to the 2011 IASLC/ATS/ERS classification of lung adenocarcinoma (4).

Dataset preprocess

GGNs were divided with a ratio of 0.7:0.15:0.15 into the training, validation and test datasets (Table 1). However, the number of IAC patients exceeded the combined number of AAHs and AISs. This imbalanced data distribution may have a direct effect on the performance of the deep neural networks. To alleviate this effect, AAH and AIS were regarded as one category.
Table 1

Number of nodules for training, validation and testing

Group    Training    Validation    Testing    Total
AAH      95          20            20         135
AIS      128         28            28         184
MIA      145         31            31         207
IAC      252         54            54         360
Total    620         133           133        886

AAH, atypical adenomatous hyperplasia; AIS, adenocarcinoma in situ; MIA, minimally invasive adenocarcinoma; IAC, invasive adenocarcinoma.

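The per-class counts in Table 1 are consistent with splitting each subtype separately at the same 0.7:0.15:0.15 ratio, so that each split keeps the class mix. A minimal sketch of such a stratified split (the helper name and seed are illustrative, not from the paper):

```python
import random

def stratified_split(nodules_by_class, ratios=(0.7, 0.15, 0.15), seed=42):
    """Split each class separately so train/val/test keep the class mix."""
    rng = random.Random(seed)
    train, val, test = [], [], []
    for label, items in nodules_by_class.items():
        items = list(items)
        rng.shuffle(items)
        n = len(items)
        n_train = round(n * ratios[0])
        n_val = round(n * ratios[1])
        train += [(label, x) for x in items[:n_train]]
        val += [(label, x) for x in items[n_train:n_train + n_val]]
        test += [(label, x) for x in items[n_train + n_val:]]
    return train, val, test

# Class sizes from Table 1 (AAH and AIS merged, as in the paper)
data = {"AAH/AIS": range(319), "MIA": range(207), "IAC": range(360)}
train, val, test = stratified_split(data)
print(len(train), len(val), len(test))  # → 620 133 133, matching Table 1
```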

Nodule labeling and segmentation

The volume of interest (VOI) of each nodule was delineated manually and independently by an experienced thoracic radiologist with lung window settings (window width 1,500 HU, window level −450 HU) using self-developed software from Shanghai Aitrox Information Technology Co., Ltd. The largest transverse cross-sectional diameter of each GGN was measured in the lung window as the maximum nodule diameter. Large vessels and bronchi were excluded manually from each VOI. VOIs were marked with specific labels (AAH, AIS, MIA, and IAC) according to the pathological reports.

Building three deep-learning models

3D Image Patch generation and radiomic feature extraction

The two end-to-end networks, XimaNet and XimaSharp, directly take 3D image patches as input. To generate the 3D image patches, we first resampled the image spacing to 1 mm per pixel in all three dimensions and normalized the CT value of each pixel to [−1, 1]. To keep redundant image information around the tumor lesion, which could be meaningful for lung nodule invasiveness classification, we did not directly crop the lesion according to its corresponding mask. Instead, we calculated the maximal diameter of the lesion in all three dimensions, cropped a 3D image patch around the lesion with twice the maximal diameter in all three dimensions, and resized it to 64×64×64 pixels. In particular, if the maximal diameter was smaller than 32 pixels, we directly cropped a 64×64×64 3D image patch around the lesion without resizing. In contrast, the non-end-to-end network, Deep-RadNet, takes selected radiomic features instead of image patches as input. We used the radiomic extraction tool PyRadiomics (20) (https://pyradiomics.readthedocs.io/en/latest/) to extract radiomic features from the region of interest (ROI) on the images, which was labelled by the corresponding masks. From the 1,743 extracted radiomic features, we selected the ones most relevant to lung nodule invasiveness by: (I) excluding 5% abnormal samples with the isolation forest (IF) algorithm (21); (II) deleting features with low variance; (III) calculating z-scores for each feature among all samples (mean normalization); (IV) reducing the number of features by automatic relevance determination (ARD) (22) and the least absolute shrinkage and selection operator (LASSO) (23) with 10-fold cross validation. Finally, 27 radiomic features were selected.
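Selection steps (I)–(IV) can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the authors' code: the ARD step is omitted for brevity, a continuous target stands in for the class labels, and all thresholds and parameters here are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.feature_selection import VarianceThreshold
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def select_features(X, y, contamination=0.05, var_threshold=1e-3):
    """Steps (I)-(IV): outlier removal, variance filter, z-scoring, LASSO."""
    # (I) drop ~5% abnormal samples with an isolation forest
    keep = IsolationForest(contamination=contamination,
                           random_state=0).fit_predict(X) == 1
    X, y = X[keep], y[keep]
    # (II) delete near-constant features
    X = VarianceThreshold(var_threshold).fit_transform(X)
    # (III) z-score each remaining feature across samples
    X = StandardScaler().fit_transform(X)
    # (IV) shrink coefficients with 10-fold cross-validated LASSO;
    #      features with nonzero coefficients are kept
    lasso = LassoCV(cv=10, random_state=0).fit(X, y)
    return np.flatnonzero(lasso.coef_)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                       # 50 candidate features
y = 2.0 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.1, size=200)
print(select_features(X, y))  # the informative features 0, 1, 2 survive
```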

Building the XimaNet and XimaSharp architectures

The XimaNet design was inspired by the ResNet (24) structure. The network structure is shown in Figure 1. It took 64×64×64 pixel image patches as input; the patches were batch normalized, passed through a convolutional layer with 64 kernels, and batch normalized again. The patches then went through 6 building block modules, whose structure is shown in Figure 1B. The stride of the first building block was 1, and the remaining building blocks used a stride of 2 for down-sampling. The building blocks exported a 2×2×2 pixel feature map, which was passed through a module consisting of batch normalization (BN) (25) and a rectified linear unit (ReLU) (26). The feature map then went through global average pooling (GAP) and dense layers to output three predicted probability values corresponding to AAH/AIS, MIA and IAC, respectively. The final prediction was the category with the maximal predicted probability. During training, we applied data augmentation by flipping the images on the x-axis only, the y-axis only, and both axes simultaneously. The network was trained with TensorFlow 1.10.0 (27) and Keras 2.2.4 on Python 2.7, on a workstation equipped with 2 NVIDIA 1080Ti GPUs.
Figure 1

The structure and building block of XimaNet. (A) Structure of XimaNet. For the convolutional neural network (CNN) classification algorithm, 3D patches of 64×64×64 pixels were used as input. They were first fed into a BN-convolution-BN module with 64 kernels. The feature maps then went through 6 building blocks followed by a GAP module. (B) Structure of the building block of XimaNet. The first building block used a convolution with a stride of 1, while the other building blocks used a stride of 2 for down-sampling.

The XimaSharp network was inspired by the DenseSharp network (28). It is a multitask network for simultaneous classification and segmentation. The basic structure of XimaSharp was similar to XimaNet, but we up-sampled the 2×2×2 pixel feature map exported by the building blocks to feature maps of 4×4×4, 8×8×8, 16×16×16, 32×32×32 and 64×64×64 pixels and added them to the feature maps exported by each building block of the corresponding size. In the end, we exported a segmentation mask with a size of 64×64×64 pixels and evaluated the model performance by calculating both the classification and segmentation losses. The XimaSharp model was trained in the same environment as XimaNet.
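The flip-based augmentation used when training these networks can be sketched with NumPy; which array axis corresponds to x or y in the stored volumes is an assumption here:

```python
import numpy as np

def flip_augment(patch):
    """Return the original 64x64x64 patch plus its three flipped copies:
    flipped on the x-axis only, the y-axis only, and both axes simultaneously."""
    return [
        patch,
        np.flip(patch, axis=0),       # x-axis only (axis choice is an assumption)
        np.flip(patch, axis=1),       # y-axis only
        np.flip(patch, axis=(0, 1)),  # both axes simultaneously
    ]

patch = np.random.default_rng(0).normal(size=(64, 64, 64)).astype(np.float32)
aug = flip_augment(patch)
print(len(aug), aug[0].shape)  # 4 patches, each (64, 64, 64)
```

This quadruples the effective training set without altering nodule texture, which matches the paper's choice of flips as the only augmentation.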

Building deep-RadNet architecture

Deep-RadNet took the 27 selected radiomic features as input. We used three fully connected layers before the model exported the final prediction. Similar to the output of XimaNet, the prediction consisted of three probability values corresponding to AAH/AIS, MIA, and IAC, and the category with the maximal probability was the final prediction. The structure of Deep-RadNet is shown in Figure 2.
Figure 2

The structure of the fully connected layer network in Deep-RadNet. The numbers below each layer are the numbers of neurons.


Training of models for classification of GGNs

We applied the cross-entropy function as the loss function for XimaNet and Deep-RadNet:

L_cls = -(1/n) · Σ_i Σ_c t_{i,c} · log(y_{i,c})

in which t is the ground truth label, y is the prediction from our model, n is the number of samples, and c indexes the classes. For our classification-and-segmentation multitask model XimaSharp, the loss function consists of both a classification loss and a segmentation loss:

L = L_cls + λ · L_seg

The parameter λ indicates the weight of the segmentation loss; in our case it was set to 0.2 as an empirical value. L_seg is the dice loss for segmentation:

L_seg = 1 − (2 · Σ_i t_i · y_i) / (Σ_i t_i + Σ_i y_i)

in which t is the manually labelled ground truth mask, y is the mask predicted by the model, and the sum runs over all mask elements. In the training process we used the Adam optimizer (29). The learning rate was set to 0.01 and the decay rate to 0.334. The dropout rate was set to 0.5 in the first building block and 0.3 in all the other blocks. The models were trained for 80 epochs. For Deep-RadNet, we used the same cross-entropy loss function as for XimaNet. The optimizer was Adam, the learning rate was set to 2×10^-4 and the decay to 1×10^-6. The model was trained for 80 epochs.
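As a concrete check of these loss terms, a small NumPy sketch of the cross-entropy, dice, and λ-weighted combined losses; the array shapes and toy example are illustrative, not the authors' code:

```python
import numpy as np

def cross_entropy(t, y, eps=1e-7):
    """Mean categorical cross-entropy; t and y are (n_samples, n_classes)."""
    return -np.mean(np.sum(t * np.log(np.clip(y, eps, 1.0)), axis=1))

def dice_loss(t, y, eps=1e-7):
    """1 - Dice coefficient between a binary mask t and a predicted mask y."""
    intersection = np.sum(t * y)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(t) + np.sum(y) + eps)

def multitask_loss(t_cls, y_cls, t_mask, y_mask, lam=0.2):
    """Classification loss plus lambda-weighted segmentation (dice) loss."""
    return cross_entropy(t_cls, y_cls) + lam * dice_loss(t_mask, y_mask)

# Toy example: two samples, three classes, one 8x8x8 mask
t_cls = np.array([[1, 0, 0], [0, 0, 1]], dtype=float)
y_cls = np.array([[0.8, 0.1, 0.1], [0.2, 0.2, 0.6]], dtype=float)
mask = np.zeros((8, 8, 8))
mask[2:6, 2:6, 2:6] = 1.0
# a perfect predicted mask makes the dice term vanish
print(multitask_loss(t_cls, y_cls, mask, mask))
```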

Evaluation of model performance

The input for XimaNet and XimaSharp was a 64×64×64 pixel image patch. XimaNet was used to predict the classification of AAH/AIS, MIA and IAC, while XimaSharp was used to predict the degree of invasiveness as well as the lesion segmentation mask. The F1-score was used to assess the accuracy of the three-category classification models; its maximum value is 1 and its minimum value is 0. The "weighted average F1-score" was used to reduce the effect of imbalanced data: the per-class F1-scores,

F1_c = 2 · precision_c · recall_c / (precision_c + recall_c)

are averaged with each class weighted by its proportion of the samples. We also used the Matthews correlation coefficient (MCC) for model evaluation, which is insensitive to unbalanced data; in the binary case,

MCC = (TP·TN − FP·FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))

To look inside the "black box" of the deep learning models, heat maps were generated by Grad-CAM to visualize the regions most indicative of the invasiveness of GGNs. Grad-CAM produces a gradient-based localization map that visualizes the significance of the regions the algorithm focuses on.
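A minimal NumPy sketch of the two metrics (the MCC is shown in its binary form; the paper's three-class setting would use the multiclass generalization, and the toy labels below are illustrative):

```python
import numpy as np

def weighted_f1(y_true, y_pred, classes):
    """Support-weighted average of per-class F1 scores."""
    score = 0.0
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
        score += f1 * np.mean(y_true == c)  # weight = class support fraction
    return score

def mcc_binary(y_true, y_pred):
    """Binary Matthews correlation coefficient."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])
print(weighted_f1(y_true, y_pred, classes=[0, 1]), mcc_binary(y_true, y_pred))
```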

Statistical analysis

All statistical analyses were performed in Matlab (version 2019a; MathWorks, Natick, Mass). Receiver operating characteristic curves (ROCs) and the areas under them (AUCs) were used to assess the overall classification performance of the three models. The Z-test was then applied to evaluate the differences in performance among the models. Bootstrapping (1,000 bootstrap samples) was used to calculate 95% CIs and the associated P values. P<0.05 was considered a statistically significant difference. Classification performance by GGN size was evaluated with the t test. The optimal cut-off diameter for GGN classification was found by searching the dataset to maximize accuracy. A two-tailed distribution and a two-sample equal-variance hypothesis were selected as the t-test parameters, and P values were calculated at the optimal cut-off size.
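The bootstrapped AUC confidence interval described above can be sketched as follows (a rank-based AUC with a percentile interval over 1,000 resamples; the synthetic scores are illustrative, and this is not the authors' Matlab code):

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic (rank-based, ties averaged)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # average ranks for tied scores
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(scores, labels, n_boot=1000, alpha=0.05, seed=0):
    """Percentile 95% CI for the AUC from n_boot bootstrap resamples."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(scores), len(scores))
        if labels[idx].min() == labels[idx].max():  # need both classes present
            continue
        stats.append(auc(scores[idx], labels[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 200)
scores = labels + rng.normal(scale=0.8, size=200)   # an informative score
print(auc(scores, labels), bootstrap_auc_ci(scores, labels))
```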

Results

A total of 794 patients with 886 lung nodules were evaluated. The patient characteristics are given in the Supplementary file. We evaluated the performance of the three DNNs, XimaNet, XimaSharp and Deep-RadNet, on the three classification tasks. Evaluation metrics included accuracy, the "weighted average F1-score" and the Matthews correlation coefficient (MCC); the classification performance is shown in Table 2. Deep-RadNet presented the highest accuracy, "weighted average F1-score" and MCC of the three models in all three classification tasks. The AUC of Deep-RadNet was 0.891, 0.889 and 0.941 for the AAH/AIS versus MIA, MIA versus IAC, and AAH/AIS/MIA versus IAC classifications, respectively, higher than those of the other two models (Figure 3).
Table 2

Classification performance of three network models

Group                    Network        Accuracy    F1AVG    MCC
AAH/AIS vs. MIA          XimaNet        0.701       0.632    0.391
                         XimaSharp      0.663       0.614    0.376
                         Deep-RadNet    0.746       0.709    0.452
MIA vs. IAC              XimaNet        0.657       0.645    0.388
                         XimaSharp      0.635       0.617    0.371
                         Deep-RadNet    0.754       0.693    0.447
(AAH/AIS/MIA) vs. IAC    XimaNet        0.755       0.677    0.431
                         XimaSharp      0.735       0.662    0.428
                         Deep-RadNet    0.837       0.771    0.513

AAH, atypical adenomatous hyperplasia; AIS, adenocarcinoma in situ; MIA, minimally invasive adenocarcinoma; IAC, invasive adenocarcinoma.

Figure 3

The ROCs and AUCs of the classification tasks. (A) Receiver operating characteristic curve (ROC) of AAH/AIS versus MIA. (B) ROC of MIA versus IAC. (C) ROC of AAH/AIS&MIA versus IAC.

AAH, atypical adenomatous hyperplasia; AIS, adenocarcinoma in situ; MIA, minimally invasive adenocarcinoma; IAC, invasive adenocarcinoma.

The results showed that nodule size was a significant differentiator of non-invasive from invasive nodules (P=0.04, accuracy =0.701, AAH included, cut-off 1 cm; P=0.01, accuracy =0.72, AAH excluded, cut-off 2 cm). These accuracies indicate that the classification ability of Deep-RadNet exceeded that of lesion size at the optimal cut-off diameter for differentiating histological invasiveness. Moreover, the Z-test was used to compare the performance of the three models. The P values of the Z-test were 0.021 (P<0.05) between Deep-RadNet and XimaNet, 0.019 (P<0.05) between Deep-RadNet and XimaSharp, and 0.98 (P>0.05) between XimaNet and XimaSharp, indicating that Deep-RadNet showed the best performance.

Discussion

The invasiveness of GGNs is associated with disease prognosis, the choice of therapeutic approach, and the reduction of overtreatment. This study showed that a deep learning system combined with radiomic features could conveniently and automatically achieve the best performance in predicting the invasiveness of lung adenocarcinoma manifesting as GGNs, in comparison with the other two models. To the best of our knowledge, this study is the first to present a XimaSharp model to detect and segment GGNs automatically and a Deep-RadNet model to evaluate the invasiveness of GGNs accurately. There are reports claiming that an optimal cut-off diameter is helpful for evaluating the degree of invasiveness of lung adenocarcinoma, but there is no consensus among studies on distinguishing the degree of invasiveness by GGN size alone. Lee et al. reported that 14 mm was the optimal cut-off value to differentiate pre/minimally invasive lesions from IAC, with a sensitivity of 67% and a specificity of 74% (14). Lim et al. suggested that a total tumor size of 10 mm could act as a direct criterion to distinguish pre-invasive lesions from IAC (30). Our study showed that the accuracy of lesion size in differentiating non-invasive from invasive nodules was 70.1% with a cut-off value of 10 mm, and the optimal cut-off value was 2 cm for differentiating non-invasive adenocarcinoma from IAC when AAH was excluded. The differences can be explained by two reasons: first, AAH has not been included in some studies; second, the definition of nodule size is not uniform. Therefore, judging the invasiveness of pulmonary nodules by an optimal cut-off value remains controversial. It has been reported that the solid component in a GGN plays a great role in the differentiation of invasiveness. However, solid components cannot be evaluated accurately on CT, because similar-appearing solid components can correspond to different pathological features.
A solid component may represent the proliferation of fibroblasts and/or the invasive components of tumor cells, indicating a benign scar, collapse of the alveolar wall, or tumor (31). Besides, there is considerable overlap in the morphological characteristics of non-invasive and invasive nodules. Differentiating the degree of invasiveness by the morphological characteristics of GGNs alone is therefore of limited value. With the traditional deep learning approach, it is difficult to interpret the features extracted during the training of a classification model. In contrast, the traditional radiomics method extracts interpretable features from medical images based on prior medical knowledge combined with image processing methods; after extraction, the features that contribute most to the prediction are selected and fed into a traditional statistical model for classification. The critical idea of our Deep-RadNet model was to choose the features that contributed most to the prediction results and feed them into a deep neural network for training, which may improve prediction accuracy and lead to better clinical interpretability compared with a traditional radiomics model. We explored the possibility of combining radiomics and a deep learning network for lung nodule invasiveness classification, and this combination showed the best performance. We also attempted to explore a deep learning model for lung nodule segmentation, as shown in Figure 4. The heatmap demonstrated that the most meaningful region for differentiating the degree of invasiveness was the solid component inside the tumor. This finding highlights the importance of this region, which may indicate the most invasive tumor components and assist radiologists in making accurate judgments of the invasiveness of GGNs.
Figure 4

The figure illustrated the results of algorithm learning and automatic segmentation. The first to last columns were lung nodule examples selected from AAH, AIS, MIA and IAC, respectively. (A) The first row showed the original CT images of the tumor area. (B) The second row showed the heat maps of the corresponding tumor area; the Grad-CAM method was used to visualize the region of interest learned by XimaNet, and the color bar on the far right illustrated the degree of attention the algorithm paid. (C) The third row was the segmentation result predicted by XimaSharp (red circled areas were the automatic segmentation result and blue circled areas were the ground truth).

This study has some limitations. First, data accessibility was limited because this was a single-center study, which may introduce selection bias and compromise the generalization ability of our classifier. However, five independent scanners in our hospital were used, and all three models were found to be reproducible in the validation and training groups. Second, our models depended on either pre-defined radiomic features or image features automatically extracted by the deep learning algorithm for classification, while some traditional morphological characteristics such as spiculation and lobulation were not considered. In future studies, such information should be combined with radiomic features to build a more accurate lung-nodule invasiveness prediction model. Also, considering recent progress in lung nodule diagnosis with contrast-enhanced CT (32), we could extend the models to contrast-enhanced CT images and investigate their performance for the invasiveness classification of GGNs.

Conclusions

In conclusion, the deep learning model integrating the radiomic features of GGNs with the visual heat map could evaluate the invasiveness of GGNs accurately and intuitively. This is convenient for clinicians and shows great potential to improve the efficiency of lung cancer screening, providing a theoretical basis for individualized and accurate medical treatment of patients with GGNs.
References (23 in total)

1.  The 2015 World Health Organization Classification of Lung Tumors: Impact of Genetic, Clinical and Radiologic Advances Since the 2004 Classification.

Authors:  William D Travis; Elisabeth Brambilla; Andrew G Nicholson; Yasushi Yatabe; John H M Austin; Mary Beth Beasley; Lucian R Chirieac; Sanja Dacic; Edwina Duhig; Douglas B Flieder; Kim Geisinger; Fred R Hirsch; Yuichi Ishikawa; Keith M Kerr; Masayuki Noguchi; Giuseppe Pelosi; Charles A Powell; Ming Sound Tsao; Ignacio Wistuba
Journal:  J Thorac Oncol       Date:  2015-09       Impact factor: 15.609

2.  Clinicopathologic features of resected subcentimeter lung cancer.

Authors:  Hiroyuki Sakurai; Kazuo Nakagawa; Shun-Ichi Watanabe; Hisao Asamura
Journal:  Ann Thorac Surg       Date:  2015-03-29       Impact factor: 4.330

3.  Does lung adenocarcinoma subtype predict patient survival?: A clinicopathologic study based on the new International Association for the Study of Lung Cancer/American Thoracic Society/European Respiratory Society international multidisciplinary lung adenocarcinoma classification.

Authors:  Prudence A Russell; Zoe Wainer; Gavin M Wright; Marissa Daniels; Matthew Conron; Richard A Williams
Journal:  J Thorac Oncol       Date:  2011-09       Impact factor: 15.609

4.  Lung Adenocarcinomas Manifesting as Radiological Part-Solid Nodules Define a Special Clinical Subtype.

Authors:  Ting Ye; Lin Deng; Shengping Wang; Jiaqing Xiang; Yawei Zhang; Hong Hu; Yihua Sun; Yuan Li; Lei Shen; Li Xie; Wenchao Gu; Yue Zhao; Fangqiu Fu; Weijun Peng; Haiquan Chen
Journal:  J Thorac Oncol       Date:  2019-01-17       Impact factor: 15.609

5.  Non-small cell lung cancer, version 2.2013.

Authors:  David S Ettinger; Wallace Akerley; Hossein Borghaei; Andrew C Chang; Richard T Cheney; Lucian R Chirieac; Thomas A D'Amico; Todd L Demmy; Ramaswamy Govindan; Frederic W Grannis; Stefan C Grant; Leora Horn; Thierry M Jahan; Ritsuko Komaki; Feng-Ming Spring Kong; Mark G Kris; Lee M Krug; Rudy P Lackner; Inga T Lennes; Billy W Loo; Renato Martins; Gregory A Otterson; Jyoti D Patel; Mary C Pinder-Schenck; Katherine M Pisters; Karen Reckamp; Gregory J Riely; Eric Rohren; Theresa A Shapiro; Scott J Swanson; Kurt Tauer; Douglas E Wood; Stephen C Yang; Kristina Gregory; Miranda Hughes
Journal:  J Natl Compr Canc Netw       Date:  2013-06-01       Impact factor: 11.908

6.  Dermatologist-level classification of skin cancer with deep neural networks.

Authors:  Andre Esteva; Brett Kuprel; Roberto A Novoa; Justin Ko; Susan M Swetter; Helen M Blau; Sebastian Thrun
Journal:  Nature       Date:  2017-01-25       Impact factor: 49.962

7.  Clinicopathologic Features and Genetic Alterations in Adenocarcinoma In Situ and Minimally Invasive Adenocarcinoma of the Lung: Long-Term Follow-Up Study of 121 Asian Patients.

Authors:  Meng Jia; Shili Yu; Lanqing Cao; Ping-Li Sun; Hongwen Gao
Journal:  Ann Surg Oncol       Date:  2020-02-11       Impact factor: 5.344

8.  Persistent pure ground-glass opacity lung nodules ≥ 10 mm in diameter at CT scan: histopathologic comparisons and prognostic implications.

Authors:  Hyun-Ju Lim; Soomin Ahn; Kyung Soo Lee; Joungho Han; Young Mog Shim; Sookyoung Woo; Jae-Hun Kim; Miyeon Yie; Ho Yun Lee; Chin A Yi
Journal:  Chest       Date:  2013-10       Impact factor: 9.410

9.  Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries.

Authors:  Freddie Bray; Jacques Ferlay; Isabelle Soerjomataram; Rebecca L Siegel; Lindsey A Torre; Ahmedin Jemal
Journal:  CA Cancer J Clin       Date:  2018-09-12       Impact factor: 508.702

10.  Mediastinal restaging: EUS-FNA offers a new perspective.

Authors:  Jouke T Annema; Maud Veseliç; Michel I M Versteegh; Luuk N A Willems; Klaus F Rabe
Journal:  Lung Cancer       Date:  2003-12       Impact factor: 5.705
