Hadi Hashemzadeh, Seyedehsamaneh Shojaeilangari, Abdollah Allahverdi, Mario Rothbauer, Peter Ertl, Hossein Naderi-Manesh.
Abstract
Lung cancer is a leading cause of cancer death in both men and women worldwide. Its high mortality rate is due in part to late-stage diagnosis as well as the spread of cancer cells to other organs and tissues through metastasis. Automated lung cancer detection and subtype classification from cell images play a crucial role in early-stage cancer prognosis and more individualized therapy. The rapid development of machine learning techniques, especially deep learning algorithms, has attracted much interest in their application to medical image problems. In this study, to develop a reliable computer-aided diagnosis (CAD) system for accurately distinguishing between cancerous and healthy cells, we grew popular non-small-cell lung cancer cell lines in a microfluidic chip, stained them with phalloidin, and acquired images with an IX-81 inverted Olympus fluorescence microscope. We designed and tested a deep learning image analysis workflow for classifying lung cancer cell-line images into six classes: five cancer cell lines (PC-9, SK-LU-1, H-1975, A-427, and A-549) and one normal cell line (16-HBE). Our results demonstrate that ResNet18, a residual learning convolutional neural network, is an efficient and promising method for lung cancer cell-line categorization, achieving a classification accuracy of 98.37% and an F1-score of 97.29%. The proposed workflow also successfully distinguishes normal from cancerous cell lines, with a remarkable average accuracy of 99.77% and F1-score of 99.87%. The proposed CAD system eliminates the need for extensive user intervention, enabling the processing of large amounts of image data with robust and highly accurate results.
Year: 2021 PMID: 33963232 PMCID: PMC8105370 DOI: 10.1038/s41598-021-89352-8
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Overview of the combined microfluidic deep learning approach.
Comparison of five deep neural network architectures.
| Model | Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) | No. parameters |
|---|---|---|---|---|---|
| AlexNet | 97.17 | 96.52 | 95.28 | 95.82 | 60 M |
| GoogLeNet | 88.26 | 89.57 | 86.50 | 87.44 | 4 M |
| ResNet18 | 98.37 | 97.64 | 96.88 | 97.12 | 25.6 M |
| Inceptionv3 | 82.67 | 90.29 | 80.39 | 83.45 | 23.6 M |
| SqueezeNet | 94.41 | 92.33 | 90.48 | 90.62 | 1.2 M |
Parameter settings for the ResNet18 architecture.
| No. epochs | Mini-batch size | Initial learning rate | Learning rate factor | L2 regularization |
|---|---|---|---|---|
| 10 | 64 | 5×10⁻⁵ | 2 | 10⁻⁴ |
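A training configuration matching the table can be sketched as below. Two details are assumptions on our part: the table does not name the optimizer, and "learning rate factor" is read here as the multiplier applied to the newly initialized final layer relative to the pretrained backbone (a common transfer learning convention); the tiny stand-in modules are placeholders for the actual ResNet18:

```python
import torch
import torch.nn as nn

# Toy two-module stand-in for ResNet18: pretrained backbone + new head.
backbone = nn.Linear(8, 8)
head = nn.Linear(8, 6)

base_lr = 5e-5       # initial learning rate from the table
lr_factor = 2        # new layer trains faster (assumed interpretation)
optimizer = torch.optim.Adam(
    [
        {"params": backbone.parameters(), "lr": base_lr},
        {"params": head.parameters(), "lr": base_lr * lr_factor},
    ],
    weight_decay=1e-4,  # L2 regularization from the table
)
num_epochs = 10
batch_size = 64
```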
Figure 2. Representative fluorescence microscopic images of lung cell-lines (one normal and five cancer cell-lines) from our collected database.
Figure 3. Confusion matrix for lung cancer cell detection with ResNet18. Each row of the matrix represents the instances in a predicted class, giving the number of observations (upper number) and the percentage of correctly or incorrectly classified observations for each true class (lower number), while each column represents the instances in an actual class. Class-wise precision and recall are summarized at the end of each row and column, respectively, in green; the corresponding error rates are shown in red.
Classification results for the ResNet18 architecture (six classes: normal (16-HBE), A-427, A-549, H-1975, SK-LU-1, and PC-9 cell-lines). Values are mean ± standard deviation for five experimental runs.
| Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) |
|---|---|---|---|
| 98.37 ± 0.36 | 97.38 ± 0.81 | 97.35 ± 0.67 | 97.29 ± 0.73 |
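The precision, recall, and F1-score above are per-class values averaged across the six classes (macro averaging is assumed here; the averaging scheme is a convention, not stated in this excerpt). A minimal, dependency-free sketch of how such macro metrics follow from a confusion matrix:

```python
def macro_metrics(cm):
    """Accuracy and macro-averaged precision/recall/F1 from a confusion
    matrix where cm[i][j] counts true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    precisions, recalls, f1s = [], [], []
    for c in range(n):
        tp = cm[c][c]
        pred_c = sum(cm[r][c] for r in range(n))  # column sum: predicted c
        true_c = sum(cm[c])                       # row sum: actually c
        p = tp / pred_c if pred_c else 0.0
        r = tp / true_c if true_c else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    avg = lambda xs: sum(xs) / len(xs)
    return correct / total, avg(precisions), avg(recalls), avg(f1s)

# Tiny 2-class example: 9 of 10 class-0 and 8 of 10 class-1 correct.
acc, prec, rec, f1 = macro_metrics([[9, 1], [2, 8]])
print(round(acc, 3))  # 0.85
```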
Figure 4. Accuracy and loss curves during training of the ResNet18 model: (A) accuracy versus training iteration for both training and validation data; (B) cross-entropy loss versus training iteration for both training and validation data. Training curves were smoothed to better visualize trends.
Classification results for the ResNet18 architecture (normal vs. cancer). Values are mean ± standard deviation for five experimental runs.
| Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) |
|---|---|---|---|
| 99.77 ± 0.15 | 100 ± 0.00 | 99.74 ± 0.16 | 99.87 ± 0.08 |
Classification accuracies of the ResNet18 architecture (normal vs. cancer) for unseen test data.
| Train and validation data | Test data | Validation accuracy (%) | Test accuracy (%) |
|---|---|---|---|
| Normal versus cancer (A-427, H-1975, PC-9, SK-LU-1) | A-549 | 99.92 | 73.54 |
| Normal versus cancer (A-549, H-1975, PC-9, SK-LU-1) | A-427 | 100 | 96.58 |
| Normal versus cancer (A-549, A-427, PC-9, SK-LU-1) | H-1975 | 100 | 95.58 |
| Normal versus cancer (A-549, A-427, H-1975, SK-LU-1) | PC-9 | 100 | 100 |
| Normal versus cancer (A-549, A-427, H-1975, PC-9) | SK-LU-1 | 99.89 | 99.80 |
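Each row of the table above holds one cancer cell line out entirely as unseen test data, while the model trains on the normal line plus the remaining four cancer lines. A hypothetical split helper illustrating that protocol (the image identifiers and counts are placeholders, not the actual dataset):

```python
def leave_one_line_out(images_by_line, held_out):
    """Split a {cell_line: [image_ids]} mapping for the normal-vs-cancer
    generalization test: the held-out cancer line becomes unseen test
    data; everything else (normal + remaining lines) is for training."""
    train = {line: imgs for line, imgs in images_by_line.items()
             if line != held_out}
    test = {held_out: images_by_line[held_out]}
    return train, test

# Placeholder image lists sized like the database overview table.
db = {"16-HBE": ["n%d" % i for i in range(800)],
      "A-549": ["a%d" % i for i in range(511)],
      "A-427": ["b%d" % i for i in range(438)]}
train, test = leave_one_line_out(db, held_out="A-549")
print(sorted(train))  # ['16-HBE', 'A-427']
```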
Overview of the number of images in our database.
| 16-HBE (normal) | PC-9 | SK-LU-1 | H-1975 | A-427 | A-549 |
|---|---|---|---|---|---|
| 800 | 1988 | 2469 | 860 | 438 | 511 |