Yanru Guo1,2, Qiang Lin3,4,5, Shaofang Zhao1,2, Tongtong Li1,2, Yongchun Cao1,2,6, Zhengxing Man1,2,6, Xianwu Zeng7.
Abstract
BACKGROUND: Whole-body bone scan is a widely used tool for surveying bone metastases caused by various primary solid tumors, including lung cancer. Scintigraphic images are characterized by low specificity, which poses a significant challenge to manual image analysis by nuclear medicine physicians. A convolutional neural network can be used to automate image classification by extracting hierarchical features and classifying the high-level features into classes.
Keywords: Bone scan; Convolutional neural network; Image classification; Lung cancer; Skeletal metastasis
Year: 2022 PMID: 35138479 PMCID: PMC8828823 DOI: 10.1186/s13244-022-01162-2
Source DB: PubMed Journal: Insights Imaging ISSN: 1869-4101
Fig. 1 Overview of the proposed CNN-based multiclass classification method
Fig. 2 Illustration of view aggregation for enhancing metastatic lesions
Fig. 3 Illustration of translating and rotating a whole-body SPECT scintigraphic image. (a) Original posterior image; (b) translated image; (c) image rotated 3° to the left
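The translate/rotate augmentation of Fig. 3 can be sketched at the coordinate level. This is a minimal illustrative sketch only: the paper transforms whole images, and the helper names here are mine.

```python
import math

def translate(points, dx, dy):
    """Shift pixel coordinates by (dx, dy)."""
    return [(x + dx, y + dy) for x, y in points]

def rotate(points, degrees, cx=0.0, cy=0.0):
    """Rotate coordinates about (cx, cy); positive angle = counter-clockwise."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(cx + (x - cx) * c - (y - cy) * s,
             cy + (x - cx) * s + (y - cy) * c) for x, y in points]

# A small 3-degree rotation, as in Fig. 3c, only slightly displaces points.
print(rotate([(100.0, 0.0)], 3.0))
```

Small rotations and translations preserve lesion appearance while enlarging the training set, which is the point of this augmentation step.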
Network structure of the proposed CNN-based classification model
| Layer | Configuration |
|---|---|
| Conv | 7 × 7, 64, Stride = 2 |
| Norm | Batch normalization |
| Pool | 3 × 3 Max pooling, Stride = 2 |
| RA-Conv_2 | |
| RA-Conv_3 | |
| RA-Conv_5 | |
| RA-Conv_2 | |
| Global average pooling (GAP) | |
| Softmax | |
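The spatial sizes flowing through the stem of the table above follow standard convolution arithmetic. The sketch below assumes a hypothetical 256 × 256 input and the usual ResNet-style paddings (3 for the 7 × 7 conv, 1 for the 3 × 3 pool), neither of which is stated in the table:

```python
def conv_out(size, kernel, stride, pad=0):
    """Spatial output size of a conv/pool layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

s = conv_out(256, 7, 2, pad=3)  # 7 x 7 conv, stride 2 -> 128
s = conv_out(s, 3, 2, pad=1)    # 3 x 3 max pool, stride 2 -> 64
print(s)
```

Each stride-2 layer halves the spatial resolution, so by the time global average pooling is reached the feature map is small enough to collapse into a single vector per channel for the softmax classifier.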
Fig. 4Structure of residual convolution with hybrid attention mechanism
Fig. 5 Distribution of patients included in the dataset of whole-body scintigraphic images. (a) Gender; (b) age
An overview of the datasets used in this work
| Dataset | ADMet | nADMet | NoMet | Total |
|---|---|---|---|---|
| D1 | 237 | 160 | 226 | 623 |
| D2 | 624 | 640 | 614 | 1878 |
| D3 | 318 | 320 | 307 | 945 |
Parameter settings of the proposed classification network
| Parameter | Value |
|---|---|
| Learning rate | 0.01 |
| Optimizer | Adam |
| Batch size | 32 |
| Epochs | 300 |
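For reference, one Adam update with the table's learning rate of 0.01 can be sketched in plain Python for a single scalar weight. The β₁, β₂, and ε values below are the standard Adam defaults, which the table does not specify:

```python
def adam_step(w, g, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for scalar weight w with gradient g.
    m, v are running first/second moment estimates; t is the 1-based step count."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)  # bias correction for the warm-up steps
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + eps)
    return w, m, v

w, m, v = adam_step(1.0, g=0.5, m=0.0, v=0.0, t=1)
print(round(w, 4))
```

On the first step the bias-corrected update reduces to roughly lr · sign(g), so the weight moves by about the learning rate regardless of gradient scale.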
Scores of evaluation metrics obtained by Classifer-inRAC and Classifer-outRAC on testing samples in dataset D3
| Classifier | Accuracy | Precision | Recall | F-1 score |
|---|---|---|---|---|
| Classifer-inRAC | ||||
| Classifer-outRAC | 0.6725 | 0.7233 | 0.6831 | 0.6723 |
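Multiclass accuracy, precision, recall, and F-1 scores such as those above are conventionally macro-averaged over the classes. A minimal sketch from a confusion matrix follows; the example matrix is hypothetical, not the paper's data:

```python
def macro_metrics(cm):
    """Accuracy plus macro-averaged precision, recall and F1 from a
    square confusion matrix cm[true][pred]."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    acc = sum(cm[i][i] for i in range(n)) / total
    precs, recs, f1s = [], [], []
    for k in range(n):
        tp = cm[k][k]
        pred_k = sum(cm[i][k] for i in range(n))  # column sum: predicted as k
        true_k = sum(cm[k])                       # row sum: truly class k
        p = tp / pred_k if pred_k else 0.0
        r = tp / true_k if true_k else 0.0
        precs.append(p)
        recs.append(r)
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return acc, sum(precs) / n, sum(recs) / n, sum(f1s) / n

# Hypothetical 3-class matrix (ADMet, nADMet, NoMet); values illustrative only.
print(macro_metrics([[20, 5, 3], [4, 18, 6], [2, 3, 25]]))
```

Macro averaging weights each class equally, which matters here because the three classes are roughly balanced in datasets D1–D3.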
Best value in each column is highlighted in bold
Scores of evaluation metrics obtained by Classifer-inRAC on the testing samples in datasets D1, D2, and D3
| Dataset | Accuracy | Precision | Recall | F-1 score |
|---|---|---|---|---|
| D1 | 0.6150 | 0.6324 | 0.6227 | 0.6058 |
| D2 | 0.6968 | 0.7001 | 0.7024 | 0.6930 |
| D3 |
Best value in each column is highlighted in bold
Fig. 6 ROC curve and AUC value obtained by Classifer-inRAC on classifying the testing samples in D3
Fig. 7 Confusion matrix obtained by Classifer-inRAC on classifying the testing samples in D3
Fig. 8 Scores of evaluation metrics obtained by Classifer-inRAC on classifying subclasses of the testing samples in D3
Effects of network structure on classification performance obtained on dataset D3
| Residual | Attention | Accuracy | Precision | Recall | F-1 score |
|---|---|---|---|---|---|
| × | × | 0.6937 | 0.7032 | 0.7000 | 0.6940 |
| × | √ | 0.7042 | 0.7416 | 0.7047 | 0.7031 |
| √ | × | 0.7500 | 0.7614 | 0.7532 | 0.7497 |
| √ | √ |
Best value in each column is highlighted in bold
Overview of classifiers with similar structure but different depth from Classifer-inRAC
| Layer | Configuration | | |
|---|---|---|---|
| Conv | 7 × 7, 64, Stride = 2 | | |
| Norm | Batch normalization | | |
| Pool | 3 × 3 Max pooling, Stride = 2 | | |
| RA-Conv | | | |
| RA-Conv | | | |
| RA-Conv | | | |
| RA-Conv | | | |
| Global average pooling (GAP) | | | |
| Softmax | | | |
Fig. 9 Classification performance comparison between the classifiers in Table 7
Two-class classification performance obtained by Classifer-inRAC
| Accuracy | Precision | Recall | F-1 score | AUC |
|---|---|---|---|---|
| 0.8310 | 0.8696 | 0.8696 | 0.8696 | 0.8147 |
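The AUC column can be read as the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. A minimal pairwise (Mann–Whitney) sketch follows; the scores are illustrative, not the paper's:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of positive/negative pairs where the
    positive sample scores higher (ties count as half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))
```

This pairwise formulation is equivalent to the area under the ROC curve of Fig. 6 and is threshold-free, which is why it is reported alongside accuracy.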
Fig. 10 Confusion matrix of two-class classification obtained by Classifer-inRAC
An overview of two classical CNN-based models used for comparative analysis
| Model | Number of weight layers | Filter | Activation | Learning rate |
|---|---|---|---|---|
| Inception-v1 | 9 Inception blocks | 1 × 1, 3 × 3, 5 × 5 | ReLU | 10⁻² |
| VGG 11 | 11 | 3 × 3 | ReLU | 10⁻² |
Scores of evaluation metrics obtained by the proposed model and two classical models
| Model | Accuracy | Precision | Recall | F-1 score |
|---|---|---|---|---|
| Inception v1 | 0.5387 | 0.6003 | 0.5490 | 0.5415 |
| VGG 11 | 0.7324 | 0.7309 | 0.7333 | 0.7309 |
| Classifer-inRAC |
Best value in each column is highlighted in bold
Fig. 11 Illustration of images classified by the multiclass classifier Classifer-inRAC. (a) NoMet image incorrectly detected as metastatic; (b) ADMet image incorrectly detected as nADMet; (c) correctly detected nADMet image; (d) correctly detected ADMet image
Fig. 12 Characteristics of metastatic lesions in the ADMet and nADMet subclasses. (a) Shape; (b) body region; (c) uptake intensity