| Literature DB >> 35912199 |
Jingya Yang, Xiaoli Shi, Bing Wang, Wenjing Qiu, Geng Tian, Xudong Wang, Peizhen Wang, Jiasheng Yang.
Abstract
A thyroid nodule, defined as an abnormal growth of thyroid cells, can indicate excessive iodine intake, thyroid degeneration, inflammation, and other diseases. Although most thyroid nodules are benign, the likelihood that a nodule is malignant has grown steadily year by year. To reduce the burden on doctors and avoid unnecessary fine needle aspiration (FNA) and surgical resection, various studies have applied deep-learning-based image recognition to the diagnosis of thyroid nodules. In this study, a novel deep learning framework is proposed to accurately predict whether a thyroid nodule is benign or malignant. A total of 508 ultrasound images were collected from the Third Hospital of Hebei Medical University in China for model training and validation. First, a ResNet18 model, pretrained on ImageNet, was trained on the ultrasound image dataset, and random sampling of the training dataset was repeated 10 times to avoid accidental errors. The results show that the model performs well: over the 10 runs, the average area under the curve (AUC) is 0.997, the average accuracy is 0.984, the average recall is 0.978, the average precision is 0.939, and the average F1 score is 0.957. Second, Gradient-weighted Class Activation Mapping (Grad-CAM) was applied to highlight the regions of an ultrasound image that the model is sensitive to during learning. Grad-CAM can extract these sensitive regions so that their shape features can be analyzed. Based on the results, there are obvious differences between benign and malignant thyroid nodules; therefore, the shape features of the sensitive regions are helpful for diagnosis to a great extent. Overall, the proposed model demonstrates the feasibility of using deep learning and ultrasound images to distinguish benign from malignant thyroid nodules.
Keywords: Grad-CAM; convolutional neural network; deep learning; feature extraction; thyroid nodule; ultrasound images
Year: 2022 PMID: 35912199 PMCID: PMC9335944 DOI: 10.3389/fonc.2022.905955
Source DB: PubMed Journal: Front Oncol ISSN: 2234-943X Impact factor: 5.738
The distribution of thyroid nodules in the training and testing groups.
| Dataset | Benign | Malignant | Total |
|---|---|---|---|
| Train | 291 | 66 | 357 |
| Test | 124 | 27 | 151 |
| Total | 415 | 93 | 508 |
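The abstract describes repeating a random sampling of the dataset 10 times. A minimal sketch of how such a class-stratified random split could look (a pure-Python illustration with placeholder image IDs, not the authors' actual code; the 10 repetitions would call this with different seeds):

```python
import random

def stratified_split(benign, malignant, test_frac=151/508, seed=0):
    """Randomly split each class separately so that the overall
    train/test sizes match the table above (357 train, 151 test)."""
    rng = random.Random(seed)

    def split(items):
        items = items[:]
        rng.shuffle(items)
        n_test = round(len(items) * test_frac)
        return items[n_test:], items[:n_test]

    b_train, b_test = split(benign)
    m_train, m_test = split(malignant)
    return b_train + m_train, b_test + m_test

# Placeholder IDs for the 508 images: 415 benign, 93 malignant
benign = [f"b{i}" for i in range(415)]
malignant = [f"m{i}" for i in range(93)]
train, test = stratified_split(benign, malignant)
print(len(train), len(test))  # 357 151
```

Splitting each class separately keeps the benign/malignant ratio roughly equal in both groups, which matters here because the classes are imbalanced (415 vs. 93).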
Figure 1 The workflow for thyroid nodule classification with ResNet18. Layer 1~layer 4 show the stages of image analysis in ResNet18; as the network deepens, the features extracted by the model become more abstract. AUC and other evaluation metrics were used to assess the classification performance. In addition, a heatmap was used to visualize the prediction results, from which we extracted and analyzed the highlighted areas.
Figure 2 The specific structure of the ResNet18 model. The input is a thyroid nodule image of fixed size. After the convolution and pooling layers, image features are extracted automatically. The output layer gives the classification result: benign or malignant nodule.
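ResNet18's defining building block is the residual (skip) connection, output = ReLU(F(x) + x), where F stands for the block's stacked convolution layers. A toy numpy illustration of the skip path (the real blocks use 3x3 convolutions and batch normalization; the scalar function here is only a stand-in):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, f):
    """The skip connection at the heart of ResNet:
    the block learns a residual F(x) and adds the input back."""
    return relu(f(x) + x)

# Toy example: F halves the signal; the identity path is added back,
# which is what lets very deep networks train without vanishing gradients.
x = np.array([1.0, -2.0, 3.0])
y = residual_block(x, lambda t: 0.5 * t)  # relu(1.5 * x) = [1.5, 0.0, 4.5]
```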
Figure 3 Evaluation of model results. (A) The receiver operating characteristic (ROC) curves and the area under the curve (AUC) of our model and the comparative models. (B) The performance of our model and the comparative models on accuracy, recall, precision, and F1 score.
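For reference, the accuracy, recall, precision, and F1 metrics reported in the abstract can all be computed from a binary confusion matrix. A self-contained sketch with made-up predictions (not the paper's actual model outputs; 1 = malignant, 0 = benign):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, recall, precision, and F1 from binary labels/predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0          # sensitivity to malignancy
    precision = tp / (tp + fp) if tp + fp else 0.0       # trustworthiness of a positive call
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)                # harmonic mean of the two
    return accuracy, recall, precision, f1

# Toy example: 6 test images, one benign nodule wrongly called malignant
y_true = [1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0]
acc, rec, prec, f1 = binary_metrics(y_true, y_pred)
```

Recall matters most clinically here (a missed malignancy is costlier than an unnecessary follow-up), which is why the paper reports it alongside precision rather than accuracy alone.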
Figure 4 Grad-CAM visualization of highlighted regions. (A) (a1)~(a6) are the original images: (a1)~(a3) are benign nodules and (a4)~(a6) are malignant nodules. (B) (b1)~(b6) are heatmaps drawn by Grad-CAM, corresponding to (a1)~(a6).
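The core Grad-CAM computation behind such heatmaps is a gradient-weighted sum of the last convolutional feature maps. A minimal numpy sketch on toy tensors (in practice the activations and gradients would come from forward/backward hooks on ResNet18's final convolutional layer):

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (K, H, W) feature maps and their gradients
    w.r.t. the predicted class score. Returns an (H, W) heatmap in [0, 1]."""
    # Channel weights alpha_k: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of feature maps, then ReLU to keep only positive evidence
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    if cam.max() > 0:
        cam /= cam.max()  # normalize for visualization
    return cam

# Toy tensors: 4 channels of 8x8 feature maps
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.random((4, 8, 8))
heatmap = grad_cam(acts, grads)
```

The resulting low-resolution heatmap is then upsampled to the input image size and overlaid on the ultrasound image, which is what panels (b1)~(b6) show.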
Figure 5 Extraction of highlighted regions in heatmaps. Samples in (A) are the extraction results for benign thyroid nodules, corresponding to (a1)~(a3) or (b1)~(b3) in Figure 4. Samples in (B) are the extraction results for malignant thyroid nodules, corresponding to (a4)~(a6) or (b4)~(b6).
Figure 6 Violin plots of image feature distributions for benign and malignant nodules. (A) Form parameter. (B) Area convexity. (C) Perimeter convexity.
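The exact definitions of these three shape features are not given in this excerpt; common choices are form parameter = 4πA/P² (1.0 for a circle) and convexity ratios of the region against its convex hull. Under those assumed definitions, a self-contained pure-Python sketch for a polygonal region:

```python
import math

def polygon_area(pts):
    """Shoelace formula; pts is a list of (x, y) vertices in order."""
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))) / 2

def polygon_perimeter(pts):
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

def convex_hull(pts):
    """Andrew's monotone chain convex hull."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        chain = []
        for p in points:
            while len(chain) >= 2 and (
                (chain[-1][0] - chain[-2][0]) * (p[1] - chain[-2][1])
                - (chain[-1][1] - chain[-2][1]) * (p[0] - chain[-2][0])) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def shape_features(pts):
    a, p = polygon_area(pts), polygon_perimeter(pts)
    hull = convex_hull(pts)
    form = 4 * math.pi * a / p**2                        # circularity, <= 1
    area_convexity = a / polygon_area(hull)              # 1.0 if region is convex
    perimeter_convexity = polygon_perimeter(hull) / p    # 1.0 if region is convex
    return form, area_convexity, perimeter_convexity

# A square is convex: both convexity ratios are 1.0, form = pi/4
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
form, area_conv, perim_conv = shape_features(square)
```

Irregular, spiculated regions (typical of malignant nodules) score lower on all three features than smooth, rounded benign regions, which is the contrast the violin plots visualize.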