Yuntao Shou, Tao Meng, Wei Ai, Canhao Xie, Haiyan Liu, Yina Wang.
Abstract
Object detection in the medical field is challenging in terms of both classification and regression. Because of its crucial applications in computer-aided diagnosis and computer-aided detection, an increasing number of researchers are transferring object detection techniques to the medical domain. However, existing work on object detection does not account for the low resolution of medical images, their high noise levels, or the small size of the objects to be detected. To address this, this paper proposes a new model, the MS Transformer, in which a self-supervised learning approach applies a random mask to the input image and reconstructs the input features, learning a richer feature representation and filtering out excessive noise. To focus the model on the small objects being detected, a hierarchical transformer is introduced, and a sliding window with a local self-attention mechanism assigns higher attention scores to the small objects of interest. Finally, a single-stage object detection framework predicts the set of bounding-box locations and object classes. On the DeepLesion and BCDD benchmark datasets, the proposed model achieves improvements on multiple evaluation metrics.
Year: 2022 PMID: 35965770 PMCID: PMC9371842 DOI: 10.1155/2022/5863782
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. The MS Transformer architecture consists of an image reconstruction layer, a Swin Transformer backbone, and a YOLOv5 detection stage. The hierarchical transformer is composed of two successive transformer blocks. The model predicts the lesion class and the bounding box through a fully connected layer and the object detection head, respectively.
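The hierarchical transformer restricts self-attention to local windows so that small lesions receive a larger share of the attention budget. Below is a minimal PyTorch sketch of window-partitioned local self-attention; the feature dimensions, window size, and module names are illustrative assumptions rather than the authors' implementation, and the real Swin Transformer additionally alternates shifted windows between successive blocks.

```python
import torch
import torch.nn as nn

class WindowSelfAttention(nn.Module):
    """Multi-head self-attention restricted to non-overlapping local windows."""

    def __init__(self, dim: int = 96, num_heads: int = 3, window: int = 7):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) feature map; H and W are assumed divisible by the window size.
        B, H, W, C = x.shape
        w = self.window
        # Partition the map into (H/w * W/w) windows of w*w tokens each.
        x = x.view(B, H // w, w, W // w, w, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)
        # Attention is computed only among tokens inside the same window, which
        # concentrates the attention budget on small local structures.
        x, _ = self.attn(x, x, x)
        # Undo the window partition back to the (B, H, W, C) layout.
        x = x.view(B, H // w, W // w, w, w, C)
        return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

if __name__ == "__main__":
    feats = torch.randn(2, 28, 28, 96)         # dummy feature map
    print(WindowSelfAttention()(feats).shape)  # torch.Size([2, 28, 28, 96])
```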
Figure 2. The image reconstruction layer randomly masks patches of the input image; the masked patches are then fed into the ViT encoder and decoder, and the image is reconstructed by minimizing the reconstruction loss.
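Below is a hedged sketch of the random-mask-and-reconstruct step from Figure 2, in the style of masked autoencoders: patchify the image, keep a random subset of patches, encode only the visible patches, and compute the reconstruction loss on the masked ones. The single-linear-layer encoder/decoder and the 75% mask ratio are placeholder assumptions; the paper's reconstruction layer uses a ViT encoder and decoder.

```python
import torch
import torch.nn as nn

def patchify(img: torch.Tensor, p: int = 16) -> torch.Tensor:
    """Split a (B, C, H, W) image into (B, N, p*p*C) flat patches."""
    B, C, H, W = img.shape
    x = img.reshape(B, C, H // p, p, W // p, p)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(B, (H // p) * (W // p), p * p * C)

class MaskedReconstructor(nn.Module):
    """Toy stand-in for the image reconstruction layer in Figure 2."""

    def __init__(self, patch_dim: int = 768, mask_ratio: float = 0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, 1, patch_dim))
        self.encoder = nn.Linear(patch_dim, patch_dim)  # placeholder for the ViT encoder
        self.decoder = nn.Linear(patch_dim, patch_dim)  # placeholder for the ViT decoder

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        B, N, D = patches.shape
        n_keep = int(N * (1 - self.mask_ratio))
        # Per-sample random permutation; the first n_keep indices stay visible.
        keep = torch.rand(B, N, device=patches.device).argsort(dim=1)[:, :n_keep]
        idx = keep.unsqueeze(-1).expand(-1, -1, D)
        visible = torch.gather(patches, 1, idx)
        latent = self.encoder(visible)
        # Rebuild the full sequence: latents at visible slots, a shared learnable
        # mask token everywhere else, then decode predictions for every patch.
        full = torch.scatter(self.mask_token.expand(B, N, D), 1, idx, latent)
        pred = self.decoder(full)
        # Mean-squared reconstruction error, averaged over masked patches only.
        masked = torch.ones(B, N, device=patches.device).scatter(1, keep, 0.0)
        return (((pred - patches) ** 2).mean(-1) * masked).sum() / masked.sum()

if __name__ == "__main__":
    imgs = torch.randn(2, 3, 224, 224)  # dummy CT-like batch
    loss = MaskedReconstructor()(patchify(imgs))
    loss.backward()
    print(float(loss))
```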
Division of the training, validation, and test sets for the DeepLesion and BCDD datasets, along with the number of detection classes and the evaluation metrics.
| Datasets | Train (%) | Validation (%) | Test (%) | Classes | Evaluation metrics |
|---|---|---|---|---|---|
| DeepLesion | 70 | 15 | 15 | 8 | IoU / mAP / AP50(box) |
| BCDD | 70 | 15 | 15 | 3 | IoU / mAP / AP50(box) |
Recognition accuracy (%) per lesion category for the MS Transformer and baseline models on the DeepLesion dataset (LU = lung, ME = mediastinum, LV = liver, ST = soft tissue, PV = pelvis, AB = abdomen, KD = kidney, BN = bone; Average (w) = weighted average).
| Method | LU | ME | LV | ST | PV | AB | KD | BN | Average (w) |
|---|---|---|---|---|---|---|---|---|---|
| Faster R-CNN | 85.9 | 85.2 | 88.2 | 82.0 | 93.5 | 81.2 | 78.4 | 86.9 | 83.3 |
| YOLOv5 | 87.2 | 85.6 | 86.6 | 90.9 | 93.4 | 84.1 | 75.5 | 84.5 | 85.2 |
| Swin Transformer | 74.8 | 84.5 | 85.6 | 83.6 | 93.9 | 72.9 | 84.7 | 83.3 | 82.9 |
| DETR | 89.8 | 80.7 | 88.6 | 94.6 | 92.7 | 73.4 | 77.8 | 84.4 | 86.7 |
| MS Transformer | 90.7 | 86.3 | 94.6 | 92.9 | 93.7 | 71.9 | 87.9 | 91.0 | 90.3 |
Per-lesion-type AP50(box) (%) of the MS Transformer compared with other baseline models on the DeepLesion dataset; the final column is the mAP over all types.
| Method | LU | ME | LV | ST | PV | AB | KD | BN | Average (mAP) |
|---|---|---|---|---|---|---|---|---|---|
| Faster R-CNN | 91.8 | 81.7 | 86.5 | 85.2 | 89.6 | 77.0 | 73.5 | 81.7 | 83.3 |
| YOLOv5 | 69.2 | 90.7 | 88.1 | 92.4 | 95.7 | 90.6 | 88.4 | 90.7 | 88.2 |
| Swin Transformer | 21.0 | 91.5 | 89.6 | 92.8 | 96.4 | 78.2 | 88.8 | 91.1 | 81.2 |
| DETR | 87.9 | 92.6 | 90.1 | 90.5 | 94.8 | 90.9 | 86.2 | 91.2 | 87.8 |
| MS Transformer | 78.6 | 89.8 | 92.3 | 90.3 | 97.6 | 90.2 | 91.6 | 92.1 | 89.6 |
Recognition accuracy (%) per cell category for the MS Transformer and baseline models on the BCDD dataset (Average (w) = weighted average).
| Method | WBC | RBC | Platelets | Average (w) |
|---|---|---|---|---|
| Faster R-CNN | 68.71 | 97.22 | 63.15 | 75.46 |
| YOLOv5 | 97.33 | 77.36 | 78.21 | 84.29 |
| Transformer | 97.32 | 79.61 | 84.53 | 86.91 |
| DETR | 94.25 | 93.70 | 84.92 | 89.86 |
| MS Transformer | 100 | 97.03 | 94.78 | 96.15 |
Per-cell-type AP50(box) (%) of the MS Transformer compared with other baseline models on the BCDD dataset; the final column is the mAP over all types.
| Method | WBC | RBC | Platelets | Average (mAP) |
|---|---|---|---|---|
| Faster R-CNN | 35.94 | 91.70 | 83.01 | 70.21 |
| YOLOv5 | 98.21 | 85.43 | 91.67 | 91.67 |
| Transformer | 98.84 | 78.61 | 87.53 | 88.17 |
| DETR | 76.23 | 82.35 | 88.76 | 83.91 |
| MS Transformer | 98.89 | 90.13 | 84.31 | 91.89 |
Ablation experiment of the MS Transformer on the DeepLesion benchmark dataset ('+' = component included, '−' = component removed).
| Mask | Hierarchical transformer | Accuracy | mAP |
|---|---|---|---|
| + | − | 74.7 | 73.5 |
| − | + | 81.7 | 80.6 |
| + | + | 90.3 | 89.6 |
Figure 3. Relationship between mask rate and accuracy on the BCDD and DeepLesion benchmark datasets. A high mask rate (65%-75%) achieves better results.
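Reproducing the Figure 3 sweep amounts to pretraining the reconstruction layer at several mask rates and scoring the downstream detector each time. The skeleton below only illustrates the loop structure; `pretrain_and_finetune` is a hypothetical stub standing in for the full training and DeepLesion/BCDD evaluation pipeline.

```python
# Hypothetical mask-rate grid search mirroring Figure 3; the stub returns a
# placeholder score and must be replaced by a real train/evaluate pipeline.
def pretrain_and_finetune(mask_ratio: float) -> float:
    """Stub: pretrain the reconstruction layer at `mask_ratio`, fine-tune the
    detector, and return detection accuracy on the validation split."""
    return 0.0  # placeholder score

if __name__ == "__main__":
    grid = (0.45, 0.55, 0.65, 0.75, 0.85)
    scores = {r: pretrain_and_finetune(r) for r in grid}
    best = max(scores, key=scores.get)
    print(f"best mask ratio: {best:.0%} (accuracy {scores[best]:.1f})")
```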