Lei Wang1, Chunhong Chang1, Zhouqi Liu1, Jin Huang1, Cong Liu1, Chunxiang Liu2.
Abstract
Traditional medical image fusion methods, such as the well-known multi-scale decomposition-based methods, often suffer from poor sparse representation of salient features and from fusion rules with a limited ability to transfer the captured feature information. To address this problem, a medical image fusion method based on the scale-invariant feature transform (SIFT) descriptor and a deep convolutional neural network (CNN) in the shift-invariant shearlet transform (SIST) domain is proposed. First, the images to be fused are decomposed into high-pass and low-pass coefficients. Then, the high-pass components are fused under a rule based on a pre-trained CNN model, which consists of four steps: feature detection, initial segmentation, consistency verification, and final fusion. The low-pass subbands are fused according to a matching degree computed with the SIFT descriptor, which captures the features of the low-frequency components. Finally, the fusion result is obtained by the inverse SIST. Taking the standard deviation (SD), QAB/F, entropy (En), and mutual information (MI) as objective measurements, the experimental results demonstrate that the proposed method preserves detailed information without artifacts or distortions and also achieves better quantitative performance.
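The decompose-fuse-reconstruct pipeline described in the abstract can be sketched in toy form as follows. This is not the authors' implementation: a box-filter low/high-pass split stands in for the SIST decomposition, a max-absolute rule stands in for the CNN-based high-pass fusion, and plain averaging stands in for the SIFT matching-degree low-pass rule; all function names are illustrative.

```python
import numpy as np

def decompose(img, k=5):
    """Toy low/high-pass split via a k x k box filter (stand-in for SIST)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    low = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    high = img - low          # residual detail layer
    return low, high

def fuse(img_a, img_b):
    """Fuse two registered images of the same scene:
    - low-pass: average (stand-in for the SIFT matching-degree rule),
    - high-pass: max-absolute selection (stand-in for the CNN-based rule),
    - reconstruction: summation (stand-in for the inverse SIST)."""
    la, ha = decompose(img_a)
    lb, hb = decompose(img_b)
    low_f = 0.5 * (la + lb)
    high_f = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
    return low_f + high_f
```

For two constant images of 0 and 1, the high-pass layers vanish and the fused result is the averaged low-pass, 0.5 everywhere, which is a quick sanity check on the reconstruction step.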
Year: 2021 PMID: 33968357 PMCID: PMC8081630 DOI: 10.1155/2021/9958017
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
Figure 1The architecture of the proposed method.
Figure 2The fusion procedure of the high-pass subbands.
Figure 3The trained CNN model in this paper.
Figure 4: The six groups of source images. Every pair is captured at the same location with different modalities: CT-MRI in the first row and MRI-PET in the second row.
Figure 5The fusion results of the three groups of CT-MRI. (a) PCNN. (b) CSR. (c) Shearlet. (d) DCNN. (e) Proposed.
Figure 6The fusion results of the three groups of MRI-PET. (a) PCNN. (b) CSR. (c) Shearlet. (d) DCNN. (e) Proposed.
The objective evaluations of CT-MRI fusion in Figure 5 (Groups 1-3, left to right; per-group metrics: SD, QAB/F, En, MI).

| Method | SD | QAB/F | En | MI | SD | QAB/F | En | MI | SD | QAB/F | En | MI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PCNN | 20.63 | 0.50 | 2.12 | 0.65 | 20.36 | 0.55 | 2.00 | 0.58 | 20.87 | 0.39 | 1.91 | 0.45 |
| CSR | 20.86 | 0.55 | 2.20 | 0.77 | 21.56 | 0.60 | 2.11 | 0.62 | 23.96 | 0.46 | 2.00 | 0.62 |
| Shearlet | 21.44 | 0.59 | 2.28 | 0.78 | 22.43 | 0.62 | 2.18 | 0.69 | 23.55 | 0.51 | 2.10 | 0.69 |
| DCNN | 22.35 | 0.71 | 2.31 | 0.82 | 23.58 | 0.68 | 2.26 | 0.76 | 24.20 | 0.55 | 2.19 | 0.78 |
| Proposed | 23.22 | 0.83 | 2.45 | 0.88 | 24.43 | 0.79 | 2.33 | 0.82 | 24.55 | 0.68 | 2.28 | 0.85 |
The objective evaluations of MRI-PET fusion in Figure 6 (Groups 4-6, left to right; per-group metrics: SD, QAB/F, En, MI).

| Method | SD | QAB/F | En | MI | SD | QAB/F | En | MI | SD | QAB/F | En | MI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PCNN | 35.62 | 0.59 | 3.76 | 0.46 | 41.05 | 0.62 | 4.11 | 0.38 | 31.12 | 0.58 | 3.89 | 0.62 |
| CSR | 37.21 | 0.65 | 3.88 | 0.50 | 43.56 | 0.72 | 4.25 | 0.42 | 33.93 | 0.65 | 3.96 | 0.71 |
| Shearlet | 37.64 | 0.67 | 4.13 | 0.59 | 44.95 | 0.78 | 4.35 | 0.48 | 34.52 | 0.69 | 4.31 | 0.76 |
| DCNN | 38.19 | 0.73 | 4.50 | 0.65 | 45.36 | 0.81 | 4.44 | 0.62 | 34.91 | 0.76 | 4.55 | 0.81 |
| Proposed | 39.42 | 0.84 | 4.65 | 0.70 | 47.21 | 0.86 | 4.63 | 0.63 | 35.68 | 0.83 | 4.87 | 0.85 |
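As a rough illustration (not the authors' evaluation code), three of the four metrics reported in the tables can be computed from grey-level histograms as below; QAB/F, the gradient-based Xydeas-Petrovic measure, needs edge-strength and orientation maps and is omitted here for brevity. Images are assumed to hold integer grey levels in [0, 255].

```python
import numpy as np

def std_dev(img):
    """SD: standard deviation of pixel intensities (a contrast proxy)."""
    return float(np.std(img))

def entropy(img, bins=256):
    """En: Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return float(-np.sum(p * np.log2(p)))

def mutual_information(a, b, bins=256):
    """MI between a source image and the fused image, estimated from
    the joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)              # marginal of a
    py = pxy.sum(axis=0)              # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] *
                        np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

A useful sanity check: for an image containing every grey level equally often, En is exactly 8 bits, and the MI of an image with itself equals its entropy.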