Qinglei Du, Han Xu, Yong Ma, Jun Huang, Fan Fan.
Abstract
In infrared and visible image fusion, existing methods typically have a prerequisite that the source images share the same resolution. However, due to limitations of hardware devices and application environments, infrared images constantly suffer from markedly lower resolution compared with the corresponding visible images. In this case, current fusion methods inevitably cause texture information loss in visible images or blur thermal radiation information in infrared images. Moreover, the principle of existing fusion rules typically focuses on preserving texture details in source images, which may be inappropriate for fusing infrared thermal radiation information because it is characterized by pixel intensities, possibly neglecting the prominence of targets in fused images. Faced with such difficulties and challenges, we propose a novel method to fuse infrared and visible images of different resolutions and generate high-resolution resulting images to obtain clear and accurate fused images. Specifically, the fusion problem is formulated as a total variation (TV) minimization problem. The data fidelity term constrains the pixel intensity similarity of the downsampled fused image with respect to the infrared image, and the regularization term compels the gradient similarity of the fused image with respect to the visible image. The fast iterative shrinkage-thresholding algorithm (FISTA) framework is applied to improve the convergence rate. Our resulting fused images are similar to super-resolved infrared images, which are sharpened by the texture information from visible images. Advantages and innovations of our method are demonstrated by the qualitative and quantitative comparisons with six state-of-the-art methods on publicly available datasets.Entities:
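The TV formulation described in the abstract can be sketched as follows. This is an illustrative toy implementation, not the paper's code: it assumes the downsampling operator is 2×2 block averaging, replaces the exact TV term with a smoothed (Charbonnier) variant so a plain FISTA gradient step applies, and all names (`drtv_fuse`, `lam`, `eps`) and parameter values are made up for the sketch.

```python
import numpy as np

# Sketch of the model  min_F  (1/2)||D(F) - I||^2 + lam * TV(F - V),
# where D downsamples, I is the low-res infrared image and V is the
# high-res visible image. Assumptions: D = 2x2 block average, TV is
# smoothed with eps so the whole objective is differentiable.

def block_avg(x, s=2):
    """Downsampling operator D: s x s block average (h, w divisible by s)."""
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def block_avg_adjoint(y, s=2):
    """Adjoint D^T: replicate each low-res pixel, scaled by 1/s^2."""
    return np.kron(y, np.ones((s, s))) / (s * s)

def forward_diff(x):
    """Forward differences, zero on the last row/column (Neumann boundary)."""
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    gx[:, :-1] = x[:, 1:] - x[:, :-1]
    gy[:-1, :] = x[1:, :] - x[:-1, :]
    return gx, gy

def adj_dx(p):
    """Adjoint of the horizontal forward difference."""
    a = np.zeros_like(p)
    a[:, 0] = -p[:, 0]
    a[:, 1:-1] = p[:, :-2] - p[:, 1:-1]
    a[:, -1] = p[:, -2]
    return a

def adj_dy(p):
    """Adjoint of the vertical forward difference."""
    a = np.zeros_like(p)
    a[0, :] = -p[0, :]
    a[1:-1, :] = p[:-2, :] - p[1:-1, :]
    a[-1, :] = p[-2, :]
    return a

def drtv_fuse(ir_low, vis, lam=0.5, eps=1e-2, iters=200, s=2):
    """FISTA on the smoothed objective; returns a high-res fused image."""
    F = np.kron(ir_low, np.ones((s, s)))          # init: upsampled infrared
    Y, t = F.copy(), 1.0
    step = 1.0 / (1.0 / s**2 + 8.0 * lam / eps)   # 1/L, L a Lipschitz bound
    for _ in range(iters):
        gx, gy = forward_diff(Y - vis)
        m = np.sqrt(gx**2 + gy**2 + eps**2)        # smoothed gradient norm
        g = (block_avg_adjoint(block_avg(Y, s) - ir_low, s)
             + lam * (adj_dx(gx / m) + adj_dy(gy / m)))
        F_new = Y - step * g                       # gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = F_new + (t - 1.0) / t_new * (F_new - F)  # FISTA momentum
        F, t = F_new, t_new
    return F
```

The momentum sequence `t` is the standard FISTA acceleration; the paper applies FISTA to the exact (non-smooth) TV term via shrinkage-thresholding, which the smoothed variant above only approximates.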
Keywords: different resolutions; image fusion; infrared; total variation
Year: 2018 PMID: 30413066 PMCID: PMC6263655 DOI: 10.3390/s18113827
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. A typical example of fusing infrared and visible images of different resolutions via (a) downsampling the visible image; (b) our method; and (c) upsampling the infrared image.
Figure 2. Fusion results of different fusion methods on four image pairs. The four groups of results from left to right: airplane_in_trees, Kaptein_01, nato_camp_sequence and sandpath, where the corresponding subfigures are: (a) infrared image; (b) visible image; (c) our DRTV; (d) LP [43]; (e) RP [44]; (f) HMSD [26]; (g) CBF [45]; (h) DDCTPCA [46]; (i) GTF [10].
Figure 3. Quantitative comparisons of six fusion methods on twenty image pairs.
Figure 4. Fusion results of our DRTV on nato_camp_sequence as the parameter increases.
Figure 5. Fusion results of our DRTV on lake as the parameter increases.
Figure 6. Convergence rate comparison between the FISTA framework and the previous variational model: (a) result on airplane_in_trees; (b) result on Kaptein_01.
Figure 7. Runtime comparison of different methods on the whole test dataset.