Bingzhe Wei, Xiangchu Feng, Kun Wang, Bian Gao.
Abstract
Multi-focus image fusion is a crucial branch of image processing, and many methods have been developed from different perspectives to solve it. Among them, sparse representation (SR)-based and convolutional neural network (CNN)-based fusion methods are the most widely used. The SR-based model fuses source image patches individually, so it is essentially a local method with a nonlinear fusion rule. The CNN-based model, in contrast, learns a decision map that directly governs the mapping between the source images and the fused image; this fusion is global and follows a linear fusion rule. Combining the advantages of these two approaches, a novel fusion method in which a CNN assists SR is proposed, with the aim of producing a fused image containing more precise and abundant information. In the proposed method, source image patches are fused based on SR together with a new weight obtained from the CNN. Experimental results demonstrate that the proposed method not only clearly outperforms the SR and CNN methods in terms of both visual perception and objective evaluation metrics, but is also significantly better than other state-of-the-art methods, while its computational complexity is greatly reduced.
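The patch-wise weighted fusion the abstract describes can be illustrated with a minimal sketch. Note the assumptions: the l1 activity of each mean-removed patch stands in for both the sparse-coefficient activity of the SR model and the CNN-derived decision weight (the paper's actual dictionary learning and network are not reproduced here), and the patch size, stride, and soft-weighting rule are illustrative choices, not the authors' pipeline.

```python
import numpy as np

def patch_activity(patch):
    # l1 norm of the mean-removed patch: a common activity proxy in
    # SR-based fusion; here it also stands in for the CNN focus score.
    return np.abs(patch - patch.mean()).sum()

def fuse_multifocus(img_a, img_b, p=8, stride=4):
    """Fuse two grayscale source images patch by patch.

    Each pair of co-located patches is blended with a soft weight derived
    from their activities; overlapping patches are averaged on reconstruction.
    """
    h, w = img_a.shape
    fused = np.zeros((h, w))
    counts = np.zeros((h, w))
    for i in range(0, h - p + 1, stride):
        for j in range(0, w - p + 1, stride):
            pa = img_a[i:i + p, j:j + p]
            pb = img_b[i:i + p, j:j + p]
            a_a, a_b = patch_activity(pa), patch_activity(pb)
            # soft weight standing in for the CNN decision value
            w_a = a_a / (a_a + a_b + 1e-12)
            fused[i:i + p, j:j + p] += w_a * pa + (1 - w_a) * pb
            counts[i:i + p, j:j + p] += 1
    return fused / np.maximum(counts, 1)

# Synthetic example: A is defocused (flat) on the right, B on the left.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
blur_a = sharp.copy(); blur_a[:, 16:] = 0.5
blur_b = sharp.copy(); blur_b[:, :16] = 0.5
fused = fuse_multifocus(blur_a, blur_b)
```

Because each patch is a convex combination of the two sources, the fused image stays within the dynamic range of the inputs, and the activity weighting pulls each region toward whichever source is in focus there.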
Keywords: convolutional neural network; multi-focus image fusion; sparse representation
Year: 2021 PMID: 34203573 DOI: 10.3390/e23070827
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524