
Deep Guided Learning for Fast Multi-Exposure Image Fusion.

Kede Ma, Zhengfang Duanmu, Hanwei Zhu, Yuming Fang, Zhou Wang.   

Abstract

We propose a fast multi-exposure image fusion (MEF) method, namely MEF-Net, for static image sequences of arbitrary spatial resolution and exposure number. We first feed a low-resolution version of the input sequence to a fully convolutional network for weight map prediction. We then jointly upsample the weight maps using a guided filter. The final image is computed by a weighted fusion. Unlike conventional MEF methods, MEF-Net is trained end-to-end by optimizing the perceptually calibrated MEF structural similarity (MEF-SSIM) index over a database of training sequences at full resolution. Across an independent set of test sequences, we find that the optimized MEF-Net achieves consistent improvement in visual quality for most sequences, and runs 10 to 1000 times faster than state-of-the-art methods. The code has been made publicly available.
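The fusion pipeline described in the abstract — predict per-exposure weight maps at low resolution, upsample them to full resolution, then take a per-pixel weighted sum over the exposure stack — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the guided-filter joint upsampling is replaced here by nearest-neighbor upsampling to keep the sketch dependency-free, and the CNN weight predictor is assumed to have already produced the low-resolution maps.

```python
import numpy as np

def upsample_nearest(w, shape):
    # Stand-in for the guided filter's joint upsampling step
    # (the paper uses a guided filter; nearest-neighbor here
    # keeps the sketch self-contained).
    H, W = shape
    h, w_ = w.shape
    ys = np.arange(H) * h // H
    xs = np.arange(W) * w_ // W
    return w[np.ix_(ys, xs)]

def fuse(sequence, low_res_weights):
    """Weighted fusion of a static exposure stack.

    sequence: list of K full-resolution grayscale images, each (H, W).
    low_res_weights: list of K low-resolution weight maps, assumed to
    come from the weight-prediction network (hypothetical inputs here).
    """
    H, W = sequence[0].shape
    # Bring each weight map to full resolution.
    weights = np.stack([upsample_nearest(w, (H, W)) for w in low_res_weights])
    # Softmax-normalize across the exposure axis so the weights at each
    # pixel are positive and sum to one.
    weights = np.exp(weights)
    weights /= weights.sum(axis=0, keepdims=True)
    # Per-pixel weighted sum over the K exposures.
    return (weights * np.stack(sequence)).sum(axis=0)
```

With equal (all-zero) weight maps for a two-image stack, the fused result is the plain average of the two exposures, which is a quick sanity check on the normalization.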

Year:  2019        PMID: 31751238     DOI: 10.1109/TIP.2019.2952716

Source DB:  PubMed          Journal:  IEEE Trans Image Process        ISSN: 1057-7149            Impact factor:   10.856


  1 in total

1.  General Image Fusion for an Arbitrary Number of Inputs Using Convolutional Neural Networks.

Authors:  Yifan Xiao; Zhixin Guo; Peter Veelaert; Wilfried Philips
Journal:  Sensors (Basel)       Date:  2022-03-23       Impact factor: 3.576

