
Saliency detection for stereoscopic images.

Yuming Fang, Junle Wang, Manish Narwaria, Patrick Le Callet, Weisi Lin.   

Abstract

Many saliency detection models for 2D images have been proposed for various multimedia processing applications over the past decades. Emerging stereoscopic display applications now require new saliency detection models for salient region extraction. Unlike saliency detection for 2D images, saliency detection for stereoscopic images must also take the depth feature into account. In this paper, we propose a novel stereoscopic saliency detection framework based on the contrast of four features: color, luminance, texture, and depth. These features are extracted from discrete cosine transform (DCT) coefficients for feature contrast calculation. A Gaussian model of the spatial distance between image patches is adopted to account for both local and global contrast. A new fusion method is then designed to combine the feature maps into the final saliency map for stereoscopic images. In addition, we adopt the center bias factor and human visual acuity, two important characteristics of the human visual system, to enhance the final saliency map. Experimental results on eye-tracking databases show the superior performance of the proposed model over existing methods.
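The core idea of the abstract — patch features taken from DCT coefficients, contrasted against all other patches with a Gaussian weight on spatial distance — can be illustrated with a minimal sketch. This is not the authors' implementation: the patch size, the choice of the DC coefficient as a luminance feature and AC energy as a texture proxy, and the single-channel input are all simplifying assumptions, and the color, depth, and fusion stages are omitted.

```python
import numpy as np
from scipy.fft import dctn

def patch_features(image, patch=8):
    """Split a grayscale image into non-overlapping patches and extract
    DCT-based features (assumption: DC coeff = luminance, AC energy = texture)."""
    h, w = image.shape
    feats, centers = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            c = dctn(image[y:y + patch, x:x + patch], norm='ortho')
            dc = c[0, 0]                               # luminance proxy
            ac = np.sqrt((c ** 2).sum() - c[0, 0] ** 2)  # texture proxy
            feats.append([dc, ac])
            centers.append([y + patch / 2.0, x + patch / 2.0])
    return np.array(feats), np.array(centers)

def saliency(feats, centers, sigma):
    """Contrast of each patch against all others, weighted by a Gaussian of
    inter-patch spatial distance (covers both local and global contrast)."""
    d_feat = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    d_sp = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    w = np.exp(-d_sp ** 2 / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)                 # a patch does not contrast with itself
    s = (w * d_feat).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)
    return (s - s.min()) / max(np.ptp(s), 1e-12)  # normalize to [0, 1]
```

A small sigma emphasizes local contrast; a large sigma approaches a global contrast measure. In the full framework one such map would be computed per feature (color, luminance, texture, depth) and the maps fused.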


Year:  2014        PMID: 24832595     DOI: 10.1109/TIP.2014.2305100

Source DB:  PubMed          Journal:  IEEE Trans Image Process        ISSN: 1057-7149            Impact factor:   10.856


  4 in total

1.  Deep Multimodal Fusion Autoencoder for Saliency Prediction of RGB-D Images.

Authors:  Kengda Huang; Wujie Zhou; Meixin Fang
Journal:  Comput Intell Neurosci       Date:  2021-05-05

2.  A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space.

Authors:  Andrea Canessa; Agostino Gibaldi; Manuela Chessa; Marco Fato; Fabio Solari; Silvio P Sabatini
Journal:  Sci Data       Date:  2017-03-28       Impact factor: 6.444

3.  Hierarchical Multimodal Adaptive Fusion (HMAF) Network for Prediction of RGB-D Saliency.

Authors:  Ying Lv; Wujie Zhou
Journal:  Comput Intell Neurosci       Date:  2020-11-20

4.  Analyzing fibrous tissue pattern in fibrous dysplasia bone images using deep R-CNN networks for segmentation.

Authors:  A Saranya; Kottilingam Kottursamy; Ahmad Ali AlZubi; Ali Kashif Bashir
Journal:  Soft Comput       Date:  2021-12-01       Impact factor: 3.732

