
Dynamic Whitening Saliency.

Victor Leboran, Anton Garcia-Diaz, Xose R Fdez-Vidal, Xose M Pardo.   

Abstract

General dynamic scenes involve multiple rigid and flexible objects undergoing relative and common motion, whether camera-induced or not. The complexity of motion events, together with their strong spatio-temporal correlations, makes the estimation of dynamic visual saliency a major computational challenge. In this work, we propose a computational model of saliency based on the assumption that perceptually relevant information is carried by high-order statistical structures. Through whitening, we completely remove the second-order information (correlations and variances) of the data, gaining access to this relevant information. The proposed approach is an analytically tractable and computationally simple framework which we call Dynamic Adaptive Whitening Saliency (AWS-D). For model assessment, the resulting saliency maps were used to predict the fixations of human observers over six public video datasets, and to reproduce human behavior in certain psychophysical experiments (dynamic pop-out). The results show that AWS-D outperforms state-of-the-art dynamic saliency models, and suggest that the model may contain the basis for understanding the key mechanisms of visual saliency. Experimental evaluation was performed using an extension to video of the well-known methodology for static images, together with a bootstrap permutation test (random-label hypothesis), which yields additional information about the temporal evolution of the metrics' statistical significance.
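The whitening step the abstract describes, removing all second-order structure (mean, variances, correlations) so that only higher-order statistics remain, can be illustrated with a minimal ZCA whitening sketch. This is a generic illustration of whitening, not the AWS-D implementation; the feature data here is synthetic and purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature responses: 1000 samples of 3 correlated features
A = rng.normal(size=(3, 3))
X = rng.normal(size=(1000, 3)) @ A.T

# Whitening: subtract the mean, then decorrelate and rescale to unit variance
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T  # ZCA whitening matrix
Xw = Xc @ W

# After whitening, the sample covariance is the identity matrix:
# all second-order information has been removed.
print(np.allclose(np.cov(Xw, rowvar=False), np.eye(3), atol=1e-6))  # True
```

Since the whitening matrix is the inverse square root of the sample covariance, the whitened covariance is exactly the identity up to floating-point error; what remains in `Xw` is the higher-order statistical structure that, under the model's assumption, carries the perceptually relevant information.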

Year:  2016        PMID: 27187946     DOI: 10.1109/TPAMI.2016.2567391

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


  2 in total

1.  A Neuromorphic Proto-Object Based Dynamic Visual Saliency Model With a Hybrid FPGA Implementation.

Authors:  Jamal Molin; Chetan Thakur; Ernst Niebur; Ralph Etienne-Cummings
Journal:  IEEE Trans Biomed Circuits Syst       Date:  2021-08-12       Impact factor: 5.234

2.  Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching.

Authors:  Ioannis Agtzidis; Mikhail Startsev; Michael Dorr
Journal:  J Eye Mov Res       Date:  2020-07-27       Impact factor: 0.957

