
Predicting Goal-directed Human Attention Using Inverse Reinforcement Learning.

Zhibo Yang, Lihan Huang, Yupei Chen, Zijun Wei, Seoyoung Ahn, Gregory Zelinsky, Dimitris Samaras, Minh Hoai.

Abstract

Human gaze behavior prediction is important for behavioral vision and for computer vision applications. Most models focus on predicting free-viewing behavior using saliency maps, but do not generalize to goal-directed behavior, such as when a person searches for a visual target object. We propose the first inverse reinforcement learning (IRL) model to learn the internal reward function and policy used by humans during visual search. We modeled the viewer's internal belief states as dynamic contextual belief maps of object locations. These maps were learned and then used to predict behavioral scanpaths for multiple target categories. To train and evaluate our IRL model we created COCO-Search18, which is now the largest dataset of high-quality search fixations in existence. COCO-Search18 has 10 participants searching for each of 18 target-object categories in 6202 images, yielding about 300,000 goal-directed fixations. When trained and evaluated on COCO-Search18, the IRL model outperformed baseline models in predicting search fixation scanpaths, both in terms of similarity to human search behavior and search efficiency. Finally, reward maps recovered by the IRL model reveal distinctive target-dependent patterns of object prioritization, which we interpret as a learned object context.
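To make the abstract's core idea concrete, here is a toy sketch of how a recovered reward map could drive scanpath prediction. This is not the paper's actual IRL policy (the paper learns reward and policy jointly from human fixations); it only illustrates the downstream step of turning a 2D reward map into an ordered sequence of fixations, using a simple greedy rule with inhibition-of-return. The function name and parameters are hypothetical.

```python
import numpy as np

def predict_scanpath(reward_map, n_fixations=6, ior_radius=2):
    """Greedily select fixations from a 2D reward map.

    At each step, fixate the location with the highest remaining
    reward, then suppress a disk around it (inhibition-of-return)
    so the next fixation moves elsewhere.
    """
    reward = reward_map.astype(float)
    h, w = reward.shape
    ys, xs = np.mgrid[0:h, 0:w]
    scanpath = []
    for _ in range(n_fixations):
        # next fixation = current argmax of the (suppressed) reward map
        y, x = np.unravel_index(np.argmax(reward), reward.shape)
        scanpath.append((y, x))
        # suppress a disk of radius ior_radius around the fixation
        mask = (ys - y) ** 2 + (xs - x) ** 2 <= ior_radius ** 2
        reward[mask] = -np.inf
    return scanpath
```

In the paper, the reward map is target-dependent and the belief maps evolve with each fixation; this sketch fixes the map once and only models the selection-with-suppression loop.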

Year:  2020        PMID: 34163124      PMCID: PMC8218821          DOI: 10.1109/cvpr42600.2020.00027

Source DB:  PubMed          Journal:  Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit        ISSN: 1063-6919


Related articles: 4 in total

1.  Modeling Human Visual Search in Natural Scenes: A Combined Bayesian Searcher and Saliency Map Approach.

Authors:  Gaston Bujia; Melanie Sclar; Sebastian Vita; Guillermo Solovey; Juan Esteban Kamienkowski
Journal:  Front Syst Neurosci       Date:  2022-05-27

2.  DeepGaze III: Modeling free-viewing human scanpaths with deep learning.

Authors:  Matthias Kümmerer; Matthias Bethge; Thomas S A Wallis
Journal:  J Vis       Date:  2022-04-06       Impact factor: 2.004

3.  A Bio-Inspired Endogenous Attention-Based Architecture for a Social Robot.

Authors:  Sara Marques-Villarroya; Jose Carlos Castillo; Juan José Gamboa-Montero; Javier Sevilla-Salcedo; Miguel Angel Salichs
Journal:  Sensors (Basel)       Date:  2022-07-13       Impact factor: 3.847

4.  Weighting the factors affecting attention guidance during free viewing and visual search: The unexpected role of object recognition uncertainty.

Authors:  Souradeep Chakraborty; Dimitris Samaras; Gregory J Zelinsky
Journal:  J Vis       Date:  2022-03-02       Impact factor: 2.004

