
Exploring tiny images: the roles of appearance and contextual information for machine and human object recognition.

Devi Parikh, C. Lawrence Zitnick, Tsuhan Chen.

Abstract

Typically, object recognition is performed based solely on the appearance of the object. However, relevant information also exists in the scene surrounding the object. In this paper, we explore the roles that appearance and contextual information play in object recognition. Through machine experiments and human studies, we show that the importance of contextual information varies with the quality of the appearance information, such as an image's resolution. Our machine experiments explicitly model context between object categories through the use of relative location and relative scale, in addition to co-occurrence. With the use of our context model, our algorithm achieves state-of-the-art performance on the MSRC and Corel data sets. We perform recognition tests for machines and human subjects on low and high resolution images, which vary significantly in the amount of appearance information present, using just the object appearance information, the combination of appearance and context, as well as just context without object appearance information (blind recognition). We also explore the impact of the different sources of context (co-occurrence, relative-location, and relative-scale). We find that the importance of different types of contextual information varies significantly across data sets such as MSRC and PASCAL.


Year:  2012        PMID: 22201066     DOI: 10.1109/TPAMI.2011.276

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


  1 in total

1.  A Joint Gaussian Process Model for Active Visual Recognition with Expertise Estimation in Crowdsourcing.

Authors:  Chengjiang Long; Gang Hua; Ashish Kapoor
Journal:  Int J Comput Vis       Date:  2015-06-11       Impact factor: 7.410

