
Rapid biologically-inspired scene classification using features shared with visual attention.

Christian Siagian, Laurent Itti.

Abstract

We describe and validate a simple context-based scene recognition algorithm for mobile robotics applications. The system can differentiate outdoor scenes from various sites on a college campus using a multiscale set of early-visual features, which capture the "gist" of the scene in a low-dimensional signature vector. Distinct from previous approaches, the algorithm has the advantage of being biologically plausible and of having low computational complexity, sharing its low-level features with a model of visual attention that may operate concurrently on a robot. We compare classification accuracy using scenes filmed at three outdoor sites on campus (13,965 to 34,711 frames per site). Dividing each site into nine segments, we obtain segment classification rates between 84.21 percent and 88.62 percent. Combining scenes from all sites (75,073 frames in total) yields 86.45 percent correct classification, demonstrating the generalization and scalability of the approach.
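The core idea of the abstract, pooling multiscale early-visual features over a coarse spatial grid into a low-dimensional "gist" signature, can be illustrated with a minimal sketch. This is not the authors' implementation (which shares color, intensity, and orientation channels with the Itti-Koch attention model); it uses only intensity at a few pyramid scales, averaged over a hypothetical 4x4 grid, to show how a signature vector of fixed, small dimension arises from a full frame.

```python
import numpy as np

def gist_signature(image, grid=4, levels=3):
    """Gist-style scene signature (illustrative sketch, intensity only).

    Pools each pyramid level of a grayscale frame over a `grid` x `grid`
    layout, yielding a vector of length levels * grid * grid regardless
    of the input resolution.
    """
    img = image.astype(float)
    features = []
    for _ in range(levels):
        h, w = img.shape
        gh, gw = h // grid, w // grid
        # average each grid cell -> grid*grid numbers per scale
        cells = img[:gh * grid, :gw * grid].reshape(grid, gh, grid, gw)
        features.append(cells.mean(axis=(1, 3)).ravel())
        # crude downsample by 2 for the next pyramid level
        img = img[::2, ::2]
    return np.concatenate(features)

rng = np.random.default_rng(0)
frame = rng.random((128, 160))   # stand-in for a video frame
sig = gist_signature(frame)
print(sig.shape)                 # (48,): 3 levels x 16 cells
```

A classifier (the paper trains a neural network on such signatures) then maps each 48-dimensional vector to one of the nine campus segments; the low dimensionality is what keeps per-frame classification cheap.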


Year:  2007        PMID: 17170482     DOI: 10.1109/TPAMI.2007.40

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


  16 in total

1.  A top-down auditory attention model for learning task dependent influences on prominence detection in speech.

Authors:  Ozlem Kalinli; Shrikanth Narayanan
Journal:  Proc IEEE Int Conf Acoust Speech Signal Process       Date:  2008

2.  Humans and monkeys share visual representations.

Authors:  Denis Fize; Maxime Cauchoix; Michèle Fabre-Thorpe
Journal:  Proc Natl Acad Sci U S A       Date:  2011-04-18       Impact factor: 11.205

3.  Prominence Detection Using Auditory Attention Cues and Task-Dependent High Level Information.

Authors:  Ozlem Kalinli; Shrikanth Narayanan
Journal:  IEEE Trans Audio Speech Lang Process       Date:  2009-07-01

4.  Behavioral Signal Processing: Deriving Human Behavioral Informatics From Speech and Language: Computational techniques are presented to analyze and model expressed and perceived human behavior-variedly characterized as typical, atypical, distressed, and disordered-from speech and language cues and their applications in health, commerce, education, and beyond.

Authors:  Shrikanth Narayanan; Panayiotis G Georgiou
Journal:  Proc IEEE Inst Electr Electron Eng       Date:  2013-02-07       Impact factor: 10.961

5.  A neuromorphic architecture for object recognition and motion anticipation using burst-STDP.

Authors:  Andrew Nere; Umberto Olcese; David Balduzzi; Giulio Tononi
Journal:  PLoS One       Date:  2012-05-15       Impact factor: 3.240

6.  The Role of Global Appearance of Omnidirectional Images in Relative Distance and Orientation Retrieval.

Authors:  Vicente Román; Luis Payá; Adrián Peidró; Mónica Ballesta; Oscar Reinoso
Journal:  Sensors (Basel)       Date:  2021-05-11       Impact factor: 3.576

7.  Active learning framework with iterative clustering for bioimage classification.

Authors:  Natsumaro Kutsuna; Takumi Higaki; Sachihiro Matsunaga; Tomoshi Otsuki; Masayuki Yamaguchi; Hirofumi Fujii; Seiichiro Hasezawa
Journal:  Nat Commun       Date:  2012       Impact factor: 14.919

8.  Spatio-temporal saliency perception via hypercomplex frequency spectral contrast.

Authors:  Ce Li; Jianru Xue; Nanning Zheng; Xuguang Lan; Zhiqiang Tian
Journal:  Sensors (Basel)       Date:  2013-03-12       Impact factor: 3.576

9.  Performance of global-appearance descriptors in map building and localization using omnidirectional vision.

Authors:  Luis Payá; Francisco Amorós; Lorenzo Fernández; Oscar Reinoso
Journal:  Sensors (Basel)       Date:  2014-02-14       Impact factor: 3.576

10.  Classifying normal and abnormal status based on video recordings of epileptic patients.

Authors:  Jing Li; Xiantong Zhen; Xianzeng Liu; Gaoxiang Ouyang
Journal:  ScientificWorldJournal       Date:  2014-04-08
