
Fast vision through frameless event-based sensing and convolutional processing: application to texture recognition.

Jose Antonio Perez-Carrasco, Begona Acha, Carmen Serrano, Luis Camunas-Mesa, Teresa Serrano-Gotarredona, Bernabe Linares-Barranco.

Abstract

Address-event representation (AER) is an emerging hardware technology with high potential for providing, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but instead sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous with vision sensing, since there is no need to wait for full frames to be sensed. In addition, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are several already-reported AER convolutional chips, which have demonstrated very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks comprising hundreds or thousands of individual modules. In the meantime, research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates, with AER hardware, Manjunath's frame-based feature recognition software algorithm, and we have analyzed its performance using our behavioral simulator. Recognition-rate performance is not degraded. Moreover, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.

Year:  2010        PMID: 20181543     DOI: 10.1109/TNN.2009.2039943

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw        ISSN: 1045-9227


Related articles: 6 in total

1.  Exploiting Lightweight Statistical Learning for Event-Based Vision Processing.

Authors:  Cong Shi; Jiajun Li; Ying Wang; Gang Luo
Journal:  IEEE Access       Date:  2018-04-04       Impact factor: 3.367

2.  On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex.

Authors:  Carlos Zamarreño-Ramos; Luis A Camuñas-Mesa; Jose A Pérez-Carrasco; Timothée Masquelier; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco
Journal:  Front Neurosci       Date:  2011-03-17       Impact factor: 4.677

3.  Selective change driven imaging: a biomimetic visual sensing strategy.

Authors:  Jose A Boluda; Pedro Zuccarello; Fernando Pardo; Francisco Vegara
Journal:  Sensors (Basel)       Date:  2011-11-22       Impact factor: 3.576

4.  A neuromorphic architecture for object recognition and motion anticipation using burst-STDP.

Authors:  Andrew Nere; Umberto Olcese; David Balduzzi; Giulio Tononi
Journal:  PLoS One       Date:  2012-05-15       Impact factor: 3.240

5.  Poker-DVS and MNIST-DVS. Their History, How They Were Made, and Other Details.

Authors:  Teresa Serrano-Gotarredona; Bernabé Linares-Barranco
Journal:  Front Neurosci       Date:  2015-12-22       Impact factor: 4.677

6.  Event-Based Sensing and Signal Processing in the Visual, Auditory, and Olfactory Domain: A Review.

Authors:  Mohammad-Hassan Tayarani-Najaran; Michael Schmuker
Journal:  Front Neural Circuits       Date:  2021-05-31       Impact factor: 3.492

