
An Energy-Efficient and Scalable Deep Learning/Inference Processor With Tetra-Parallel MIMD Architecture for Big Data Applications.

Seong-Wook Park, Junyoung Park, Kyeongryeol Bong, Dongjoo Shin, Jinmook Lee, Sungpill Choi, Hoi-Jun Yoo.   

Abstract

Deep learning algorithms are widely used for pattern recognition applications such as text recognition, object recognition, and action recognition because of their best-in-class recognition accuracy compared to hand-crafted and shallow-learning-based algorithms. However, the long learning time caused by their complex structure has so far limited their use to high-cost servers or many-core GPU platforms. At the same time, demand for customized pattern recognition on personal devices will grow as more deep learning applications are developed. This paper presents an SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Unlike conventional works that adopt massively parallel architectures, this work adopts a task-flexible architecture and exploits multiple forms of parallelism to cover the complex functions of the convolutional deep belief network, one of the most popular deep learning/inference algorithms. In this paper, we implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated in a 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power and 213.1 mW peak power at a 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state of the art.
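The quoted efficiency figure follows directly from the peak numbers in the abstract; a quick sanity check (assuming, as is conventional, that TOPS/W is computed as peak performance divided by peak power):

```python
# Figures reported in the abstract
peak_performance_gops = 411.3   # peak throughput, GOPS
peak_power_w = 0.2131           # peak power, W (213.1 mW)

# Energy efficiency in TOPS/W = (GOPS / 1000) / W
efficiency_tops_per_w = (peak_performance_gops / 1000) / peak_power_w
print(round(efficiency_tops_per_w, 2))  # → 1.93
```

This reproduces the 1.93 TOPS/W figure stated in the abstract.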


Year:  2016        PMID: 26780817     DOI: 10.1109/TBCAS.2015.2504563

Source DB:  PubMed          Journal:  IEEE Trans Biomed Circuits Syst        ISSN: 1932-4545            Impact factor:   3.833


Related articles: 2 in total

1.  A shared synapse architecture for efficient FPGA implementation of autoencoders.

Authors:  Akihiro Suzuki; Takashi Morie; Hakaru Tamukoh
Journal:  PLoS One       Date:  2018-03-15       Impact factor: 3.240

2.  [Review] Prediction Methods of Herbal Compounds in Chinese Medicinal Herbs.

Authors:  Ke Han; Lei Zhang; Miao Wang; Rui Zhang; Chunyu Wang; Chengzhi Zhang
Journal:  Molecules       Date:  2018-09-10       Impact factor: 4.411

