Malte J. Rasch, Tayfun Gokmen, Mattia Rigotti, Wilfried Haensch.
Abstract
Analog arrays are a promising emerging hardware technology with the potential to drastically speed up deep learning. Their main advantage is that they employ analog circuitry to compute matrix-vector products in constant time, irrespective of the size of the matrix. However, ConvNets map very unfavorably onto analog arrays when implemented in a straightforward manner, because kernel matrices are typically small and the constant-time operation must be iterated sequentially a large number of times. Here, we propose to parallelize training by replicating the kernel matrix of a convolution layer on distinct analog arrays and randomly dividing parts of the compute among them. With this modification, analog arrays execute ConvNets with a large acceleration factor that is proportional to the number of kernel matrices used per layer (16-1024 tested here). Despite having more free parameters, we show analytically and in numerical experiments that this new convolution architecture is self-regularizing and implicitly learns similar filters across arrays. We also report superior performance on a number of datasets and increased robustness to adversarial attacks. Our investigation suggests revising the notion that emerging hardware architectures featuring analog arrays for fast matrix-vector multiplication are not suitable for ConvNets.
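As a rough illustration of the scheme described in the abstract, the sketch below replicates a convolution layer's kernel matrix across K hypothetical "analog arrays" and randomly assigns the im2col patches among them. The function and variable names (rapa_conv2d, kernels, assignment) are illustrative assumptions, not the authors' implementation, and the NumPy loops stand in for what the hardware would do as parallel constant-time matrix-vector products.

```python
import numpy as np

def rapa_conv2d(x, kernels, rng):
    """Toy forward pass of a convolution with replicated kernel matrices.

    x       : input feature map, shape (C_in, H, W)
    kernels : sequence of K replicated kernel matrices, each of shape
              (C_out, C_in * kh * kw); in hardware each replica would
              reside on its own analog array.
    rng     : numpy Generator used to assign image patches (columns of
              the im2col matrix) to replicas at random.
    """
    K = len(kernels)
    C_out = kernels[0].shape[0]
    C_in, H, W = x.shape
    kh = kw = int(np.sqrt(kernels[0].shape[1] // C_in))  # assume square kernels
    H_out, W_out = H - kh + 1, W - kw + 1

    # im2col: flatten every kh x kw patch into one column vector
    cols = np.empty((C_in * kh * kw, H_out * W_out))
    idx = 0
    for i in range(H_out):
        for j in range(W_out):
            cols[:, idx] = x[:, i:i + kh, j:j + kw].ravel()
            idx += 1

    # Randomly divide the patches among the K replicas; each replica only
    # computes the matrix-vector products for the patches assigned to it.
    assignment = rng.integers(0, K, size=cols.shape[1])
    out = np.empty((C_out, cols.shape[1]))
    for k in range(K):
        sel = assignment == k
        if sel.any():
            out[:, sel] = kernels[k] @ cols[:, sel]

    return out.reshape(C_out, H_out, W_out)

# Example usage with made-up sizes: 4 replicas of a 3x3 kernel matrix.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))
kernels = [rng.standard_normal((16, 3 * 3 * 3)) for _ in range(4)]
y = rapa_conv2d(x, kernels, rng)   # shape (16, 6, 6)
```

During training, each replica would presumably receive weight updates only from the patches it was assigned, which is consistent with the abstract's claim that the architecture is self-regularizing and implicitly learns similar filters across arrays.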
Keywords: analog computing; convolutional networks; emerging technologies; hardware acceleration of deep learning; machine learning; resistive cross-point devices
Year: 2019 PMID: 31417340 PMCID: PMC6682637 DOI: 10.3389/fnins.2019.00753
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677