
Redundant feature pruning for accelerated inference in deep neural networks.

Babajide O Ayinde, Tamer Inanc, Jacek M Zurada.

Abstract

This paper presents an efficient technique for reducing the inference cost of deep and/or wide convolutional neural network models by pruning redundant features (or filters). Previous studies have shown that over-sized deep neural network models tend to produce many redundant features that are either shifted versions of one another or are very similar, showing little or no variation, thus resulting in filtering redundancy. We propose to prune these redundant features, along with their related feature maps, according to their relative cosine distances in the feature space, leading to smaller networks with reduced post-training inference cost and competitive performance. We empirically show on selected models (VGG-16, ResNet-56, ResNet-110, and ResNet-34) and datasets (MNIST handwritten digits, CIFAR-10, and ImageNet) that inference costs (in FLOPs) can be significantly reduced while overall performance remains competitive with the state of the art.
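The pruning criterion in the abstract — dropping filters whose pairwise cosine distance to a retained filter is small — can be illustrated with a minimal sketch. The function name, the greedy keep-first strategy, and the similarity threshold below are illustrative assumptions, not the paper's actual algorithm or hyperparameters:

```python
import numpy as np

def prune_redundant_filters(weights, threshold=0.9):
    """Greedy cosine-similarity filter pruning (illustrative sketch only).

    weights: array of shape (num_filters, ...) with one conv filter per row.
    A filter is dropped when its absolute cosine similarity to an
    already-kept filter exceeds `threshold`, i.e. its cosine distance
    is small and the filter is considered redundant.
    Returns the indices of filters to keep.
    """
    flat = weights.reshape(weights.shape[0], -1)
    # Normalize each flattened filter so dot products equal cosines.
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.clip(norms, 1e-12, None)
    keep = []
    for i in range(unit.shape[0]):
        if all(abs(unit[i] @ unit[j]) <= threshold for j in keep):
            keep.append(i)
    return keep

# Toy layer: filter 2 is a scaled copy of filter 0 (cosine similarity 1),
# so it is pruned; the orthogonal filters 0, 1, and 3 are kept.
filters = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [2.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0]])
print(prune_redundant_filters(filters, threshold=0.9))  # → [0, 1, 3]
```

In a real network, the kept indices would then be used to slice the layer's weight tensor and the corresponding channels of the next layer, which is what yields the FLOPs reduction at inference time.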
Copyright © 2019 Elsevier Ltd. All rights reserved.

Keywords:  Cosine similarity; Deep learning; Deep neural networks; Feature correlation; Filter pruning; Redundancy reduction

Year:  2019        PMID: 31279285     DOI: 10.1016/j.neunet.2019.04.021

Source DB:  PubMed          Journal:  Neural Netw        ISSN: 0893-6080


  1 in total

1.  Dementia in Convolutional Neural Networks: Using Deep Learning Models to Simulate Neurodegeneration of the Visual System.

Authors:  Jasmine A Moore; Anup Tuladhar; Zahinoor Ismail; Pauline Mouches; Matthias Wilms; Nils D Forkert
Journal:  Neuroinformatics       Date:  2022-09-09
