Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness.

Pengzhan Jin, Lu Lu, Yifa Tang, George Em Karniadakis.

Abstract

The accuracy of deep learning, i.e., of deep neural networks, can be characterized by decomposing the total error into three main types: approximation error, optimization error, and generalization error. Whereas there are satisfactory answers to the problems of approximation and optimization, much less is known about the theory of generalization. Most existing theoretical works on generalization fail to explain the performance of neural networks in practice. To derive a meaningful bound, we study the generalization error of neural networks for classification problems in terms of the data distribution and neural network smoothness. We introduce the cover complexity (CC) to measure the difficulty of learning a data set, and the inverse of the modulus of continuity to quantify neural network smoothness. A quantitative bound on the expected accuracy/error is derived by considering both the CC and the neural network smoothness. Although most of the analysis is general and not specific to neural networks, we validate our theoretical assumptions and results numerically for neural networks on several image data sets. The numerical results confirm that the expected error of trained networks, scaled by the square root of the number of classes, depends linearly on the CC. We also observe a clear consistency between the test loss and the neural network smoothness during training. In addition, we demonstrate empirically that the neural network smoothness decreases as the network size increases, whereas it is insensitive to the training data set size.
Copyright © 2020 Elsevier Ltd. All rights reserved.
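
The smoothness notion in the abstract can be made concrete. For a function f, the modulus of continuity is omega_f(delta) = sup over ||x - y|| <= delta of ||f(x) - f(y)||, and the paper's smoothness measure is based on its inverse, omega_f^{-1}(eps): the largest perturbation scale at which the network's outputs move by at most eps. As a minimal illustrative sketch in Python (not the authors' code; the Monte Carlo sampling scheme, function names, and tolerances are assumptions for illustration), such a quantity could be estimated as follows:

    import numpy as np

    def empirical_modulus(f, X, delta, n_pairs=10_000, seed=0):
        # Monte Carlo lower bound on omega_f(delta) = sup ||f(x) - f(y)||
        # over pairs with ||x - y|| <= delta, sampled around the data X.
        rng = np.random.default_rng(seed)
        omega = 0.0
        for _ in range(n_pairs):
            x = X[rng.integers(len(X))]
            u = rng.normal(size=x.shape)
            y = x + delta * u / np.linalg.norm(u)  # perturbation of norm delta
            omega = max(omega, float(np.linalg.norm(f(y) - f(x))))
        return omega

    def inverse_modulus(f, X, eps, deltas):
        # Crude stand-in for omega_f^{-1}(eps): the largest tested delta
        # whose estimated modulus still stays below the tolerance eps.
        feasible = [d for d in deltas if empirical_modulus(f, X, d) <= eps]
        return max(feasible, default=0.0)

Under this reading, a smoother network keeps omega_f small out to larger delta, so its inverse modulus, and hence the derived bound on expected error, improves; the abstract's observation that smoothness decreases with network size would appear here as a shrinking inverse_modulus value.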

Keywords:  Cover complexity; Data distribution; Generalization error; Learnability; Neural network smoothness; Neural networks

Year:  2020        PMID: 32650153     DOI: 10.1016/j.neunet.2020.06.024

Source DB:  PubMed          Journal:  Neural Netw        ISSN: 0893-6080


