
Universal approximation with quadratic deep networks.

Fenglei Fan, Jinjun Xiong, Ge Wang.

Abstract

Recently, deep learning has achieved great successes in many important applications. In our previous studies, we proposed quadratic/second-order neurons and deep quadratic neural networks. In a quadratic neuron, the inner product between a data vector and the corresponding weights of a conventional neuron is replaced with a quadratic function, which gives the quadratic neuron an enhanced expressive capability over its conventional counterpart. However, how quadratic neurons improve the expressive capability of a deep quadratic network has not yet been studied, ideally in comparison with a conventional neural network. Specifically, we ask four basic questions in this paper: (1) For the one-hidden-layer network structure, is there any function that a quadratic network can approximate much more efficiently than a conventional network? (2) For the same multi-layer network structure, is there any function that can be expressed by a quadratic network but cannot be expressed with conventional neurons in the same structure? (3) Does a quadratic network give new insight into universal approximation? (4) To approximate the same class of functions with the same error bound, can a quantized quadratic network use fewer weights than a quantized conventional network? Our main contributions are four interconnected theorems that shed light on these questions and demonstrate the merits of a quadratic network in terms of expressive efficiency, unique capability, compact architecture, and computational capacity, respectively.
Copyright © 2020 Elsevier Ltd. All rights reserved.
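The quadratic neuron sketched in the abstract can be made concrete with a few lines of code. The factorized form below (a product of two inner products plus a power term) follows the construction from the authors' earlier work on quadratic neurons; the parameter names (`wr`, `br`, `wg`, `bg`, `wb`, `c`) and the sigmoid activation are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conventional_neuron(x, w, b):
    # Conventional neuron: activation of an inner product, sigma(w.x + b).
    return sigmoid(np.dot(w, x) + b)

def quadratic_neuron(x, wr, br, wg, bg, wb, c):
    # Quadratic neuron (illustrative form): the inner product is replaced
    # with a quadratic function of the input,
    #   sigma((wr.x + br)(wg.x + bg) + wb.(x*x) + c),
    # so the pre-activation is second order in x rather than linear.
    return sigmoid((np.dot(wr, x) + br) * (np.dot(wg, x) + bg)
                   + np.dot(wb, x * x) + c)
```

With `wb = 0` and one of the linear factors fixed to a constant, the quadratic neuron reduces to a conventional neuron, which is why its expressive capability can only be larger for the same activation.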


Keywords:  Approximation theory; Deep learning; Quadratic networks


Year:  2020        PMID: 32062373      PMCID: PMC7076904          DOI: 10.1016/j.neunet.2020.01.007

Source DB:  PubMed          Journal:  Neural Netw        ISSN: 0893-6080


References:  13 in total

1.  Information complexity of neural networks.

Authors:  M A Kon; L Plaskota
Journal:  Neural Netw       Date:  2000-04

2.  Learning, invariance, and generalization in high-order neural networks.

Authors:  C L Giles; T Maxwell
Journal:  Appl Opt       Date:  1987-12-01       Impact factor: 1.980

3.  Representation learning: a review and new perspectives. (Review)

Authors:  Yoshua Bengio; Aaron Courville; Pascal Vincent
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2013-08       Impact factor: 6.226

4.  Universal Approximation Using Radial-Basis-Function Networks.

Authors:  J Park; I W Sandberg
Journal:  Neural Comput       Date:  1991       Impact factor: 2.026

5.  A new type of neurons for machine learning.

Authors:  Fenglei Fan; Wenxiang Cong; Ge Wang
Journal:  Int J Numer Method Biomed Eng       Date:  2017-09-15       Impact factor: 2.747

6.  Probabilistic lower bounds for approximation by shallow perceptron networks.

Authors:  Věra Kůrková; Marcello Sanguineti
Journal:  Neural Netw       Date:  2017-04-19

7.  Neurons With Paraboloid Decision Boundaries for Improved Neural Network Classification Performance.

Authors:  Nikolaos Tsapanos; Anastasios Tefas; Nikolaos Nikolaidis; Ioannis Pitas
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2018-06-14       Impact factor: 10.451

8.  On the complexity of neural network classifiers: a comparison between shallow and deep architectures.

Authors:  Monica Bianchini; Franco Scarselli
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2014-08       Impact factor: 10.451

9.  Deep networks are effective encoders of periodicity.

Authors:  Lech Szymanski; Brendan McCane
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2014-10       Impact factor: 10.451

10.  Mastering the game of Go without human knowledge.

Authors:  David Silver; Julian Schrittwieser; Karen Simonyan; Ioannis Antonoglou; Aja Huang; Arthur Guez; Thomas Hubert; Lucas Baker; Matthew Lai; Adrian Bolton; Yutian Chen; Timothy Lillicrap; Fan Hui; Laurent Sifre; George van den Driessche; Thore Graepel; Demis Hassabis
Journal:  Nature       Date:  2017-10-18       Impact factor: 49.962

Cited by:  3 in total

1.  Quadratic Autoencoder (Q-AE) for Low-Dose CT Denoising.

Authors:  Fenglei Fan; Hongming Shan; Mannudeep K Kalra; Ramandeep Singh; Guhan Qian; Matthew Getzin; Yueyang Teng; Juergen Hahn; Ge Wang
Journal:  IEEE Trans Med Imaging       Date:  2019-12-31       Impact factor: 10.048

2.  A review on Deep Learning approaches for low-dose Computed Tomography restoration.

Authors:  K A Saneera Hemantha Kulathilake; Nor Aniza Abdullah; Aznul Qalid Md Sabri; Khin Wee Lai
Journal:  Complex Intell Systems       Date:  2021-05-30

3.  Superiority of quadratic over conventional neural networks for classification of gaussian mixture data.

Authors:  Tianrui Qi; Ge Wang
Journal:  Vis Comput Ind Biomed Art       Date:  2022-09-28
