
Trainability of Dissipative Perceptron-Based Quantum Neural Networks.

Kunal Sharma, M Cerezo, Lukasz Cincio, Patrick J Coles.

Abstract

Several architectures have been proposed for quantum neural networks (QNNs), with the goal of efficiently performing machine learning tasks on quantum data. Rigorous scaling results are urgently needed for specific QNN constructions to understand which, if any, will be trainable at a large scale. Here, we analyze the gradient scaling (and hence the trainability) for a recently proposed architecture that we call dissipative QNNs (DQNNs), where the input qubits of each layer are discarded at the layer's output. We find that DQNNs can exhibit barren plateaus, i.e., gradients that vanish exponentially in the number of qubits. Moreover, we provide quantitative bounds on the scaling of the gradient for DQNNs under different conditions, such as different cost functions and circuit depths, and show that trainability is not always guaranteed. Our work represents the first rigorous analysis of the scalability of a perceptron-based QNN.
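The abstract's central claim, that gradient variance can vanish exponentially in the number of qubits, can be illustrated numerically. The sketch below is not the paper's DQNN construction: it uses a generic layered ansatz (RY rotations plus a CZ ring) with a global cost C = |⟨0…0|ψ⟩|², an assumed simplification, and all function names are illustrative. It estimates the variance of one parameter-shift gradient over random parameter draws for increasing qubit counts; the shrinking variance is the barren-plateau signature.

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cz(a, b, n):
    # diagonal CZ gate on qubits a, b (qubit 0 is the leftmost tensor factor)
    M = np.eye(2 ** n, dtype=complex)
    for i in range(2 ** n):
        if (i >> (n - 1 - a)) & 1 and (i >> (n - 1 - b)) & 1:
            M[i, i] = -1.0
    return M

def layer(thetas, n):
    # one layer: single-qubit RY rotations followed by a line of CZ gates
    U = kron_all([ry(t) for t in thetas])
    for q in range(n - 1):
        U = cz(q, q + 1, n) @ U
    return U

def cost(params, n, depth):
    # global cost: overlap of the circuit output with |0...0>
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    for l in range(depth):
        psi = layer(params[l * n:(l + 1) * n], n) @ psi
    return np.abs(psi[0]) ** 2

def grad_first_param(params, n, depth):
    # parameter-shift rule, exact for RY generators
    plus, minus = params.copy(), params.copy()
    plus[0] += np.pi / 2
    minus[0] -= np.pi / 2
    return (cost(plus, n, depth) - cost(minus, n, depth)) / 2

rng = np.random.default_rng(0)
depth, samples = 4, 100
variances = {}
for n in (2, 4, 6):
    grads = [grad_first_param(rng.uniform(0, 2 * np.pi, n * depth), n, depth)
             for _ in range(samples)]
    variances[n] = float(np.var(grads))
    print(f"n={n} qubits: Var[dC/dtheta_1] = {variances[n]:.2e}")
```

With a global cost of this form the variance drops by orders of magnitude from n=2 to n=6 even at fixed depth, matching the qualitative behavior the abstract bounds rigorously for DQNNs.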

Year: 2022        PMID: 35594093        DOI: 10.1103/PhysRevLett.128.180505

Source DB: PubMed        Journal: Phys Rev Lett        ISSN: 0031-9007        Impact factor: 9.161


  1 in total

1.  Generalization in quantum machine learning from few training data.

Authors:  Matthias C Caro; Hsin-Yuan Huang; M Cerezo; Kunal Sharma; Andrew Sornborger; Lukasz Cincio; Patrick J Coles
Journal:  Nat Commun       Date:  2022-08-22       Impact factor: 17.694

