| Literature DB >> 29434508 |
Carlo Ciliberto, Mark Herbster, Alessandro Davide Ialongo, Massimiliano Pontil, Andrea Rocchetto, Simone Severini, Leonard Wossnig.
Abstract
Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.
Keywords: machine learning; quantum; quantum computing
Year: 2018 PMID: 29434508 PMCID: PMC5806018 DOI: 10.1098/rspa.2017.0551
Source DB: PubMed Journal: Proc Math Phys Eng Sci ISSN: 1364-5021 Impact factor: 2.704
Quantum linear algebra algorithms and their ML applications. When carefully compared with classical versions that take the same caveats into account, quantum algorithms may lose their advantages. C, Q and P denote, respectively, the asymptotic computational complexity for classical, quantum and parallel computation. We remind the reader that, to date, memory and bandwidth limits in the communication between processors make the implementation of certain parallel algorithms unrealistic. Asymptotic scalings are only an indication of potential runtime differences; only by benchmarking the algorithms on quantum hardware will we obtain clear insights into their performance. Given an N×N-dimensional matrix A, we denote by k the number of singular values computed by the algorithm, by s the sparsity and by κ the condition number. For approximation algorithms, ϵ is an approximation parameter; in other cases, it denotes the numerical precision. Classical algorithms return the whole solution vector. Quantum algorithms return a quantum state; in order to extract the classical vector, one needs multiple copies of the state.
| problem | solving linear systems of equations | singular value estimation |
|---|---|---|
| scaling |  |  |
| applications | least-squares SVM [ | recommendation systems [ |
|  | GP regression [ | linear regression [ |
|  | kernel least squares [ | principal component analysis [ |
Notes on the table entries:
- An approximate algorithm that can be applied to dense matrices; the scaling involves the total number of entries in the matrix A and the number of entries per row.
- Exact, but does not output the solution vector and works only for sparse matrices (more details can be found in §6).
- Requires parallel units and is numerically unstable owing to its high sensitivity to rounding errors. Stable algorithms, such as Gaussian elimination with pivoting or parallel QR-decomposition, require time and a number of computational units polynomial in N [63].
- An approximate algorithm which returns a rank-k approximation with probability 1−δ and incurs an additional error ϵ∥A∥. Exact methods for an N×M matrix scale with O(min(NM², N²M)).
- Calculates the SVD by computing the eigenvalue decomposition of the symmetric matrix AᵀA.
- Works on dense matrices that are low-rank approximable.

Finally, we note that there exist efficient, classical, parallel algorithms for sparse systems [64,65]. Probabilistic numerical linear algebra also allows selected problems to be solved more efficiently and, under specific assumptions, even in linear time and with bounded error [66].
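To make the caption's parameters concrete, here is a minimal Python sketch (our own illustration, not code from the paper; the matrices, sizes and tolerances are arbitrary stand-ins). It computes the sparsity s and condition number κ of a random sparse matrix, solves a linear system with the classical conjugate-gradient method, and checks two of the observations in the table notes: that the SVD of a matrix B can be recovered from the eigendecomposition of BᵀB, and that truncating the SVD after k singular values yields a rank-k approximation whose spectral-norm error is the (k+1)-th singular value.

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)

# N, s, k and kappa mirror the symbols defined in the table caption;
# the matrices below are random stand-ins, not data from the paper.
N = 500

# Build a sparse, symmetric, positive-definite system A x = b.
M = sparse_random(N, N, density=0.01, random_state=0, format="csr")
A = (M @ M.T + 0.1 * identity(N)).tocsr()
b = rng.standard_normal(N)

# s: sparsity, here taken as the maximum number of non-zero entries per row.
s = int(np.diff(A.indptr).max())

# kappa: condition number (computed densely for simplicity; fine at this size).
kappa = np.linalg.cond(A.toarray())

# Classical iterative solve: conjugate gradients needs O(sqrt(kappa)) iterations
# per digit of precision, which is how kappa enters the classical scaling.
x, info = cg(A, b)
print(f"s = {s}, kappa = {kappa:.1f}, CG converged: {info == 0}")

# Table-note check: the singular values of B are the square roots of the
# eigenvalues of the symmetric matrix B^T B.
B = rng.standard_normal((200, 200))
sv = np.linalg.svd(B, compute_uv=False)   # descending order
ev = np.linalg.eigvalsh(B.T @ B)[::-1]    # descending order
assert np.allclose(sv, np.sqrt(np.maximum(ev, 0.0)))

# k: truncating the SVD after k singular values gives the best rank-k
# approximation; its spectral-norm error is exactly the (k+1)-th singular value.
k = 20
U, S, Vt = np.linalg.svd(B)
B_k = (U[:, :k] * S[:k]) @ Vt[:k]
print(f"rank-{k} error = {np.linalg.norm(B - B_k, 2):.4f} vs sigma_(k+1) = {S[k]:.4f}")
```

Note how the classical solver materializes every entry of the solution vector x, whereas, as the caption explains, a quantum algorithm would return a state |x⟩ and require repeated copies of that state to read the entries out.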