Motohiko Ezawa1. 1. Department of Applied Physics, University of Tokyo, Hongo 7-3-1, Tokyo, 113-8656, Japan. ezawa@ap.t.u-tokyo.ac.jp.
Abstract
We analyze a binary classification problem by using a support vector machine based on a variational quantum-circuit model. We propose to solve the linear equation of the support vector machine by using a γ-matrix expansion. In addition, it is shown that an arbitrary quantum state can be prepared by optimizing a universal quantum circuit, representing an arbitrary unitary transformation, based on the steepest-descent method. This may be regarded as a quantum generalization of the Field-Programmable Gate Array (FPGA).
Quantum computation is one of the hottest topics in contemporary physics[1-3]. An efficient application of quantum computation is machine learning, in which case it is called quantum machine learning[4-17]. A support vector machine (SVM) is one of the most fundamental algorithms of machine learning[18,22,23]: it learns from examples to assign labels to objects, and it is a typical method for solving a binary-classification problem[18], classifying data into two classes by a hyperplane. The optimal hyperplane is determined by an associated linear equation, in which the matrix $F$ and the inhomogeneous vector are given. A quantum support vector machine solves this linear equation on a quantum computer[10,13,24].

Usually, the linear equation is solved by the Harrow-Hassidim-Lloyd (HHL) algorithm[25]. However, this algorithm requires many quantum gates, and hence it is hard to execute on a near-term quantum computer. Indeed, it has been verified experimentally only for two and three qubits[26-28]. In addition, it requires the execution of the unitary operator $e^{iFt}$, which is quite hard to implement. Kernel-based SVM implementations on quantum computers have also been reported[19-21].

The number of qubits in current quantum computers is restricted. Variational quantum algorithms, which use both quantum and classical computers, are appropriate for such small-qubit machines. Various methods have been proposed, such as the Quantum Approximate Optimization Algorithm (QAOA)[29], the variational eigenvalue solver[30], quantum circuit learning[31] and quantum linear solvers[32,33]. In QAOA, we use wave functions with variational parameters, which are optimized by minimizing the expectation value of the Hamiltonian. In quantum circuit learning[31], a quantum circuit has variational parameters, which are optimized by minimizing a certain cost function. A quantum linear solver solves a linear equation by a variational ansatz[32,33].
The simplest method of optimization is the steepest-descent method.

In this paper, we present a variational method for a quantum support vector machine, solving the associated linear equation based on variational quantum-circuit learning. We propose to expand the matrix $F$ in terms of the $\gamma$ matrices, which gives simple quantum circuits. We also propose a variational method to construct an arbitrary state by using a universal quantum circuit representing an arbitrary unitary matrix $U$. We prepare various internal parameters for a universal quantum circuit, which we optimize by minimizing a certain cost function. Our circuit is capable of determining the unitary transformation $U$ mapping an arbitrary given initial state to an arbitrary given final state. It will be a quantum generalization of the field-programmable gate array (FPGA), which can produce arbitrary outputs from arbitrary inputs.
Results
Support vector machine
The simplest example of the SVM reads as follows. Suppose that there are red and blue points in two dimensions whose distributions are almost separated. We classify these data points into two classes by a line, as illustrated in Fig. 1a.
Figure 1
(a) Binary classification of red and blue points based on a quantum support vector machine with a soft margin. The magenta (cyan) line is obtained by the exact solution (the variational method). (b) Evolution of the cost function. The vertical axis is logarithmic; the horizontal axis is the variational step number. We have run the simulation ten times, and each run is plotted in a different color. (c) The saturated value of the cost function for various parameter choices; the green, black, magenta and cyan dots correspond to different parameter values.
In general, $M$ data points are scattered in $D$ dimensions, which we denote $\vec{x}_j$ with $j=1,2,\cdots,M$. The problem is to determine a hyperplane $\vec{w}\cdot\vec{x}+b=0$ separating the data into two classes with the use of a support vector machine. We set $\vec{w}\cdot\vec{x}_j+b>0$ for red points and $\vec{w}\cdot\vec{x}_j+b<0$ for blue points. These conditions are implemented by introducing a function $f(\vec{x})=\vec{w}\cdot\vec{x}+b$, which assigns $f(\vec{x}_j)>0$ to red points and $f(\vec{x}_j)<0$ to blue points. In order to determine $\vec{w}$ and $b$ for a given set of data $\{\vec{x}_j\}$, we introduce real numbers $\alpha_j$ by $\vec{w}=\sum_j\alpha_j\vec{x}_j$. A support vector machine enables us to determine $\alpha_j$ and $b$ by solving the linear equation
$F\begin{pmatrix}b\\ \vec{\alpha}\end{pmatrix}=\begin{pmatrix}0\\ \vec{y}\end{pmatrix}$, (6)
where $\vec{\alpha}=(\alpha_1,\cdots,\alpha_M)^{T}$, $\vec{y}=(y_1,\cdots,y_M)^{T}$ with $y_j=\pm1$ for red and blue points, and $F$ is an $(M+1)\times(M+1)$ matrix given by
$F=\begin{pmatrix}0&\vec{1}^{\,T}\\ \vec{1}&K+\gamma^{-1}I\end{pmatrix}$.
Here,
$K_{jk}=\vec{x}_j\cdot\vec{x}_k$ (8)
is a Kernel matrix, and $\gamma$ is a certain fixed constant which assures the existence of the solution of the linear equation (6) even when the red and blue points are slightly inseparable. Note that $\gamma^{-1}=0$ corresponds to the hard-margin condition. Details of the derivation of Eq. (6) are given in Method A.
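As a classical cross-check of Eq. (6), the least-squares SVM system can be assembled and solved directly with a linear-algebra routine. The matrix layout below (a first row and column enforcing $\sum_j\alpha_j=0$, and a kernel block $K+\gamma^{-1}I$) follows the standard quantum-SVM formulation[10] and is stated as an assumption about the elided formulas; `lssvm_train` and `classify` are illustrative names.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    """Solve the least-squares SVM system F (b, alpha)^T = (0, y)^T.

    X : (M, D) data points, y : (M,) labels in {+1, -1}.
    K_jk = x_j . x_k is the linear kernel; gamma^{-1} -> 0 is the hard margin.
    """
    M = len(y)
    K = X @ X.T
    F = np.zeros((M + 1, M + 1))
    F[0, 1:] = 1.0                    # enforces sum_j alpha_j = 0
    F[1:, 0] = 1.0
    F[1:, 1:] = K + np.eye(M) / gamma
    sol = np.linalg.solve(F, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    w = X.T @ alpha                   # hyperplane normal w = sum_j alpha_j x_j
    return w, b

def classify(w, b, x):
    """Label a new point by the sign of w . x + b."""
    return 1 if w @ x + b >= 0 else -1
```

For separable clusters the solved hyperplane reproduces the training labels, which is how the exact (magenta) line of Fig. 1a is obtained.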
Quantum linear solver based on the γ-matrix expansion
We solve the linear equation (6) on a quantum computer. In general, we solve a linear equation
$F|v\rangle=c|b\rangle$ (9)
for an arbitrary given non-unitary matrix $F$ and an arbitrary given state $|b\rangle$. Here, the coefficient $c$ is introduced to preserve the norm of the state, and it is given by $c=\|F|v\rangle\|$.

The HHL algorithm[25] is the most famous algorithm for solving this linear equation on a quantum computer. We first construct a Hermitian matrix by $\tilde{F}=\begin{pmatrix}0&F\\F^{\dagger}&0\end{pmatrix}$. Then, a unitary matrix associated with $F$ is uniquely obtained by $e^{i\tilde{F}t}$. Nevertheless, it requires many quantum gates. In addition, it is a nontrivial problem to implement $e^{i\tilde{F}t}$.

Recently, variational methods have been proposed[32] to solve the linear equation (9). In one of these methods, the matrix $F$ is expanded in terms of some unitary matrices $U_j$ as $F=\sum_j c_jU_j$. In general, a complicated quantum circuit is necessary to determine the coefficients $c_j$.

We start with a trial state $|v\rangle$ to determine the solution. The application of each unitary matrix to this state is efficiently done by a quantum computer, $|v\rangle\mapsto U_j|v\rangle$, and we obtain the normalized state $|\tilde{b}\rangle\propto F|v\rangle$, which is an approximation of the given state $|b\rangle$. We tune the trial state by a variational method so as to minimize the cost function[32]
$L=1-|\langle b|\tilde{b}\rangle|^2$,
which measures the similarity between the approximate state $|\tilde{b}\rangle$ and the state $|b\rangle$ in (9). We have $0\leq L\leq1$, where $L=0$ for the exact solution. The merit of this cost function is that the inner product is naturally calculated by a quantum computer.

Let the dimension of the matrix $F$ be $2^N$. It is enough to use such an $N$ without loss of generality, by adding trivial components to the linear equation. We propose to expand the matrix $F$ by the gamma matrices as $F=\sum_j c_j\gamma_j$, where each $\gamma_j$ is an $N$-fold tensor product of the identity and the Pauli matrices $\sigma_i$ with $i=x$, $y$ and $z$.

The merit of our method is that it is straightforward to determine the coefficients by the well-known formula $c_j=\mathrm{Tr}[\gamma_jF]/2^N$. In order to construct a quantum circuit to calculate $c_j$, we express the matrix $F$ by its column vectors, so that $c_j$ in (17) is given by a sum of components of these column vectors weighted by the matrix elements of $\gamma_j$. Here we have introduced the notation $|q\rangle=|q_1q_2\cdots q_N\rangle$ with $q_i=0,1$, where $q$ is the decimal representation of the binary number $q_1q_2\cdots q_N$. See explicit examples for one and two qubits in Method B.

The state $|q\rangle$ is generated as follows. We prepare a NOT gate for the $i$-th qubit if $q_i=1$. Using all these NOT gates, we define the operator $X_q$. We act with it on the initial state $|0\cdots0\rangle$ and obtain $|q\rangle=X_q|0\cdots0\rangle$. Next, we construct a unitary gate generating the column state; we discuss how to prepare it by a quantum circuit later, see Eq. (33). By using these operators, $c_j$ is expressed as an inner product, which can be evaluated by a quantum computer. We show explicit examples in Fig. 2.
Figure 2
Quantum circuits determining the coefficients $c_j$. We show examples for (a) one qubit and (b) two qubits.
Once we have the coefficients $c_j$, the final state is obtained by applying each $\gamma_j$ to the trial state and taking the sum over $j$, which leads to $F|v\rangle=\sum_j c_j\gamma_j|v\rangle$. The implementation of the matrices $\gamma_j$ in a quantum circuit is straightforward, because they are composed of the Pauli sigma matrices, as shown in Fig. 2.
Steepest-descent method
One of the most common approaches to optimization is the steepest-descent method, where we make iterative steps in the direction indicated by the gradient[34]. We may use this method to find an optimal trial state closest to the solution of Eq. (9). To determine the gradient, we calculate the change of the cost function when we slightly modify the trial state at step $t$. We explain how to construct the trial state by a quantum circuit later; see Eq. (33). Then, we renew the state by a step along the negative gradient. We choose appropriate constants for an efficient search of the optimal solution, whose explicit examples are given in the caption of Fig. 1b. We stop the renewal of the variational step when the difference becomes sufficiently small, which gives the optimal solution of the linear equation (9).

In the numerical simulation, we discretize the time step with a fixed step size. We add a small value $\delta$ to the $p$-th component of the trial state at step $n$, $|v\rangle\to|v\rangle+\delta\,\vec{e}_p$, where $\vec{e}_p$ denotes the unit vector whose $p$-th component is 1 while all the other components are zero. Then, we calculate the cost function. By running $p$ from 1 to $2^N$, we obtain a vector whose $p$-th component is the corresponding change of the cost function. The gradient is numerically obtained as the ratio of this change to $\delta$, and we set the trial state at the next step accordingly. We iterate this process until the change of the cost function becomes sufficiently small.

We denote the saturated cost function by $L_\infty$. It depends on the choice of the constants in Eq. (27). We show $L_\infty$ in Fig. 1c. There are some notable features. First, $L_\infty$ is small for a small step size; namely, we need to choose a small step size in order to obtain a good solution. On the other hand, the required number of steps increases for a small step size, which is natural because each step is then small; the required step number is inversely proportional to the step size. Second, there is a critical value for obtaining a good solution as a function of the parameters.

A comment is in order. The cost function does not become exactly zero, although it becomes very small. This means that the solution is trapped in a local minimum and does not reach the exact solution. It is a general feature of variational algorithms that the exact solution cannot be obtained. However, the exact solution is unnecessary in many cases, including machine learning. Indeed, the classification shown in Fig. 1a works well.
Variational universal-quantum-state generator
In order to construct the trial state, it is necessary to prepare an arbitrary state by a quantum circuit. Equivalently, we need a unitary transformation $U$ such that
$U|0\cdots0\rangle=|\psi\rangle$ (33)
for an arbitrary state $|\psi\rangle$. It is known that any unitary transformation is realized by a sequential application of the Hadamard, phase-shift and CNOT gates[35,36]. Indeed, an arbitrary unitary matrix is decomposable into a sequential application of quantum gates[35,36], each of which can be constructed systematically in a universal quantum circuit[37-42]. Universal quantum circuits have so far been demonstrated experimentally for two and three qubits[43-46].

We may use a variational method to construct $U$ satisfying Eq. (33). Quantum circuit learning is a variational method[31] in which angle variables $\theta_j$ are used as variational parameters in a quantum circuit $U(\theta)$, and the cost function is optimized by tuning $\theta_j$. We propose to use quantum circuit learning for a universal quantum circuit. We show that an arbitrary state can be generated by tuning $\theta_j$, starting from the initial state $|0\cdots0\rangle$. We adjust $\theta_j$ by minimizing the cost function
$L=1-|\langle\psi|U(\theta)|0\cdots0\rangle|^2$,
which is the same as that of the variational quantum support vector machine. We present explicit examples of universal quantum circuits for one, two and three qubits in Method C.
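As a minimal one-qubit illustration of this variational state generator, the sketch below tunes the three angles of the circuit $R_z(\theta_1)R_y(\theta_2)R_z(\theta_3)$ so that $U(\theta)|0\rangle$ approaches a random target state, minimizing the fidelity-based cost by finite-difference gradient descent. The circuit form, step size and iteration count are illustrative assumptions.

```python
import numpy as np

def u3(t1, t2, t3):
    """One-qubit circuit Rz(t1) Ry(t2) Rz(t3)."""
    rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]])
    return rz(t1) @ ry(t2) @ rz(t3)

def cost(theta, target):
    """L = 1 - |<target|U(theta)|0>|^2."""
    psi = u3(*theta) @ np.array([1.0, 0.0])
    return 1.0 - abs(np.vdot(target, psi))**2

def generate_state(target, steps=400, delta=1e-6, eta=0.5, seed=1):
    """Tune theta by steepest descent so that U(theta)|0> approaches |target>."""
    theta = np.random.default_rng(seed).uniform(0, 2 * np.pi, 3)
    for _ in range(steps):
        c0 = cost(theta, target)
        grad = np.array([(cost(theta + delta * np.eye(3)[p], target) - c0) / delta
                         for p in range(3)])
        theta -= eta * grad
    return theta, cost(theta, target)
```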
Quantum field-programmable-gate array
We next consider the problem of finding a unitary transformation which maps an arbitrary initial state $|\psi_{\rm i}\rangle$ to an arbitrary final state $|\psi_{\rm f}\rangle$,
$U|\psi_{\rm i}\rangle=|\psi_{\rm f}\rangle$. (36)
Since we can generate an arbitrary state as in Eq. (33), it is possible to generate unitary matrices $U_{\rm i}$ and $U_{\rm f}$ such that $U_{\rm i}|0\cdots0\rangle=|\psi_{\rm i}\rangle$ and $U_{\rm f}|0\cdots0\rangle=|\psi_{\rm f}\rangle$. Then, Eq. (36) is solved as $U=U_{\rm f}U_{\rm i}^{\dagger}$, since $U_{\rm i}^{\dagger}|\psi_{\rm i}\rangle=|0\cdots0\rangle$.

An FPGA is a classical integrated circuit[47-50] which can be programmed by a customer or a designer after manufacturing. An FPGA can execute any classical algorithm. On the other hand, our variational universal quantum-state generator creates an arbitrary quantum state, which we program by means of the variational parameters $\theta_j$. In this sense, the above quantum circuit may be considered a quantum generalization of the FPGA, i.e., a quantum FPGA (q-FPGA).

We show explicitly how the cost function evolves at each variational step for the two- and three-qubit universal quantum circuits in Fig. 3, where the initial and final states are generated randomly. We optimize 15 parameters for the two-qubit universal quantum circuit and 82 parameters for the three-qubit one. We find that $U$ is well determined by the variational method, as shown in Fig. 3.
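The composition $U_{\rm f}U_{\rm i}^{\dagger}$ of the two state-preparation unitaries, which solves Eq. (36), can be checked numerically. In this sketch the state-preparation unitaries are built classically by completing each state to an orthonormal basis with a QR decomposition; on hardware they would instead be produced by the variational universal circuit. The function names are illustrative.

```python
import numpy as np

def prep_unitary(psi):
    """A unitary with U|0...0> = |psi>, built by QR completion of psi."""
    dim = len(psi)
    psi = np.asarray(psi, dtype=complex) / np.linalg.norm(psi)
    M = np.eye(dim, dtype=complex)
    M[:, 0] = psi
    Q, R = np.linalg.qr(M)
    # QR gives Q[:,0] * R[0,0] = psi with |R[0,0]| = 1; fix the overall phase.
    return Q * R[0, 0]

def qfpga(psi_in, psi_out):
    """Unitary mapping |psi_in> to |psi_out>: U = U_out U_in^dagger."""
    return prep_unitary(psi_out) @ prep_unitary(psi_in).conj().T
```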
Figure 3
Evolution of the cost function for (a) two qubits and (b) three qubits. The vertical axis is logarithmic; the horizontal axis is the number of variational steps. We prepare random initial and final states, and we have run the simulation ten times; each run is plotted in a different color.
Variational quantum support vector machine
We demonstrate a binary-classification problem in two dimensions based on the support vector machine. We prepare a data set where the red points are distributed around one center and the blue points around another, each with variance $r$, assuming the Gaussian normal distribution; the points are chosen randomly. Note that there are some overlaps between the red and blue points, which corresponds to the soft-margin model.

As an example, we show the distribution of red and blue points together with the line obtained by the variational method (cyan) and by the direct solution of (6) (magenta) in Fig. 1a. They agree well with each other, and both lines separate the red and blue points well. We have prepared 31 red points and 32 blue points, and used six qubits.
Discussion
Efficiency
The original proposal[10] requires a runtime logarithmic in $NM$, where $N$ is the dimension of the feature space and $M$ is the number of training data points. It has an advantage over the classical protocol, which requires a runtime polynomial in $N$ and $M$. There exists also a quantum-inspired classical SVM[51], which requires a polynomial runtime as a function of the number of data points $M$ and the dimension of the feature space $N$.

$N$ qubits can represent a $2^N$-dimensional space. Hence, the required number of qubits is of the order of $\log_2 M$. We need exponentially many quantum gates for the exact preparation of a universal quantum state. On the other hand, a hardware-efficient universal quantum circuit prepares an approximate universal quantum state by using of the order of $4N$ quantum gates[52-54]. We need $N$ quantum gates each for the execution of two further operators, separately. Hence we need exponentially many quantum gates for the exact preparation but only $6N$ for the approximate preparation. In machine learning, the exact solution is unnecessary; thus, $6N$ quantum gates are enough.

On the other hand, the accuracy is independent of the number of required quantum gates. It is determined by the variational parameters, as shown in Fig. 1c.
Radial basis function
In this paper, we have used the linear Kernel function (8), which is efficient for classifying data points linearly. However, it is not sufficient for data points which are not separable by a linear function. The radial basis function[55,56] is given by $K_{jk}=\exp(-|\vec{x}_j-\vec{x}_k|^2/2\sigma^2)$ with a free parameter $\sigma$. It is used for nonlinear classification[57]. It is known[58,59] that the depth of a quantum circuit is linear in the dimension of the feature space $N$.
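A classical sketch of the radial basis function kernel, assuming the common convention $K_{jk}=\exp(-|\vec{x}_j-\vec{x}_k|^2/2\sigma^2)$; the exact placement of the free parameter in the paper is not specified here, so $\sigma$ and the factor of 2 are assumptions.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """RBF kernel matrix K_jk = exp(-|x_j - x_k|^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    return np.exp(-d2 / (2 * sigma**2))
```

Replacing the linear kernel block of Eq. (6) by this matrix yields a nonlinear classifier.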
Conclusion
We have proposed that the matrix $F$ can be efficiently loaded into a quantum computer by using the $\gamma$-matrix expansion method. There are many other quantum algorithms that use a matrix, such as linear regression and principal-component analysis. Our method will be applicable to these cases as well.

Although it is possible to obtain the exact solution of the linear equation by the HHL algorithm, it requires many gates. On the other hand, it is often hard to obtain the exact solution by variational methods, since trial functions may be trapped in a local minimum. However, this problem is not serious for machine learning, because it is more important to obtain an approximate solution efficiently than an exact solution by using many gates. Indeed, our optimized hyperplane separates red and blue points well, as shown in Fig. 1a.

In order to classify $M$ data points, we need to prepare of the order of $\log_2 M$ qubits. It is hard to process a large number of data points with current quantum computers. Recently, it has been shown that electric circuits may simulate universal quantum gates[60-62], based on the fact that the Kirchhoff law can be rewritten in the form of the Schrödinger equation[63]. Our variational algorithm could be simulated by using them.
Methods
A support vector machine is an algorithm for supervised learning[18,22,23]. We first prepare a set of training data, where each point is marked either red or blue. Then, we determine a hyperplane separating the red and blue points. After learning, input data are classified as red or blue by comparing them with the hyperplane. The support vector machine maximizes the margin, which is the distance between the hyperplane and the closest data points. If the red and blue points are perfectly separated by the hyperplane, the problem is called a hard-margin problem (Fig. 4a); otherwise, it is called a soft-margin problem (Fig. 4b).
Figure 4
Illustration of the hyperplane and the support vectors. Two support vectors are marked by red and blue squares. (a) Hard margin, where the red and blue points are separated perfectly, and (b) soft margin, where they are separated imperfectly.
We consider the distance between a data point and the hyperplane, which is given by $d=|\vec{w}\cdot\vec{x}+b|/|\vec{w}|$. We define the support vectors as the closest points to the hyperplane. There is such a vector on each side of the hyperplane, as shown in Fig. 4a. This is the origin of the name of the support vector machine. Without loss of generality, we set $\vec{w}\cdot\vec{x}_j+b=\pm1$ for the support vectors, because the hyperplane is equidistant from the two closest data points and because it is possible to set the magnitude to 1 by scaling $\vec{w}$ and $b$. Then, we maximize the distance $2/|\vec{w}|$ between the two support vectors, which is identical to minimizing $|\vec{w}|^2/2$.

First, we consider the hard-margin problem, where the red and blue points are perfectly separable. All red points satisfy $\vec{w}\cdot\vec{x}_j+b\geq1$ and all blue points satisfy $\vec{w}\cdot\vec{x}_j+b\leq-1$. We introduce variables $y_j$, where $y_j=1$ for red points and $y_j=-1$ for blue points. Using them, the condition is rewritten as
$y_j(\vec{w}\cdot\vec{x}_j+b)\geq1$
for each $j$. The problem is reduced to finding the minimum of $|\vec{w}|^2/2$ under the above inequalities. The optimization under inequality conditions is done by the Lagrange-multiplier method with the Karush-Kuhn-Tucker condition[64]. It is expressed in terms of the Lagrangian as
$\mathcal{L}=\frac{1}{2}|\vec{w}|^2-\sum_j\lambda_j[y_j(\vec{w}\cdot\vec{x}_j+b)-1]$,
where $\lambda_j$ are Lagrange multipliers to ensure the constraints.

For the soft-margin case, we cannot separate the two classes exactly. In order to treat this case, we introduce slack variables $\xi_j\geq0$ satisfying
$y_j(\vec{w}\cdot\vec{x}_j+b)\geq1-\xi_j$,
and redefine the cost function as $\frac{1}{2}|\vec{w}|^2+\frac{\gamma}{2}\sum_j\xi_j^2$. Here, $\xi_j=0$ corresponds to the hard margin. The second term represents the penalty for data points which have crossed over the hyperplane. The Lagrangian is modified as
$\mathcal{L}=\frac{1}{2}|\vec{w}|^2+\frac{\gamma}{2}\sum_j\xi_j^2-\sum_j\lambda_j[y_j(\vec{w}\cdot\vec{x}_j+b)-1+\xi_j]$.
The stationary points are determined by
$\vec{w}=\sum_j\lambda_jy_j\vec{x}_j$, (48)
$\sum_j\lambda_jy_j=0$,
$\lambda_j=\gamma\xi_j$, (50)
$y_j(\vec{w}\cdot\vec{x}_j+b)=1-\xi_j$. (51)
We may solve these equations to determine $\vec{w}$ from (48) and $\xi_j$ from (50).
Inserting them into (51), we find $y_j(\sum_k\lambda_ky_k\vec{x}_j\cdot\vec{x}_k+b)=1-\lambda_j/\gamma$. Since $y_j^2=1$, it is rewritten as $\sum_k\lambda_ky_k\vec{x}_j\cdot\vec{x}_k+b+\lambda_jy_j/\gamma=y_j$. Since $\lambda_j$ always appears in a pair with $y_j$, we introduce a new variable defined by $\alpha_j=\lambda_jy_j$, and we define the Kernel matrix as $K_{jk}=\vec{x}_j\cdot\vec{x}_k$. Then, $b$ and $\alpha_j$ are obtained by solving the linear equations $\sum_j\alpha_j=0$ and $\sum_k(K_{jk}+\gamma^{-1}\delta_{jk})\alpha_k+b=y_j$, which are summarized as Eq. (6) in the main text. Finally, $\vec{w}$ is determined by $\vec{w}=\sum_j\alpha_j\vec{x}_j$. Once the hyperplane is determined, we can classify new input data $\vec{x}$ as red if $\vec{w}\cdot\vec{x}+b>0$ and blue if $\vec{w}\cdot\vec{x}+b<0$. Thus, we obtain the hyperplane for binary classification.
γ-matrix expansion
We explicitly show how to calculate the coefficients $c_j$ in (17) based on the $\gamma$-matrix expansion for one and two qubits.
One qubit
We show an explicit example of the $\gamma$-matrix expansion for one qubit, where $F$ is a $2\times2$ matrix with column vectors $\vec{f}_1$ and $\vec{f}_2$. The coefficients in (17) are calculated from the trace formula; for one qubit they read $c_0=(F_{11}+F_{22})/2$, $c_x=(F_{12}+F_{21})/2$, $c_y=i(F_{12}-F_{21})/2$ and $c_z=(F_{11}-F_{22})/2$.
Two qubits
Next, we show an explicit example of the $\gamma$-matrix expansion for two qubits, where $F$ is a $4\times4$ matrix with column vectors $\vec{f}_1,\cdots,\vec{f}_4$. The coefficients in (17) are calculated in the same manner by the trace formula, with $\gamma_j$ given by the tensor products of two Pauli matrices.
Universal quantum circuits
Angle variables are used as variational parameters in universal-quantum-circuit learning. We present examples for one, two and three qubits.
One-qubit universal quantum circuit
The single-qubit rotation gates are defined by $R_y(\theta)=e^{-i\theta\sigma_y/2}$ and $R_z(\theta)=e^{-i\theta\sigma_z/2}$. The one-qubit universal quantum circuit is constructed as $U(\theta_1,\theta_2,\theta_3)=R_z(\theta_1)R_y(\theta_2)R_z(\theta_3)$. We show the quantum circuit in Fig. 5a. There are three variational parameters.
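The three-parameter circuit can be written down directly; the sketch below constructs it and checks unitarity, assuming the gate conventions $R_a(\theta)=e^{-i\theta\sigma_a/2}$.

```python
import numpy as np

def Rz(t):
    """Rotation about z: exp(-i t sigma_z / 2)."""
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def Ry(t):
    """Rotation about y: exp(-i t sigma_y / 2)."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def one_qubit_universal(t1, t2, t3):
    """Three-parameter (Euler-angle) universal one-qubit gate."""
    return Rz(t1) @ Ry(t2) @ Rz(t3)
```

Applied to $|0\rangle$, it yields $(e^{-i\theta_1/2}\cos\frac{\theta_2}{2},\,e^{i\theta_1/2}\sin\frac{\theta_2}{2})$ up to the global phase $e^{-i\theta_3/2}$, which covers the whole Bloch sphere.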
Figure 5
Universal quantum circuits for (a) one, (b) two and (c) three qubits.
It is obvious that an arbitrary state is realized starting from the state $|0\rangle$, since $R_z(\theta_1)R_y(\theta_2)R_z(\theta_3)|0\rangle$ covers the whole Bloch sphere together with an overall phase.
Two-qubit universal quantum circuit
The two-qubit universal quantum circuit is constructed as[43] $U=(A_1\otimes A_2)\,N(\alpha,\beta,\gamma)\,(A_3\otimes A_4)$, where the $A_k$ are one-qubit universal circuits and the entangling two-qubit gate is defined by[43] $N(\alpha,\beta,\gamma)=\exp[i(\alpha\sigma_x\otimes\sigma_x+\beta\sigma_y\otimes\sigma_y+\gamma\sigma_z\otimes\sigma_z)]$. The two-qubit universal quantum circuit contains 15 variational parameters. We show the quantum circuit in Fig. 5b.
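A classical sketch of the 15-parameter circuit, assuming the standard form $(A_1\otimes A_2)N(\alpha,\beta,\gamma)(A_3\otimes A_4)$ with four 3-parameter one-qubit gates and three entangling angles; the matrix exponential is computed by eigendecomposition since the generator is Hermitian.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expi(H):
    """exp(iH) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w)) @ V.conj().T

def rot(a, b, c):
    """One-qubit gate Rz(a) Ry(b) Rz(c): three parameters."""
    rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]])
    return rz(a) @ ry(b) @ rz(c)

def two_qubit_universal(params):
    """15-parameter circuit (A1 x A2) N(alpha, beta, gamma) (A3 x A4)."""
    A1, A2, A3, A4 = (rot(*params[3 * k:3 * k + 3]) for k in range(4))
    alpha, beta, gamma = params[12:15]
    H = (alpha * np.kron(sx, sx) + beta * np.kron(sy, sy)
         + gamma * np.kron(sz, sz))
    return np.kron(A1, A2) @ expi(H) @ np.kron(A3, A4)
```

The count 15 = 4 × 3 + 3 matches the dimension of SU(4) up to a global phase.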
Three-qubit universal quantum circuit
The three-qubit universal quantum circuit is constructed as a product of one-qubit universal quantum circuits, two-qubit universal quantum circuits and additional entangling gates. Explicit quantum circuits are shown in Ref.[42]. The three-qubit universal quantum circuit contains 82 variational parameters. We show the quantum circuit in Fig. 5c.
Multi-qubit universal quantum circuit
A general multi-qubit universal quantum circuit is constructed in Ref.[39]. The minimum number of variational parameters for an $N$-qubit universal quantum circuit is $4^N-1$. However, we need more variational parameters in the currently known algorithms for larger $N$. In practice, multi-qubit universal quantum circuits are well approximated by hardware-efficient quantum circuits[52-54]. They are constructed from single-qubit rotations and CNOT gates, where the controlled gate acts with control qubit $n$ and target qubit $n+1$. We need of the order of $4N$ quantum gates for an $N$-qubit universal quantum circuit. We show an example of the hardware-efficient quantum circuit with $N=4$ in Fig. 6.
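A minimal sketch of one common hardware-efficient layout: a wall of single-qubit $R_y$ rotations followed by a chain of CNOTs, repeated layer by layer. The precise gate pattern of Refs.[52-54] may differ; this layout and the function names are assumptions for illustration.

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def cnot(N, ctrl, targ):
    """CNOT on N qubits as a 2^N x 2^N permutation matrix (qubit 0 leftmost)."""
    dim = 2**N
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (N - 1 - k)) & 1 for k in range(N)]
        if bits[ctrl]:
            bits[targ] ^= 1
        j = sum(b << (N - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1.0
    return U

def hardware_efficient(thetas, N, layers):
    """Layered ansatz: Ry wall + CNOT chain per layer; uses N*layers parameters."""
    U = np.eye(2**N)
    it = iter(thetas)
    for _ in range(layers):
        wall = np.array([[1.0]])
        for _ in range(N):
            wall = np.kron(wall, ry(next(it)))
        U = wall @ U
        for n in range(N - 1):
            U = cnot(N, n, n + 1) @ U
    return U
```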
Figure 6
Hardware-efficient universal quantum circuits for four qubits[54].
In addition, an ansatz based on the restricted Boltzmann machine and a unitary coupled-cluster ansatz require larger numbers of quantum gates[16,34].
Simulations
All of the numerical calculations are carried out by Mathematica.