
Clinical data classification with noisy intermediate scale quantum computers.

S Moradi1, C Brandner1, C Spielvogel2, D Krajnc1, S Hillmich3, R Wille3,4, W Drexler1, L Papp5.   

Abstract

Quantum machine learning (QML) has experienced significant progress in both software and hardware development in recent years and has emerged as an applicable area for near-term quantum computers. In this work, we investigate the feasibility of utilizing QML on real clinical datasets. We propose two QML algorithms for data classification on IBM quantum hardware: a quantum distance classifier (qDC) and a simplified quantum-kernel support vector machine (sqKSVM). We utilize these methods with the linear-time log2(N) quantum data encoding technique for embedding classical data into quantum states and estimate the inner product on the 15-qubit IBMQ Melbourne quantum computer. We compare the predictive performance of our QML approaches with prior QML methods and with their classical counterpart algorithms on three open-access clinical datasets. Our results imply that the qDC outperforms kernel-based methods on datasets with small sample and feature counts. In contrast, quantum kernel approaches outperform the qDC on datasets with high sample and feature counts. We demonstrate that the log2(N) encoding increases predictive performance by up to +2% area under the receiver operating characteristic curve across all quantum machine learning approaches, making it ideal for machine learning tasks executed on Noisy Intermediate-Scale Quantum computers.
© 2022. The Author(s).

Year:  2022        PMID: 35115630      PMCID: PMC8814029          DOI: 10.1038/s41598-022-05971-9

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.379


Introduction

Quantum technologies promise to revolutionize the future of information and computation by using quantum devices to process massive amounts of data. To date, considerable progress has been made from both software and hardware points of view. Much research is underway to simplify quantum algorithms[1-8] in order to implement them on existing, so-called Noisy Intermediate-Scale Quantum (NISQ) computers[9]. As a result, small quantum devices based on photons, superconductors, or trapped ions are capable of efficiently running scalable quantum algorithms[6,7,10]. Quantum Machine Learning (QML) is a particularly interesting approach, as it is suited for existing NISQ architectures[11-15]. While conventional machine learning is generally applied to process large amounts of data, many research fields cannot provide such large datasets. One example is medical research, where collecting cohorts that represent certain characteristics of diseases routinely results in small datasets[16]. NISQ devices can efficiently execute algorithms with shallow depth and a low number of qubits[9]. Therefore, it appears logical to exploit the potential of QML executed on NISQ devices incorporating clinical datasets. However, the execution of QML algorithms in the form of practical quantum gate operations is non-trivial. First, the classical data needs to be encoded into quantum states. For this purpose, prior QML algorithms assume that a quantum random access memory (QRAM) device for storing the data is present[17]. Nevertheless, to date, such practical devices are not available. Second, since the output of a quantum algorithm is a quantum state, classical bits of information must be extracted efficiently through quantum measurements. To date, various classical data encoding approaches have been proposed[6,7,18-21]. In particular, encoding classical numerical features into quantum states has the advantage of utilizing only log2(N) qubits (a.k.a. linear-time encoding) for N input features[18-21]. This approach allows NISQ devices with a small number of qubits to be utilized and quantum noise to be minimized, while at the same time maintaining quantum speedup[14]. In contrast, to date, this approach in combination with quantum machine learning appears to be underrepresented. In light of the above, we hypothesize that clinically-relevant quantum prediction models can be built on NISQ devices employing the log2(N) encoding, with prediction performances comparable to classic ML approaches. In our work, we propose two quantum machine learning approaches that rely on the log2(N) encoding, thus not requiring a fault-tolerant quantum circuit implementing quantum RAM[17]. Previously proposed techniques for estimating the inner product with the Hadamard Test and the Swap Test assume that there is a quantum RAM or a quantum circuit that stores both the indices of the data and their values[22,23]. To construct a quantum database (QDB) from classical data, n + m + 1 qubits are required for M samples and N features, where n = log2(M), m = log2(N), and 1 is an ancilla qubit register[24]. In contrast, the log2(N) encoding technique utilizes only log2(N) qubits and O(N) steps to access the data classically, without allocating extra qubits to index the entries of the dataset. First, we demonstrate a simple and efficient quantum distance classifier (qDC) executable on existing NISQ devices. Second, we present a simplified quantum-kernel SVM (sqKSVM) approach using quantum kernels, which can be executed once without optimization instead of twice with optimization as in the case of the quantum-kernel SVM (qKSVM) approach[6,7]. In order to test our hypothesis, we demonstrate the performance of the qDC and the sqKSVM approaches using real clinical data and compare their performances to the qKSVM, as well as to classic computing counterparts such as k-nearest neighbors[25] and classic support vector machines[26].

Results

Dataset

This study incorporated three open-access clinical datasets that have been presented and evaluated in various contexts[27-29]. Each dataset underwent redundancy reduction by correlation matrix analysis[30], followed by a tenfold cross-validation split with a training-validation ratio of 80-20%[16]. The training sets of the folds were subjected to feature ranking analysis[31], and the highest-ranking eight as well as 16 (if available) features were selected for further analysis. The resulting dataset configurations were analyzed by class imbalance ratios and the quantum advantage score g_CQ (a.k.a. difference geometry)[20] for quantum kernel methods. Table 1 demonstrates the characteristics of the data configurations as well as the results of the imbalance ratio and the quantum advantage scores (for the estimation of the quantum advantage score g_CQ, see Appendix E of the supplementary material).
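As an illustration of the splitting scheme above, here is a minimal numpy-only sketch; `repeated_splits` is a hypothetical helper (the authors' exact splitting code is not given in the text), implementing ten random 80-20 train-validation splits:

```python
import numpy as np

def repeated_splits(n_samples, n_folds=10, train_ratio=0.8, seed=0):
    """Generate n_folds random (train, validation) index pairs with an
    80-20 ratio, mirroring the tenfold 80-20 scheme described above."""
    rng = np.random.default_rng(seed)
    n_train = int(round(train_ratio * n_samples))
    folds = []
    for _ in range(n_folds):
        perm = rng.permutation(n_samples)
        folds.append((perm[:n_train], perm[n_train:]))
    return folds

folds = repeated_splits(134)   # Pediatric dataset size from Table 1
assert len(folds) == 10
assert all(len(tr) == 107 and len(va) == 27 for tr, va in folds)
```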
Table 1

Clinical datasets utilized for the study with their sample and selected feature counts as well as their imbalance ratios and quantum advantage scores g_CQ.

Dataset                                          | #Samples | Imbalance ratio | #Features | g_CQ | Reference
Pediatric Bone Marrow Transplant 2-year survival | 134      | 0.33            | 8         | 0.40 | [27]
                                                 |          |                 | 16        | 0.60 |
Wisconsin Breast Cancer Malign-vs-benign         | 569      | 0.37            | 8         | 1.30 | [28]
                                                 |          |                 | 16        | 3.50 |
Heart Failure Mortality                          | 300      | 0.5             | 8         | 0.42 | [29]

Given a two-class dataset, the imbalance ratio (r) is r = M_min / M, where M_min is the number of minority-class samples and M is the total number of samples. Furthermore, g_CQ measures the similarity of the quantum kernel and the linear classical kernel functions of the same dataset.

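The imbalance ratio defined above is straightforward to compute; the sketch below is illustrative, with the UCI Wisconsin class counts (357 benign, 212 malignant) used to reproduce the 0.37 entry of Table 1:

```python
import numpy as np

def imbalance_ratio(labels):
    """Imbalance ratio as defined for Table 1: minority-class count
    divided by total sample count (0.5 = perfectly balanced)."""
    labels = np.asarray(labels)
    _, counts = np.unique(labels, return_counts=True)
    return counts.min() / labels.size

# Consistent with Table 1's Wisconsin row (569 samples, ratio ~0.37):
y = np.array([1] * 212 + [0] * 357)   # 212 malignant vs 357 benign
print(round(imbalance_ratio(y), 2))   # -> 0.37
```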

Encoding strategies

This study relies on the log2(N) data encoding strategy, which uses sequences of Pauli-Y rotation gates (R_y) and CNOT gates (see Appendix A of the supplementary material), resulting in log2(N) encoding qubits[18,19,21]. R_y puts each qubit into a superposition state and the CNOTs entangle the qubits. The data encoding feature map with the application of R_y and CNOT is given by[32]

  phi: x -> |phi(x)> = U_phi(x)|0...0>,   (1)

where x is an N-dimensional input sample. In Eq. (1), phi is the encoding map from the Euclidean space to the Hilbert space and U_phi is the model circuit for data encoding, which maps x to the ket vector |phi(x)>. To find the relationship between the input data x and |phi(x)>, see Appendix A of the supplementary material. In contrast, in the previously proposed quantum ML-specific encoding, a block of Hadamard gates followed by a block of Pauli-Z rotation gates (R_z) is applied to each qubit[7]. To entangle the qubits, nearest-neighbor CNOTs are also applied. The features of the data samples are used as rotation angles, and the required number of qubits for data encoding equals the number of features. The data encoding feature map with the application of the Hadamard, R_z, and CNOT gates is given by[7]

  phi: x -> |phi(x)> = U_phi(x) H^(⊗N)|0...0>,   (2)

where U_phi is the model circuit for N-feature data encoding. In order to compare the predictive performance of the above two data encoding strategies, the qDC, the sqKSVM, and the qKSVM (see Appendix C of the supplementary material) approaches were compared. This analysis was executed using the Pennylane simulator environment[33], while the sqKSVM was also evaluated on the IBMQ Melbourne machine (see "Methods"). Table 2 demonstrates the cross-validation area under the receiver operating characteristic curve (AUC) performance values of the quantum ML algorithms in relation to the log2(N) and N encoding-qubit strategies.
Table 2

Comparison of the cross-validation AUC performance for different data encodings.

Dataset                                  | qDC  | qKSVM | sqKSVM | sqKSVM* | Qubits
Pediatric Bone Marrow Transplant 2YS     | 0.62 | 0.63  | 0.62   | 0.61    | log2(N)
                                         | 0.61 | 0.63  | 0.61   | 0.59    | N
Wisconsin Breast Cancer Malign-vs-benign | 0.92 | 0.92  | 0.88   | 0.87    | log2(N)
                                         | 0.90 | 0.91  | 0.87   | 0.85    | N
Heart Failure Mortality                  | 0.62 | 0.51  | 0.51   | 0.50    | log2(N)
                                         | 0.60 | 0.51  | 0.51   | 0.50    | N

The qDC, qKSVM, and sqKSVM were run on the Pennylane simulator. For the log2(N) encoding, N features are encoded into log2(N) qubits with sequences of Pauli-Y rotation gates (R_y) and CNOTs. In the other strategy, N features are encoded into N qubits with sequences of Hadamard gates and Pauli-Z rotation gates (R_z) followed by nearest-neighbor CNOTs.

*The sqKSVM was also executed on the IBMQ Melbourne machine for reference comparison.

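As an illustration of the log2(N) idea above, here is a minimal numpy sketch of amplitude encoding (padding to a power of two and L2-normalising); the paper's actual circuits are built from R_y and CNOT gates (Appendix A), which this sketch does not reproduce:

```python
import numpy as np

def amplitude_encode(x):
    """Embed an N-feature vector into the amplitudes of ceil(log2 N)
    qubits: pad to the next power of two and L2-normalise, so that
    <phi(x)|phi(z)> reduces to the normalised classical inner product."""
    x = np.asarray(x, dtype=float)
    dim = 1 << max(1, int(np.ceil(np.log2(len(x)))))  # next power of two
    state = np.zeros(dim)
    state[: len(x)] = x
    return state / np.linalg.norm(state)

a = amplitude_encode([3.0, 4.0])   # 2 features -> 1 qubit
b = amplitude_encode([4.0, 3.0])
print(np.dot(a, b))                # ~0.96 (= 24/25)
```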

Quantum and classic machine learning predictive performance evaluation

The quantum distance classifier (qDC) first calculates the distance between the state vector of a test sample and each training-sample state vector in set A and set B, and then assigns to the test sample the label of the closest set. In the qDC, we divide the training set of M samples, based on their labels, into two subsets A and B, where A contains only one label with M_A samples and B contains only the other label with M_B samples, with M_A + M_B = M. The task is to determine the label of a given test sample x~. Mathematically, if |x~> is the state vector of the test sample, then x~ is assigned the label of set A if the minimum distance from |x~> to the state vectors of A is smaller than that to the state vectors of B, and the label of set B otherwise. For normalized state vectors, the distance is given by[8]

  || |x> - |x~> ||^2 = 2 - 2<x|x~>,

where ||.|| is the norm of a vector. Therefore, the task reduces to calculating the inner product <x|x~> with a quantum computer. The two approaches to estimate <x|x~> on quantum computers are the Hadamard Test[22] and the Swap Test[23]. For the simplified quantum-kernel SVM (sqKSVM), we first note that the standard form of quantum kernelized binary classifiers is

  y~ = sign( sum_i w_i y_i K(x_i, x~) ),

where y~ is the unknown label, y_i is the label of the i-th training sample, w_i is the i-th component of the support vector w, M is the number of training samples, and K is the kernel matrix of all training-test pairs. For a given dataset, one option to bypass the drawbacks of the qKSVM algorithm (see Appendix C of the supplementary material) as presented in[6,7] is to set uniform weights w_i = 1/M in the case of a balanced dataset (M_A = M_B); otherwise, the weights are set inversely proportional to the class size for the majority and minority classes. Thresholding the value yields the binary output as in Eq. (5), where the kernel K(x_i, x~) is defined as the squared inner product of the encoded states (see Appendix F of the supplementary material). The dataset configurations were utilized to estimate the performance of the quantum and classic machine learning algorithms incorporated in this study. Performance estimation was done by confusion matrix analytics[34].
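The qDC rule above can be simulated classically with a short numpy sketch; `qdc_predict` is a hypothetical helper that replaces the hardware inner-product estimation with a direct dot product:

```python
import numpy as np

def qdc_predict(X0, X1, x_test):
    """Quantum distance classifier logic, simulated classically.

    For L2-normalised state vectors, ||a - b||^2 = 2 - 2<a|b>, so the
    nearest training state is the one with the largest inner product,
    which is the quantity a Hadamard/Swap Test estimates on hardware."""
    norm = lambda X: X / np.linalg.norm(X, axis=1, keepdims=True)
    x_test = np.asarray(x_test, float)
    t = x_test / np.linalg.norm(x_test)
    d0 = np.min(2 - 2 * norm(np.asarray(X0, float)) @ t)  # set A distances
    d1 = np.min(2 - 2 * norm(np.asarray(X1, float)) @ t)  # set B distances
    return 0 if d0 < d1 else 1

X0 = [[1.0, 0.1], [0.9, 0.2]]   # toy class-0 samples
X1 = [[0.1, 1.0], [0.2, 0.8]]   # toy class-1 samples
print(qdc_predict(X0, X1, [0.95, 0.15]))  # -> 0
```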
Prediction models were built based on the given training subset, followed by evaluating the respective validation subsets of each fold. The average area under the receiver operating characteristic curve (AUC) was calculated across validation cases for each predictive model. To build predictive models, the quantum ML approaches utilized included the qDC, the sqKSVM, and the qKSVM (see Appendix C of the supplementary material). The classic machine learning approaches were k-nearest neighbors (ckNN)[25] and support vector machines (cSVM)[26]. See Table 3 for the comparison of cross-validation AUC performances of the quantum and classic computing algorithms within the dataset configurations.
Table 3

Comparison of the cross-validation AUC performance with QML and ML algorithms.

Dataset                                  | #Features | sqKSVM | qKSVM | qDC  | cSVM | ckNN
Pediatric Bone Marrow Transplant 2YS     | 8         | 0.61   | 0.63  | 0.60 | 0.64 | 0.61
                                         | 16        | 0.66   | 0.69  | 0.64 | 0.71 | 0.64
Wisconsin Breast Cancer Malign-vs-benign | 8         | 0.87   | 0.92  | 0.91 | 0.89 | 0.90
                                         | 16        | 0.88   | 0.93  | 0.90 | 0.89 | 0.93
Heart Failure Mortality*                 | 8         | 0.50   | 0.51  | 0.60 | 0.53 | 0.58

For all QML algorithms, N features are encoded into log2(N) qubits with sequences of Pauli-Y rotation gates (R_y) and CNOTs. All QML algorithms were executed on the IBMQ Melbourne machine.

*Heart failure has no 16-feature variant, since the maximum number of features is 13.

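The sqKSVM decision rule with fixed class-balanced weights described above can be sketched as follows; `sqksvm_predict` is a hypothetical helper, and the quantum kernel is simulated here as a squared normalised inner product (amplitude-encoded states), not estimated on hardware:

```python
import numpy as np

def sqksvm_predict(X, y, x_test):
    """sqKSVM decision rule with fixed (non-optimised) weights.

    Weights are inversely proportional to class size (uniform when the
    set is balanced), replacing the optimised support vector of qKSVM.
    Labels are assumed to be in {-1, +1}."""
    X = np.asarray(X, float)
    y = np.asarray(y)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    x_test = np.asarray(x_test, float)
    t = x_test / np.linalg.norm(x_test)
    kernel = (Xn @ t) ** 2                      # simulated quantum kernel
    counts = {c: np.sum(y == c) for c in (-1, 1)}
    w = np.array([1.0 / counts[c] for c in y])  # class-balanced weights
    return int(np.sign(np.sum(w * y * kernel)))

X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
y = [1, 1, -1]
print(sqksvm_predict(X, y, [0.8, 0.2]))  # -> 1
```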

Estimation of the probability of error rate

Our experimental demonstrations were performed on the 15-qubit IBMQ Melbourne processor, based on superconducting transmon qubits. The experiment was conducted on the Wisconsin Breast Cancer dataset with 8 and 16 features, given that this dataset provided the highest predictive cross-validation performance. On the NISQ device and the simulator, each circuit was run with a fixed number of measurement shots (8192). We plot scatter diagrams of the inner product values from the simulator against those from the NISQ device in Fig. 1. To show the correlation between the experimental and the simulator values of the inner products, we also fit optimal lines using least-squares regression in Fig. 1. To measure the difference between the inner products from the simulator and those from the NISQ device, the root mean square error (RMSE) was calculated. The RMSE was 0.039 (3.9%) and 0.075 (7.5%) for the 8 and 16 feature counts, respectively (Fig. 1). Therefore, the fidelities of the quantum circuits on the quantum cloud device were estimated at 96% and 92.5% for the 8 and 16 feature counts, respectively. For more details of the experiment, see Appendix G of the supplementary material.
Figure 1

Scatter diagrams of simulator inner products vs. experiment inner products for both the train state vectors and test state vectors. This data corresponds to the Wisconsin Breast Cancer dataset with 8 (left) and 16 (right) features. The red lines represent optimal fit lines based on least-squares regression.

The depolarizing noise model predicts a linear relationship between the ideal (simulator) and the noisy (experiment) values of the inner products, based on Eq. (14) in "Methods". Nevertheless, the slopes of the fit lines in Fig. 1 show that the depolarizing noise model cannot estimate the true value of the probability of error rate (p). This is due to gate errors[35] that originate from miscalibration of the quantum hardware and are not covered by the depolarizing noise model.
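The RMSE and the derived fidelity estimate used above can be reproduced with a few lines of numpy; the values below are toy numbers, not the measured ones:

```python
import numpy as np

def rmse(sim, device):
    """Root mean square error between simulator and NISQ inner products."""
    sim = np.asarray(sim, float)
    device = np.asarray(device, float)
    return float(np.sqrt(np.mean((sim - device) ** 2)))

# Toy values only; the paper reports RMSE 0.039 (8 features) and
# 0.075 (16 features), i.e. estimated fidelities of 96% and 92.5%.
sim = [0.90, 0.50, 0.10, 0.75]
device = [0.88, 0.47, 0.14, 0.71]
err = rmse(sim, device)
fidelity = 1.0 - err   # fidelity estimate as used in the text
```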

Discussion

In this study, we aimed to investigate the effect of two encoding strategies on various quantum machine learning-built clinical prediction models. Next to prior quantum machine learning approaches, we also proposed two methods specifically designed for the log2(N) encoding approach. Our results demonstrate that the log2(N) encoding in combination with low-complexity quantum machine learning approaches provides comparable or better results than the N encoding approach with previously-proposed quantum machine learning methods. This advantage was demonstrated not only in a simulator environment, but also utilizing NISQ devices. The low algorithmic quantum complexity also aims towards building prediction models that may be easier to interpret in the future, especially in light of the high complexity of classic machine learning approaches[36]. At the same time, it is important to emphasize that the proposed quantum machine learning processes are also applicable to big data, given that calculating the inner product of quantum states on NISQ devices can be done efficiently with the log2(N) encoding approach[21,22]. The log2(N) data encoding is also more robust against noise compared to the N data encoding, since it uses a smaller number of noisy qubits of the NISQ device to estimate the inner product of quantum states[10]. After encoding data from the classical Euclidean space into the quantum Hilbert space, the distance between data points may increase or decrease, which has implications for kernel methods[20]. The g_CQ score can represent whether distances between data points increase or decrease after data encoding. For further explanation, see Supplemental Appendix E. When the feature count increases, g_CQ increases as well, because the quantum state vectors of the input features become closer due to the high dimensionality of the Hilbert space. A higher feature count significantly influences performance in a positive way if g_CQ is < 1 (e.g. +5-6% AUC in the Pediatric bone marrow dataset).
It has been shown that classical ML models are competitive with or outperform quantum ML approaches when g_CQ is small[20]. Nevertheless, we demonstrated that when g_CQ > 1, a higher feature count does not contribute much to the performance increase (e.g. a 1% difference in the Wisconsin breast cancer dataset). It is important to point out that a high g_CQ (> 1) alone does not mean that the dataset is not ideal for kernel-based quantum machine learning. Specifically, the highest AUC of 0.93 was achieved in the 16-feature Wisconsin breast cancer dataset, which also demonstrated the highest g_CQ, confirming prior findings[20]. In contrast, the same dataset with the classic SVM resulted in 0.89 AUC. We hypothesize that this phenomenon is due to the high sample count of the Wisconsin breast cancer dataset (M = 569). In general, the imbalance ratio of the datasets did not appear to be correlated with predictive performance. The log2(N) encoding increased AUC by up to 2% compared to the N encoding when executing the quantum machine learning approaches in the simulation environment. This behavior was also identifiable with executions on NISQ devices, in the case of the kernel methods and the qDC. We hypothesize that the lower AUC performance of the N encoding method in the simulator environment and on the NISQ device is due to the higher number of qubits, which likely leads to lower values of the inner products. This is in line with the findings in[20]. In general, the qKSVM demonstrated 2-5% higher AUC compared to the sqKSVM. The relative performance increase of the qKSVM was in relation to sample count and feature count. Specifically, the qKSVM showed an average 2% higher AUC with small sample counts (Heart failure and Pediatric bone marrow datasets), while it had 5% higher AUC in the Wisconsin breast cancer dataset. Nevertheless, both the qKSVM and the sqKSVM increased their AUC with doubled feature counts in the small Pediatric bone marrow dataset.
This level of performance increase was not identifiable in the larger Wisconsin breast cancer dataset. The classic SVM demonstrated similar properties in relation to higher feature counts in small datasets[20], while it was outperformed by the qKSVM in the large Wisconsin breast cancer dataset. In conclusion, quantum SVM approaches benefit from higher feature counts in general, where the qKSVM, due to relying on optimization, has a particular benefit compared to the sqKSVM. In contrast, the sqKSVM algorithm reduces the time complexity of the qKSVM algorithm significantly, which may be advantageous for large datasets on NISQ devices. In the large Wisconsin breast cancer dataset, the qDC demonstrated higher performance compared to the sqKSVM, especially with small feature counts (0.91 AUC vs 0.87 AUC for the qDC and the sqKSVM, respectively, with 8 features). The qDC resulted in the highest AUC of 0.60 across all other quantum (0.50-0.51 AUC) and classic machine learning (0.53-0.58 AUC) approaches in the Heart failure dataset. We hypothesize that this is due to the distribution characteristics of the samples belonging to the two subclasses in the feature space, which challenges classification with kernel methods. Generally, the performances of the executed quantum and classic machine learning approaches are comparable within the collected cohorts (Table 3). According to our findings, quantum distance approaches can provide high performance with small feature and sample counts, which is particularly ideal for NISQ devices. In contrast, quantum kernel methods appear to provide high performance with high feature and sample counts. We demonstrated that the log2(N) encoding strategy allows executing quantum ML algorithms for high-dimensional clinical datasets on low-qubit-count NISQ devices.

In general, quantum machine learning benefits from utilizing the log2(N) encoding strategy, as it increases predictive performance and reduces execution time on NISQ devices, while keeping model complexity lower. Our experiments also pointed out an important implication of how noise should be estimated. As such, the depolarizing noise model cannot cover gate errors. We consider our findings of high importance in relation to building future quantum ML prediction models on NISQ devices for clinically-relevant cohorts and beyond.

Methods

All experiments of this study were performed in accordance with the respective guidelines and regulations of the open-access data sources this study relied on. For details, see section “Access”.

Figure 2 shows the quantum circuit for the estimation of the real part of the inner product <x|x~> with the Hadamard Test.
Figure 2

The quantum circuit computes the real part of the inner product <x|x~>. The Hadamard gate puts the ancilla qubit (|0>) into a uniform superposition. A single-controlled unitary gate entangles the excited state of the ancilla qubit with the training data state vector (|x>). The X gate flips the ancilla qubit. Another single-controlled unitary gate entangles the state vector of the test data (|x~>) with the excited state of the ancilla qubit. A second X gate flips the ancilla qubit back. The Hadamard gate on the ancilla qubit interferes the train and test data state vectors. The ancilla qubit is then measured in the Pauli-Z basis, and the real value of <x|x~> is estimated from Eq. (9).
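The measurement statistics of this circuit can be checked with a small statevector sketch, assuming the prepared state (|0>|x~> + |1>|x>)/sqrt(2) described in the caption:

```python
import numpy as np

def hadamard_test_p0(a, b):
    """Statevector simulation of the Hadamard Test outcome.

    Starting from the prepared state (|0>|b> + |1>|a>)/sqrt(2), the
    final Hadamard on the ancilla gives P(0) = (1 + Re<a|b>)/2,
    so Re<a|b> = 2*P(0) - 1."""
    a = np.asarray(a, complex)
    b = np.asarray(b, complex)
    branch0 = (b + a) / 2.0   # ancilla |0> branch after the final H
    return float(np.vdot(branch0, branch0).real)

p0 = hadamard_test_p0([0.6, 0.8], [0.8, 0.6])
print(2 * p0 - 1)   # ~0.96, matching Re<a|b> = 0.96
```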

To estimate the real part of <x|x~> on the quantum computer with the Hadamard Test, the training and test data need to be prepared in the quantum state

  (|0>|x~> + |1>|x>)/sqrt(2),

where |x> and |x~> are the quantum states of the train and test data, respectively. Then the Hadamard gate on the ancilla qubit interferes the training vector with the test vector, yielding the state of Eq. (8). Finally, measuring the quantum state given in Eq. (8) in the computational basis gives the probability

  P(0) = (1 + Re<x|x~>)/2,   (9)

where P(0) is the probability of measuring |0> on the state of Eq. (8), so Re<x|x~> = 2P(0) - 1. Since our datasets are real-valued, <x|x~> = Re<x|x~>. See Appendix H of the supplementary material for details of the estimation of the inner product on the IBMQ Melbourne machine with the Hadamard Test. The inner product can also be estimated on a quantum computer with the Swap Test (see Fig. 3). There, the Hadamard gate is applied to the ancilla qubit to create a superposition of |0> and |1>, i.e. the state given in Eq. (10).
Figure 3

The quantum circuit computes |<x|x~>|^2. The model circuits encode the train and test data into the quantum states |x> and |x~>. The Hadamard gate on the ancilla qubit (|0>) generates a superposition of the quantum state including the train and test data. The application of the single-controlled swap gates with the ancilla qubit as the control results in the entangled state of Eq. (10). Another Hadamard gate on the ancilla qubit interferes |x> and |x~>. The ancilla qubit of the resulting state is measured in the Z basis. Therefore, the value of |<x|x~>|^2 can be obtained from Eq. (12).

The application of the single-controlled swap gates on the state given in Eq. (10) entangles the ancilla qubit with the data registers. The resulting entangled quantum state is (|0>|x>|x~> + |1>|x~>|x>)/sqrt(2). Then, another Hadamard gate interferes the product states of the training and test state vectors, yielding the state of Eq. (11). Measuring the quantum state given in Eq. (11) in the computational basis yields |0> with the probability

  P(0) = (1 + |<x|x~>|^2)/2,   (12)

where P(0) is the probability of measuring |0> on the state of Eq. (11). See Appendix D of the supplementary material for details of the estimation of the inner product on the IBMQ Melbourne machine with the Swap Test.
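The Swap Test outcome probability can likewise be checked numerically; `swap_test_p0` evaluates the closed-form expression directly rather than simulating the circuit:

```python
import numpy as np

def swap_test_p0(a, b):
    """Probability of measuring the ancilla in |0> after the Swap Test:
    P(0) = (1 + |<a|b>|^2)/2, so |<a|b>|^2 = 2*P(0) - 1."""
    overlap = np.vdot(np.asarray(a, complex), np.asarray(b, complex))
    return 0.5 * (1.0 + abs(overlap) ** 2)

p0 = swap_test_p0([1.0, 0.0], [0.6, 0.8])
print(2 * p0 - 1)   # ~0.36 (= |<a|b>|^2 = 0.6^2)
```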

Simplified quantum kernel support vector machine

The quantum Support Vector Machine algorithm was proposed in[37] for big data classification, showing an exponential speedup via quantum-mechanical access to the data. Nevertheless, this approach is not ideal for NISQ devices[9]. To date, two separate qKSVM approaches have been proposed for data classification via classical access to the data[6,7]. In these approaches, the quantum circuits must run twice on the quantum computer, and a cost function needs to be optimized on a classical computer to compute the support vector[7]. We propose a simplified version of the qKSVM, called sqKSVM, as shown in Fig. 4.
Figure 4

Schematic of the sqKSVM data classification algorithm. First, the training and test data vectors are prepared on a classical computer. Next, the original training and test data are encoded into quantum states, followed by computing the kernel matrix of all training-test pairs on a NISQ computer. If the fixed weights w_i are considered to be a solution for the support vector, the binary classifier can be constructed based on Eq. (5).


Software and hardware

For the classical machine learning algorithms, we used the scikit-learn library[38]. Pennylane-Qiskit[33] was used for quantum circuit simulation and quantum experiments when designing quantum computing programs. The Pennylane-Qiskit 0.13.0 plugin integrates the Qiskit quantum computing framework into the Pennylane simulator. For executing quantum algorithms on existing quantum computers, this study relied on IBM's remote quantum machines (https://quantum-computing.ibm.com/), which can run quantum programs with noisy qubits. Since IBM quantum computers only support single-qubit and two-qubit gate operations, complex gate operations must be decomposed into elementary supported gates before mapping the quantum circuit onto the noisy hardware. Owing to the specific architecture of IBM quantum computers, all two-qubit gate operations must satisfy the constraints imposed by the coupling map[39], i.e., if q_c is the control qubit and q_t is the target qubit, a CNOT(q_c, q_t) can only be applied if there is a coupling between q_c and q_t. For running the QML algorithms on a quantum computer, we chose the 15-qubit IBMQ Melbourne machine with the supported gates I, U1, U2, U3, and CNOT, where I is the identity single-qubit gate, U1, U2, and U3 are single-qubit rotation gates, and CNOT is the two-qubit gate. Figure 5 shows the coupling map of the IBMQ Melbourne with its gate error rates.
Figure 5

Topology and coupling map of the IBMQ Melbourne (https://quantum-computing.ibm.com/services). The single-qubit error rate is the error induced by applying the single-qubit gates. The CNOT error is the error of the two-qubit CNOT gates. Each circle represents a physical superconducting qubit and each edge shows the coupling between neighboring qubits.

Depolarizing noise model

A simple model to describe incoherent noise is the depolarizing noise model. For an n-qubit pure quantum state ρ = |ψ⟩⟨ψ|, the depolarizing noise operator (channel) leads to a complete loss of information with probability p, while with probability 1 - p the system is left untouched[40]. The state of the system after this noise is

ε(ρ) = (1 - p) ρ + (p / 2^n) I,

where ε denotes the noise channel, ρ is the density matrix, p is the error probability that depends on the NISQ device, the gate operations, and the depth of the quantum circuit, and I is the (2^n × 2^n) identity matrix. The expectation value of an observable M for a state represented by the density matrix ρ is then given by

⟨M⟩_noisy = Tr[M ε(ρ)] = (1 - p) ⟨M⟩_noiseless + (p / 2^n) Tr[M],

where ⟨M⟩_noisy is the noisy expectation value and ⟨M⟩_noiseless = Tr[M ρ] is the noiseless expectation value[41].

Supplementary Information
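The single-qubit case (n = 1) of the depolarizing channel can be verified numerically. This sketch uses plain 2×2 matrices to stay dependency-free; ε(ρ) = (1 - p) ρ + (p / 2) I, so for the traceless observable Z the noisy expectation value shrinks to (1 - p) times the noiseless one:

```python
# Verify <Z>_noisy = (1 - p) * <Z>_noiseless for a single-qubit
# depolarizing channel, using hand-rolled 2x2 matrix helpers.

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_scale(s, a):
    return [[s * a[i][j] for j in range(2)] for i in range(2)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(a):
    return a[0][0] + a[1][1]

I2  = [[1.0, 0.0], [0.0, 1.0]]   # identity
Z   = [[1.0, 0.0], [0.0, -1.0]]  # Pauli-Z observable (traceless)
rho = [[1.0, 0.0], [0.0, 0.0]]   # pure state |0><0|

def depolarize(rho, p):
    """eps(rho) = (1 - p) * rho + (p / 2) * I for a single qubit."""
    return mat_add(mat_scale(1.0 - p, rho), mat_scale(p / 2.0, I2))

p = 0.1
ideal = trace(mat_mul(Z, rho))                 # noiseless <Z> = 1
noisy = trace(mat_mul(Z, depolarize(rho, p)))  # ≈ (1 - p) * ideal = 0.9
```

Since Tr[Z] = 0, the Tr[M] term vanishes and the noise acts as a pure contraction of the signal, which is why deeper circuits (larger effective p) yield flatter measurement statistics.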
References (11 in total)

1.  Quantum circuits for general multiqubit gates.

Authors:  Mikko Möttönen; Juha J Vartiainen; Ville Bergholm; Martti M Salomaa
Journal:  Phys Rev Lett       Date:  2004-09-20       Impact factor: 9.161

2.  Supervised learning with quantum-enhanced feature spaces.

Authors:  Vojtěch Havlíček; Antonio D Córcoles; Kristan Temme; Aram W Harrow; Abhinav Kandala; Jerry M Chow; Jay M Gambetta
Journal:  Nature       Date:  2019-03-13       Impact factor: 49.962

3.  Quantum Machine Learning in Feature Hilbert Spaces.

Authors:  Maria Schuld; Nathan Killoran
Journal:  Phys Rev Lett       Date:  2019-02-01       Impact factor: 9.161

4.  Quantum support vector machine for big data classification.

Authors:  Patrick Rebentrost; Masoud Mohseni; Seth Lloyd
Journal:  Phys Rev Lett       Date:  2014-09-25       Impact factor: 9.161

5.  Power of data in quantum machine learning.

Authors:  Hsin-Yuan Huang; Michael Broughton; Masoud Mohseni; Ryan Babbush; Sergio Boixo; Hartmut Neven; Jarrod R McClean
Journal:  Nat Commun       Date:  2021-05-11       Impact factor: 14.919

6.  Supervised machine learning enables non-invasive lesion characterization in primary prostate cancer with [68Ga]Ga-PSMA-11 PET/MRI.

Authors:  L Papp; C P Spielvogel; B Grubmüller; M Grahovac; D Krajnc; B Ecsedi; R A M Sareshgi; D Mohamad; M Hamboeck; I Rausch; M Mitterhauser; W Wadsak; A R Haug; L Kenner; P Mazal; M Susani; S Hartenbach; P Baltzer; T H Helbich; G Kramer; S F Shariat; T Beyer; M Hartenbach; M Hacker
Journal:  Eur J Nucl Med Mol Imaging       Date:  2020-12-19       Impact factor: 9.236

7.  Circuit-Based Quantum Random Access Memory for Classical Data.

Authors:  Daniel K Park; Francesco Petruccione; June-Koo Kevin Rhee
Journal:  Sci Rep       Date:  2019-03-08       Impact factor: 4.379

8.  Machine learning can predict survival of patients with heart failure from serum creatinine and ejection fraction alone.

Authors:  Davide Chicco; Giuseppe Jurman
Journal:  BMC Med Inform Decis Mak       Date:  2020-02-03       Impact factor: 2.796

9.  Experimental quantum speed-up in reinforcement learning agents.

Authors:  V Saggio; B E Asenbeck; A Hamann; T Strömberg; P Schiansky; V Dunjko; N Friis; N C Harris; M Hochberg; D Englund; S Wölk; H J Briegel; P Walther
Journal:  Nature       Date:  2021-03-10       Impact factor: 49.962

10.  Breast Tumor Characterization Using [18F]FDG-PET/CT Imaging Combined with Data Preprocessing and Radiomics.

Authors:  Denis Krajnc; Laszlo Papp; Thomas S Nakuz; Heinrich F Magometschnigg; Marko Grahovac; Clemens P Spielvogel; Boglarka Ecsedi; Zsuzsanna Bago-Horvath; Alexander Haug; Georgios Karanikas; Thomas Beyer; Marcus Hacker; Thomas H Helbich; Katja Pinker
Journal:  Cancers (Basel)       Date:  2021-03-12       Impact factor: 6.639
