Literature DB >> 35614090

Neural network-based prediction of the secret-key rate of quantum key distribution.

Min-Gang Zhou1, Zhi-Ping Liu1, Wen-Bo Liu1, Chen-Long Li1,2, Jun-Lin Bai1,2, Yi-Ran Xue1,2, Yao Fu2, Hua-Lei Yin3, Zeng-Bing Chen4,5.   

Abstract

Numerical methods are widely used to calculate the secure key rate of many quantum key distribution protocols in practice, but they consume many computing resources and are too time-consuming. In this work, we take the homodyne detection discrete-modulated continuous-variable quantum key distribution (CV-QKD) as an example, and construct a neural network that can quickly predict the secure key rate based on the experimental parameters and experimental results. Compared to traditional numerical methods, the speed of the neural network is improved by several orders of magnitude. Importantly, the predicted key rates are not only highly accurate but also highly likely to be secure. This allows the secure key rate of discrete-modulated CV-QKD to be extracted in real time on a low-power platform. Furthermore, our method is versatile and can be extended to quickly calculate the complex secure key rates of various other unstructured quantum key distribution protocols.
© 2022. The Author(s).


Year:  2022        PMID: 35614090      PMCID: PMC9133163          DOI: 10.1038/s41598-022-12647-x

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.996


Introduction

With the concurrent rise of artificial intelligence and quantum information science, these two fields are merging in a synergistic manner. In this growing trend, some works design new theoretical models based on quantum algorithms to improve classical machine learning and achieve the desired quantum speed-up[1-10]. At the same time, with the ever-increasing complexity of quantum systems, advanced quantum information technologies also require powerful tools for data processing and analysis. We therefore urgently need to leverage existing classical machine learning techniques to solve practical, but difficult, problems in quantum information science, such as tomography[11-13], classifying quantum states[14-16], quantum metrology[17-19], quantum control[20,21] and quantum cryptography[22].

Quantum key distribution (QKD)[23,24] is by far the most practical technology in quantum information. It allows two distant parties (Alice and Bob) to establish secure keys against any eavesdropper. Many QKD protocols have been proposed in recent decades[25-30]. The secure key rates of these QKD protocols are typically calculated by analytical methods[31], but such methods usually rely on certain symmetry assumptions, which are often broken by experimental imperfections in practice. Therefore, to analyze the security of QKD protocols that are more suitable for practical implementations, numerical methods based on convex optimization[32-35] have been developed. For instance, continuous-variable (CV) QKD has distinct advantages at metropolitan distances[36,37] because it uses common components of coherent optical communication technology. In addition, the homodyne[38] or heterodyne[39] measurements used by CV-QKD have inherent spectral filtering capabilities, which allows crosstalk in wavelength division multiplexing (WDM) channels to be effectively suppressed.
Therefore, hundreds of QKD channels may be integrated into a single optical fiber and cotransmitted with classical data channels. This allows QKD channels to be more effectively integrated into existing communication networks. In CV-QKD, discrete modulation technology has attracted much attention[31,40-50] because it reduces the requirements on modulation devices. However, due to the lack of symmetry, the security proof of discrete-modulated CV-QKD also relies mainly on numerical methods[43-48,51]. Unfortunately, calculating a secure key rate with numerical methods requires minimizing a convex function over all eavesdropping attacks consistent with the experimental data[52,53]. The efficiency of this optimization depends on the number of parameters of the QKD protocol. For example, in discrete-modulated CV-QKD, the number of parameters generally depends on the choice of cutoff photon number[44], so the corresponding optimization may take minutes or even hours[51]. It is therefore especially important to develop tools for calculating the key rate that are more efficient than numerical methods.

In this work, we take homodyne detection discrete-modulated CV-QKD[44] as an example and construct a neural network capable of predicting the secure key rate, with the aim of saving time and resources. We apply our neural network to a test set obtained at different excess noises and distances. Excellent accuracy and time savings are observed after tuning the hyperparameters. Importantly, the predicted key rates are highly likely to be secure. Note that our method is versatile and can be extended to quickly calculate the complex secure key rates of various other unstructured quantum key distribution protocols.
Through some open source deep learning frameworks for on-device inference, such as TensorFlow Lite[54], our model can also be easily deployed on devices at the edge of the network, such as mobile devices, embedded Linux or microcontrollers.
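As an illustration of such on-device deployment, here is a minimal, hypothetical sketch using TensorFlow Lite. The tiny stand-in model below is not the paper's network; it only shares the 29-feature input described later in the text.

```python
# Hypothetical sketch: convert a Keras model to TensorFlow Lite and run
# inference with the interpreter, as one would on an edge device.
import numpy as np
import tensorflow as tf

# Tiny stand-in model (NOT the paper's architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(29,)),
    tf.keras.layers.Dense(1),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# On-device inference with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros((1, 29), dtype=np.float32))
interpreter.invoke()
pred = interpreter.get_tensor(out["index"])  # shape (1, 1)
```

The same converted model file can be shipped to mobile, embedded Linux, or microcontroller hosts via the corresponding TFLite runtimes.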

Results

Discrete-modulated CV-QKD

To clearly show the problem we aim to solve, we briefly introduce the main ideas of discrete-modulated CV-QKD and state the convex optimization problem that yields its key rates in this section; see Ref.[44] and the description of “Discrete-modulated CV-QKD” in Methods. The protocol involves two parties, Alice and Bob. Alice randomly prepares one of four coherent states and sends it to Bob through an untrusted quantum channel. Bob measures the received coherent state using homodyne detection. After repeating N rounds, Alice and Bob perform sifting, parameter estimation, error correction and privacy amplification over the classical authenticated channel to obtain the final secure keys. The key rate formula in the asymptotic limit can be expressed according to Refs.[32,33] as

  R^∞ = min_{ρ_AB ∈ S} D(G(ρ_AB) || Z(G(ρ_AB))) − p_pass·δ_EC,   (1)

where D(·||·) is the quantum relative entropy; ρ_AB is the bipartite state of Alice and Bob; G is the mapping describing the postprocessing of the bipartite state ρ_AB; Z is a pinching quantum channel for reading out the results of the key map; S is the set of all density operators that match the experimental observations; p_pass is a sifting factor that determines how many rounds of data are used for generating keys; and δ_EC represents the amount of information leakage per bit in the error-correction process. The key to finding the secure key rate is to solve for the minimum value of D(G(ρ_AB) || Z(G(ρ_AB))), since p_pass·δ_EC is a fixed quantity. The associated optimization problem is[44]

  minimize   D(G(ρ_AB) || Z(G(ρ_AB)))
  subject to
    Tr[ρ_AB (|k⟩⟨k|_A ⊗ x̂)] = p_k ⟨x̂⟩_k,
    Tr[ρ_AB (|k⟩⟨k|_A ⊗ p̂)] = p_k ⟨p̂⟩_k,
    Tr[ρ_AB (|k⟩⟨k|_A ⊗ n̂)] = p_k ⟨n̂⟩_k,
    Tr[ρ_AB (|k⟩⟨k|_A ⊗ d̂)] = p_k ⟨d̂⟩_k,
    Tr[ρ_AB] = 1,
    ρ_AB ≥ 0,
    Tr_B[ρ_AB] = Σ_{i,j} √(p_i p_j) ⟨φ_j|φ_i⟩ |i⟩⟨j|_A,   (2)

where |k⟩⟨k|_A is a local projective measurement operator on Alice’s side, with k ∈ {0, 1, 2, 3}; x̂ = (â† + â)/√2, where â and â† are the annihilation and creation operators of a single-mode state, respectively; p̂ = i(â† − â)/√2; n̂ = â†â; d̂ = â² + (â†)²; ⟨x̂⟩_k, ⟨p̂⟩_k, ⟨n̂⟩_k and ⟨d̂⟩_k represent the corresponding expectation values of the operators x̂, p̂, n̂ and d̂ acting on ρ_B^k, where ρ_B^k is Bob’s state conditioned on Alice’s measurement outcome k and p_k is the corresponding probability; and Tr_B denotes the partial trace over Bob’s system, on which the identity transformation acts in the first four constraints. The first four constraints in Eq. (2) are derived from experimental observations.
The fifth and sixth constraints are conditions that any density matrix must satisfy. The seventh constraint comes from the fact that Alice’s reduced states do not change, because they do not pass through the insecure quantum channel. The optimization problem in Eq. (2) is to find the optimal bipartite state ρ_AB in S that minimizes the objective. ρ_AB is infinite-dimensional because the attacker can arbitrarily perturb the optical mode sent by Alice into an infinite-dimensional state before it reaches Bob. To solve this optimization problem numerically, we apply the photon-number cutoff assumption to ρ_AB so that the number of variables stays in a reasonable range; a detailed description of this method can be found in Ref.[44]. After applying the photon-number cutoff assumption, the optimization problem in Eq. (2) can be solved by the numerical method in Refs.[33,44], but this is very time consuming. In this work, to reduce the time needed to predict secure key rates, we use the key rates obtained by the numerical method in Refs.[33,44] as labels to train our neural network.
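To make the photon-number cutoff concrete, here is a small numpy sketch that builds the single-mode operators named above (quadratures, photon number, and the second-moment operator) on a truncated Fock space. The 1/√2 quadrature normalization is our assumption, since the exact convention is elided in this extraction.

```python
import numpy as np

def truncated_operators(n_cut):
    """Single-mode operators on a Fock space truncated at photon number
    n_cut (matrices of dimension n_cut + 1), as used after the
    photon-number cutoff assumption."""
    dim = n_cut + 1
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)  # annihilation operator
    adag = a.conj().T                             # creation operator
    x = (adag + a) / np.sqrt(2)                   # quadrature operator x
    p = 1j * (adag - a) / np.sqrt(2)              # quadrature operator p
    n = adag @ a                                  # photon-number operator
    d = a @ a + adag @ adag                       # second-moment operator
    return x, p, n, d
```

Away from the cutoff these matrices reproduce the canonical commutator [x, p] = iI, so expectation values for states with low photon number are faithful; only the highest Fock level shows truncation error.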

Neural networks for predicting the key rates

We use an artificial neural network to predict the key rates of discrete-modulated CV-QKD. The general idea of this work is to encode the optimization problem in Eq. (2) into the loss function of a feedforward neural network and to train the network by minimizing this loss function. The trained neural network can be seen as a mapping that has learned the structure of the training set. For new instances, the neural network outputs results directly via this mapping, unlike traditional numerical methods that perform complex searches. As a result, the trained neural network saves a great deal of time while maintaining a good level of accuracy. A more detailed description of neural networks can be found in Ref.[55]. A four-layer neural network model is designed to predict the key rates of discrete-modulated CV-QKD (Fig. 1). The input layer of the network has 29 neurons, which receive the training inputs. The first and second hidden layers have 400 and 200 neurons, with tanh and sigmoid activation functions, respectively. The output layer has a single neuron, which predicts the secure key rate.
Figure 1

Schematic diagram of our neural network model. We preprocess each training input and its corresponding label before training. The neural network receives the preprocessed input and outputs a predicted value. The numbers of neurons in the first and second hidden layers are 400 and 200, respectively. The preprocessed label and the network output are used to compute our designed loss function; minimizing this loss function completes the training process.
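The 29–400–200–1 architecture with tanh and sigmoid hidden activations can be written down directly in Keras; the linear output activation is our assumption, as is typical for a regression head.

```python
import tensorflow as tf

# Sketch of the four-layer network described in the text.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(29,)),               # 29 input features
    tf.keras.layers.Dense(400, activation="tanh"),    # first hidden layer
    tf.keras.layers.Dense(200, activation="sigmoid"), # second hidden layer
    tf.keras.layers.Dense(1),                         # predicted key rate
])
```

This layout has 29·400 + 400 + 400·200 + 200 + 200·1 + 1 = 92,401 trainable parameters, small enough for low-power inference.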

To train our neural network, we generate a data set containing 552,000 input instances and 552,000 corresponding labels using the numerical method in Refs.[33,44]. Each input instance is a vector of 29 variables, and its label is the corresponding key rate. In each instance, 16 variables are the right-hand sides of the first four constraints of Eq. (2), 12 variables are the nondiagonal elements of the matrix on the right-hand side of the last constraint of Eq. (2), and the remaining variable is the excess noise ξ. The 29 variables in each instance can be calculated in an experiment from the experimental parameters and observations. In our simulation, these random input instances are generated directly from seven experimental parameters (transmission distance L, light intensity, excess noise ξ, and probabilities p0, p1, p2 and p3) as follows. When the excess noise is within 0.002–0.014, we first generate a two-dimensional grid with excess noise and distance as the horizontal and vertical coordinates, respectively. Specifically, the distance ranges from 0 to 100 km in steps of 5 km, and the excess noise ranges from 0.002 to 0.014 in steps of 0.001. Then, each grid point is sampled 80 times.
With each sampling, the excess noise fluctuates around the exact value within a range of ±0.0005. Once the excess noise for a sampling is determined, the light intensity takes values from 0.35 to 0.60 in steps of 0.01. Each sampling must generate 25 input instances with a positive key rate; otherwise, the current round of sampling is discarded and restarted. In this way, 2000 input instances are generated at each grid point, and a total of 520,000 training inputs are generated on this two-dimensional grid. When the excess noise is 0.015, a similar two-dimensional grid is generated; however, we only sample up to 80 km, so only 32,000 instances are generated. Altogether, we collect 552,000 samples with excess noise between 0.002 and 0.015. Using the numerical approach in Refs.[33,44], we calculate the corresponding key rate for each sample as its label on the blade cluster system of the High Performance Computing Center of Nanjing University. This consumed over 40,000 core hours on nodes containing 4 Intel Xeon Gold 6248 CPUs each, an immense amount of computation. To improve the convergence speed and accuracy of our neural network, we preprocess the input instances and the corresponding labels. To demonstrate the necessity of the data preprocessing, we use the network structure shown in Fig. 1 to perform a controlled experiment with the mean square error as the loss function. With excess noise of 0.002–0.005, the absolute values of the relative deviations between the key rates predicted by our neural network and the corresponding key rates obtained by the numerical method are far smaller with the data preprocessing (Fig. 2) than without it. Here, the relative deviation is the absolute deviation between the predicted value and the true value divided by the true value.
A detailed description of the data preprocessing can be found in “Details of data preprocessing” in Methods.
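A hypothetical reconstruction of this sampling scheme is sketched below; function and variable names are ours, and the filter that keeps only instances with a positive key rate is omitted because it requires the numerical key-rate solver.

```python
import itertools
import random

def generate_settings(seed=0):
    """Enumerate (distance, light intensity, jittered excess noise)
    settings on the 2-D grid described above."""
    rng = random.Random(seed)
    noises = [round(0.002 + 0.001 * i, 3) for i in range(13)]     # 0.002..0.014
    distances = list(range(0, 105, 5))                            # 0..100 km, step 5
    intensities = [round(0.35 + 0.01 * i, 2) for i in range(26)]  # 0.35..0.60, step 0.01
    settings = []
    for xi, L in itertools.product(noises, distances):
        for _ in range(80):                       # 80 samplings per grid point
            xi_jittered = xi + rng.uniform(-0.0005, 0.0005)
            for mu in intensities:
                settings.append((L, mu, xi_jittered))
    return settings
```

Each (L, mu, xi) tuple would then be passed to the numerical solver of Refs.[33,44] to produce a labeled training sample.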
Figure 2

Relative deviations before and after data preprocessing. We use the network structure shown in Fig. 1 with the mean square error as the loss function to compare the results with data preprocessing (a) and without data preprocessing (b). The data set is generated under excess noise of 0.002–0.005 and is split into a training set containing 158,000 samples and a test set containing 2000 samples. The horizontal coordinate indexes the samples in the test set. The vertical coordinate is the relative deviation between the key rate predicted by our neural network and the key rate obtained by the numerical method for each sample.

A new loss function is specifically designed to make the key rates predicted by our neural network as information-theoretically secure as possible, rather than using the traditional mean squared error. In the loss function of Eq. (3), n is the number of training inputs, and the residual error is the difference between the preprocessed label and the corresponding output of the neural network. The minimum-function part of Eq. (3) is a penalty term used to make the key rates predicted by the neural network as information-theoretically secure as possible. The part consisting of the maximum function and the squared term in Eq. (3) bounds the upper limit of the residual error so as to obtain higher key rates. A balance hyperparameter weighs the two parts against each other. With the help of this loss function, we expect that the relative deviations between predicted and true values can be tightly bounded after choosing proper hyperparameters. The performance of the neural networks depends on these hyperparameters. Without loss of generality, we take as examples neural networks trained with excess noise between 0.002 and 0.005 (Fig. 3). For the first hyperparameter setting, the key rates predicted by the neural network are strictly lower than those obtained by the numerical method in Refs.[33,44], which means that the predicted key rates are information-theoretically secure.
Meanwhile, the absolute values of the relative deviations are mainly distributed between 0.05 and 0.20 (Fig. 3a,b). Figure 3c–f plot the corresponding results for the two other hyperparameter settings. Note that some of the key rates predicted by the neural networks under these settings are higher than the key rates obtained by the numerical method. This indicates that the networks trained with these hyperparameters do not perform as well as the network trained with the first setting. Therefore, we need to carefully tune the hyperparameters of the neural networks to ensure stable performance.
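Since the published form of Eq. (3) and its hyperparameter symbols are elided in this extraction, the following is only a plausible numpy sketch of such a security-penalized loss: over-predictions (which risk insecurity) enter through a minimum-function penalty, under-predictions through a squared maximum-function term, and `lam` is a stand-in for the balance hyperparameter.

```python
import numpy as np

def secure_key_rate_loss(y_true, y_pred, lam=10.0):
    """Hypothetical security-penalized loss. e > 0 means the network
    under-predicts (safe side); e < 0 means it over-predicts the key
    rate, which could be insecure and is therefore penalized heavily."""
    e = y_true - y_pred
    penalty = -lam * np.minimum(e, 0.0)   # punish insecure over-prediction
    tightness = np.maximum(e, 0.0) ** 2   # keep safe predictions tight
    return float(np.mean(penalty + tightness))
```

With a large `lam`, the optimizer prefers predictions slightly below the label over predictions slightly above it, mirroring the security bias described above.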
Figure 3

Performance comparison of neural networks with different hyperparameters. (a) The results of the neural network with the first hyperparameter setting in predicting 2000 samples with excess noise between 0.002 and 0.005 in the test set. The predicted key rates are strictly below the key rates obtained by the numerical method in Refs.[33,44]. (b) The histogram of the relative deviation distribution in (a). The absolute values of the relative deviations remain roughly in the region of 5–20%. (c–f) The corresponding results for the two other hyperparameter settings.

The 552,000 data points generated by the numerical method are split into a training set containing 524,400 points and a test set containing 27,600 points. The test set is sampled from the original data set and covers instances generated under all combinations of excess noise and distance. The data preprocessing procedure follows the data splitting. The Adam optimization algorithm[56] is used to train our neural network, with an initial learning rate of 0.001. For each training, we train for 200 epochs with a batch size of 256. In addition, techniques such as early stopping and dropout[57] are used to prevent overfitting. The relative deviations of the trained network on the test set and the training set have similar distributions, which indicates that the model generalizes well.
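These training settings translate to Keras roughly as follows. The dropout rate, its placement, the early-stopping patience, and the validation split are all assumptions, and the built-in MSE loss stands in for the custom loss of Eq. (3); the `fit` call is commented out because it needs the generated data set.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(29,)),
    tf.keras.layers.Dense(400, activation="tanh"),
    tf.keras.layers.Dropout(0.2),                  # dropout rate is an assumption
    tf.keras.layers.Dense(200, activation="sigmoid"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")                          # stand-in for the custom loss
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)
# model.fit(x_train, y_train, validation_split=0.05,
#           epochs=200, batch_size=256, callbacks=[early_stop])
```

Early stopping monitors held-out loss and restores the best weights, which matches the paper's goal of similar train/test deviation distributions.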

Key rate comparison

After training the neural network according to the method described in “Methods”, we use it to predict, given the optimal light intensity, the key rates of discrete-modulated CV-QKD at different distances and different excess noises. As shown in Fig. 4, we compare these key rates with the corresponding key rates obtained by the numerical method in Refs.[33,44]. All key rates predicted by the neural network are strictly lower than those obtained by the numerical method. It is worth noting that the relative deviations between them remain small (relevant data can be found in “Detailed data” in Methods).
Figure 4

Secure key rate versus transmission distance for homodyne detection discrete-modulated CV-QKD with excess noise of 0.002, 0.004, 0.008, 0.011 and 0.014, using our neural network (circles) and the numerical method in Refs.[33,44] (triangles). The light intensity is chosen to be optimal in the interval [0.35, 0.6]. The transmission efficiency and the reconciliation efficiency are fixed. The neural network used for comparison is trained with the chosen hyperparameters. The cutoff photon number in the numerical method is set to 10.

To illustrate the more general case, we evaluate the test set containing 27,600 samples mentioned at the end of “Methods”. The key rates predicted by the neural network are lower than the corresponding results calculated by the numerical method for 27,379 of these samples; that is, the probability that a key rate predicted by the neural network on the test set is secure is as high as 99.2%. Our neural network also shows great advantages over the numerical method in terms of time and resource consumption. We compare the time required to predict the key rates with our neural network against the time required to calculate them with the numerical method on a high-performance personal computer with a 3.3 GHz AMD Ryzen 9 4900H and 16 GB of RAM (Fig. 5). The neural network is 6–8 orders of magnitude faster than the numerical method when predicting the key rates of the discrete-modulated CV-QKD within 0–100 km for excess noise of 0.008–0.012. In addition, as the excess noise increases, the speed advantage of the neural network grows further. Refer to “Detailed data” for more details.
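A speed comparison of this kind can be reproduced with a simple micro-benchmark; the helper below is our own sketch, with a trivial stand-in predictor in place of the trained network.

```python
import time
import numpy as np

def time_batch_prediction(predict_fn, inputs, repeats=5):
    """Best-of-`repeats` wall-clock time per sample for a batched
    predictor mapping an (n, 29) array to n key rates."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        predict_fn(inputs)
        best = min(best, time.perf_counter() - t0)
    return best / len(inputs)  # seconds per sample

# Example with a trivial stand-in predictor (NOT the trained network):
per_sample = time_batch_prediction(lambda x: x.sum(axis=1),
                                   np.random.rand(10000, 29))
```

Dividing the numerical method's per-point runtime by this per-sample figure and taking log10 gives exactly the quantity plotted in Fig. 5.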
Figure 5

Time consumption comparison between the neural network method and the numerical method. The comparison results with excess noise of 0.008, 0.010 and 0.012 are shown as diamonds, circles and triangles, respectively. Each point represents the logarithm of the ratio of the running time of the numerical method to the running time of the neural network method. The neural network used for comparison is trained with the chosen hyperparameters. The cutoff photon number in the numerical method is set to 10.


Discussion

We have constructed neural networks and shown that they can predict information-theoretically secure key rates of homodyne detection discrete-modulated CV-QKD with high probability (up to 99.2%) at distances of 0–100 km and excess noise of no more than 0.015. In particular, for excess noise of 0.008 or more, our method is at least six orders of magnitude faster than the numerical method in Refs.[33,44]. For example, it takes an average of 190 s to numerically calculate a single point with excess noise around 0.008, which greatly limits the efficiency with which QKD systems can compute the secure key rate. In contrast, a neural network can calculate tens of thousands of key rates in 1 s. Considering that the QKD system needs a certain amount of time to collect data, the speed at which the neural network predicts key rates fully meets the needs of practical applications. This advantage brings us one step closer to achieving low latency for discrete-modulated CV-QKD on a low-power platform. Our method is applicable in principle to any protocol that already has a reliable numerical method. However, for protocols such as the 16/64/256-QAM DM-CVQKD protocols, for which analytical methods perform very close to numerical methods, the method proposed in this paper is unnecessary. Recently, machine learning has been used in QKD mainly in two ways: for experimental parameter optimization[58,59] and to assist experimental control[60-62]. Both use machine learning to replace traditional optimization or feedback-control algorithms, which differs significantly from our work. To the best of our knowledge, this is the first attempt to apply machine learning methods to predict the key rates of QKD. This poses a greater challenge than parameter optimization with machine learning methods.
This is because parameters predicted by neural networks are substituted into numerical or analytical methods to find the corresponding key rates, which naturally ensures that those key rates are information-theoretically secure. Key rates output directly by neural networks carry no such guarantee, which forced us to redesign the loss function and seek better data preprocessing methods to ensure that the acquired key rates are information-theoretically secure. Note that the probability (about 0.8%) of our neural network predicting an insecure key rate is large compared to conventional security parameters of QKD protocols. In practice, however, we need to sample thousands of data points and calculate their respective key rates to obtain a usable key string. The point is that when we sum and average the key rates of all data points predicted by our neural network, the insecure probability of this averaged key rate can be made very small. With enough data points, this insecure probability can approach conventional security parameters of QKD protocols. We expect that larger excess noises and longer distances will require a deeper network, more sophisticated loss functions, and more refined data preprocessing to improve the performance of neural networks on the training set. More training data are also necessary to improve the generalization ability of the neural networks.
For deep neural networks, rapidly growing or vanishing gradients hinder the optimization process; the debugging process is therefore highly technical. It can be guided by monitoring the activation values of the neurons and histograms of the gradients[55]. Our machine learning approach is at least six orders of magnitude faster than the numerical method at predicting the secure key rates of homodyne detection discrete-modulated CV-QKD with excess noise of 0.008 or more. However, training our neural network is still time consuming, because we need traditional numerical methods to generate a large number of key rates as the training set. In particular, the performance of our neural network depends on the choice of hyperparameters and the initial learning rate, which means that we may need to train several times to obtain a suitable neural network. To make our machine learning method more intelligent, further work is needed to design another neural network that automatically finds the most suitable hyperparameters. We have also tried other machine learning methods, such as boosted decision trees. These methods have smaller relative deviations but greater variances; we leave the fusion of these methods to future research. The important contribution of our work is that it opens the door to using classical machine learning to predict QKD key rates. In particular, our ideas and methods are easy to generalize to other QKD protocols. We expect that our work will stimulate further research to help most QKD systems run on low-power chips[63] in mobile devices[64].

Methods

According to Ref.[44], homodyne detection discrete-modulated CV-QKD proceeds as follows:

(1) State preparation. Alice prepares one of four coherent states with probability p_k (k = 0, 1, 2, 3), where the amplitude of the states is predetermined, and sends the state to Bob.

(2) Measurement. Bob performs a homodyne measurement on the received state, choosing to measure one of the two quadratures (q or p) with a fixed probability, and notes which quadrature was chosen. Bob then records his measurement outcome.

(3) Announcement and sifting. After repeating the first two steps N times, Alice and Bob communicate via the classical authenticated channel and divide the obtained data into four subsets indexed over [N], where [N] denotes the set of all integers from 1 to N. Alice and Bob then randomly select a subset of size m for generating keys, and the key string at Alice is determined by a fixed rule mapping her preparation choices to key symbols. The remaining data are integrated into a set used for parameter estimation.

(4) Parameter estimation. Alice and Bob perform parameter estimation based on the data in the parameter-estimation set. First, they calculate the first and second moments of the q and p quadratures for each of the four coherent states sent by Alice. Then they calculate the secret key rate based on the convex optimization problem in Eq. (8). If the resulting key rate is 0, Alice and Bob abort the protocol and start over; otherwise, they continue with the next step.

(5) Reverse reconciliation key map. The key string at Bob is determined from his measurement outcomes in step 2 by a rule whose threshold is set by the postselection of data. Alice and Bob then locate the discarded symbols and remove the data at those locations via classical communication; the strings that remain after this removal form the raw keys.
(6) Error correction and privacy amplification. Alice and Bob choose a suitable error-correction protocol and a suitable privacy-amplification protocol to generate the final secret keys.

The key rate can be calculated using the well-known Devetak-Winter formula[65] in the asymptotic limit and under collective attacks. To apply this formula, we transform the prepare-and-measure protocol into an entanglement-based protocol. In the prepare-and-measure protocol, Alice prepares states according to an ensemble; in the equivalent entanglement-based protocol, Alice prepares a bipartite state, keeps one subsystem in register A, and sends the other subsystem to Bob. The transmitted subsystem changes as it passes through the insecure quantum channel, a process described by a completely positive and trace-preserving map acting on Bob's subsystem, with the identity transformation acting on A. Under reverse reconciliation[66], the key rate formula can then be expressed according to Refs.[32,33] in the same form as the asymptotic key rate formula given in the Results section.
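For intuition, the key-rate objective pairs the quantum relative entropy with a pinching channel. A toy numpy sketch of these two primitives, on full-rank states and with the pinching taken in the computational basis, is:

```python
import numpy as np

def _logm_psd(m):
    """Matrix logarithm of a positive-definite Hermitian matrix,
    via its eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.conj().T

def relative_entropy(rho, sigma):
    """Quantum relative entropy D(rho||sigma) = Tr[rho(log rho - log sigma)]
    in nats; assumes both states are full rank."""
    return float(np.real(np.trace(rho @ (_logm_psd(rho) - _logm_psd(sigma)))))

def pinch(rho):
    """Pinching channel in the computational basis: zero out coherences,
    keeping only the diagonal."""
    return np.diag(np.diag(rho))
```

D(ρ || Z(ρ)) is zero exactly when ρ has no coherences in the pinching basis, which is why this quantity measures the key information an eavesdropper cannot access.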

Details of data preprocessing

To improve the performance of our neural network, we preprocess the training inputs before training. The process can be expressed as

  x̃_j^(i) = (x_j^(i) − μ_j) / σ_j,

where x_j^(i) is the j-th component of the i-th sample; μ_j and σ_j are the mean and standard deviation of the j-th component over all samples, respectively; and x̃_j^(i) is the j-th component of the i-th sample after preprocessing. The preprocessed data follow a standard normal distribution with a mean of 0 and a variance of 1. This removes dimensional restrictions and facilitates the comparison of features with different scales. Since the key rates in these samples span up to 4 orders of magnitude, we also rescale the labels to speed up the training of the neural networks; at prediction time, the outputs of networks trained with preprocessed labels must be mapped back through the inverse transform to recover the predicted key rates. Algorithms 1 and 2 show the detailed training process of the neural networks and the process of using trained networks to predict new samples, respectively. Tables 1 and 2 list, respectively, the relative deviations between key rates predicted by our neural network and those obtained by the numerical method for the given optimal light intensity at different distances and excess noises, and the time consumption of the neural network versus the numerical method with excess noise of 0.008, 0.010 and 0.012 (NM and NN abbreviate numerical method and neural network; L is the distance between Alice and Bob).
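The standardization step can be sketched as below. The label rescaling is shown as a log10 transform, which is purely our guess motivated by the “4 orders of magnitude” remark, since the published formula is elided in this extraction.

```python
import numpy as np

def standardize(X):
    """Feature-wise standardization: subtract the mean and divide by the
    standard deviation of each of the 29 components."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

def preprocess_labels(r):
    """ASSUMED logarithmic label rescaling (not the published formula)."""
    return np.log10(r)

def invert_labels(y):
    """Inverse of the assumed label transform, applied to network outputs."""
    return 10.0 ** y
```

Whatever the exact published transform is, the invariant is the same: `invert_labels(preprocess_labels(r))` must return the original key rates.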

Detailed data

Table 1 shows the relative deviations between the key rates predicted by our neural network and the corresponding key rates obtained by the numerical method for the given optimal light intensity at different distances and different excess noises. This table supplements Fig. 4.
Table 1

Relative deviations between key rates predicted by our neural network and the corresponding key rates obtained by the numerical method for the given optimal light intensity at different distances and different excess noises.

L (km)   ξ=0.002   ξ=0.005   ξ=0.008   ξ=0.011   ξ=0.014
5        0.27      0.16      0.15      0.16      0.15
10       0.17      0.14      0.14      0.12      0.12
15       0.14      0.12      0.11      0.10      0.10
20       0.12      0.10      0.09      0.08      0.09
25       0.11      0.09      0.08      0.07      0.10
30       0.09      0.09      0.06      0.08      0.11
35       0.08      0.07      0.06      0.09      0.11
40       0.07      0.07      0.08      0.10      0.13
45       0.07      0.08      0.09      0.10      0.14
50       0.08      0.09      0.10      0.11      0.14
55       0.09      0.10      0.11      0.12      0.15
60       0.09      0.11      0.11      0.12      0.14
65       0.10      0.11      0.12      0.13      0.14
70       0.10      0.11      0.12      0.13      0.13
75       0.10      0.12      0.12      0.13      0.14
80       0.10      0.12      0.13      0.13      0.15
85       0.11      0.13      0.13      0.14      0.17
90       0.11      0.13      0.14      0.14      0.20
95       0.11      0.14      0.14      0.15      0.19
100      0.11      0.14      0.14      0.14      0.06
Table 2 shows the time consumption of the neural network and the numerical method for excess noise of 0.008, 0.010 and 0.012. With the numerical method, each point with excess noise of approximately 0.01 takes 200 s on average, which greatly limits the efficiency with which the QKD system can calculate the secure key rate. In contrast, the neural network can calculate tens of thousands of key rates in 1 s. Considering that the QKD system itself needs a certain amount of time to collect data, the speed at which the neural network predicts key rates fully meets the requirements of practical applications.
Table 2

Time consumption of the neural network (NN) versus the numerical method (NM) with excess noise of 0.008, 0.010 and 0.012. L is the distance between Alice and Bob.

         ξ=0.008                  ξ=0.010                  ξ=0.012
L (km)   NM (s)      NN (s)      NM (s)      NN (s)      NM (s)      NN (s)
5        1.42×10^2   1.98×10^-4  1.54×10^2   3.28×10^-4  2.16×10^2   1.31×10^-4
10       7.86×10^1   7.25×10^-5  9.94×10^1   5.85×10^-5  1.27×10^2   4.70×10^-5
15       1.04×10^2   6.60×10^-5  1.72×10^2   5.70×10^-5  2.24×10^2   4.15×10^-5
20       1.09×10^2   6.50×10^-5  2.37×10^2   5.40×10^-5  3.07×10^2   4.30×10^-5
25       1.20×10^2   6.65×10^-5  2.45×10^2   6.30×10^-5  4.40×10^2   4.25×10^-5
30       1.98×10^2   5.65×10^-5  3.30×10^2   4.75×10^-5  4.92×10^2   4.20×10^-5
35       2.34×10^2   5.90×10^-5  3.71×10^2   5.90×10^-5  5.33×10^2   4.65×10^-5
40       2.47×10^2   5.70×10^-5  4.18×10^2   5.85×10^-5  5.72×10^2   4.60×10^-5
45       2.50×10^2   6.10×10^-5  2.73×10^2   5.70×10^-5  5.94×10^2   4.35×10^-5
50       2.62×10^2   6.35×10^-5  6.24×10^2   5.60×10^-5  5.79×10^2   4.55×10^-5
55       2.74×10^2   6.50×10^-5  5.55×10^2   5.10×10^-5  5.83×10^2   4.30×10^-5
60       2.68×10^2   6.65×10^-5  5.28×10^2   5.85×10^-5  5.96×10^2   4.30×10^-5
65       2.55×10^2   6.70×10^-5  5.48×10^2   5.10×10^-5  5.96×10^2   4.20×10^-5
70       2.72×10^2   6.55×10^-5  4.82×10^2   5.65×10^-5  5.91×10^2   5.30×10^-5
75       2.60×10^2   6.70×10^-5  4.78×10^2   6.70×10^-5  5.87×10^2   4.10×10^-5
80       2.30×10^2   6.00×10^-5  4.19×10^2   5.20×10^-5  5.57×10^2   4.35×10^-5
85       2.34×10^2   5.70×10^-5  3.63×10^2   5.95×10^-5  5.45×10^2   4.35×10^-5
90       1.99×10^2   5.75×10^-5  3.48×10^2   5.35×10^-5  4.37×10^2   4.10×10^-5
95       1.72×10^2   5.75×10^-5  2.92×10^2   5.10×10^-5  3.81×10^2   4.35×10^-5
100      1.54×10^2   5.85×10^-5  2.43×10^2   6.60×10^-5  3.47×10^2   4.65×10^-5

NM and NN are abbreviations for the numerical method and the neural network, respectively. L is the distance between Alice and Bob.
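The table above compares key rates obtained by the numerical method (NM) with those predicted by the neural network (NN) at several distances L. As an illustrative sketch of the underlying regression task only — not the authors' architecture or training data — the snippet below fits a tiny one-hidden-layer network to synthetic key rates that decay exponentially with distance. All constants (the 5×10⁻⁵ rate scale, the decay coefficient, the network width, learning rate) are assumptions chosen to mimic the order of magnitude in the table; the real labels in the paper come from a convex-optimization solver for discrete-modulated CV-QKD.

```python
import numpy as np

# Synthetic stand-in for the solver-generated training set:
# key rate ~ 5e-5 * exp(-0.02 * (L - 85)) over L in [80, 105] km (illustrative).
rng = np.random.default_rng(0)
L = rng.uniform(80.0, 105.0, size=(512, 1))      # distance between Alice and Bob, km
rate = 5e-5 * np.exp(-0.02 * (L - 85.0))         # synthetic "secure key rate"

# Normalize inputs and outputs so the small network trains quickly.
x = (L - 92.5) / 12.5
y = rate / 5e-5

# One hidden layer of 16 tanh units, trained by full-batch gradient descent
# on a mean-squared-error loss (a sketch, not the authors' actual model).
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)                     # hidden activations
    pred = h @ W2 + b2                           # network output
    err = pred - y                               # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)               # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def predict_rate(distance_km):
    """Predict the (synthetic) key rate for one or more distances in km."""
    xq = (np.asarray(distance_km, float).reshape(-1, 1) - 92.5) / 12.5
    return (np.tanh(xq @ W1 + b1) @ W2 + b2).ravel() * 5e-5

# Once trained, a prediction is a single forward pass -- which is why an NN
# can be orders of magnitude faster than re-running a convex optimization.
print(predict_rate([85.0, 100.0]))
```

A single forward pass like `predict_rate` involves only two small matrix products, which is what makes real-time key-rate extraction on low-power hardware plausible once the expensive numerical solver has been used offline to generate training labels.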

  15 in total

1.  Continuous variable quantum cryptography using coherent states.

Authors:  Frédéric Grosshans; Philippe Grangier
Journal:  Phys Rev Lett       Date:  2002-01-16       Impact factor: 9.161

2.  Quantum cryptography based on Bell's theorem.

Authors:  Artur K Ekert
Journal:  Phys Rev Lett       Date:  1991-08-05       Impact factor: 9.161

3.  Unconditional security proof of long-distance continuous-variable quantum key distribution with discrete modulation.

Authors:  Anthony Leverrier; Philippe Grangier
Journal:  Phys Rev Lett       Date:  2009-05-06       Impact factor: 9.161

4.  Experimental Machine Learning of Quantum States.

Authors:  Jun Gao; Lu-Feng Qiao; Zhi-Qiang Jiao; Yue-Chi Ma; Cheng-Qiu Hu; Ruo-Jing Ren; Ai-Lin Yang; Hao Tang; Man-Hong Yung; Xian-Min Jin
Journal:  Phys Rev Lett       Date:  2018-06-15       Impact factor: 9.161

5.  Quantum Autoencoders to Denoise Quantum Data.

Authors:  Dmytro Bondarenko; Polina Feldmann
Journal:  Phys Rev Lett       Date:  2020-04-03       Impact factor: 9.161

6.  Quantum machine learning: a classical perspective. (Review)

Authors:  Carlo Ciliberto; Mark Herbster; Alessandro Davide Ialongo; Massimiliano Pontil; Andrea Rocchetto; Simone Severini; Leonard Wossnig
Journal:  Proc Math Phys Eng Sci       Date:  2018-01-17       Impact factor: 2.704

7.  Training deep quantum neural networks.

Authors:  Kerstin Beer; Dmytro Bondarenko; Terry Farrelly; Tobias J Osborne; Robert Salzmann; Daniel Scheiermann; Ramona Wolf
Journal:  Nat Commun       Date:  2020-02-10       Impact factor: 14.919

8.  Finite-size security of continuous-variable quantum key distribution with digital signal processing.

Authors:  Takaya Matsuura; Kento Maeda; Toshihiko Sasaki; Masato Koashi
Journal:  Nat Commun       Date:  2021-01-13       Impact factor: 14.919

