
Inference for Convolutionally Observed Diffusion Processes.

Shogo H. Nakakita; Masayuki Uchida.

Abstract

We propose a new statistical observation scheme for diffusion processes, named convolutional observation, in which observations smoother than the path of an ordinary diffusion process can be handled by convolving the diffusion process with a kernel function with respect to the time parameter. We discuss estimation and test theories for the parameter determining the smoothness of the observation, as well as least-square-type estimation of the parameters in the diffusion and drift coefficients of the latent diffusion process. In addition to the theoretical discussion, we examine the performance of the estimators and the test in computational simulations, and present a real data analysis of an EEG recording whose observation can, with statistical significance, be regarded as smoother than an ordinary diffusion process.

Keywords:  convolutional observation; diffusion processes; parametric inference; partial observation; statistical modelling; stochastic differential equations

Year:  2020        PMID: 33286801      PMCID: PMC7597089          DOI: 10.3390/e22091031

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

We consider a d-dimensional diffusion process defined by the following stochastic differential equation:

dX_t = b(X_t, β) dt + a(X_t, α) dW_t,  X_0 = x_0,

where W is a standard r-dimensional Wiener process, x_0 is an R^d-valued random variable independent of W, α and β are unknown parameters belonging to compact and convex parameter spaces, and a and b are known functions. Our concern is statistical estimation of α and β from observation; θ* = (α*, β*) denotes the true value of θ = (α, β). We denote the observation as the discretised process {Y_{ih_n} : i = 0, …, n} with discretisation step h_n such that h_n → 0 and nh_n → ∞, where the convoluted process Y is defined as the convolution of X with an R-valued kernel function V with respect to the time parameter. In this paper, we specify V = V_ρ, a parametric kernel function such that V_0 is the Dirac delta function and, for ρ > 0, V_ρ is the uniform (box) kernel of width ρh_n; here ρ ≥ 0 is the smoothing parameter determining the smoothness of the observation. That is to say, the observed process is defined, for all i = 0, …, n, as

Y_{ih_n} = (V_ρ * X)(ih_n) = (ρh_n)^{-1} ∫_0^{ρh_n} X_{ih_n − s} ds  (ρ > 0),  Y_{ih_n} = X_{ih_n}  (ρ = 0).

Let us consider both the problem in which (i) ρ is a known parameter, and (ii) ρ is an unknown parameter whose true value, denoted ρ*, is estimated from the observation {Y_{ih_n}}. When ρ is assumed to be known, there is literature on parametric estimation of α and/or β based on observation schemes which can be represented as special cases of our scheme for specific ρ. If ρ = 0, our scheme is simply equivalent to parametric inference based on discretely observed diffusion processes, studied in [1,2,3,4,5,6,7,8] and the references therein. If ρ = 1, we can regard the problem as parametric estimation for integrated diffusion processes, discussed in Gloter [9], Ditlevsen and Sørensen [10], Gloter [11], Gloter and Gobet [12], and Sørensen [13]. Even in the case where some axes correspond to direct observation and the others to integrated observation, we give consistent estimators for α and β by considering the scheme of convolutionally observed diffusion processes; this is one of the contributions of our study.
A further contribution is to consider the scheme where ρ is unknown, and thereby to represent a microstructure noise which makes the observation smoother than the latent diffusion process itself, as can be seen in some biomedical time series data. Statistical modelling of biomedical time series with stochastic differential equations has been an eagerly studied topic, e.g., [14,15,16,17]. As [18] states, the existence of microstructure noise in financial data causes realised volatilities to increase as the subsampling frequency gets higher; for instance, see Figure 7.1 on p. 217 in [19]. However, realised volatilities of some biomedical data such as EEG decrease as the subsampling frequency increases. For instance, some time series data for the 2nd participant in the dataset named Two class motor imagery (002-2014) of BNCI Horizon 2020 [20] show a clear tendency of decreasing realised volatilities as the subsampling frequency increases. Figure 1 shows the path of the 2nd axis of the data S02E.mat of BNCI Horizon 2020 [20] for all 222 seconds (the observation frequency is 512 Hz, and hence the entire data size is 113,664) and that for the first second; it appears to fluctuate like a diffusion process.
Figure 1

The path of the second column of S02E.mat of BNCI Horizon 2020 [20] for all 222 seconds (left) and the first one second (right).

We define the realised volatility with subsampling as RV_n(k) := Σ_i |y_{(i+1)k} − y_{ik}|² for a sequence of real-valued observations (y_i), where k is the subsampling frequency parameter, and provide a plot of the realised volatilities of the 2nd axis of the data S02E.mat in Figure 2:
Figure 2

Realised volatilities with subsampling of the 2nd axis of data S02E.mat in two class motor imagery (002-2014) [20].
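The realised volatility with subsampling used in Figure 2 is just the sum of squared increments of the subsampled path. A minimal sketch in Python (the function name and interface are ours, not the paper's):

```python
import numpy as np

def realised_volatility(y, k):
    """Realised volatility of a 1-D sample path y with subsampling step k:
    the sum of squared increments of the path kept at every k-th point."""
    sub = y[::k]                      # keep every k-th observation
    return float(np.sum(np.diff(sub) ** 2))

# Example: for a Brownian-like (rough) path, the realised volatility is
# roughly stable across subsampling steps, since every step estimates the
# same quadratic variation.
rng = np.random.default_rng(0)
w = np.cumsum(rng.normal(scale=np.sqrt(1e-3), size=10_000))
rv_fine = realised_volatility(w, 1)
rv_coarse = realised_volatility(w, 10)
```

For the EEG data discussed in the text, `realised_volatility(y, k)` decreases as k decreases (i.e., as the subsampling frequency increases), which is the phenomenon the paper sets out to model.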

The altitudes of the graph, represented on the y-axis, correspond to the values of the realised volatilities with subsampling at every k-th observation, k being represented on the x-axis. It is observable that increasing the subsampling frequency results in decreasing realised volatilities, which cannot be explained by the existent major microstructure noises; e.g., see [21,22,23,24,25]. To explain this phenomenon, we consider a process smoother than the latent one, although ordinary microstructure noises make the observation rougher than the latent process, because the quadratic variation of a sufficiently smooth function is zero. One way to deal with observation smoother than the latent state is convolutional observation. As a concrete example, we show a convolutionally observed diffusion process and its characteristics in realised volatilities: let us consider a 1-dimensional stochastic differential equation defining an Ornstein–Uhlenbeck (OU) process. We simulate the stochastic differential equation by the Euler–Maruyama method (see [26]) and its convolution approximated by summation with the smoothing parameter ρ (for details, see Section 5). Figure 3 shows the latent diffusion process and the convoluted observation, and we can see that the observation is indeed smoothed compared to the latent state.
Figure 3

The left figure is the plot of the latent diffusion process, and the right one is that of the convolutionally observed process.
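The experiment behind Figure 3 can be sketched as follows. The OU parametrisation dX_t = β1(β2 − X_t)dt + α dW_t and the trailing box-kernel average are our reading of the (partly stripped) setup, so treat them as assumptions:

```python
import numpy as np

def euler_maruyama_ou(alpha, beta1, beta2, x0, n_steps, dt, rng):
    """Simulate dX_t = beta1*(beta2 - X_t) dt + alpha dW_t by Euler-Maruyama."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    dw = rng.normal(scale=np.sqrt(dt), size=n_steps)
    for i in range(n_steps):
        x[i + 1] = x[i] + beta1 * (beta2 - x[i]) * dt + alpha * dw[i]
    return x

def smooth_by_convolution(x, window):
    """Approximate the convolution with a box kernel of `window` grid points
    by a trailing moving average (Riemann-sum approximation)."""
    if window <= 1:
        return x.copy()
    kernel = np.ones(window) / window
    y = np.convolve(x, kernel, mode="full")[: len(x)]
    y[: window - 1] = x[: window - 1]   # leave the warm-up region untouched
    return y

rng = np.random.default_rng(1)
x = euler_maruyama_ou(alpha=1.0, beta1=1.0, beta2=0.0, x0=0.0,
                      n_steps=50_000, dt=1e-4, rng=rng)
y = smooth_by_convolution(x, window=100)
# The smoothed path has a much smaller realised volatility than the latent one.
rv_latent = np.sum(np.diff(x) ** 2)
rv_smooth = np.sum(np.diff(y) ** 2)
```

Plotting `x` against `y` reproduces the qualitative picture of Figure 3: the convoluted observation is visibly smoother than the latent path.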

In Figure 4, we also give the plot of the realised volatilities of the convolutionally observed process with subsampling, as in Figure 2.
Figure 4

The realised volatilities of the convolutionally observed diffusion process with subsampling.

It is seen that the convolutional observation of a diffusion process also has the characteristic of decreasing realised volatilities as the subsampling frequency increases, which can be seen in some biological data such as BNCI Horizon 2020 [20]. Of course, graphically comparing characteristics of simulation and real data is insufficient to verify the convolutional observation with smoothing parameter ρ in the 1-dimensional case; therefore, we propose a statistical estimation method for unknown ρ and a hypothesis test with the null hypothesis H0: ρ = 0 and the alternative H1: ρ > 0 from convolutional observation in Section 3. Moreover, in Section 6, we examine the real EEG data plotted in Figure 2 by the statistical hypothesis test we propose, and see that it is more appropriate to consider the data as a convolutional observation of a latent diffusion process with ρ > 0 rather than a direct observation of the latent process, which indicates the validity of dealing with the convolutional observation scheme with unknown ρ. The paper is composed of the following sections: Section 2 gives the notation and assumptions used in this paper; Section 3 discusses the estimation of and test for the smoothing parameter ρ, which provides the tools to examine whether we should consider the convolutional observation scheme; Section 4 proposes the quasi-likelihood functions for the parameters α and β of the diffusion process, and corresponding consistent estimators; Section 5 presents computational simulations examining the theoretical results of the previous sections; and Section 6 shows an application of the proposed methods in real data analysis.

2. Notations and Assumptions

2.1. Notations

First of all, we set up the notation used throughout this paper; in particular, we write A(x, α) for the matrix-valued function a(x, α)^{⊗2}. The continuity of this function is shown in the supplementary material; for a detailed discussion of its necessity, see Remark 4. For every matrix A, Aᵀ is the transpose of A, and A^{⊗2} := AAᵀ. For every pair of matrices A and B of the same size, A[B] := tr(ABᵀ). v^{(ℓ)} and A^{(ℓ1,ℓ2)} denote the ℓ-th element of a vector v and the (ℓ1,ℓ2)-th element of a matrix A, respectively. |v| and |A| denote the corresponding Euclidean norms. (Ω, F, (F_t)_{t≥0}, P) denotes the stochastic basis. μ(f) denotes the integral of a μ-integrable function f, where μ is a measure.
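Assuming the conventions common in this literature, A^{⊗2} = AAᵀ and A[B] = tr(ABᵀ) (our reading of the stripped formulas), a quick numerical check:

```python
import numpy as np

def outer_square(a):
    """A^{(x)2} := A A^T for a matrix A."""
    return a @ a.T

def bracket(a, b):
    """A[B] := tr(A B^T), the Frobenius inner product of same-sized matrices."""
    return float(np.trace(a @ b.T))

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.eye(2)
s = outer_square(a)      # [[5, 11], [11, 25]]
v = bracket(a, b)        # trace of a, i.e. 5.0
```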

2.2. Assumptions

With respect to X, we assume the following conditions. [A1]: the coefficients a and b satisfy, for a constant C, Lipschitz and linear-growth bounds for all x, and X is ergodic, i.e., there exists a unique invariant measure ν on R^d such that the law of large numbers holds for all functions with polynomial growth; for sufficient conditions for these regularity conditions, see [A1] and Remark 1 in Uchida and Yoshida. [A2]: a and b have continuous derivatives satisfying polynomial-growth bounds uniformly in the parameters. With the invariant measure ν and the true value θ* of θ, we define the limiting contrast functions, for which we assume the following identifiability condition. [A3]: there exist positive constants χ and χ′ such that the limiting contrasts are bounded below by χ|α − α*|² and χ′|β − β*|² for all α and β.

3. Estimation and Test of the Smoothing Parameter

In this section, we discuss the case where the smoothing parameter ρ of the kernel function is unknown. Its estimation is significant for the estimation of α and β, since we plug the estimate of ρ into the quasi-likelihood functions for α and β. The test problem for the hypotheses H0: ρ = 0 and H1: ρ > 0 is also important for examining whether our framework of convolutional observation is meaningful.

3.1. Estimation of the Smoothing Parameter

For simplicity of notation, let us consider the case d = 1; otherwise the discussion is quite parallel. Let us consider the estimation of ρ using the following statistics: the full quadratic variation of the observation, which converges by Proposition 3 in the supplementary material (Appendix A), and the reduced quadratic variation, which converges in probability under [A1]. We then define R as the ratio of these statistics; R is a continuous and strictly monotone function, so we can define the estimator ρ̂_n through the inverse of R, and the continuous mapping theorem for convergence in probability verifies the following result: under [A1], ρ̂_n converges in probability to ρ*. The inverse of R can be computed explicitly or numerically.
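The record drops the explicit formulas for the full and reduced quadratic variations, so the sketch below only illustrates the final numerical step: since R is continuous and strictly monotone, ρ̂ is obtained by inverting the observed ratio, e.g. by bisection. The particular R used here is a stand-in, not the paper's:

```python
def invert_monotone(R, target, lo, hi, tol=1e-10):
    """Numerically invert a continuous, strictly monotone function R on
    [lo, hi], i.e. find rho with R(rho) = target, by bisection."""
    increasing = R(lo) < R(hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (R(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Stand-in ratio function (strictly decreasing on [0, 1]); the paper derives
# the true R from the limits of the full and reduced quadratic variations.
R = lambda rho: 1.0 - rho / 3.0
rho_hat = invert_monotone(R, target=0.8, lo=0.0, hi=1.0)
# R(rho_hat) = 0.8, i.e. rho_hat = 0.6
```

The same routine covers the multi-dimensional setting of Section 5.2, where the text notes that "the inverse of R is computationally obtained".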

3.2. Test for Smoothed Observation

For each significance level ε ∈ (0, 1), we consider the hypothesis test of H0: ρ = 0 against H1: ρ > 0, based on a test statistic T_n (when d = 1 we drop the axis index). Under H0 and [A1], T_n converges in distribution to the standard Gaussian law, and we also obtain a divergence result under H1 supporting the consistency of the test. These results are intuitive, since the quadratic variation and the reduced one, with appropriate scaling, should converge to the same value if ρ = 0. Hence, when we set the significance level ε, we obtain a rejection region determined by the corresponding quantile of the standard Gaussian distribution function Φ. Theorem 3 supports the consistency of the test. This test is essential for examining the validity of considering the scheme of convolutional observation: if ρ = 0, then the ordinary observation scheme can be applied, but if ρ > 0, then we have the motivation to consider the convolutional observation scheme.
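Operationally, the decision rule compares T_n with a standard-normal quantile. The sketch below assumes a one-sided rejection in the lower tail; both the tail direction and the construction of T_n are our assumptions, since the record omits the formula:

```python
from statistics import NormalDist

def reject_h0(t_n, eps=0.05):
    """One-sided test of H0: rho = 0 against H1: rho > 0 at level eps.
    Assumed convention: reject when T_n falls below the lower eps-quantile
    of the standard Gaussian distribution."""
    critical = NormalDist().inv_cdf(eps)   # e.g. about -1.645 for eps = 0.05
    return t_n < critical

# A statistic near zero is consistent with H0; a very negative one rejects it.
assert not reject_h0(0.3)
assert reject_h0(-3.0, eps=0.01)
```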

4. Least Square Estimation of the Diffusion and Drift Parameters

Let us set the least-square quasi-likelihood functions for α and β, and define the least-square estimators α̂_n and β̂_n as their minimisers, using the known value of ρ when ρ is known, and plugging in the estimator ρ̂_n of Section 3 when ρ is unknown. Under [A1]–[A3], α̂_n and β̂_n are consistent.
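The explicit least-square quasi-likelihoods are stripped from this record; the following only sketches the general shape of such a contrast for the drift parameters, under our stand-in OU parametrisation dX_t = β1(β2 − X_t)dt + α dW_t:

```python
import numpy as np

def ls_contrast_drift(beta, y, dt):
    """Least-square-type contrast for drift parameters (beta1, beta2):
    the sum of squared one-step Euler residuals."""
    beta1, beta2 = beta
    resid = np.diff(y) - beta1 * (beta2 - y[:-1]) * dt
    return float(np.sum(resid ** 2))

def ls_estimate_drift(y, dt, grid1, grid2):
    """Minimise the contrast over a (crude, dependency-free) parameter grid."""
    return min(((b1, b2) for b1 in grid1 for b2 in grid2),
               key=lambda b: ls_contrast_drift(b, y, dt))

# Noiseless check: data generated exactly by the Euler scheme with
# beta1 = 2, beta2 = 1 gives a zero contrast at the true parameters.
dt = 0.01
y = [0.0]
for _ in range(200):
    y.append(y[-1] + 2.0 * (1.0 - y[-1]) * dt)
y = np.array(y)
b_hat = ls_estimate_drift(y, dt, [1.5, 2.0, 2.5], [0.5, 1.0, 1.5])
# b_hat == (2.0, 1.0)
```

In the paper's setting the contrast is additionally corrected for the convolution (via ρ or ρ̂_n); the grid search here is only for illustration — any numerical optimiser would do.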

5. Simulations

In this simulation section, we only consider the case where ρ is unknown and must be estimated from the data with the method proposed in Section 3.

5.1. 1-Dimensional Simulation

We examine a 1-dimensional stochastic differential equation whose solution is a 1-dimensional Ornstein–Uhlenbeck (OU) process. The procedure of the simulation is as follows: in the first place, we iterate an approximated OU process by the Euler–Maruyama scheme (for example, see [26]) with a precision parameter determining the fineness of the approximation; secondly, we give the approximation of the convolution by summation over the fine grid, with sampling frequency h_n. In this Section 5.1, we fix the true values of α and β as α* = 3 and (β1*, β2*) = (2, 1), and change the true value of ρ to see the corresponding changes in the performance of the estimation of ρ, α and β, and of the test of H0: ρ = 0, in comparison with estimation by an existent method called local Gaussian approximation (LGA) for parametric estimation of discretely observed diffusion processes (e.g., see [4]), which does not concern convolutional observation. The number of iterations for each ρ* is 1000. In the first place, we examine the estimation and test with small values of ρ*, namely ρ* = 0.0, 0.1, …, 1.0, to observe how the performance of the statistics changes with ρ*. Table 1 summarises the simulation results for ρ̂_n with the respective empirical means and root mean square errors (RMSE).
Table 1

Estimation performance of ρ̂_n for small ρ*.

ρ*      0.0      0.1      0.2      0.3      0.4      0.5      0.6      0.7      0.8      0.9      1.0
mean    0.00990  0.0971   0.198    0.298    0.398    0.498    0.598    0.699    0.799    0.899    0.999
RMSE    (0.0182) (0.0256) (0.0235) (0.0215) (0.0197) (0.0180) (0.0164) (0.0150) (0.0135) (0.0123) (0.0110)
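The empirical means and RMSEs reported in Table 1 (and in the later tables) are plain Monte Carlo summaries; for reference, a sketch of the bookkeeping (the names are ours):

```python
import numpy as np

def mean_and_rmse(estimates, true_value):
    """Empirical mean and root mean square error of a vector of estimates."""
    estimates = np.asarray(estimates, dtype=float)
    mean = float(np.mean(estimates))
    rmse = float(np.sqrt(np.mean((estimates - true_value) ** 2)))
    return mean, rmse

# Example: three estimates scattered around a true value of 0.5
m, r = mean_and_rmse([0.4, 0.5, 0.6], 0.5)
# m is about 0.5 and r is about 0.0816 (= sqrt(0.02 / 3))
```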
We can see that the proposed estimator works well for small ρ*. With respect to the performance of the test statistic proposed in Section 3.2, Table 2 shows the empirical ratio of iterations in which T_n is lower than some typical critical values, where Φ indicates the distribution function of the 1-dimensional standard Gaussian distribution, as well as the maximum value of T_n in the 1000 iterations.
Table 2

Empirical ratio of the test statistic T_n less than some critical values, and the maximum value of T_n in 1000 iterations.

         Empirical ratio of T_n less than …                            Max. of T_n
         Φ⁻¹(0.10)  Φ⁻¹(0.05)  Φ⁻¹(0.025)  Φ⁻¹(0.01)  Φ⁻¹(0.001)
ρ=0.0 0.101 0.053 0.025 0.005 0.000 3.060
ρ=0.1 0.989 0.980 0.966 0.914 0.759 0.710
ρ=0.2 1.000 1.000 1.000 1.000 1.000 4.593
ρ=0.3 1.000 1.000 1.000 1.000 1.000 9.341
ρ=0.4 1.000 1.000 1.000 1.000 1.000 13.985
ρ=0.5 1.000 1.000 1.000 1.000 1.000 19.152
ρ=0.6 1.000 1.000 1.000 1.000 1.000 24.816
ρ=0.7 1.000 1.000 1.000 1.000 1.000 30.848
ρ=0.8 1.000 1.000 1.000 1.000 1.000 37.557
ρ=0.9 1.000 1.000 1.000 1.000 1.000 44.829
ρ=1.0 1.000 1.000 1.000 1.000 1.000 52.759
Even for ρ* = 0.1, the simulation result supports the theoretical discussion of the test and its consistency. Because of the magnitudes of T_n, all the iterations with ρ* ≥ 0.2 result in rejection of H0 even at the significance level 0.001. Let us see the estimation of α and β by our proposed method and by the LGA in Table 3.
Table 3

Estimation of α and β by the proposed method and by LGA for small ρ*.

              The proposed method               LGA
              α        β1       β2             α        β1       β2
True Value 3.0 2.0 1.0 3.0 2.0 1.0
ρ=0.0 mean 3.004 2.091 1.036 2.999 2.095 1.037
RMSE (0.0109) (0.318) (0.497) (0.00679) (0.320) (0.497)
ρ=0.1 mean 2.999 2.091 1.035 2.949 2.026 1.003
RMSE (0.0146) (0.319) (0.496) (0.0509) (0.297) (0.480)
ρ=0.2 mean 2.998 2.091 1.035 2.898 1.955 0.967
RMSE (0.0142) (0.319) (0.496) (0.102) (0.290) (0.464)
ρ=0.3 mean 2.998 2.092 1.036 2.846 1.885 0.932
RMSE (0.0139) (0.319) (0.497) (0.155) (0.299) (0.452)
ρ=0.4 mean 2.998 2.091 1.036 2.792 1.815 0.897
RMSE (0.0135) (0.319) (0.497) (0.208) (0.324) (0.442)
ρ=0.5 mean 2.998 2.092 1.036 2.738 1.744 0.862
RMSE (0.0132) (0.319) (0.497) (0.262) (0.361) (0.436)
ρ=0.6 mean 2.998 2.091 1.036 2.683 1.674 0.827
RMSE (0.0129) (0.319) (0.497) (0.317) (0.408) (0.434)
ρ=0.7 mean 2.998 2.092 1.036 2.626 1.604 0.792
RMSE (0.0126) (0.319) (0.497) (0.374) (0.460) (0.434)
ρ=0.8 mean 2.998 2.092 1.036 2.568 1.534 0.757
RMSE (0.0124) (0.319) (0.496) (0.432) (0.517) (0.439)
ρ=0.9 mean 2.998 2.092 1.036 2.509 1.464 0.722
RMSE (0.0121) (0.319) (0.497) (0.491) (0.578) (0.445)
ρ=1.0 mean 2.998 2.091 1.036 2.449 1.394 0.687
RMSE (0.0119) (0.319) (0.497) (0.551) (0.640) (0.456)
Note that the biases of the estimation by LGA increase as the true value of ρ gets larger, while the estimation by our proposed method is not influenced by the true value of ρ. This simulation result supports the theoretical discussion in Section 4 stating the consistency of α̂_n and β̂_n, and the necessity of considering the convolutional observation scheme, where the LGA method does not work properly. Secondly, we consider the estimation and test with large ρ*, namely ρ* = 10, 15, 20, to see whether our proposed methods work even for large ρ*. We note that the maximum values of the test statistic for ρ* = 10, 15, 20 in 1000 iterations are so large that we can detect the smoothed observation easily. Table 4 shows the empirical means and RMSEs of ρ̂_n for ρ* = 10, 15, 20, and we can see that the RMSEs increase as ρ* increases; this indicates the difficulty of estimating ρ accurately when it is large.
Table 4

The performance of ρ̂_n for ρ* = 10, 15, 20 in 1000 iterations.

              ρ* = 10   ρ* = 15   ρ* = 20
mean of ρ̂_n    9.919    14.980    19.751
RMSE of ρ̂_n   (0.145)   (0.240)   (0.409)
Table 5 summarises the estimation of α and β by means and RMSEs, and shows that although the large RMSE of ρ̂_n results in an increased RMSE of α̂_n, estimation by our method is substantially better than that by LGA.
Table 5

Estimation of α and β by the proposed method and by LGA for large ρ*.

              The proposed method               LGA
              α        β1       β2             α        β1       β2
True Value 3.0 2.0 1.0 3.0 2.0 1.0
ρ=10 mean 2.989 2.101 1.030 0.933 0.204 0.0811
RMSE (0.0347) (0.323) (0.496) (2.067) (1.796) (0.920)
ρ=15 mean 2.996 2.095 1.027 0.765 0.138 0.0473
RMSE (0.0475) (0.321) (0.495) (2.235) (1.862) (0.953)
ρ=20 mean 2.977 2.090 1.024 0.664 0.104 0.0302
RMSE (0.0526) (0.319) (0.493) (2.336) (1.896) (0.970)

5.2. 2-Dimensional Simulation

We consider a 2-dimensional stochastic differential equation whose solution is a 2-dimensional OU process. The simulation is conducted with the following settings: firstly, we iterate the OU process by the Euler–Maruyama scheme with a discretisation step finer than the sampling step, controlled by a precision parameter for the approximation of the convolution; in the second place, we approximate the convoluted process by summation over this fine grid; the true values of ρ, α and β are set as ρ* = (2.0, 4.0), α* = (2.0, 0.0, 3.0) and β* = (2.0, 0.4, 0.0, 0.1, 3.0, 5.0); the parameter spaces are compact and convex; the total number of iterations is set to 1000. Table 6 summarises the estimation of ρ with the method proposed in Section 3 (the inverse of R is computationally obtained), with the empirical means and empirical RMSEs of ρ̂_n in 1000 iterations. We can see that ρ̂_n estimates the true value of ρ precisely enough in this result, which is significant for estimating the other parameters α and β.
Table 6

Summary of the estimates of ρ.

ρ1 ρ2
true value 2.0 4.0
empirical mean 1.988 3.966
empirical RMSE (0.0207) (0.0514)
We also note that the maximum values over the 1000 iterations of the test statistics for smoothed observation proposed in Section 3.2 are, for each axis, so large that the corresponding p-values are negligibly small; therefore, we can conclude that it is possible to detect the smoothed observation with the proposed test statistic in this case. With respect to the estimation of α and β, we compare the estimates by our proposed method with those by LGA, which does not concern convolutional observation. Table 7 is the summary of the α estimates by both methods:
Table 7

Summary of the estimates of α.

α1 α2 α3
True Value 2.0 0.0 3.0
Our proposal  mean  1.993     0.000256   2.992
              RMSE  (0.0115)  (0.00739)  (0.0213)
LGA           mean  1.295     0.00320    1.442
              RMSE  (0.705)   (0.0154)   (1.558)
We can see that the estimation precision for α by our proposal outperforms that by LGA. These results support the validity of our estimation method when we have convolutional observation of diffusion processes. Regarding β, the simulation results are summarised in Table 8:
Table 8

Summary of the estimates of β.

β1 β2 β3 β4 β5 β6
True Value 2.0 0.4 0.0 0.1 3.0 5.0
Our proposal  mean  2.137    0.408    0.0439   0.0788   3.103    5.091
              RMSE  (0.362)  (0.252)  (0.540)  (0.473)  (0.399)  (0.777)
LGA           mean  0.917    0.340    0.326    0.696    0.221    1.243
              RMSE  (1.093)  (0.802)  (0.386)  (0.804)  (3.242)  (3.765)
Though the estimation of β3 by our method has a smaller bias than that by LGA, its RMSE is larger than that of LGA; for the other parameters, our proposed method outperforms LGA. We can conclude that our proposal for the estimation of α and β, which takes convolutional observation into account, performs better than estimation that does not consider this observation scheme.

6. Real Data Analysis

In this section, we analyse the EEG dataset named S02E.mat provided in "2. Two class motor imagery (002-2014)" of BNCI Horizon 2020 [20]. The datasets, including S02E.mat, are also studied by Steyrl et al. [27].

6.1. Estimation and Test for the Smoothing Parameters

In the first place, we pick up the first 15 axes of the dataset and compute ρ̂_n and T_n proposed in Section 3.1 and Section 3.2, respectively. The results are shown in Table 9.
Table 9

The values of ρ̂_n and T_n for the first 15 axes of S02E.mat of BNCI Horizon 2020 [20].

axis    1st      2nd      3rd      4th      5th
ρ̂_n    0.449    1.037    0.894    0.736    0.937
T_n     20.398   58.631   46.649   35.201   49.741

axis    6th      7th      8th      9th      10th
ρ̂_n    0.951    0.971    1.017    0.958    0.967
T_n     51.392   52.607   55.455   51.221   51.797

axis    11th     12th     13th     14th     15th
ρ̂_n    0.949    0.649    0.952    0.977    0.932
T_n     50.457   30.094   50.633   50.978   48.842
We can observe that all 15 time series have a smoothing parameter ρ > 0 with statistical significance at ordinary significance levels. These results motivate the use of the parametric estimation methods proposed in Section 4 when fitting stochastic differential equations to these data.

6.2. Parametric Estimation for a Diffusion Process

We fit a 1-dimensional OU process to the time series in the 2nd column of the data file S02E.mat, with 512 Hz observation for 222 s (the plot of the path can be seen in Figure 1); its ρ̂_n is the largest among those for the 15 axes, and it is larger than 0 with statistical significance. According to the simulation results in Section 5.1, a smoothing parameter of this size causes critical biases if we estimate α and β with the LGA method, which does not concern the convolutional observation scheme. The stochastic differential equation with parameters α and β is the OU model of Section 5.1. We set 5 s as the time unit: hence n = 113,664 and h_n = 1/2560. If we fit the OU process with the LGA method, i.e., without concerning the convolutional observation scheme, we obtain one fitting result; fitting α and β with the least square method proposed in Section 4 gives a fitting result substantially different from that by LGA. Hence these results indicate the significance of examining whether the observation is convoluted with smoothing parameter ρ > 0, since otherwise the estimation is strongly biased.

7. Summary

We have discussed the convolutional observation scheme, which deals with observation smoother than that of ordinary diffusion processes. The first contribution is to propose this new observation scheme, together with a statistical test to confirm whether the scheme is valid for real data. The second is to prove the consistency of the estimator for the smoothing parameter ρ, and of those for the parameters α and β in the diffusion and drift coefficients of the latent diffusion process X. Thirdly, we have examined the performance of these estimators and the test statistics in computational simulations, and verified that they work well in realistic settings. Fourthly, we have shown a real example of observation where ρ > 0 holds with statistical significance. If we combine the test for noise detection by Nakakita and Uchida [28] and the test for smoothed observation proposed in this paper, we can test whether the observed process is a diffusion or not, in the sense that the observation is noisy or smoothed. Note that the realised volatilities of noisy observations of diffusion processes increase as the observation frequency increases, while those of smoothed observations decrease as the frequency grows; in this respect, noisy observation in the ordinary sense and smoothed observation are 'opposite' to each other. These contributions, especially the third one, will motivate further study of statistical approaches for convolutionally observed diffusion processes, such as estimation of the kernel function V appearing in the convoluted process, test theory for the parameters α and β via likelihood-ratio-type test statistics (for example, see [29,30]), and large deviation inequalities for quasi-likelihood functions with the associated discussion of Bayes-type estimators, e.g., [6,31,32,33].
Through these future works, it is expected that the applicability of stochastic differential equations in real data analysis will be enhanced, with contributions to areas involving high-frequency observation of phenomena, such as EEG.

8. Proofs

8.1. The Results for Some Laws of Large Numbers

In this subsection, we give the notation and the statements of propositions without proofs, except for Proposition 3; the detailed proofs are given in the supplementary material. We consider a class of R-valued kernel functions for which the required moment bounds hold; one sufficient condition for membership follows from the Cauchy–Schwarz inequality and Fubini's theorem, and it is easily checked that our kernel belongs to this class. We set a sequence of kernels and define the associated random quantities, where the functions involved are of class C² and, together with their first and second derivatives, are of at most polynomial growth uniformly in the parameter; under [A1], the corresponding laws of large numbers hold. We then evaluate the quantities B and G for our kernel (for the derivation, see the supplementary material), using the independent increments of the Wiener process. By following the proof of Proposition 3, it suffices to evaluate the asymptotic behaviour of the reduced quadratic variation, distinguishing the cases of small and large ρ; hence we obtain the proof (for details, see the supplementary material). □ Continuity of R is obvious, and monotonicity is obtained by a case-by-case evaluation; the inverse can then be obtained directly. □ The consistency of ρ̂_n follows from Lemmas 1 and 2 and the continuous mapping theorem. □ The asymptotic behaviour under H0 can be proved by using Lemma 7 in Kessler [4], Proposition 7 in Nakakita and Uchida [28], and Slutsky's theorem. □ By Lemma 1, it suffices to show the divergence of the leading term, which follows from Proposition A in Gloter [9]; a parallel result holds for the other component. Hence we obtain the result. □ We only deal with the case where ρ is unknown, because the discussion for the case where ρ is known is parallel. First of all, we prove the consistency of α̂_n.
We obtain the required convergence because the continuous mapping theorem holds; it then follows from Propositions 1 and 3 that the contrast converges in probability uniformly in the parameter, where the remainder term converges in probability to zero uniformly. The consistency of α̂_n is then obtained in the same way as in Kessler [4], with Assumption [A3]. In the next place, we consider the consistency of β̂_n. Firstly, we consider the case where ρ* corresponds to an integer. By Assumption [A3], it is sufficient to show the uniform convergence of the drift contrast; using the independent increments of the Wiener process, Propositions 1 and 2 verify the required evaluations, and the exact convergences hold by the computations above. Therefore, the claim follows for any ε > 0. For the case where ρ* does not correspond to an integer, we similarly obtain the result by the analogous evaluations. Hence it is shown that β̂_n is consistent, with Assumption [A3]. □
Linked references (2 in total):

1. David Steyrl; Reinhold Scherer; Josef Faller; Gernot R. Müller-Putz. Random forests in non-invasive sensorimotor rhythm brain-computer interfaces: a practical and convenient non-linear classifier. Biomed Tech (Berl), 2016-02. Impact factor: 1.411.

2. Sophie Donnet; Adeline Samson. A review on estimation of stochastic differential equations for pharmacokinetic/pharmacodynamic models (review). Adv Drug Deliv Rev, 2013-03-22. Impact factor: 15.470.
