
Kernel Risk-Sensitive Mean p-Power Error Algorithms for Robust Learning.

Tao Zhang1,2, Shiyuan Wang1,2, Haonan Zhang1,2, Kui Xiong1,2, Lin Wang1,2.   

Abstract

As a nonlinear similarity measure defined in the reproducing kernel Hilbert space (RKHS), the correntropic loss (C-Loss) has been widely applied in robust learning and signal processing. However, the highly non-convex nature of C-Loss results in performance degradation. To address this issue, a convex kernel risk-sensitive loss (KRL) has been proposed to measure similarity in the RKHS, namely the risk-sensitive loss defined as the expectation of an exponential function of the squared estimation error. In this paper, a novel nonlinear similarity measure, the kernel risk-sensitive mean p-power error (KRP), is proposed by incorporating the mean p-power error into the KRL, yielding a generalization of the KRL measure. The KRP with p = 2 reduces to the KRL, and can outperform the KRL when an appropriate p is configured in robust learning. Some properties of the KRP are presented and discussed. To improve the robustness of the kernel recursive least squares (KRLS) algorithm and reduce its network size, two robust recursive kernel adaptive filters, the recursive minimum kernel risk-sensitive mean p-power error algorithm (RMKRP) and its quantized version (QRMKRP), are proposed in the RKHS under the minimum kernel risk-sensitive mean p-power error (MKRP) criterion. Monte Carlo simulations are conducted to confirm the superiority of the proposed RMKRP and its quantized version.


Keywords:  correntropic; kernel adaptive filters; kernel risk-sensitive mean p-power error; quantized; recursive

Year:  2019        PMID: 33267302      PMCID: PMC7515077          DOI: 10.3390/e21060588

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

Online kernel-based learning extends kernel methods to online settings in which the data arrive sequentially; it has been widely applied in signal processing thanks to its excellent performance on nonlinear problems [1]. The development of kernel methods is of great significance for practical applications. In kernel methods, the input data are transformed from the original space into a reproducing kernel Hilbert space (RKHS) using the kernel trick [2]. As representatives of the kernel methods, kernel adaptive filters (KAFs) provide an effective way to transform a nonlinear problem into a linear one and have been widely used in system identification and time-series prediction [3,4,5]. Generally, KAFs are designed separately for Gaussian and non-Gaussian noises from the standpoint of the cost function. For Gaussian noises, second-order similarity measures of the errors are generally used as the cost function of KAFs to achieve desirable filtering accuracy. KAFs based on second-order similarity measures are mainly divided into three categories, i.e., the kernel least mean square (KLMS) algorithm [6], the kernel affine projection algorithm (KAPA) [7], and the kernel recursive least squares (KRLS) algorithm [8]. However, the network size of KAFs increases linearly with the length of training, leading to large computational and storage burdens. To curb this structure growth, many sparsification methods have been proposed, such as the surprise criterion (SC) [9], novelty criterion (NC) [10], coherence criterion [11], and approximate linear dependency (ALD) criterion [8]. However, these sparsification methods simply discard the redundant data, which reduces filtering accuracy. Unlike the aforementioned sparsification methods, vector quantization (VQ) utilizes the redundant data to update the weights, improving accuracy.
The VQ is combined into KAFs to generate quantized KAFs, e.g., the quantized kernel least mean square algorithm (QKLMS) [12] and the quantized kernel recursive least squares algorithm (QKRLS) [13]. However, the second-order similarity measures used in the aforementioned algorithms contain only the second-order statistics of the errors and therefore cannot address non-Gaussian noises or outliers efficiently [14]. Thus, it is very important to design cost functions beyond second-order error statistics for combating non-Gaussian noises. The non-second-order similarity measures can be divided into three categories, i.e., the mean p-power error (MPE) criterion [15], information theoretic learning (ITL) [14], and risk-sensitive loss (RL) based criteria [16,17]. The MPE criterion, based on the pth absolute moment of the error, can deal with non-Gaussian data efficiently given a proper p-value. In general, MPE is robust to large outliers for suitably small p [15], generating robust adaptive filters, e.g., the kernel least mean p-power (KLMP) algorithm [18] and the kernel recursive least mean p-power (KRLP) algorithm [18]. ITL can incorporate the complete distribution of the errors into the learning process, improving filtering precision and robustness to outliers. The most widely used ITL criterion is the maximum correntropy criterion (MCC) [19,20,21,22,23,24]. As a local similarity measure defined as a generalized correlation in the RKHS, the correntropy used in MCC can leverage higher-order statistics of the data to combat outliers [25]. However, the performance surface of the correntropic loss (C-Loss) is highly non-convex, which may lead to poor convergence. In the RL-based criteria, e.g., minimum risk-sensitive loss [16] and minimum kernel risk-sensitive loss (MKRL) [17,26], the risk-sensitive loss defined in the RKHS has a much more convex performance surface, which makes it more efficient than MCC at combating non-Gaussian noises or outliers [17,26].
However, since the MKRL uses the stochastic gradient descent (SGD) method to update its weights, desirable filtering performance cannot be achieved for some complex nonlinear problems. Recursive update rules with excellent tracking ability can improve the filtering performance of adaptive filtering algorithms [21]; for example, KRLS, based on a recursive update rule, significantly improves on the filtering performance of the SGD-based KLMS. To the best of our knowledge, however, a recursive MKRL-type algorithm achieving desirable filtering performance in the RKHS has not yet been proposed. In this paper, to inherit the advantages of both KRL and MPE for robustness improvement, we propose the risk-sensitive mean p-power error (RP), defined as the expectation of an exponential function of the pth absolute moment of the estimation error, and its kernel version (KRP). The KRP can outperform the KRL with an appropriate p-value in robust learning, and the KRP with p = 2 reduces to the KRL. The proposed KRP criterion is then used to derive a novel recursive minimum kernel risk-sensitive mean p-power error (RMKRP) algorithm by combining the weighted output information. Furthermore, to curb the growth of network size in RMKRP, VQ is combined into RMKRP to generate the quantized RMKRP (QRMKRP). The rest of this paper is organized as follows. In Section 2, we define the KRP and give some basic properties. In Section 3, the KRP criterion is used to develop the recursive RMKRP algorithm, and the vector quantization method is applied to RMKRP to generate QRMKRP. In Section 4, Monte Carlo simulations are conducted to validate the superiority of the proposed algorithms on nonlinear examples. The conclusion is given in Section 5.

2. Kernel Risk-Sensitive Mean p-Power Error

2.1. Definition

According to [17], the risk-sensitive loss can be defined in the RKHS, yielding the kernel risk-sensitive loss (KRL). Given two arbitrary scalar random variables X and Y, the KRL is defined by

L_λ(X, Y) = (1/λ) E[ exp( (λ/2) ‖φ(X) − φ(Y)‖²_H ) ],    (1)

where λ > 0 is a risk-sensitive scalar parameter; φ(·) is a nonlinear mapping induced by a Mercer kernel κ_σ(·, ·), which transforms the data from the original space into the RKHS H equipped with an inner product satisfying κ_σ(x, y) = ⟨φ(x), φ(y)⟩_H; E[·] denotes the mathematical expectation; ‖·‖_H denotes the norm in the RKHS H; and F_XY(x, y) denotes the joint distribution function of (X, Y). A shift-invariant Gaussian kernel with bandwidth σ is given as follows:

κ_σ(x, y) = exp( −(x − y)² / (2σ²) ).    (2)

However, the joint distribution F_XY is usually unknown, and only N samples {(x_i, y_i)}, i = 1, …, N, are available. Hence, a nonparametric estimate of F_XY is obtained by applying the Parzen windows [19]. Note that the squared distance in the RKHS is calculated by using the kernel trick and (2), i.e., ‖φ(x) − φ(y)‖²_H = κ_σ(x, x) + κ_σ(y, y) − 2κ_σ(x, y) = 2(1 − κ_σ(x, y)). In this paper, we define a new non-second-order similarity measure in the RKHS, i.e., the kernel risk-sensitive mean p-power error (KRP) loss. Given two random variables X and Y, the KRP loss is defined by

L_{λ,p}(X, Y) = (1/λ) E[ exp( (λ/2) ‖φ(X) − φ(Y)‖^p_H ) ],    (3)

where p > 0 is the power parameter. Note that the KRL can be regarded as the special case of the KRP with p = 2. However, the joint distribution of X and Y is usually unknown in practice. Hence, the empirical KRP is defined as follows:

L̂_{λ,p}(X, Y) = (1/(Nλ)) Σ_{i=1}^{N} exp( (λ/2) ‖φ(x_i) − φ(y_i)‖^p_H ),    (4)

where N denotes the available finite number of samples. The empirical KRP can be regarded as a distance between the mapped sample sequences {φ(x_i)} and {φ(y_i)}.
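Since ‖φ(x) − φ(y)‖²_H = 2(1 − κ_σ(x, y)), the empirical KRP can be evaluated entirely through the kernel trick, without computing φ explicitly. The following sketch assumes the (1/λ)·exp((λ/2)·‖·‖^p) normalization used above; function names are illustrative.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Shift-invariant Gaussian kernel kappa_sigma with bandwidth sigma.
    return np.exp(-(np.asarray(x) - np.asarray(y)) ** 2 / (2.0 * sigma ** 2))

def empirical_krp(x, y, lam=1.0, p=2.0, sigma=1.0):
    # ||phi(x) - phi(y)||_H^2 = 2 * (1 - kappa_sigma(x, y)) by the kernel trick,
    # so the feature-space distance is available without the explicit mapping phi.
    dist_h = np.sqrt(2.0 * (1.0 - gaussian_kernel(x, y, sigma)))
    return np.mean(np.exp(0.5 * lam * dist_h ** p)) / lam
```

With p = 2 the expression reduces to the empirical KRL, and identical samples attain the minimum value 1/λ.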

2.2. Properties

In the following, we give the main properties of the proposed KRP; the proofs follow by direct calculation and are analogous to those of the KRL in [17].

Property 1: L_{λ,p}(X, Y) ≥ 1/λ > 0, which is straightforward since the exponential of a nonnegative argument is at least 1.

Property 2: L_{λ,p}(X, Y) = L_{λ,p}(Y, X), and L_{λ,p}(X, Y) attains its minimum 1/λ if X = Y.

Property 3: As λ is small enough, exp(x) ≈ 1 + x for small x, so L_{λ,p}(X, Y) ≈ 1/λ + 2^{p/2−1} E[(1 − κ_σ(X − Y))^{p/2}]; minimizing the KRP is then approximately equivalent to minimizing a mean p-power error defined in the RKHS.

Property 4: As σ is large enough, 1 − exp(−x) ≈ x for small x yields 1 − κ_σ(X − Y) ≈ (X − Y)²/(2σ²), so the KRP approximately reduces to a scaled mean p-power error of X − Y in the original space.

Property 5: As p is small enough, the KRP becomes insensitive to the magnitude of large deviations, since ‖φ(X) − φ(Y)‖^p_H → 1 as p → 0 whenever X ≠ Y; small p therefore de-emphasizes large errors.

Property 6: The empirical KRP, viewed as a function of the weight vector, is convex at any point where the estimation errors are sufficiently small; this follows by showing that the corresponding Hessian matrix is positive semi-definite in that region.
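The small-λ behavior in Property 3 can be checked numerically; this sketch reuses the kernel-trick identity and assumes the same normalization as in the definition above.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=1000)
y = x + 0.3 * rng.normal(size=1000)   # noisy copy of x
p, lam, sigma = 3.0, 1e-4, 1.0

kappa = np.exp(-(x - y) ** 2 / (2.0 * sigma ** 2))
u = 2.0 ** (p / 2.0 - 1.0) * (1.0 - kappa) ** (p / 2.0)  # per-sample p-power term
krp = np.mean(np.exp(lam * u)) / lam                     # empirical KRP
approx = 1.0 / lam + np.mean(u)                          # first-order expansion in lam
# the difference is O(lam), far below the leading terms
```

So for small λ the KRP behaves like a constant offset plus a kernel mean p-power error, as Property 3 states.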

3. Application to Adaptive Filtering

In this section, to combat non-Gaussian noises, two robust recursive adaptive algorithms under the proposed KRP criterion are derived in the RKHS, using the kernel trick alone (RMKRP) and additionally the vector quantization technique (QRMKRP).

3.1. RMKRP

The recursive strategy is introduced into the KRP loss function to obtain the recursive minimum kernel risk-sensitive mean p-power error (RMKRP) algorithm. The offline solution minimizing the KRP loss is first obtained; based on it, the recursive (online) solution is then derived using matrix manipulations, which yields the RMKRP algorithm. The details of RMKRP are as follows. Consider the prediction of a continuous input–output model based on adaptive filtering, shown in Figure 1, where u(i) is the ith D-dimensional input vector and d(i) is the ith scalar desired output contaminated by a noise v(i), i.e., d(i) = f(u(i)) + v(i). A sequence of training samples {u(j), d(j)}, j = 1, …, i, is used to perform the prediction of d(i) in an adaptive filter. The nonlinear mapping of the input u(j) is denoted by φ(j) for simplicity. Hence, in the RKHS H, the training samples become {Φ(i), d(i)}, where the desired output vector is d(i) = [d(1), …, d(i)]^T and the input kernel mapping matrix is Φ(i) = [φ(1), …, φ(i)]. The prediction in the RKHS is therefore given as y(i) = Ω^T φ(i), where Ω is the weight vector in the high-dimensional feature space.
Figure 1

Block diagram of adaptive filtering.

An exponentially weighted loss function is used here to put more emphasis on recent data and to de-emphasize data from the remote past [28]. When the samples {u(j), d(j)}, j = 1, …, i, are available, the weight vector is obtained as the offline solution minimizing the following weighted cost function:

J(Ω) = (1/λ) Σ_{j=1}^{i} β^{i−j} exp( λ 2^{p/2−1} (1 − κ_σ(e(j)))^{p/2} ) + (γ β^{i}/2) ‖Ω‖²_H,    (15)

where β ∈ (0, 1] denotes the forgetting factor, γ > 0 is the regularization factor, and e(j) = d(j) − Ω^T φ(j) denotes the jth estimation error. The second term is a norm-penalizing term that guarantees the existence of the inverse of the input-data autocorrelation matrix, especially during the initial update stages; since it is weighted by β^i, regularization is de-emphasized as time progresses. According to Property 6, the empirical KRP as a function of Ω is convex at any point where the estimation errors are sufficiently small. To obtain the minimum of (15), its gradient with respect to Ω is calculated and set to zero, which yields an offline solution of the form

Ω(i) = Φ(i) ( K(i) + γ β^{i} B(i)^{-1} )^{-1} d(i),    (17)

where K(i) = Φ(i)^T Φ(i) is the Gram matrix and B(i) is a diagonal matrix collecting the exponentially weighted, error-dependent factors produced by differentiating the exponential loss. To evaluate (17) efficiently, a Mercer kernel is used to construct the RKHS; here, the Gaussian kernel κ_σ(u, u′) = exp(−‖u − u′‖²/(2σ²)) with kernel width σ is adopted. The inner product in the RKHS can then be calculated efficiently by the kernel trick [28], i.e., φ(u)^T φ(u′) = κ_σ(u, u′), which avoids the direct calculation of the nonlinear mapping φ. Applying the matrix inversion lemma [28], (A + BCD)^{-1} = A^{-1} − A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}, the weight vector can be expressed explicitly as a linear combination of the mapped input data in the RKHS, i.e.,

Ω(i) = Φ(i) a(i),    (20)

where a(i) denotes the coefficient vector, computable entirely through the kernel trick. It can be seen from (20) that finding a recursive form of Ω(i) reduces to finding a recursive form of a(i). Hence, in the following, the key to a recursive solution to the minimum of (15) is to obtain the recursive form of a(i).
The coefficient vector is calculated using the kernel trick as a(i) = Q(i) d(i), where Q(i) denotes the inverse of the regularized and weighted Gram matrix in (17). When a new sample (u(i), d(i)) arrives, this matrix gains one row and one column. Using the block matrix inversion identity [18,21,28]

[ A  b ; b^T  c ]^{-1} = [ A^{-1} + A^{-1} b b^T A^{-1}/r,  −A^{-1} b/r ; −b^T A^{-1}/r,  1/r ],  r = c − b^T A^{-1} b,

the update equation for the inverse of the growing matrix is obtained as

Q(i) = [ Q(i−1) + z(i) z(i)^T / r(i),  −z(i)/r(i) ; −z(i)^T/r(i),  1/r(i) ],

where h(i) = [κ_σ(u(1), u(i)), …, κ_σ(u(i−1), u(i))]^T, z(i) = Q(i−1) h(i), and r(i) is the corresponding Schur complement. Combining this with a(i) = Q(i) d(i), the coefficient vector of the weight vector is updated as

a(i) = [ a(i−1) − z(i) e(i)/r(i) ; e(i)/r(i) ],

where e(i) = d(i) − h(i)^T a(i−1) denotes the difference between the desired output d(i) and the system output y(i). The jth element of a(i) is the coefficient of the jth center, and all the previous data serve as the centers; hence the coefficients and all the previous inputs must be stored at each iteration. Finally, the RMKRP algorithm is summarized in Algorithm 1.
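The recursion above can be sketched compactly in Python: the filter keeps the coefficient vector and the inverse of the regularized Gram matrix, growing both by one row/column per sample via the block matrix inversion identity. The error-dependent weight `w` below is a simplified stand-in for the KRP-induced weighting (the exact weighting of RMKRP follows from differentiating the KRP loss); the class and parameter names are illustrative.

```python
import numpy as np

class RecursiveKernelFilter:
    """Simplified sketch of a recursive kernel adaptive filter in the spirit
    of RMKRP: exponential-type error weighting plugged into a KRLS-style
    rank-one recursion. Not the paper's exact algorithm."""

    def __init__(self, sigma=1.0, gamma=1e-2, lam=1.0, p=4.0):
        self.sigma, self.gamma, self.lam, self.p = sigma, gamma, lam, p
        self.centers = []          # stored inputs (network centers)
        self.alpha = np.zeros(0)   # coefficient vector a(i)
        self.Q = None              # inverse of regularized Gram matrix

    def _k(self, a, b):
        # Gaussian (Mercer) kernel evaluated via the kernel trick.
        return np.exp(-np.sum((a - b) ** 2) / (2.0 * self.sigma ** 2))

    def update(self, u, d):
        u = np.atleast_1d(np.asarray(u, dtype=float))
        if self.Q is None:
            r0 = self.gamma + self._k(u, u)
            self.Q = np.array([[1.0 / r0]])
            self.alpha = np.array([d / r0])
            self.centers.append(u)
            return d
        h = np.array([self._k(c, u) for c in self.centers])
        e = d - h @ self.alpha                         # a priori error
        w = np.exp(0.5 * self.lam * abs(e) ** self.p)  # KRP-style emphasis (sketch)
        z = self.Q @ h
        r = self.gamma / w + self._k(u, u) - h @ z     # Schur complement r(i)
        # block-matrix-inversion update of Q(i)
        top = np.hstack([self.Q + np.outer(z, z) / r, (-z / r)[:, None]])
        bot = np.hstack([-z / r, np.array([1.0 / r])])
        self.Q = np.vstack([top, bot[None, :]])
        self.alpha = np.append(self.alpha - z * e / r, e / r)
        self.centers.append(u)
        return e

    def predict(self, u):
        u = np.atleast_1d(np.asarray(u, dtype=float))
        h = np.array([self._k(c, u) for c in self.centers])
        return float(h @ self.alpha)

# demo: learn y = sin(x) online
rng = np.random.default_rng(0)
filt = RecursiveKernelFilter(sigma=0.5, gamma=0.1, lam=0.5, p=2.0)
for xv in rng.uniform(-2.0, 2.0, 200):
    filt.update(float(xv), float(np.sin(xv)))
```

Note the linearly growing network: one center per training sample, which is the growth that QRMKRP addresses below.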

3.2. QRMKRP

The RMKRP algorithm generates a linearly growing network owing to the kernel trick. The online vector quantization (VQ) method [12] has been successfully applied in KAFs to curb this network growth efficiently. Thus, we incorporate the online VQ method into the RMKRP to develop the quantized RMKRP (QRMKRP) algorithm, as follows. Suppose that the dictionary C(i) contains L vectors at discrete time i, which means that there are L distinctive quantization regions. In the RKHS, the prediction is expressed as y(i) = Ω^T φ(u(i)), where Ω is the weight vector in the RKHS H. The cost function of QRMKRP is built on the dictionary: each incoming input that falls into the kth quantization region contributes its exponentially weighted KRP loss term to the kth center, and the cost (28) sums these contributions over the L centers. The offline solution to the minimization of (28) has a form similar to (17), with the Gram matrix built on the L dictionary centers, an accumulated diagonal weighting matrix, and an accumulated weighted output vector (29). To obtain the recursive solution to the minimization of (28), two cases are considered at each iteration, according to the distance dis(u(i), C(i−1)) between the new input u(i) and its nearest dictionary element. Case 1 (dis(u(i), C(i−1)) ≤ ε_U, with ε_U the quantization threshold): the dictionary is unchanged, C(i) = C(i−1), and the input u(i) is quantized to its closest dictionary element. The accumulated matrices then change by a rank-one term involving the indicator vector whose corresponding element is 1 and all other elements are 0, similar to [13]; by the matrix inversion lemma [28], the inverse of the accumulated matrix can therefore be updated without growing the network.
Therefore, the coefficient vector in (30) can be recomputed directly from the updated inverse. Case 2 (dis(u(i), C(i−1)) > ε_U): the input cannot be merged into any existing region, so it is added to the dictionary, C(i) = C(i−1) ∪ {u(i)}, and the accumulated matrices grow by one row and one column, padded with a null column vector of compatible dimension. Combining this growth step with the block matrix inversion identity [28], the inverse of the enlarged matrix and then the enlarged coefficient vector are obtained recursively, exactly as in the RMKRP update. The QRMKRP algorithm is summarized in Algorithm 2, where L denotes the dictionary size.
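The online VQ decision that separates the two cases can be sketched as follows; the threshold name `eps_u` is illustrative, and the coefficient-update details of QRMKRP are omitted.

```python
import numpy as np

def vq_step(u, dictionary, eps_u):
    """Online vector quantization: merge the input into its nearest dictionary
    element if it lies within eps_u (Case 1), otherwise add it as a new center
    (Case 2). Returns (center_index, added_flag)."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    if not dictionary:
        dictionary.append(u)
        return 0, True
    dists = [np.linalg.norm(u - c) for c in dictionary]
    j = int(np.argmin(dists))
    if dists[j] <= eps_u:            # Case 1: quantize to existing center
        return j, False
    dictionary.append(u)             # Case 2: network grows by one center
    return len(dictionary) - 1, True
```

A larger `eps_u` yields a smaller dictionary (coarser quantization) at some cost in accuracy, which is the trade-off reported for QRMKRP in Section 4.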

4. Simulation

In this section, two examples, i.e., Mackey–Glass (MG) chaotic time series prediction and nonlinear system identification, are used to validate the performance superiority of the proposed RMKRP algorithm and its quantized version. The noise environment considered is impulsive noise, modeled as the combination of two independent noise processes [17], i.e.,

v(i) = (1 − b(i)) A(i) + b(i) B(i),    (39)

where A(i) is an ordinary noise disturbance with small variance and B(i) represents large outliers with large variance; b(i) is a binary random process over {0, 1} with occurrence probability c, i.e., P(b(i) = 1) = c and P(b(i) = 0) = 1 − c. The distribution of A(i) is taken as a binary distribution with fixed probability mass. In addition, B(i) is modeled by the α-stable process, owing to its heavy-tailed probability density function. The α-stable process is described by the following characteristic function [29]:

f(t) = exp{ jδt − γ|t|^α [ 1 + jβ sgn(t) S(t, α) ] },
S(t, α) = tan(απ/2) if α ≠ 1,  (2/π) log|t| if α = 1,

where α ∈ (0, 2] is the characteristic factor, β ∈ [−1, 1] is the symmetry parameter, γ > 0 is the dispersion parameter, δ is the location parameter, and sgn(·) denotes the sign function. Generally, a smaller α generates a heavier tail, and a smaller dispersion generates fewer large outliers. A parameter vector V(α, β, γ, δ) of the characteristic function is chosen to model the impulsive noise in the simulations.
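The mixture model (39) can be simulated as follows. Here the outlier component B(i) is drawn from a large-variance Gaussian as a simple stand-in for the α-stable process (SciPy's `levy_stable` could be substituted for it), and all numeric parameter values are illustrative, not the paper's settings.

```python
import numpy as np

def impulsive_noise(n, c=0.06, small_std=0.1, big_std=10.0, rng=None):
    """Two-component impulsive noise v(i) = (1 - b(i)) A(i) + b(i) B(i):
    ordinary small-variance disturbance A(i), occasional large outliers B(i),
    switched by a Bernoulli process b(i) with occurrence probability c."""
    rng = np.random.default_rng(rng)
    b = rng.random(n) < c                      # outlier occurrence b(i)
    A = rng.normal(0.0, small_std, n)          # ordinary disturbance A(i)
    B = rng.normal(0.0, big_std, n)            # heavy-outlier stand-in B(i)
    return np.where(b, B, A)
```

Such noise is what defeats the purely second-order algorithms (KLMS, KRLS) in the comparisons below.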

4.1. Chaotic Time Series Prediction

The MG chaotic time series is generated from the following delay differential equation [9]:

dx(t)/dt = −b x(t) + a x(t − τ) / (1 + x(t − τ)^10),

where a, b > 0 and τ is the time delay; their values are fixed for all simulations. The time series is discretized at a sampling period of six seconds. The training set consists of a segment of 2000 samples corrupted by the additive noise in (39), and another 200 noise-free samples are used as the testing set. The kernel size of the Gaussian kernel is set to 1. The filter length is fixed so that a window of past samples is used to predict the current sample. To evaluate the filtering accuracy, the testing mean square error (MSE) is defined as

MSE = (1/N) Σ_{i=1}^{N} ( d(i) − d̂(i) )²,

where d̂(i) is the estimate of d(i) and N is the length of the testing data; results are reported in dB. The KLMS [6], KMCC [22], MKRL [26], KRMC [21], and KRLS [8] algorithms are chosen for performance comparison with RMKRP thanks to their excellent filtering performance. The sparsification algorithms, i.e., QKLMS [12], QKMCC [30], QMKRL [26], QKRLS [13], and KRMC-NC [21], are used for comparison with QRMKRP owing to their modest space complexities and excellent performance. All simulation results are averaged over 50 independent Monte Carlo runs. Since the power parameter p, risk-sensitive parameter λ, and kernel width σ are crucial parameters of the proposed RMKRP and QRMKRP algorithms, their influence on performance is discussed first. In the simulations, 12 points are taken evenly in each of the closed test intervals. The influence of p on the steady-state performance of RMKRP is shown in Figure 2a, where the steady-state MSEs are obtained as averages over the last 100 iterations, with the risk-sensitive parameter λ set to 1 and the kernel size set to 1. As can be seen from Figure 2a, the filtering accuracy of RMKRP is highest at an intermediate value of p and decreases gradually when p is either too small or too large.
Then, the influence of the kernel width σ on the filtering performance of RMKRP is shown in Figure 2b, where the steady-state MSEs are obtained as averages over the last 100 iterations and λ is fixed at 1. From Figure 2b, we see that RMKRP achieves the highest filtering accuracy when σ is about 1. This is reasonable: RMKRP is sensitive to outliers when the kernel width is too large, and loses error-correction ability when the kernel width is too small. Finally, the influence of λ on the filtering performance of RMKRP is shown in Figure 2c, where the steady-state MSEs are obtained as averages over the last 100 iterations. From Figure 2c, we see that λ has only a slight influence on the filtering accuracy when it is small, while a large λ increases the steady-state MSE obviously. Therefore, from Figure 2, the parameters of RMKRP can be chosen by trial to obtain the best performance in practice; the parameters of QRMKRP can be chosen by the same method.
Figure 2

Steady-state MSE of RMKRP with different p in MG time series prediction (a); steady-state MSE of RMKRP with different kernel widths σ in MG time series prediction (b); steady-state MSE of RMKRP with different risk-sensitive parameters λ in MG time series prediction (c).

The performance comparison of QKLMS, QKMCC, QMKRL, KLMS, KMCC, MKRL, KRLS, KRMC, KRMC-NC, and QKRLS is conducted in the same noise environment as in (39). The parameters of the proposed algorithms are selected by trial to achieve desirable performance, and the parameters of the compared algorithms are chosen such that all algorithms have almost the same convergence rate. Figure 3 shows the MSEs of RMKRP, QRMKRP, and the compared algorithms. As can be seen from Figure 3, RMKRP achieves better filtering accuracy than KRLS, KRMC, KLMS, KMCC, and MKRL, and QRMKRP achieves a better steady-state testing MSE than the sparsification algorithms QKRLS, KRMC-NC, QKLMS, QKMCC, and QMKRL. We also see from Figure 3 that the proposed algorithms provide good robustness to impulsive noises. For detailed comparison, the dictionary size, consumed time, and steady-state MSEs in Figure 3 are listed in Table 1. The steady-state MSEs of KLMS, QKLMS, KRLS, and QKRLS are not shown in Table 1 since these algorithms cannot converge in such an impulsive noise environment. From Table 1, we see that RMKRP has consumed time similar to KRLS and KRMC but provides better filtering accuracy. In addition, QRMKRP provides the highest filtering accuracy among the compared sparsification algorithms and approaches the filtering accuracy of RMKRP with a significantly smaller network size.
Figure 3

Comparison of the MSEs of KLMS, KMCC, MKRL, KRLS, KRMC, and RMKRP in MG time series prediction (a); comparison of the MSEs of QKLMS, QKMCC, QMKRL, QKRLS, KRMC-NC, and QRMKRP in MG time series prediction (b).

Table 1

Simulation results of QKLMS, QKMCC, QMKRL, QKRLS, KRMC-NC, KLMS, KMCC, MKRL, KRLS, KRMC, RMKRP, and QRMKRP in MG time series prediction.

Algorithm       Size   Time (s)   MSE (dB)
KLMS [6]        2000   30.9501    N/A
QKLMS [12]        28    2.1011    N/A
KRLS [8]        2000   58.5358    N/A
QKRLS [13]        28    2.3374    N/A
KMCC [22]       2000   30.8285    −18.5063
QKMCC [30]        28    2.0995    −17.8707
MKRL [26]       2000   30.9117    −18.7312
QMKRL [26]        28    2.1063    −18.1037
KRMC [21]       2000   58.1229    −25.1618
KRMC-NC [21]     462    2.8045    −21.5183
QRMKRP            28    2.3443    −24.9326
RMKRP           2000   58.2196    −28.1802

4.2. Nonlinear System Identification

To further validate the performance superiority of the proposed RMKRP and QRMKRP algorithms, nonlinear system identification is considered. The nonlinear system takes the form given in [31], where x(t) denotes the output at discrete time t, with fixed initial conditions; the two previous outputs are used as the input to estimate the current output x(t). The training set consists of a segment of 2000 samples corrupted by the additive noise in (39), and another 200 noise-free samples are used as the testing set. The kernel width of the Gaussian kernel is set to 1. All simulation results are averaged over 50 independent Monte Carlo runs. Similar to the MG chaotic time series prediction, the influence of the power parameter p, risk-sensitive parameter λ, and kernel width σ on the performance of RMKRP is also examined for nonlinear system identification. The influence of p is shown in Figure 4a, the influence of the kernel width σ (with p = 4) in Figure 4b, and the influence of λ (with σ = 1 and p = 4) in Figure 4c; in all panels the steady-state MSEs are obtained as averages over the last 100 iterations. As can be seen from Figure 4, the same conclusions as those drawn from Figure 2 can be obtained.
Figure 4

Steady-state MSE of RMKRP with different p in nonlinear system identification (a); steady-state MSE of RMKRP with different kernel widths σ in nonlinear system identification (b); steady-state MSE of RMKRP with different risk-sensitive parameters λ in nonlinear system identification (c).

We compare the filtering performance of QKLMS, QKMCC, QMKRL, KLMS, KMCC, MKRL, KRLS, KRMC, KRMC-NC, and QKRLS in the same noise environment as in (39). The parameters of the proposed algorithms are selected by trial to achieve desirable performance, and the parameters of the compared algorithms are chosen such that all algorithms have almost the same convergence rate. Figure 5 shows the MSEs of RMKRP, QRMKRP, and the compared algorithms. For detailed comparison, the dictionary size, consumed time, and steady-state MSEs in Figure 5 are also listed in Table 2, where the steady-state MSEs of KLMS, QKLMS, KRLS, and QKRLS are not shown since these algorithms cannot converge in such an impulsive noise environment. From Figure 5 and Table 2, the same conclusions as those from Figure 3 and Table 1 can be obtained.
Figure 5

Comparison of the MSEs of KLMS, KMCC, MKRL, KRLS, KRMC, and RMKRP in nonlinear system identification (a); comparison of the MSEs of QKLMS, QKMCC, QMKRL, QKRLS, KRMC-NC, and QRMKRP in nonlinear system identification (b).

Table 2

Simulation results of QKLMS, QKMCC, QMKRL, QKRLS, KRMC-NC, KLMS, KMCC, MKRL, KRLS, KRMC, RMKRP, and QRMKRP in nonlinear system identification.

Algorithm       Size   Time (s)   MSE (dB)
KLMS [6]        2000   21.2447    N/A
QKLMS [12]        14    1.7284    N/A
KRLS [8]        2000   48.6055    N/A
QKRLS [13]        14    1.9643    N/A
KMCC [22]       2000   21.1328    −19.233
QKMCC [30]        14    1.763     −17.9723
MKRL [26]       2000   21.0313    −19.5390
QMKRL [26]        14    1.7243    −18.5748
KRMC [21]       2000   48.7601    −28.7583
KRMC-NC [21]     496    2.6874    −23.671
QRMKRP            14    1.9681    −27.3128
RMKRP           2000   48.6101    −34.0790

5. Conclusions

In this paper, the kernel risk-sensitive mean p-power error (KRP) criterion is proposed by incorporating the mean p-power error (MPE) into the kernel risk-sensitive loss (KRL) in the RKHS, and some of its basic properties are presented. With the power parameter p, the KRP criterion is more flexible than the KRL for handling signals corrupted by impulsive noises. Two kernel recursive adaptive algorithms are derived under the minimum KRP (MKRP) criterion, i.e., the recursive minimum KRP (RMKRP) and the quantized RMKRP (QRMKRP) algorithms. RMKRP achieves higher accuracy with almost the same computational complexity as KRLS and KRMC. Introducing the vector quantization method into RMKRP yields QRMKRP, which effectively reduces the network size while maintaining filtering accuracy. Simulations on Mackey–Glass (MG) chaotic time series prediction and nonlinear system identification under impulsive noises illustrate the superiority of RMKRP and QRMKRP in terms of robustness and filtering accuracy.
