Literature DB >> 35741558

A Generic Formula and Some Special Cases for the Kullback-Leibler Divergence between Central Multivariate Cauchy Distributions.

Nizar Bouhlel, David Rousseau.

Abstract

This paper introduces a closed-form expression for the Kullback-Leibler divergence (KLD) between two central multivariate Cauchy distributions (MCDs), which have recently been used in different signal and image processing applications where non-Gaussian models are needed. In this overview, the MCDs are surveyed, and some new results and properties of the KLD are derived and discussed. In addition, the KLD for MCDs is shown to be expressible as a function of the Lauricella D-hypergeometric series $F_D^{(p)}$. Finally, the Monte Carlo sampling approximation of the KLD is compared with the numerical value of its closed-form expression. The Monte Carlo approximation is shown to converge to the theoretical value as the number of samples goes to infinity.


Keywords:  Kullback–Leibler divergence (KLD); Lauricella D-hypergeometric series; Multivariate Cauchy distribution (MCD); multiple power series

Year:  2022        PMID: 35741558      PMCID: PMC9222751          DOI: 10.3390/e24060838

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.738


1. Introduction

The multivariate Cauchy distribution (MCD) belongs to the elliptically symmetric distributions [1] and is a special case of both the multivariate t-distribution [2] and the multivariate stable distribution [3]. The MCD has recently been used in several signal and image processing applications for which non-Gaussian models are needed, such as speckle denoising, color image denoising, watermarking and speech enhancement. Sahu et al. [4] presented a denoising method for speckle noise removal applied to retinal optical coherence tomography (OCT) images; the method was based on the wavelet transform, where the sub-band coefficients were modeled using a Cauchy distribution. In [5], a dual-tree complex wavelet transform (DTCWT)-based despeckling algorithm was proposed for synthetic aperture radar (SAR) images, where the DTCWT coefficients in each subband were modeled with a multivariate Cauchy distribution. In [6], a new color image denoising method in the contourlet domain was suggested for reducing Gaussian noise in images, where the contourlet subband coefficients were described by the heavy-tailed MCD. Sadreazami et al. [7] put forward a multiplicative watermarking scheme in the contourlet domain, where the watermark detector was based on the bivariate Cauchy distribution and designed to capture the across-scale dependencies of the contourlet coefficients. Fontaine et al. [8] proposed a semi-supervised multichannel speech enhancement system in which both speech and noise follow the heavy-tailed multivariate complex Cauchy distribution.

The Kullback–Leibler divergence (KLD), also called relative entropy, is one of the most fundamental and important measures in information theory and statistics [9,10]. The KLD was first introduced and studied by Kullback and Leibler [11] and Kullback [12] to measure the divergence between two probability mass functions in the case of discrete random variables, and between two univariate or multivariate probability density functions in the case of continuous random variables. In the literature, numerous entropy and divergence measures have been suggested for measuring the similarity between probability distributions, such as the Rényi [13], Sharma–Mittal [14], Bhattacharyya [15,16] and Hellinger [17] divergence measures. Other general divergence families have also been introduced and studied, like the φ-divergence family defined simultaneously by Csiszár [18] and Ali and Silvey [19], of which the KLD is a special case, the Bregman divergence family [20], the R-divergences introduced by Burbea and Rao [21,22,23], the statistical f-divergences [24,25] and, recently, the new generalized family of (h,φ)-divergence measures introduced and studied in Menéndez et al. [26]. Readers are referred to [10] for details about these divergence families. The KLD has a specific interpretation in coding theory [27] and is therefore the most popular and widely used divergence. Since information-theoretic divergences, and the KLD in particular, are ubiquitous in information sciences [28,29], it is important to establish closed-form expressions for them [30]. An analytical expression of the KLD between two univariate Cauchy distributions was presented in [31,32]. To date, the KLD of MCDs has no known explicit form, and in practice it is either estimated using expensive Monte Carlo stochastic integration or approximated.
Monte Carlo sampling can estimate the KLD accurately provided that a large number of independent and identically distributed samples is available. Nevertheless, Monte Carlo integration is too slow to be useful in many applications. The main contribution of this paper is to derive a closed-form expression for the KLD between two central MCDs in the general case, providing a benchmark for future approaches while avoiding expensive Monte Carlo (MC) estimation. The paper is organized as follows. Section 2 introduces the MCD and the KLD. Section 3 gives some definitions and propositions related to the multiple power series used to compute the closed-form expression of the KLD between two central MCDs. In Section 4 and Section 5, expressions of some expectations related to the KLD are developed by exploiting the propositions presented in the previous section. Section 6 establishes the final results on the KLD computed for the central MCD. Section 7 presents some particular results, such as the KLD for the univariate and bivariate Cauchy distributions. Section 8 presents the implementation procedure of the KLD and a comparison with the Monte Carlo sampling method. A summary and some conclusions are provided in the final section.

2. Multivariate Cauchy Distribution and Kullback–Leibler Divergence

Let $X$ be a random vector of $\mathbb{R}^p$ which follows the MCD, characterized by the following probability density function (pdf) [2]:

$$f(x;\mu,\Sigma)=\frac{\Gamma\!\left(\frac{1+p}{2}\right)}{\pi^{\frac{1+p}{2}}\,|\Sigma|^{\frac12}}\left[1+(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\right]^{-\frac{1+p}{2}},\qquad(1)$$

for any $x\in\mathbb{R}^p$, where $p$ is the dimensionality of the sample space, $\mu$ is the location vector, $\Sigma$ is a symmetric, positive definite scale matrix and $\Gamma(\cdot)$ is the Gamma function. Let $X_1$ and $X_2$ be two random vectors that follow central MCDs with pdfs $f_1$ and $f_2$ given by (1) with $\mu=0$. The KLD provides an asymmetric measure of the similarity of the two pdfs:

$$\mathrm{KL}(f_1\|f_2)=\int_{\mathbb{R}^p} f_1(x)\log\frac{f_1(x)}{f_2(x)}\,dx.$$

Since the KLD is the relative entropy, defined as the difference between the cross-entropy and the entropy, we have $\mathrm{KL}(f_1\|f_2)=H(f_1;f_2)-h(f_1)$, where $H(f_1;f_2)$ denotes the cross-entropy and $h(f_1)$ the entropy. Therefore, the determination of the KLD requires the expressions of the entropy and the cross-entropy. It should be noted that the smaller $\mathrm{KL}(f_1\|f_2)$, the more similar $f_1$ and $f_2$ are. The symmetric KL similarity measure between $f_1$ and $f_2$ is $d_{\mathrm{KL}}(X_1,X_2)=\mathrm{KL}(f_1\|f_2)+\mathrm{KL}(f_2\|f_1)$. In order to compute the KLD, we have to derive the analytical expressions of $h(f_1)$ and $H(f_1;f_2)$, which depend, respectively, on $\mathbb{E}_1[\log(1+x^{\top}\Sigma_1^{-1}x)]$ and $\mathbb{E}_1[\log(1+x^{\top}\Sigma_2^{-1}x)]$. Consequently, the closed-form expression of the KLD between two zero-mean MCDs is given by

$$\mathrm{KL}(f_1\|f_2)=\frac12\log\frac{|\Sigma_2|}{|\Sigma_1|}+\frac{1+p}{2}\Big(\mathbb{E}_1\big[\log(1+x^{\top}\Sigma_2^{-1}x)\big]-\mathbb{E}_1\big[\log(1+x^{\top}\Sigma_1^{-1}x)\big]\Big).\qquad(5)$$

To provide the expression of these two expectations, some tools based on multiple power series are required. The next section presents some definitions and propositions used for this goal.
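Before turning to those tools, it may help to make the Monte Carlo baseline of Section 8 concrete. The following is a minimal sketch, in Python/NumPy rather than the Matlab code attached to the paper, of stochastic estimation of the KLD. It uses the fact that a central MCD is a multivariate t-distribution with one degree of freedom and can therefore be sampled as a Gaussian scale mixture; the scale matrices are those of Table 3.

```python
import numpy as np
from scipy.special import gammaln

def mcd_logpdf(x, Sigma):
    # log-density of a central multivariate Cauchy distribution, Eq. (1) with mu = 0
    p = Sigma.shape[0]
    q = np.einsum('ij,ij->i', x, np.linalg.solve(Sigma, x.T).T)  # x^T Sigma^{-1} x
    logc = gammaln((1 + p) / 2) - ((1 + p) / 2) * np.log(np.pi) \
           - 0.5 * np.linalg.slogdet(Sigma)[1]
    return logc - ((1 + p) / 2) * np.log1p(q)

def sample_mcd(n, Sigma, rng):
    # central MCD = multivariate t with 1 degree of freedom:
    # x = z / sqrt(u), with z ~ N(0, Sigma) and u ~ chi^2_1
    p = Sigma.shape[0]
    z = rng.standard_normal((n, p)) @ np.linalg.cholesky(Sigma).T
    u = rng.chisquare(1, size=n)
    return z / np.sqrt(u)[:, None]

rng = np.random.default_rng(0)
Sigma1 = np.array([[1.0, 0.6, 0.2], [0.6, 1.0, 0.3], [0.2, 0.3, 1.0]])  # Table 3
Sigma2 = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.4], [0.1, 0.4, 1.0]])
x = sample_mcd(262_144, Sigma1, rng)
kl_mc = np.mean(mcd_logpdf(x, Sigma1) - mcd_logpdf(x, Sigma2))
print(f"Monte Carlo KL(f1||f2) ~ {kl_mc:.4f}")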

3. Definitions and Propositions

This section presents some definitions and propositions related to the multiple power series used to derive the closed-form expressions of the expectations $\mathbb{E}_1[\log(1+x^{\top}\Sigma_1^{-1}x)]$ and $\mathbb{E}_1[\log(1+x^{\top}\Sigma_2^{-1}x)]$ and, as a consequence, the KLD between two central MCDs. The Humbert series of $n$ variables, denoted $\Phi_2^{(n)}$, is defined by the multiple power series

$$\Phi_2^{(n)}(b_1,\ldots,b_n;c;x_1,\ldots,x_n)=\sum_{m_1,\ldots,m_n\ge 0}\frac{(b_1)_{m_1}\cdots(b_n)_{m_n}}{(c)_{m_1+\cdots+m_n}}\,\frac{x_1^{m_1}}{m_1!}\cdots\frac{x_n^{m_n}}{m_n!}.$$

The Pochhammer symbol $(q)_i$ indicates the $i$-th rising factorial of $q$, i.e., $(q)_i=q(q+1)\cdots(q+i-1)=\Gamma(q+i)/\Gamma(q)$ for an integer $i\ge 0$. The following integral representation is true for $c>\sum_{i=1}^n b_i$ and $b_i>0$:

$$\Phi_2^{(n)}(b_1,\ldots,b_n;c;x_1,\ldots,x_n)=\frac{\Gamma(c)}{\Gamma\big(c-\sum_i b_i\big)\prod_i\Gamma(b_i)}\int_{\Delta}\Big(1-\sum_{i=1}^{n}t_i\Big)^{c-\sum_i b_i-1}\prod_{i=1}^{n}t_i^{b_i-1}e^{x_i t_i}\,dt_1\cdots dt_n,$$

where $\Delta=\{t_i\ge 0,\ \sum_i t_i\le 1\}$. Indeed, the power series of the exponential function is $e^{x_it_i}=\sum_{m_i\ge 0}(x_it_i)^{m_i}/m_i!$. By substituting this expansion into the multiple integral, we recognize the type-1 Dirichlet integral (Section 1.6.1 in [34]), which is a generalization of the beta integral:

$$\int_{\Delta}\Big(1-\sum_i t_i\Big)^{\gamma-1}\prod_i t_i^{\beta_i-1}\,dt_1\cdots dt_n=\frac{\Gamma(\gamma)\prod_i\Gamma(\beta_i)}{\Gamma\big(\gamma+\sum_i\beta_i\big)}.$$

Knowing that $\Gamma(q+i)=(q)_i\,\Gamma(q)$, the resulting expression can be rewritten in terms of Pochhammer symbols. Finally, plugging (13) back into (11) leads to the final result. □

3.1. Integral Representation for

Given Proposition 1, we consider the particular cases one by one. For $n=1$, the series reduces to the confluent hypergeometric function of the first kind $_1F_1$ (Section 9.21 in [35]). For $n=2$, the double series is one of the components of the Humbert series of two variables [36], which generalize Kummer's confluent hypergeometric series of one variable; this double series converges absolutely for any finite values of its arguments. Building on these cases, we define a new multiple power series, studied in the next subsection.

3.2. Multiple Power Series

The multiple power series can also be transformed into two other expressions, as follows. By Horn's rule for the determination of the convergence region (see [37], Section 5.7.2), the multiple power series (18) and (19) are absolutely convergent on a region of $\mathbb{R}^n$. Equation (18) can be deduced from (17) by using the following development, in which an auxiliary function is introduced to lighten the notation. Using the definition of the Gauss hypergeometric series [34] and the Pfaff transformation [38], we can write (23). By substituting (23) into (20) and using two additional relations, we get (18). The second transformation is obtained in the same way: substituting (27) into (20) yields (19). By using Equation (18) of the multiple power series, and after simplification of the numerator and the denominator, we get the result. □ The following integral representation is true on the domain where the multiple power series and the confluent hypergeometric function are absolutely convergent. Using these functions in the above integral and changing the order of integration and summation, which is justified by absolute convergence, we obtain an expression involving an inner integral. Substituting the integral expression of the latter and shortening the notation, and knowing the identity from [35], a new expression of the integral is obtained. Developing the same method for the remaining term, the final complete expression of the integral follows. □ The first expectation is developed as follows. Utilizing the stated property, the expectation can be rewritten accordingly. Consider a first transformation, whose Jacobian determinant is given by Theorem 1.12 in [40]; the expectation then takes a new form. A second transformation, whose Jacobian determinant is given by Lemma 13.3.1 in [41], leads to a further simplification. Using the definition of the beta function, the inner integral can be expressed in closed form, and its derivative with respect to $a$ is computed (see the identity below). Finally, the expression of the expectation follows. □ To prove Proposition 4, several steps are necessary; they are described in the following.
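The derivative of the beta function with respect to its first parameter, used in the last step above, presumably follows the standard identity (a reconstruction of the standard step, not necessarily the paper's exact display):

$$\frac{\partial}{\partial a}B(a,b)=\frac{\partial}{\partial a}\int_0^1 t^{a-1}(1-t)^{b-1}\,dt=\int_0^1 t^{a-1}(1-t)^{b-1}\log t\,dt=B(a,b)\big[\psi(a)-\psi(a+b)\big],$$

where $\psi$ denotes the digamma function. Parameter derivatives of this kind are what produce, at $a=0$, the $\partial_a F_D^{(p)}(\cdot)\,|_{a=0}$ terms appearing in the final KLD expressions.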

5. Expression of

5.1. First Step: Eigenvalue Expression

The expectation $\mathbb{E}_1[\log(1+x^{\top}\Sigma_2^{-1}x)]$ is computed as follows. Consider a transformation whose Jacobian determinant is given by Theorem 1.12 in [40]; the resulting matrix is a real symmetric matrix, since $\Sigma_1$ and $\Sigma_2$ are real symmetric matrices. The expectation is then evaluated accordingly. This matrix can be diagonalized by an orthogonal matrix $P$, with $PP^{\top}=I$, and a diagonal matrix composed of its eigenvalues. Since an orthogonal change of variables leaves the integral invariant, the expectation can be written in terms of the eigenvalues. Applying a further scaling transformation with known Jacobian determinant, the previous expectation (51) is given in terms of $\lambda_1,\ldots,\lambda_p$, the eigenvalues of the matrix, as illustrated below.
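As an illustration of this step, the eigenvalues $\lambda_1,\ldots,\lambda_p$ entering the final formulas can be obtained as generalized eigenvalues of the pair of scale matrices. The following is a minimal sketch assuming, consistently with the similarity argument above and with the implementation in Section 8, that the $\lambda_i$ are the eigenvalues of $\Sigma_2^{-1}\Sigma_1$; this convention is our reading of the garbled source.

```python
import numpy as np

Sigma1 = np.array([[1.0, 0.6, 0.2], [0.6, 1.0, 0.3], [0.2, 0.3, 1.0]])  # Table 3
Sigma2 = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.4], [0.1, 0.4, 1.0]])

# Eigenvalues of Sigma2^{-1} Sigma1, equivalently the generalized eigenvalues
# of Sigma1 v = lambda Sigma2 v. They are real and positive because both
# matrices are symmetric positive definite.
lam = np.sort(np.linalg.eigvals(np.linalg.solve(Sigma2, Sigma1)).real)[::-1]
print(lam)  # descending order, as required in Section 8
```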

5.2. Second Step: Polar Decomposition

Let the independent real variables be transformed to the general polar coordinates $r,\theta_1,\ldots,\theta_{p-1}$ [40]. The Jacobian determinant, according to Theorem 1.24 in [40], involves powers of the sines of the angles (see the reconstruction below). With these transformations, the multiple integral in (53) separates into a radial part and angular parts. Replacing each squared coordinate by its polar expression, we obtain the following expression. A further change of variable then yields the expectation as a multiple integral over all the angular variables, where shortened notation is used to lighten the equations. Applying one more transformation, one can rewrite the integral in (62). In order to solve it, we consider the following property and equation; making use of them, we obtain a new expression of (62), in which the function $H(t,y)$ is defined accordingly.
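For completeness, the general polar (spherical) coordinate change in $\mathbb{R}^p$ (Theorem 1.24 in [40]) presumably takes the standard form below; this is a reconstruction of the standard identity rather than the paper's exact display:

$$x_1=r\cos\theta_1,\qquad x_j=r\Big(\prod_{k=1}^{j-1}\sin\theta_k\Big)\cos\theta_j\ \ (2\le j\le p-1),\qquad x_p=r\prod_{k=1}^{p-1}\sin\theta_k,$$

with $r>0$, $\theta_j\in(0,\pi)$ for $1\le j\le p-2$ and $\theta_{p-1}\in(0,2\pi)$, and Jacobian

$$dx_1\cdots dx_p=r^{p-1}\prod_{j=1}^{p-2}(\sin\theta_j)^{p-1-j}\,dr\,d\theta_1\cdots d\theta_{p-1}.$$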

5.3. Third Step: Expression for H(t,y) by Humbert and Beta Functions

Let further transformations be applied. Adding Equations (67) to (70), we can state that the function $H(t,y)$ takes a new form, and the multiple integral given by (66) can then be rewritten. Let the real variables be transformed as indicated; with the corresponding Jacobian determinant, the new domain of the multiple integral (72) follows, and the expression of the integral is obtained. Using Proposition 1, we subsequently find an expression in which $\Phi_2^{(n)}$ is the Humbert series of $n$ variables and $B(\cdot)$ is the multivariate beta function. Applying two further successive transformations, the new expression of the expectation given by (65) is written accordingly.

5.4. Final Step

The last integral is related to the confluent hypergeometric function of the second kind. Using the stated transformation together with Proposition 2, and taking into account the expression of $A$, a new expression is obtained. Applying the expression given by (18) of Definition 2 and relying on Lemma 1, the final result corresponds to the Lauricella D-hypergeometric function $F_D^{(p)}$, and the previous expression can be fully developed. □ In this section, we presented the exact expression of $\mathbb{E}_1[\log(1+x^{\top}\Sigma_2^{-1}x)]$. In addition, the multiple power series, which appears as a special case of the Lauricella function, admits many properties and numerous transformations (see Appendix A) that facilitate the numerical convergence of the series. In the next section, we establish the closed-form expression of the KLD based on this expectation.

6. KLD between Two Central MCDs

Plugging (39) and (93) into (5) yields the closed-form expression of the KLD between two central MCDs with pdfs $f_1$ and $f_2$. This result is presented in the following theorem. Lauricella [39] gave several transformation formulas (see Appendix A), of which relations (A5)–(A7) and (A9) are applied to the series in (94). Considering the resulting equations, it is easy to provide the different expressions of $\mathrm{KL}(X_1\|X_2)$ shown in Table 1. The derivative of the Lauricella D-hypergeometric series with respect to $a$ goes through the derivation of the corresponding series expansion.
Table 1

KLD and KL distance computed when $X_1$ and $X_2$ are two random vectors following central MCDs with pdfs $f_1$ and $f_2$; $\lambda_1,\ldots,\lambda_p$ denote the eigenvalues of $\Sigma_2^{-1}\Sigma_1$, and $\partial_a(\cdot)\,|_{a=0}$ denotes the derivative with respect to $a$ evaluated at $a=0$.

$\mathrm{KL}(X_1\|X_2)$:

$$\mathrm{KL}(X_1\|X_2)=-\frac12\log\prod_{i=1}^{p}\lambda_i+\frac{1+p}{2}\Big[\log\lambda_p-\partial_a F_D^{(p)}\big(a,\tfrac12,\ldots,\tfrac12,a+\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\ldots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\big)\big|_{a=0}\Big]\quad(106)$$

$$=-\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\,\lambda_p^{p/2}\prod_{i=1}^{p-1}\lambda_i^{-1/2}\,\partial_a F_D^{(p)}\big(\tfrac{1+p}{2},\tfrac12,\ldots,\tfrac12,a+\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{\lambda_p}{\lambda_1},\ldots,1-\tfrac{\lambda_p}{\lambda_{p-1}},1-\lambda_p\big)\big|_{a=0}\quad(107)$$

$$=-\frac12\log\prod_{i=1}^{p}\lambda_i+\frac{1+p}{2}\Big[\log\lambda_1-\partial_a F_D^{(p)}\big(a,\tfrac12,\ldots,\tfrac12,a+\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{\lambda_p}{\lambda_1},\ldots,1-\tfrac{\lambda_2}{\lambda_1},1-\tfrac{1}{\lambda_1}\big)\big|_{a=0}\Big]\quad(108)$$

$$=-\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\,\partial_a F_D^{(p)}\big(a,\tfrac12,\ldots,\tfrac12;a+\tfrac{1+p}{2};1-\lambda_1,\ldots,1-\lambda_p\big)\big|_{a=0}\quad(109)$$

$$=-\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\prod_{i=1}^{p}\lambda_i^{-1/2}\,\partial_a F_D^{(p)}\big(\tfrac{1+p}{2},\tfrac12,\ldots,\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{1}{\lambda_1},\ldots,1-\tfrac{1}{\lambda_p}\big)\big|_{a=0}\quad(110)$$

$\mathrm{KL}(X_2\|X_1)$:

$$\mathrm{KL}(X_2\|X_1)=\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\Big[\log\lambda_p+\partial_a F_D^{(p)}\big(a,\tfrac12,\ldots,\tfrac12,a+\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{\lambda_p}{\lambda_1},\ldots,1-\tfrac{\lambda_p}{\lambda_{p-1}},1-\lambda_p\big)\big|_{a=0}\Big]\quad(111)$$

$$=\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\,\lambda_p^{-p/2}\prod_{i=1}^{p-1}\lambda_i^{1/2}\,\partial_a F_D^{(p)}\big(\tfrac{1+p}{2},\tfrac12,\ldots,\tfrac12,a+\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\ldots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\big)\big|_{a=0}\quad(112)$$

$$=\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\Big[\log\lambda_1+\partial_a F_D^{(p)}\big(a,\tfrac12,\ldots,\tfrac12,a+\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\ldots,1-\tfrac{\lambda_1}{\lambda_2},1-\lambda_1\big)\big|_{a=0}\Big]\quad(113)$$

$$=\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\,\partial_a F_D^{(p)}\big(a,\tfrac12,\ldots,\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{1}{\lambda_1},\ldots,1-\tfrac{1}{\lambda_p}\big)\big|_{a=0}\quad(114)$$

$$=\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\prod_{i=1}^{p}\lambda_i^{1/2}\,\partial_a F_D^{(p)}\big(\tfrac{1+p}{2},\tfrac12,\ldots,\tfrac12;a+\tfrac{1+p}{2};1-\lambda_1,\ldots,1-\lambda_p\big)\big|_{a=0}\quad(115)$$

$d_{\mathrm{KL}}(X_1,X_2)=\mathrm{KL}(X_1\|X_2)+\mathrm{KL}(X_2\|X_1)$:

$$d_{\mathrm{KL}}(X_1,X_2)=\frac{1+p}{2}\Big[\log\lambda_p-\partial_a F_D^{(p)}\big(a,\tfrac12,\ldots,\tfrac12,a+\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\ldots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\big)\big|_{a=0}-\lambda_p^{-p/2}\prod_{i=1}^{p-1}\lambda_i^{1/2}\,\partial_a F_D^{(p)}\big(\tfrac{1+p}{2},\tfrac12,\ldots,\tfrac12,a+\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\ldots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\big)\big|_{a=0}\Big]\quad(116)$$

$$=-\frac{1+p}{2}\Big[\partial_a F_D^{(p)}\big(a,\tfrac12,\ldots,\tfrac12;a+\tfrac{1+p}{2};1-\lambda_1,\ldots,1-\lambda_p\big)\big|_{a=0}+\prod_{i=1}^{p}\lambda_i^{1/2}\,\partial_a F_D^{(p)}\big(\tfrac{1+p}{2},\tfrac12,\ldots,\tfrac12;a+\tfrac{1+p}{2};1-\lambda_1,\ldots,1-\lambda_p\big)\big|_{a=0}\Big]\quad(117)$$

$$=-\frac{1+p}{2}\Big[\prod_{i=1}^{p}\lambda_i^{-1/2}\,\partial_a F_D^{(p)}\big(\tfrac{1+p}{2},\tfrac12,\ldots,\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{1}{\lambda_1},\ldots,1-\tfrac{1}{\lambda_p}\big)\big|_{a=0}+\partial_a F_D^{(p)}\big(a,\tfrac12,\ldots,\tfrac12;a+\tfrac{1+p}{2};1-\tfrac{1}{\lambda_1},\ldots,1-\tfrac{1}{\lambda_p}\big)\big|_{a=0}\Big]\quad(118)$$
The derivative with respect to $a$ of the Lauricella D-hypergeometric series and of its transformations goes through the expressions demonstrated in Appendix B. To derive the closed-form expression of $\mathrm{KL}(X_2\|X_1)$, we have to evaluate $\mathbb{E}_2[\log(1+x^{\top}\Sigma_1^{-1}x)]$, which can easily be deduced from the expectation obtained in Section 5 by exchanging the roles of the two scale matrices. Proceeding in the same way with the Lauricella transformations, different expressions of $\mathrm{KL}(X_2\|X_1)$ are provided in Table 1. Finally, given the above results, it is straightforward to compute the symmetric KL similarity measure between $f_1$ and $f_2$. Technically, any combination of the $\mathrm{KL}(X_1\|X_2)$ and $\mathrm{KL}(X_2\|X_1)$ expressions could be used to compute $d_{\mathrm{KL}}$; however, we choose expressions sharing the same convergence region for the two divergences. Some expressions of $d_{\mathrm{KL}}$ are given in Table 1, and a numerical sketch based on Equation (109) is given below.
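The following sketch evaluates $\mathrm{KL}(X_1\|X_2)$ via our transcription of Equation (109), truncating the Lauricella series and replacing the analytic parameter derivative of Appendix B by a central finite difference. The eigenvalue convention ($\Sigma_2^{-1}\Sigma_1$), the per-index truncation, and the finite-difference step are assumptions of this sketch, not the paper's exact procedure; the result can be cross-checked against the Monte Carlo sketch of Section 2.

```python
import numpy as np
from itertools import product
from scipy.special import poch, factorial

def lauricella_fd(a, b, c, x, N=20):
    # Truncated Lauricella F_D^{(p)} series (each index runs 0..N):
    # sum over m of (a)_{|m|} prod_i (b_i)_{m_i} x_i^{m_i} / ((c)_{|m|} m_i!)
    total = 0.0
    for m in product(range(N + 1), repeat=len(x)):
        M = sum(m)
        term = poch(a, M) / poch(c, M)
        for bi, xi, mi in zip(b, x, m):
            term *= poch(bi, mi) * xi**mi / factorial(mi)
        total += term
    return total

def kl_mcd(Sigma1, Sigma2, N=20, h=1e-5):
    # KL(X1||X2) between central MCDs via the reconstructed Eq. (109).
    # Assumes lambda_i = eigenvalues of Sigma2^{-1} Sigma1, all in (0, 2)
    # so that the series with arguments 1 - lambda_i converges.
    p = Sigma1.shape[0]
    lam = np.linalg.eigvals(np.linalg.solve(Sigma2, Sigma1)).real
    b = [0.5] * p
    x = list(1.0 - lam)
    g = lambda a: lauricella_fd(a, b, a + (1 + p) / 2, x, N)
    dF = (g(h) - g(-h)) / (2 * h)  # central finite difference of d/da at a = 0
    return -0.5 * np.log(np.prod(lam)) - (1 + p) / 2 * dF

Sigma1 = np.array([[1.0, 0.6, 0.2], [0.6, 1.0, 0.3], [0.2, 0.3, 1.0]])  # Table 3
Sigma2 = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.4], [0.1, 0.4, 1.0]])
print(kl_mcd(Sigma1, Sigma2))  # compare with the Monte Carlo estimate
```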

7. Particular Cases: Univariate and Bivariate Cauchy Distribution

7.1. Case of p = 1

This case corresponds to the univariate Cauchy distribution, for which $F_D^{(1)}$ reduces to the Gauss hypergeometric function $_2F_1$. The expression of the derivative of the latter with respect to $a$ is computed in Appendix C.1. Accordingly, with $\lambda$ denoting the single eigenvalue (the ratio of the two scale parameters squared), the KLD reduces to

$$\mathrm{KL}(f_1\|f_2)=\log\frac{(1+\sqrt{\lambda})^2}{4\sqrt{\lambda}},$$

which equals $\log\big[(\sigma_1+\sigma_2)^2/(4\sigma_1\sigma_2)\big]$ in terms of the scale parameters $\sigma_1,\sigma_2$. Since this expression is invariant under $\lambda\mapsto 1/\lambda$, we conclude that the KLD between central Cauchy densities is always symmetric. Interestingly, this is consistent with the result presented in [31].
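As a quick numerical sanity check of this symmetric closed form (a sketch, not part of the paper's attached code; the test scales σ1 = 1 and σ2 = 2.5 are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def cauchy_pdf(x, s):
    # central (zero-location) Cauchy density with scale s
    return s / (np.pi * (s**2 + x**2))

def kl_numeric(s1, s2):
    # KL(f1||f2) by direct quadrature over the real line
    f = lambda x: cauchy_pdf(x, s1) * np.log(cauchy_pdf(x, s1) / cauchy_pdf(x, s2))
    return quad(f, -np.inf, np.inf)[0]

s1, s2 = 1.0, 2.5
closed = np.log((s1 + s2)**2 / (4 * s1 * s2))
print(kl_numeric(s1, s2), kl_numeric(s2, s1), closed)
# all three values agree, illustrating the symmetry of the univariate case
```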

7.2. Case of p = 2

This case corresponds to the bivariate Cauchy distribution. The KLD is then given in terms of $F_1$, the Appell hypergeometric function of two variables (see Appendix A), whose derivative can be further developed. In addition, when the eigenvalues take particular values, the expression of the KLD becomes very simple. We show some of these cases in the following. In the first case, the demonstration of the derivation is shown in Appendix C.2, and the KLD takes a simple form. In the second case, detailed in Appendix C.3, the KLD also simplifies, and the corresponding expression for the reversed divergence can be demonstrated using the same process. It is worth noticing that the two divergences differ, which leads us to conclude that the property of symmetry observed in the univariate case is no longer valid in the multivariate case. Nielsen et al. [32] reached the same conclusion.

8. Implementation and Comparison with Monte Carlo Technique

In this section, we show how we practically compute the numerical values of the KLD, especially since we have several equivalent expressions that differ in their regions of convergence. To reach this goal, the eigenvalues $\lambda_1,\ldots,\lambda_p$ are rearranged in descending order. This operation is justified by Equation (53), where it can be seen that a permutation of the eigenvalues does not affect the expectation result. Three cases can be identified from the expressions of the KLD.

8.1. Case

The expression of $\mathrm{KL}(X_1\|X_2)$ is given by Equation (109) and that of $\mathrm{KL}(X_2\|X_1)$ by (115).

8.2. Case

$\mathrm{KL}(X_1\|X_2)$ is given by Equation (110) and $\mathrm{KL}(X_2\|X_1)$ by (114).

8.3. Case and

This case guarantees the convergence of the corresponding series. The expression of $\mathrm{KL}(X_1\|X_2)$ is given by Equation (106) and that of $\mathrm{KL}(X_2\|X_1)$ by (112) or (113). To evaluate the quality of the numerical approximation of the derivative of the Lauricella series, we consider a case where an exact and simple expression is available: when all the eigenvalues are equal, $\lambda_1=\cdots=\lambda_p=\lambda$, the Lauricella series reduces to the Gauss hypergeometric function,

$$F_D^{(p)}\big(a,\tfrac12,\ldots,\tfrac12;a+\tfrac{1+p}{2};1-\lambda,\ldots,1-\lambda\big)={}_2F_1\big(a,\tfrac{p}{2};a+\tfrac{1+p}{2};1-\lambda\big).$$

This relation allows us to compare the computational accuracy of the approximation of the Lauricella series with respect to the Gauss function. In addition, to compute the numerical value, the indices of the series run from 0 to $N$ instead of infinity, where $N$ is chosen to ensure a good approximation of the Lauricella series. Table 2 shows the computation of the derivatives $A$ (exact, obtained from the Gauss function) and $B$ (truncated Lauricella series), along with the absolute error $|\epsilon|=|A-B|$; a sketch of this comparison is given after the table. The exact expression in the equal-eigenvalue case is given by Equation (129). We can draw the following conclusions. First, the error is reasonably low and decreases as $N$ increases. Second, the error increases for values of $1-\lambda$ close to 1, as expected, since this corresponds to the limit of the convergence region.
Table 2

Computation of the exact derivative $A$ and its truncated approximation $B$ when the eigenvalues are equal ($\lambda_1=\cdots=\lambda_p=\lambda$), for several values of $1-\lambda$ and truncation orders $N$.

1−λ     A        N=20: B / |ϵ|               N=30: B / |ϵ|               N=40: B / |ϵ|
0.10    0.0694   0.0694 / 9.1309 × 10^−16    0.0694 / 9.1309 × 10^−16    0.0694 / 9.1309 × 10^−16
0.30    0.2291   0.2291 / 3.7747 × 10^−14    0.2291 / 1.1102 × 10^−16    0.2291 / 1.1102 × 10^−16
0.50    0.4292   0.4292 / 2.6707 × 10^−9     0.4292 / 1.2458 × 10^−12    0.4292 / 6.6613 × 10^−16
0.70    0.7022   0.7022 / 5.9260 × 10^−6     0.7022 / 8.2678 × 10^−8     0.7022 / 1.3911 × 10^−9
0.90    1.1673   1.1634 / 0.0038             1.1665 / 7.2760 × 10^−4     1.1671 / 1.6081 × 10^−4
0.99    1.7043   1.5801 / 0.1241             1.6267 / 0.0776             1.6514 / 0.0529
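A compact sketch of this accuracy check follows, in Python rather than the paper's Matlab. Assumptions of the sketch: the truncation is applied per index; the derivative at a = 0 is taken by central finite difference rather than the analytic expressions of Appendix B; and p = 2 is used, since that choice appears to reproduce the A column of Table 2.

```python
import numpy as np
from itertools import product
from scipy.special import hyp2f1, poch, factorial

def lauricella_fd(a, b, c, x, N):
    # Truncated Lauricella F_D^{(p)} series, each index running 0..N
    total = 0.0
    for m in product(range(N + 1), repeat=len(x)):
        M = sum(m)
        term = poch(a, M) / poch(c, M)
        for bi, xi, mi in zip(b, x, m):
            term *= poch(bi, mi) * xi**mi / factorial(mi)
        total += term
    return total

def dA_da(p, x, h=1e-5):
    # A: derivative at a = 0 of the Gauss 2F1 reduction (equal-eigenvalue case)
    g = lambda a: hyp2f1(a, p / 2, a + (1 + p) / 2, x)
    return (g(h) - g(-h)) / (2 * h)  # accuracy ~ h^2

def dB_da(p, x, N, h=1e-5):
    # B: same derivative computed from the truncated Lauricella series
    b = [0.5] * p
    g = lambda a: lauricella_fd(a, b, a + (1 + p) / 2, [x] * p, N)
    return (g(h) - g(-h)) / (2 * h)

p = 2
for x in (0.1, 0.5, 0.9):  # x plays the role of 1 - lambda
    A, B = dA_da(p, x), dB_da(p, x, N=20)
    print(f"1-lambda = {x}:  A = {A:.4f}  B = {B:.4f}  |eps| = {abs(A - B):.2e}")
```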
In the following section, we compare the Monte Carlo sampling approximation of the KLD with the numerical value of its closed-form expression. The Monte Carlo method consists of drawing a large number of samples and using them to compute a sample average rather than the integral. Here, for each sample size, the experiment is repeated 2000 times. The elements of $\Sigma_1$ and $\Sigma_2$ are given in Table 3. Figure 1 depicts the absolute value of the bias, the mean square error (MSE) and the box plot of the difference between the approximated symmetric KL value and its theoretical one, plotted versus the sample size. As the sample size increases, the bias and the MSE decrease. Accordingly, the approximated value becomes very close to the theoretical KLD when the number of samples is very large. The computation times of the proposed approximation and of the classical Monte Carlo sampling method were recorded using Matlab on a 1.6 GHz processor with 16 GB of memory. For the proposed numerical approximation, the computation time is about 1.56 s for the chosen truncation order $N$; $N$ can be increased to further improve the accuracy, at the cost of a longer computation time. For the Monte Carlo sampling method, the mean time values at sample sizes of {65,536; 131,072; 262,144} are seconds, respectively.
Table 3

Parameters and used to compute KLD for central MCD.

Σ     Σ11   Σ22   Σ33   Σ12   Σ13   Σ23
Σ1    1     1     1     0.6   0.2   0.3
Σ2    1     1     1     0.3   0.1   0.4
Figure 1

Top row: bias (left) and MSE (right) of the difference between the approximated and theoretical symmetric KL for the MCD. Bottom row: box plot of the error; the mean error is the bias. Outliers are larger than $q_3+1.5\,\mathrm{IQR}$ or smaller than $q_1-1.5\,\mathrm{IQR}$, where $q_1$, $q_3$ and $\mathrm{IQR}$ are the 25th percentile, the 75th percentile and the interquartile range, respectively.

To further encourage the dissemination of these results, we provide code as a file attached to this paper. It is given in Matlab for the specific case of p = 3. It can easily be extended to any value of p, thanks to the general closed-form expression established in this paper.

9. Conclusions

Since MCDs have various applications in signal and image processing, the KLD between central MCDs addresses an important problem for future work in statistics, machine learning and other related fields of computer science. In this paper, we derived a closed-form expression for the KLD and the KL distance between two central MCDs. The similarity measure can be expressed as a function of the Lauricella D-hypergeometric series $F_D^{(p)}$. We also proposed a simple scheme to compute the Lauricella series easily and to bypass the convergence constraints of this series. Codes and examples for numerical calculations are presented and explained in detail. Finally, a comparison shows that the Monte Carlo sampling method gives approximations close to the theoretical KLD value. As a final note, it is also possible to extend these results on the KLD to the case of the multivariate t-distribution, since the MCD is a particular case of this multivariate distribution.
  2 in total

1.  A study of multiplicative watermark detection in the contourlet domain using alpha-stable distributions.

Authors:  Hamidreza Sadreazami; M Omair Ahmad; M N S Swamy
Journal:  IEEE Trans Image Process       Date:  2014-07-16       Impact factor: 10.856

2.  Statistical Divergences between Densities of Truncated Exponential Families with Nested Supports: Duo Bregman and Duo Jensen Divergences.

Authors:  Frank Nielsen
Journal:  Entropy (Basel)       Date:  2022-03-17       Impact factor: 2.524

