
Asymptotic Normality for Plug-In Estimators of Generalized Shannon's Entropy.

Jialin Zhang, Jingyi Shi.

Abstract

Shannon's entropy is one of the building blocks of information theory and an essential aspect of Machine Learning (ML) methods (e.g., Random Forests). Yet, it is only finitely defined for distributions with fast decaying tails on a countable alphabet. The unboundedness of Shannon's entropy over the general class of all distributions on an alphabet prevents its potential utility from being fully realized. To fill the void in the foundation of information theory, Zhang (2020) proposed generalized Shannon's entropy, which is finitely defined everywhere. The plug-in estimator, adopted in almost all entropy-based ML method packages, is one of the most popular approaches to estimating Shannon's entropy. The asymptotic distribution of the plug-in estimator of Shannon's entropy has been well studied in the existing literature. This paper studies the asymptotic properties of the plug-in estimator of generalized Shannon's entropy on countable alphabets. The developed asymptotic properties require no assumptions on the original distribution. The proposed asymptotic properties allow for interval estimation and statistical tests with generalized Shannon's entropy.


Keywords:  Shannon’s entropy; asymptotic normality; generalized Shannon’s entropy; plug-in estimation

Year:  2022        PMID: 35626567      PMCID: PMC9141039          DOI: 10.3390/e24050683

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.738


1. Introduction

1.1. Introduction and Related Work

Shannon’s entropy, introduced by [1], is one of the building blocks of Information Theory and a key aspect of Machine Learning (ML) methods (e.g., Random Forests). It is one of the most popular quantities on countable alphabets (a countable alphabet is a space that is either finite or countably infinite; the elements of an alphabet can be either ordinal (e.g., numbers) or non-ordinal (e.g., letters)), particularly on non-ordinal spaces with categorical data. For example, all feature selection methods on non-ordinal spaces reviewed in [2] boil down to a function of Shannon’s entropy. In addition, Shannon’s entropy is one of the most important foundations of all tree-based ML algorithms, sometimes substitutable with the Gini impurity index [3,4,5]. As one of the essential information-theoretic quantities, Shannon’s entropy and its estimation have been widely studied over the past decades [6,7,8,9,10,11,12]. In particular, [9] proved that an unbiased estimator of Shannon’s entropy does not exist. The current state-of-the-art point estimator of Shannon’s entropy was provided in [10], with the fastest known bias decay rate (exponential). Nevertheless, Shannon’s entropy is only finitely defined for distributions with fast decaying tails [13], and in practice it is never known whether the true distribution yields a finite Shannon’s entropy. Furthermore, all existing results on Shannon’s entropy require it to be finitely defined, which restricts the use of entropy-based methods. This is, in fact, a void in the foundation of all Shannon’s entropy-related results.

Example 1 (Unbounded Shannon’s Entropy). Let a distribution $p = \{p_k : k \geq 2\}$ be given by $p_k = c_0/(k \ln^2 k)$, where $c_0 = \big(\sum_{k \geq 2} 1/(k \ln^2 k)\big)^{-1}$. Then $H(p) = -\sum_{k \geq 2} p_k \ln p_k = \infty$.

The effort to generalize Shannon’s entropy has been long and extensive in the existing literature. As summarized in [14], the main perspective of the generalizations in the existing literature is the axiomatic characterization of Shannon’s entropy [15,16]. For example, Refs. [17,18] are efforts with respect to the functional form $H = -\sum_k p_k \ln p_k$, which, under certain desirable axioms, is uniquely determined up to a multiplicative constant; if the strong additivity axiom is relaxed to one of the weaker versions, say $\alpha$-additivity or composability, then $H$ may take other forms, which give rise to Rényi’s entropy [19] and the Tsallis entropy [20]. However, none of these generalization efforts seem to lead to an information measure on a joint alphabet that would possess all the desirable properties of mutual information, which is supported by an argument via the Kullback–Leibler divergence [21]. Interested readers may refer to [14] for details. To further address the deficiency of Shannon’s entropy, [14] proposed generalized Shannon’s entropy (GSE) and showed that GSE enjoys all properties of a finite Shannon’s entropy. In addition, GSE is finitely defined for all distributions. Given the advantages of GSE and the deficiency of Shannon’s entropy, the use of Shannon’s entropy should eventually transition to GSE.
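As a quick numerical illustration of Example 1 (this sketch is ours, not part of the paper; the truncation levels and the helper name `entropy_partial_sum` are arbitrary choices), the entropy of renormalized truncations of $p_k \propto 1/(k \ln^2 k)$ keeps growing with the truncation point, roughly like $\ln \ln K$, consistent with $H(p) = \infty$:

```python
# Illustrative only: partial entropy sums for p_k proportional to
# 1/(k * ln(k)^2), k >= 2, grow without bound (roughly like ln ln K).
import math

def entropy_partial_sum(K: int) -> float:
    # Normalize over the truncation {2, ..., K}; the series
    # sum_{k>=2} 1/(k ln^2 k) converges, so c0 stabilizes as K grows.
    weights = [1.0 / (k * math.log(k) ** 2) for k in range(2, K + 1)]
    c0 = sum(weights)
    return -sum((w / c0) * math.log(w / c0) for w in weights)

for K in (10**3, 10**4, 10**5, 10**6):
    print(K, round(entropy_partial_sum(K), 4))  # increases steadily with K
```

The increments are small because of the $\ln \ln K$ growth rate, but they never level off.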

1.2. Summary and Contribution

To aid the transition, the estimation of GSE needs to be studied. In practice, the plug-in estimator is one of the most popular estimation approaches. For plug-in estimation of GSE, asymptotic properties are required for statistical tests and confidence intervals. This article studies the asymptotic properties of plug-in estimators of GSE. In summary, Theorem 1 and Corollary 1 provide asymptotic normality for the plug-in estimators of GSE of all orders (an explanation of the order is given in Definition 2) on countably infinite alphabets. Corollary 2 provides asymptotic normality for the plug-in estimators of GSE of all orders on finite alphabets, except when the underlying distribution is uniform (under a uniform distribution, the estimation of GSE reduces to an estimation of population size; interested readers may consult [22]). The presented asymptotic normality immediately allows interval estimation and hypothesis testing with plug-in estimators of GSE. The numerical results in Section 3 show that the developed asymptotics converge fast, especially when the order is 2. The presented properties and performance of GSE plug-in estimators suggest that GSE holds promising potential. One may be concerned that the construction of the CDOTC (defined in Definition 1) would add estimation challenges to the already-difficult estimation of Shannon’s entropy; yet the convergence of the GSE plug-in estimators is fast. To further unlock the potential of GSE, additional estimation methods for GSE and asymptotic properties of functions of GSE (e.g., Generalized Mutual Information, also introduced in [14]) should be investigated. The results and proof techniques of this article provide a solid direction toward that end. The rest of this paper is organized as follows. Section 2 formally states the problem and gives our main results. In Section 3, we provide a small-scale simulation study. In Section 4, we discuss the potential of GSE. Proofs are postponed to Section 5.

2. Main Results

Let $Z$ be a random element on a countable alphabet $\mathscr{X} = \{\ell_k : k \geq 1\}$ with an associated distribution $p = \{p_k : k \geq 1\}$. Let the cardinality of the support of $p$ be denoted $K = \sum_{k \geq 1} \mathbb{1}[p_k > 0]$, where $\mathbb{1}[\cdot]$ is the indicator function; $K$ is possibly finite or infinite. Let $\mathscr{P}$ denote the family of all distributions on $\mathscr{X}$. Shannon’s entropy, $H$, is defined as

$H = H(p) = -\sum_{k \geq 1} p_k \ln p_k.$

To state our main result, we need Definitions 1 and 2, given by [14], and Definition 3.

Definition 1 (Conditional Distribution of Total Collision (CDOTC)). Given $p \in \mathscr{P}$ and an integer $m \geq 2$, the CDOTC of order $m$ is $p_m = \{p_{m,k} : k \geq 1\}$, where $p_{m,k} = p_k^m / \sum_{j \geq 1} p_j^m$.

Definition 2 (Generalized Shannon’s Entropy (GSE)). Given $p \in \mathscr{P}$ and an integer $m \geq 2$, the GSE of order $m$ is $H_m = H(p_m) = -\sum_{k \geq 1} p_{m,k} \ln p_{m,k}$, where $p_m$ is the CDOTC of Definition 1.

It is clear that $p_m$ is a probability distribution induced from $p$. Furthermore, for each $m$, $p$ and $p_m$ uniquely determine each other (Lemma 1 in [14]). To help understand Definitions 1 and 2, Examples 2 and 3 are provided as follows.

Example 2 (The 2nd order CDOTC). Given $p = \{p_k : k \geq 1\}$, $p_2 = \{p_{2,k} : k \geq 1\}$, where $p_{2,k} = p_k^2 / \sum_{j \geq 1} p_j^2$ for $k \geq 1$.

Example 3 (The 2nd order GSE). Given $p$, $H_2 = -\sum_{k \geq 1} p_{2,k} \ln p_{2,k}$, where $p_{2,k}$ is as in Example 2.

The definition of the plug-in estimator of GSE is stated in Definition 3.

Definition 3 (Plug-in estimator of GSE). Let $X_1, \dots, X_n$ be an independent and identically distributed sample from $p$, and let $\hat p = \{\hat p_k\}$ with $\hat p_k = n^{-1}\sum_{i=1}^{n} \mathbb{1}[X_i = \ell_k]$ be the sample relative frequencies. The plug-in estimator of $H_m$ is $\hat H_m = H_m(\hat p) = -\sum_{k \geq 1} \hat p_{m,k} \ln \hat p_{m,k}$, where $\hat p_{m,k} = \hat p_k^m / \sum_{j \geq 1} \hat p_j^m$.

Our main results are stated in Theorem 1 and Corollaries 1 and 2.

Theorem 1. Let $p \in \mathscr{P}$ be a distribution on a countably infinite alphabet and let $m \geq 2$. Then $\sqrt{n}(\hat H_m - H_m) \xrightarrow{d} N(0, \sigma_m^2)$, where

$\sigma_m^2 = \sum_{k \geq 1} p_k g_{m,k}^2 - \Big(\sum_{k \geq 1} p_k g_{m,k}\Big)^2$ and $g_{m,k} = \frac{m p_k^{m-1}}{\sum_{j \geq 1} p_j^m}\big(\ln p_{m,k} + H_m\big).$

Corollary 1. Let $\hat\sigma_m^2$ be obtained from $\sigma_m^2$ by replacing $p_k$, $p_{m,k}$, and $H_m$ with $\hat p_k$, $\hat p_{m,k}$, and $\hat H_m$, respectively; that is,

$\hat\sigma_m^2 = \sum_{k \geq 1} \hat p_k \hat g_{m,k}^2 - \Big(\sum_{k \geq 1} \hat p_k \hat g_{m,k}\Big)^2$ with $\hat g_{m,k} = \frac{m \hat p_k^{m-1}}{\sum_{j \geq 1} \hat p_j^m}\big(\ln \hat p_{m,k} + \hat H_m\big).$ (1)

Then $\sqrt{n}(\hat H_m - H_m)/\hat\sigma_m \xrightarrow{d} N(0, 1)$.

Corollary 2. Let $p \in \mathscr{P}$ be a non-uniform distribution on a finite alphabet and let $m \geq 2$. Then $\sqrt{n}(\hat H_m - H_m)/\hat\sigma_m \xrightarrow{d} N(0, 1)$, where $\hat\sigma_m$ is given by (1).

Corollary 2 is a special case of Theorem 1. All proofs are provided in Section 5.
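For concreteness, the following is a minimal sketch (ours, not the authors’ code; the function name `gse_plugin` is an arbitrary choice) of Definition 3: the plug-in CDOTC is built from sample relative frequencies, and its Shannon’s entropy is returned.

```python
# A minimal sketch of the GSE plug-in estimator of Definition 3:
# hat-H_m is the entropy of q_k = phat_k^m / sum_j phat_j^m.
from collections import Counter
import math

def gse_plugin(sample, m=2):
    """Plug-in estimator of the order-m generalized Shannon's entropy."""
    n = len(sample)
    phat = [c / n for c in Counter(sample).values()]  # relative frequencies
    s = sum(p ** m for p in phat)                     # sum_j phat_j^m
    q = [p ** m / s for p in phat]                    # plug-in CDOTC
    return -sum(qk * math.log(qk) for qk in q)

# Example: order-2 GSE from a categorical sample on a non-ordinal alphabet.
print(gse_plugin(list("aaabbbccddde"), m=2))
```

Unobserved letters have zero relative frequency and therefore contribute nothing, exactly as in the plug-in convention for Shannon’s entropy.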

3. Simulations

One of the main applications of our results is the ability to construct confidence intervals, and hence to test hypotheses. Specifically, Corollary 1 implies that an asymptotic $100(1-\alpha)\%$ confidence interval for $H_m$ is given by

$\hat H_m \pm z_{\alpha/2}\, \hat\sigma_m / \sqrt{n},$ (2)

where $\hat\sigma_m$ is given by (1) and $z_{\alpha/2}$ is the number such that $\Phi(z_{\alpha/2}) = 1 - \alpha/2$, with $\Phi$ the cumulative distribution function of the standard normal distribution. In this section, we give a small-scale simulation study to check the finite-sample performance of this confidence interval. We consider the Zeta distribution, $p_k = k^{-s}/\zeta(s)$ for $k \geq 1$, with $s = 1.5$ and $2.5$, where $\zeta(s)$ is the Riemann zeta function given by $\zeta(s) = \sum_{k \geq 1} k^{-s}$. The simulations were performed as follows. For the given distribution, we obtained a random sample of size $n$ and used it to evaluate a confidence interval for a given order using (2). We then checked whether or not the true value of $H_m$ was in the interval. This was repeated 5000 times, and the proportion of times the true value was in the interval was calculated. When the asymptotics work well, this proportion should be close to $1 - \alpha$. We repeated this for sample sizes ranging from 10 to 1000 in increments of 10. The results for $s = 1.5$ with orders $m = 2$ and $m = 3$ are given in Figure 1 and Figure 2; the results for $s = 2.5$ with orders $m = 2$ and $m = 3$ are given in Figure 3 and Figure 4.
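A compact re-creation of this coverage experiment might look as follows. This is a sketch under our assumptions, not the authors’ code: it uses the delta-method form of $\hat\sigma_m$ reconstructed in (1), a truncated sum to approximate the true $H_m$ of the Zeta distribution, and arbitrary choices for the function names, sample size, repetition count, and truncation level.

```python
# Sketch of the coverage experiment (assumptions noted above).
import math
import numpy as np
from scipy.stats import zipf, norm

def gse_and_sigma(counts, n, m):
    phat = counts / n
    s = np.sum(phat ** m)
    q = phat ** m / s                                  # plug-in CDOTC
    h = -np.sum(q * np.log(q))                         # plug-in GSE
    g = (m * phat ** (m - 1) / s) * (np.log(q) + h)    # delta-method gradient
    sigma2 = np.sum(phat * g ** 2) - np.sum(phat * g) ** 2
    return h, math.sqrt(max(sigma2, 0.0))

def true_gse_zeta(s, m, kmax=10**6):
    k = np.arange(1, kmax + 1)
    p = k ** (-s) / np.sum(k ** (-s))                  # truncated Zeta(s)
    q = p ** m / np.sum(p ** m)
    return -np.sum(q * np.log(q))

def coverage(s=2.5, m=2, n=500, reps=2000, level=0.95, seed=7):
    rng = np.random.default_rng(seed)
    z = norm.ppf(1 - (1 - level) / 2)
    h_true = true_gse_zeta(s, m)
    hits = 0
    for _ in range(reps):
        x = zipf.rvs(s, size=n, random_state=rng)
        _, counts = np.unique(x, return_counts=True)
        h, sig = gse_and_sigma(counts.astype(float), n, m)
        hits += abs(h - h_true) <= z * sig / math.sqrt(n)
    return hits / reps  # should be near `level` when the asymptotics hold

print(coverage())
```

Plotting this proportion against $n$ reproduces the kind of curves shown in Figures 1 through 4.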
Figure 1

Effectiveness of the 95% confidence intervals as a function of sample size. Simulations from the Zeta distribution with $s = 1.5$ and GSE with order $m = 2$. The horizontal dashed line is at 0.95.

Figure 2

Effectiveness of the 95% confidence intervals as a function of sample size. Simulations from the Zeta distribution with $s = 1.5$ and GSE with order $m = 3$. The horizontal dashed line is at 0.95.

Figure 3

Effectiveness of the 95% confidence intervals as a function of sample size. Simulations from the Zeta distribution with $s = 2.5$ and GSE with order $m = 2$. The horizontal dashed line is at 0.95.

Figure 4

Effectiveness of the 95% confidence intervals as a function of sample size. Simulations from the Zeta distribution with $s = 2.5$ and GSE with order $m = 3$. The horizontal dashed line is at 0.95.

The results suggest that the convergence is fast, particularly when the order is $m = 2$. We conjecture that this is because, when $m$ is larger, the probabilities in the corresponding CDOTC are smaller and hence require a larger sample size for convergence. For the same reason, the results with $s = 2.5$ converge faster than those with $s = 1.5$, because $s = 1.5$ yields a thicker-tailed distribution, which requires a larger sample size for convergence. Although GSE with order $m \geq 3$ may shed some light on specific information, GSE with order $m = 2$ is enough: it exists finitely and enjoys the asymptotic properties for any valid underlying probability distribution $p$.
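The heuristic above can be checked directly. The following sketch (ours, not from the paper; the truncation at $10^5$ and the choice $s = 1.5$ mirror the simulation setup) shows that, for a fixed letter beyond the head of the distribution, the CDOTC probabilities $p_{m,k}$ shrink rapidly as $m$ grows:

```python
# As m grows, CDOTC mass concentrates on the head and individual
# tail probabilities p_{m,k} shrink, so larger samples are needed
# before those letters are observed often enough.
import numpy as np

k = np.arange(1, 10**5 + 1)
p = k ** (-1.5) / np.sum(k ** (-1.5))  # truncated Zeta(1.5)
for m in (2, 3, 4):
    q = p ** m / np.sum(p ** m)
    print(m, q[9], q[99])              # p_{m,10} and p_{m,100} shrink with m
```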

4. Discussion

The proposed asymptotic properties in Corollaries 1 and 2 make interval estimation and statistical tests possible. Based on the simulation results, the convergence is quite fast, particularly under order $m = 2$. Note that GSE with order $m = 2$ already enjoys all asymptotic properties without any assumption on the original distribution $p$. We recommend using GSE with order $m = 2$ in place of Shannon’s entropy in all entropy-based methods when applicable. By replacing Shannon’s entropy with GSE, one still enjoys all the benefits of Shannon’s entropy, with a fast convergence speed. Moreover, using GSE is risk-free compared to Shannon’s entropy, because Shannon’s entropy (1) does not exist for some thick-tailed distributions and (2) requires a thinner-tailed distribution for some asymptotic properties [11]. Additional research is required to aid the transition. The proposed asymptotic results allow interval estimation and statistical tests for modified entropy-based methods in which Shannon’s entropy is replaced with GSE. Future research should aim to provide additional estimation methods for GSE and statistical properties of functions of GSE, such as Generalized Mutual Information (GMI). The asymptotic properties proposed in this article directly provide asymptotic normality for the plug-in estimator of GMI when the true underlying GMI is not 0. The asymptotic behavior of the plug-in estimator of GMI when the true underlying GMI is 0 remains an open question, which we will address in future work.

5. Proofs

The proofs require several lemmas. The first lemma is stated below.

Lemma 1 ([11,23]). Assume that $\sum_{k \geq 1} p_k (\ln p_k)^2 < \infty$. In this case, $\sqrt{n}(\hat H - H) \xrightarrow{d} N(0, \sigma^2)$, where $\sigma^2 = \sum_{k \geq 1} p_k (\ln p_k)^2 - \big(\sum_{k \geq 1} p_k \ln p_k\big)^2$. Furthermore, if $\sigma^2 > 0$, then $\sqrt{n}(\hat H - H)/\hat\sigma \xrightarrow{d} N(0, 1)$, where $\hat\sigma^2 = \sum_{k \geq 1} \hat p_k (\ln \hat p_k)^2 - \big(\sum_{k \geq 1} \hat p_k \ln \hat p_k\big)^2$.

Different proofs of Lemma 1 are provided in [11,23]. The spirit of the proof of Theorem 1 is to regard the CDOTC as an original distribution and utilize the result of Lemma 1. Toward that end, several lemmas are needed and stated below.

Lemma 2 (Equivalent conditions in Lemma 1). For any valid distribution $p$ and any $m \geq 2$, the CDOTC $p_m$ satisfies the conditions of Lemma 1.

Lemma 3 ($\sigma_m$ in Theorem 1). In Theorem 1, $\sqrt{n}(\hat H_m - H_m) \xrightarrow{d} N(0, \sigma_m^2)$.

Lemma 4 ($\hat\sigma_m$ in Corollary 1). In Corollary 1, $\hat\sigma_m \xrightarrow{p} \sigma_m$.

Proof of Lemma 2. Note that, for any $p$ to be a valid distribution, the tail of $p$ must be thinner than that of $\{1/k\}$, because $\sum_{k} 1/k$ diverges. Hence, the tail of $p_m$ is thinner than that of $\{1/k^m\}$ for any $m \geq 2$, by definition. It is shown in Example 3 of [11] that such a tail satisfies the mentioned conditions. □

Proof of Lemma 3. Because of Lemma 2, $\sigma_m^2$ can be obtained under a finite $K$ first, and then letting $K \to \infty$. For a finite $K$, it can be verified that, for $k = 1, \dots, K$, $\partial H_m / \partial p_k = -g_{m,k}$, with $g_{m,k}$ as in Theorem 1. Let $v = (p_1, \dots, p_{K-1})^\top$ and $\hat v = (\hat p_1, \dots, \hat p_{K-1})^\top$. We have $\sqrt{n}(\hat v - v) \xrightarrow{d} \mathrm{MVN}(0, \Sigma)$, where $\Sigma$ is the covariance matrix given by $\Sigma = \mathrm{diag}(v) - v v^\top$. According to the first-order Delta method, $\sqrt{n}(\hat H_m - H_m) \xrightarrow{d} N(0, \sigma_m^2)$. □

Proof of Lemma 4. Given Lemma 2, Lemma 4 follows because $\hat p_k \xrightarrow{p} p_k$ for every $k$, together with the continuous mapping theorem. □

With Lemmas 1–4 and Slutsky’s theorem, Theorem 1 and Corollary 1 are proved. □ Corollary 2 is a direct result of Theorem 1, except under the uniform distribution, when $p_k = 1/K$ for all $k$; in that case $\ln p_{m,k} + H_m = 0$ for every $k$, and hence $\sigma_m^2 = 0$. □
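As a numerical sanity check of the delta-method step in Lemma 3 (ours, not part of the paper; the finite test distribution, sample size, and repetition count are arbitrary choices), the Monte Carlo standard deviation of $\sqrt{n}(\hat H_m - H_m)$ under a multinomial model should be close to $\sigma_m$:

```python
# Compare the Monte Carlo sd of sqrt(n) * (hat-H_m - H_m) against the
# delta-method sigma_m on a small non-uniform finite alphabet.
import math
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.25, 0.125, 0.0625, 0.0625])  # a non-uniform finite p
m, n, reps = 2, 2000, 4000

s = np.sum(p ** m)
q = p ** m / s
h_m = -np.sum(q * np.log(q))                      # true GSE of order m
g = (m * p ** (m - 1) / s) * (np.log(q) + h_m)    # delta-method gradient
sigma_m = math.sqrt(np.sum(p * g ** 2) - np.sum(p * g) ** 2)

def gse_hat(counts):
    phat = counts / n
    phat = phat[phat > 0]
    qhat = phat ** m / np.sum(phat ** m)
    return -np.sum(qhat * np.log(qhat))

draws = rng.multinomial(n, p, size=reps)
stats = [math.sqrt(n) * (gse_hat(c) - h_m) for c in draws]
print(np.std(stats), sigma_m)  # the two values should be close
```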
Referenced articles indexed in this database: 3 in total

1.  Entropy estimation in Turing's perspective.

Authors:  Zhiyi Zhang
Journal:  Neural Comput       Date:  2012-02-01       Impact factor: 2.026

2.  Tree-Based Analysis.

Authors:  Mousumi Banerjee; Evan Reynolds; Hedvig B Andersson; Brahmajee K Nallamothu
Journal:  Circ Cardiovasc Qual Outcomes       Date:  2019-05

3.  A Brief Review of Generalized Entropies. (Review)

Authors:  José M Amigó; Sámuel G Balogh; Sergio Hernández
Journal:  Entropy (Basel)       Date:  2018-10-23       Impact factor: 2.524

