Abstract
The maximum entropy principle introduced by Jaynes proposes that a data distribution should maximize entropy subject to constraints imposed by the available knowledge. Jaynes provided a solution for the case in which constraints are imposed on the expected values of a set of scalar functions of the data; these expected values are typically moments of the distribution. This paper describes how the method of maximum entropy PDF projection can be used to generalize the maximum entropy principle to constraints on the joint distribution of this set of functions.
Keywords: PDF projection; maximum entropy principle; statistical inference
Year: 2018 PMID: 33265739 PMCID: PMC7513173 DOI: 10.3390/e20090650
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524
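Jaynes' solution under expected-value constraints takes the exponential-family form p(x) ∝ exp(Σᵢ λᵢ fᵢ(x)), with the multipliers λᵢ chosen so the constraints hold. As a minimal sketch of this classical result (not the paper's PDF-projection method), the toy setup below is entirely assumed: a discrete support {0, …, 5}, a single constraint function f(x) = x, and a target mean of 1.5. The multiplier is found by a 1-D root search.

```python
import numpy as np
from scipy.optimize import brentq

# Toy maximum-entropy problem (assumed setup, not from the paper):
# over support {0,...,5}, find the distribution maximizing entropy
# subject to E[X] = mu. The solution is p(x) = exp(lam*x) / Z, so it
# suffices to solve for the Lagrange multiplier lam.
x = np.arange(6)

def mean_given_lam(lam):
    """Mean of the exponential-family distribution with multiplier lam."""
    w = np.exp(lam * x)
    p = w / w.sum()
    return p @ x

mu = 1.5  # assumed target mean
# The mean is monotone in lam, so a bracketing root-finder suffices.
lam = brentq(lambda l: mean_given_lam(l) - mu, -10.0, 10.0)

w = np.exp(lam * x)
p = w / w.sum()  # maximum-entropy distribution satisfying E[X] = mu
```

With more constraint functions, the same idea generalizes to a multidimensional root-finding (or convex dual optimization) problem over the vector of multipliers; the paper's contribution is to go beyond such expected-value constraints to constraints on the joint distribution of the functions themselves.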
Figure 1. Comparison of the entropy Q and the average log-likelihood L for three distributions. The vertical lines mark the locations of the training samples.
Figure 2. (Left) Illustration of the projected PDF on a slice; (Right) samples drawn using the sampling procedure (see text).
Figure 3. (Left) Illustration of the projected PDF; (Right) samples drawn using the sampling procedure (see text).
Figure 4. Illustration of the projected PDF.