Literature DB >> 33495693

Estimation of Tail Probabilities by Repeated Augmented Reality.

Benjamin Kedem, Saumyadipta Pyne.

Abstract

Synthetic data, when properly used, can enhance patterns in real data and thus provide insights into different problems. Here, the estimation of tail probabilities of rare events from a moderately large number of observations is considered. The problem is approached by a large number of augmentations or fusions of the real data with computer-generated synthetic samples. The tail probability of interest is approximated by subsequences created by a novel iterative process. The estimates are found to be quite precise. © Grace Scientific Publishing 2021.


Keywords:  B-curve; Density ratio model; Iterative process; Repeated out of sample fusion; Residential radon; Upper bounds

Year:  2021        PMID: 33495693      PMCID: PMC7816841          DOI: 10.1007/s42519-020-00152-1

Source DB:  PubMed          Journal:  J Stat Theory Pract        ISSN: 1559-8608


Introduction

The citation accompanying his U.S. National Medal of Science in 2002 honored Calyampudi Radhakrishna Rao “as a prophet of new age for his pioneering contributions to the foundations of statistical theory and multivariate statistical methodology and their applications.” When Professor Rao organized the ‘International Conference on the Future of Statistics, Practice and Education’ in Hyderabad (Indian School of Business, 12.29.04–01.01.05), one of us participated in it. Befitting this connection, we decided to contribute what we believe is a “futuristic” application of augmented reality in his honor.

In its February 4, 2017 edition, The Economist noted the promise of augmented reality, claiming that “Replacing the real world with a virtual one is a neat trick. Combining the two could be more useful.” We concur. Combining real data with synthetic data, i.e., augmented reality (AR), opens up new perspectives regarding statistical inference. Indeed, augmentation of real data with virtual information is an idea that has already found applications in fields such as robotics, medicine, and education. In this article, we advance the notion of repeated augmented reality in the estimation of very small tail probabilities, even from moderately sized samples. Our approach, much like the bootstrap, is computationally intensive and would not have been viable without the computing power of modern systems. However, rather than looking repeatedly inside the sample, we look repeatedly outside the sample. Fusing a given sample repeatedly with computer-generated data is referred to as repeated out of sample fusion (ROSF) in Pan et al. [1, 2]. Related ideas concerning a single fusion are studied in Fithian and Wager [3], Fokianos and Qin [4], Katzoff et al. [5], and Zhou [6].

In 1984, the so-called Watras incident drew intense media and congressional attention in the USA to the problem of residential exposure to radon, a known carcinogenic gas. Radon in the home of Stanley Watras, a construction engineer, located in Boyertown, Berks County, on the Reading Prong geological formation in Pennsylvania, was recorded at almost 700 times the safe level, a lung cancer risk equivalent to smoking 250 packs of cigarettes per day! As noted by George [7], this news caused a major alarm and led the US EPA to establish a radon measurement program. In this regard, the present article reviews the underpinnings of ROSF in the estimation of small tail exceedance probabilities. We illustrate its application using residential radon level data from Beaver County, Pennsylvania.

The Problem

Consider a random variable X with unknown pdf g and a corresponding moderately large random sample X_0 = (x_1, ..., x_{n_0}), where all the observations are smaller than a high threshold T; that is, max(X_0) < T. We wish to estimate the tail probability p = P(X > T) without knowing g. However, as is, the sample may not contain a sufficient amount of information to tackle the problem. To gain more information, the problem is approached by combining or fusing the sample repeatedly with externally generated computer data; that is, ROSF.
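To make the setting concrete, it can be sketched in a few lines of Python; the lognormal population, the seed, and all numeric values below are illustrative assumptions, not the paper's data. The point is that a moderate sample whose observations all fall below T yields a naive empirical tail estimate of exactly zero.

```python
import math
import random

random.seed(7)

# Illustrative population: the true tail probability p = P(X > T) is small.
T = 200.0
population = [math.exp(random.gauss(3.5, 0.8)) for _ in range(100_000)]
p_true = sum(x > T for x in population) / len(population)

# A moderately large sample in which every observation falls below T:
# the naive empirical estimate of p is 0 and carries no tail information.
sample = random.sample([x for x in population if x < T], 100)
naive_p = sum(x > T for x in sample) / len(sample)

print(f"true p = {p_true:.4f}, naive estimate from the truncated sample = {naive_p}")
```

This is exactly the situation ROSF is designed for: the sample alone cannot distinguish p = 10^(-3) from p = 10^(-6).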

The Approach

Let X_i denote the ith computer-generated sample of size n_i, i = 1, ..., N. Then, the fused samples are the augmentations

(X_0, X_i), i = 1, ..., N,

where X_0 is a real reference sample and the X_i are different independent computer-generated samples supported on (0, U), where U > T. The number of fusions N can be as large as we wish. From each pair (X_0, X_i), under a mild condition, we get in a certain way an upper bound B_i for p. Let B_(1) ≤ B_(2) ≤ ... ≤ B_(N) be the sequence of order statistics. Then, the sorted pairs

(j, B_(j)), j = 1, ..., N,

produce a monotone curve, referred to as the B-curve, which for large N contains a point, as in Fig. 1, whose ordinate essentially coincides with p with probability approaching one as N increases. It follows that the sequence B_(1), ..., B_(N) contains subsequences which approach p. The subsequences can be obtained by an iterative process to be described in Sect. 3.
Fig. 1

B-curves, 10,000 B's, from residential radon samples X_0. max(X_0) values: top left 77.9, top right 107, bottom left 143, bottom right 193.7. The point moves to the left as max(X_0) increases relative to T. The fusion samples are uniform with support covering T
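The construction of a B-curve can be sketched as follows. The bounds B_i here are simulated stand-ins (the paper derives each B_i from a density-ratio-model confidence interval), so the lognormal distribution, the seed, and the nominal p are illustrative assumptions only.

```python
import math
import random

random.seed(1)
N = 10_000        # number of fusions; each fusion yields one upper bound B_i
p = 0.000269      # target tail probability (known here only for the demo)

# Stand-in for the N upper bounds: all we assume is 0 < F_B(p) < 1,
# i.e., for large N the bounds scatter on both sides of p.
B = [random.lognormvariate(math.log(p) + 0.3, 0.5) for _ in range(N)]

# Sorting gives the monotone B-curve: the pairs (j, B_(j)), j = 1..N.
B.sort()
curve = list(enumerate(B, start=1))

# For large N the curve contains a point whose ordinate is essentially p.
j_star, b_star = min(curve, key=lambda jb: abs(jb[1] - p))
print(f"point on the B-curve nearest p: j = {j_star}, B_(j) = {b_star:.6g}")
```

With N = 10,000 bounds and a positive density of bounds near p, the nearest order statistic sits within a tiny relative distance of p, which is the "point" on the curve that the iterative process later tries to locate without knowing p.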

Illustrations of an Iterative Process

Deferring details to later sections, it is helpful to shed light early on and introduce our iterative method, which produces estimates of tail probabilities, using reference samples from two distributions, the second of which is LN(1, 1). In the first illustration, X_0 is a random sample from the first distribution, fused repeatedly with independent computer-generated samples. Starting from a relatively large index j, our iterative process (9) below produces a converging subsequence which approaches p from above, a “Down” subsequence; starting from a smaller index, the process produces an “Up” subsequence which converges in a single iteration. In the second illustration, X_0 is a random sample from LN(1, 1), and again the process produces a “Down” subsequence which approaches p from above in a single iteration and, starting lower, an “Up” subsequence which converges in a single iteration. Notice that the “Down-Up” convergence in both illustrations is remarkably close to the true p. We have had quite a few similar results where the tail behavior differed markedly. The computation here requires an important parameter called the “p-increment”, which in the present examples was 0.0001. We shall deal with this numerical issue soon.

A Useful Feature

A useful feature of the present article is the realization that we can come up with educated guesses as to the magnitude of p from the value of max(X_0) relative to T. This in turn suggests a set of discrete points, spaced by the “p-increments” mentioned above, at which p-estimates are evaluated. The p-increment is a single number used to create the grid on which p-estimates are searched for.

Getting Upper Bounds for p by Data Fusion

Recall that X_0 is a reference sample from some reference probability density (pdf) g(x), and let G(x) denote the corresponding cumulative distribution function (CDF). Since we shall deal with radon data, we assume X > 0. The goal is to estimate the small tail probability

p = 1 − G(T) = P(X > T).

Let X_i be a computer-generated random sample of size n_i. The augmentation (X_0, X_i), of size n_0 + n_i, gives the fused data from g and from the density g_i of the generated sample. We shall assume the density ratio model [8, 9]

g_i(x) = g(x) exp(α_i + β_i′ h(x)),

where α_i is a scalar parameter, β_i is a vector parameter, and h(x) is a vector-valued function. Clearly, to generate X_i, we must know the corresponding g_i. However, beyond the generating process, we do not make use of this knowledge. Thus, by our estimation procedure, none of the probability densities and the corresponding CDFs, and none of the parameters α_i and β_i, are assumed known; but, strictly speaking, the so-called tilt function h(x) must be a known function. However, in the present application, the requirement of a known h(x) is weakened considerably by the mild assumption (4) below, which may hold even for misspecified h(x), as numerous examples with many different tail types show. Accordingly, based on numerous experiments, some of which are discussed in Pan et al. [1], we assume the “gamma tilt” h(x) = (x, log x)′. Further justification for the gamma tilt is provided by our data analysis below.

Under the density ratio model, the maximum likelihood estimate of G(x) based on the fused data is given in (14) in “Appendix A.1”, along with its asymptotic distribution described in Theorem A.1. From the theorem, we obtain confidence intervals for 1 − G(T) for any threshold T using (17). In particular, we get an upper bound B_1 for p. In the same way, from additional independent computer-generated samples X_2, ..., X_N, we get upper bounds B_2, ..., B_N for p from the pairs (X_0, X_2), ..., (X_0, X_N). Thus, conditional on X_0, the sequence of upper bounds B_1, ..., B_N is an independent and identically distributed sequence of random variables from some distribution F_B. It is assumed that

0 < F_B(p) < 1,   (4)

so that, for large N, some of the bounds fall below p and some above it. Let B_(1) ≤ B_(2) ≤ ... ≤ B_(N) be the sequence of order statistics from smallest to largest. Then, as N increases, B_(1) decreases and B_(N) increases. Hence, as mentioned before, as the number of fusions N increases, the plot consisting of the pairs

(j, B_(j)), j = 1, ..., N,   (5)

contains a point whose ordinate is p with probability approaching 1. It follows that as N → ∞, there is a B_(j) which essentially coincides with p. The plot of points consisting of the pairs in (5) is referred to as the B-curve.

We now make the following important observations. Assumption (4) implies that, as N increases, B_(1) < p < B_(N) with probability approaching one. The point moves down the B-curve when max(X_0) approaches T, and it moves up the B-curve when max(X_0) decreases away from T. Hence, as N increases, the size of max(X_0) relative to T provides useful information as to the approximate magnitude of p. Specifically, the first quartile of the B's is a sensible guess of p as max(X_0) approaches T, and the third quartile, or even max(B), is a sensible approximation of p when max(X_0) is small. Otherwise, the mean or median of the B's provides practical guesses of the approximate magnitude of p.

Let F*_N be the empirical distribution obtained from the sequence of upper bounds B_1, ..., B_N. Then, from the Glivenko–Cantelli theorem,

F*_N → F_B uniformly, almost surely, as N → ∞,   (6)

so that, since the number of fusions can be as large as we wish, F_B is known for all practical purposes. Hence, by (6), F_B provides information about p. Knowing F_B is a significant consequence of repeated out of sample fusion: its implication is that the exact distribution of any order statistic B_(j) is practically known.
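The gamma tilt is convenient because exponentially tilting a gamma baseline with h(x) = (x, log x)′ keeps the tilted density inside the gamma family, so the density ratio model is exactly satisfied there. The following numerical check of this closure property is a sketch; the Gamma(2, 1) baseline and the tilt parameters are illustrative choices, not values from the paper.

```python
import math

# Density ratio model with the gamma tilt h(x) = (x, log x)':
#   g1(x) = g(x) * exp(alpha + b1*x + b2*log(x)).
# If g is Gamma(a, rate b), then g1 is Gamma(a + b2, rate b - b1),
# and alpha is pinned down by normalization.
a, b = 2.0, 1.0          # baseline shape and rate (illustrative)
b1, b2 = -0.5, 1.0       # tilt parameters (illustrative; requires b1 < b)

def gamma_pdf(x, shape, rate):
    return math.exp(shape * math.log(rate) + (shape - 1) * math.log(x)
                    - rate * x - math.lgamma(shape))

# Normalization: integral of g(x)*exp(b1*x + b2*log x) dx equals
# (b**a / Gamma(a)) * Gamma(a + b2) / (b - b1)**(a + b2), hence:
alpha = (math.lgamma(a) + (a + b2) * math.log(b - b1)
         - a * math.log(b) - math.lgamma(a + b2))

for x in (0.5, 1.0, 2.0, 5.0):
    tilted = gamma_pdf(x, a, b) * math.exp(alpha + b1 * x + b2 * math.log(x))
    direct = gamma_pdf(x, a + b2, b - b1)
    print(x, tilted, direct)    # the two columns agree
```

Because the bounds depend on the tilt only through the fitted confidence interval, a misspecified h(x) can still satisfy assumption (4), which is what the paper's experiments exploit.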

Capturing p

For a sufficiently large number of fusions N, the monotonicity of the B-curve and (6) imply that there are B_(j) which approach p from above, so that there is a B_(j) very close to p. Likewise, the B_(j) can approach p from below. Thus, the B-curve establishes a relationship between j and p. Another relationship between j and p is obtained from the well-known distribution of order statistics,

P(B_(j) ≤ x) = Σ_{k=j}^{N} C(N, k) F_B(x)^k (1 − F_B(x))^{N−k},   (7)

which can be computed since F_B is practically known for sufficiently large N. Iterating between these two relationships provides a way to approximate p, as is described next. From (7), we can get the smallest p_j satisfying the probability inequality (8) at level 0.95. The 0.95 probability bound was chosen arbitrarily and can be replaced by other high probabilities. It is important to note that, in practice and in what follows, the p_j in (8) are evaluated on a grid, incrementally, along specified small increments. Thus, with B_(j)'s from the B-curve, and p_j's the smallest p's satisfying (8) and closest to B_(j), we have the iterative process [1, 10]

j → p_j → j′ → p_{j′} → ...,   (9)

so that a p_j keeps giving the same j (and hence the same B_(j)) and vice versa. In general, starting with any j, convergence occurs when, for the first time, the same index repeats for some step k, and we keep getting the same probability p_j. Clearly, the sequence could decrease or increase, producing “down” and “up” subsequences. For example, when the probabilities in (8) are sufficiently high, the closest approximations from the B-curve yield, with high probability, a decreasing “down” sequence. However, when the probabilities are sufficiently low, it is possible for the closest approximations of the p_j to reverse course, leading to an increasing “up” sequence. This “down-up” tendency has been observed numerous times with real and artificial data. It manifests itself clearly in the radon examples below. In particular, as was illustrated earlier in Sect. 1.3, this “down-up” phenomenon tends to occur in a neighborhood of the true p, where a transition or shift occurs from “down” to “up” or vice versa, resulting in a “capture” of p. This is summarized in the following proposition.
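The two relationships just described, the order-statistic relation giving a candidate p for each index j, and the B-curve map sending a p back to the index of the nearest bound, can be sketched as follows. The bounds are simulated stand-ins, and reading (8) as "the smallest grid point that B_(j) falls below with probability at least 0.95" is our assumption; the exact construction is in Pan et al. [1].

```python
import bisect
import math
import random

random.seed(11)
N = 1000
# Simulated stand-ins for the N upper bounds; the paper derives them from
# density-ratio-model confidence intervals (Appendix A.1).
B = sorted(0.000269 * math.exp(random.gauss(0.3, 0.5)) for _ in range(N))

inc = 0.00002                               # the "p-increment"
grid = [inc * i for i in range(1, 201)]     # grid of candidate p-values

def F_emp(x):
    """Empirical F_B; for large N this is practically F_B itself."""
    return bisect.bisect_right(B, x) / N

def order_stat_cdf(j, x):
    """(7): P(B_(j) <= x) = sum_{k=j}^{N} C(N,k) F(x)^k (1 - F(x))^(N-k)."""
    F = F_emp(x)
    if F in (0.0, 1.0):
        return F
    lf, lq = math.log(F), math.log1p(-F)
    return sum(math.exp(math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
                        + k * lf + (N - k) * lq)
               for k in range(j, N + 1))

def p_j(j):
    """(8), assumed reading: smallest grid p with P(B_(j) <= p) >= 0.95."""
    return next((p for p in grid if order_stat_cdf(j, p) >= 0.95), grid[-1])

def j_of(p):
    """B-curve map: index j whose order statistic B_(j) is closest to p."""
    return min(range(1, N + 1), key=lambda j: abs(B[j - 1] - p))

# One step of the iterative process (9): j -> p_j -> closest index on the B-curve.
for j in (200, 500, 800):
    p = p_j(j)
    print(f"j = {j:4d}  ->  p_j = {p:.6g}  ->  nearest index = {j_of(p)}")
```

Iterating these two maps until the index repeats is the process (9); where the Down/Up walks stall, and hence where p is captured, depends on F_B and on the p-increment, as the radon tables below illustrate.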

Proposition

Assume that the sample size n_0 of X_0 is large enough, and that the number of fusions N is sufficiently large, so that B_(1) < p < B_(N). Consider the smallest p_j which satisfy the inequality (8), where the p_j are evaluated along appropriate numerical increments. Then, (8) produces “down” and “up” sequences depending on the p_j relative to the B_(j). In particular, in a neighborhood of the true tail probability p, with a high probability, there are “down” sequences which converge from above and “up” sequences which converge from below to points close to p.

Illustrations Using Radon Data

We shall now demonstrate the proposition using radon data examples. Many additional examples were given in Pan et al. [1]. All the examples point to a remarkable “down-up” pattern in a neighborhood of the true p, providing surprisingly precise estimates of p. It should be noted that the number of iterations decreases as the p_j approach p, a telltale sign that convergence is about to occur. The iterative process (9) is repeated with different starting j's until a clear pattern emerges in which different adjacent j's give rise to Down-Up subsequences converging to the same value, which is our estimate p̂. The process may be repeated with different p-increments.

Computational Considerations

To enable computation with R, the binomial coefficients in (8) were evaluated with N = 1000, as if there were 1000 fusions only. However, there are no restrictions on the number of fusions, and throughout, the B-curve was obtained from 10,000 fusions, and hence 10,000 B's. Each entry in the following tables was obtained from a different sample of 1000 B's sampled at random from the 10,000 B's. More precisely, each entry was obtained from an approximate B-curve based on the sampled 1000 B's and an approximate (8) with N = 1000. Thus, for each entry, we iterated between an approximate B-curve and an approximate (8).
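One plausible reason for the N = 1000 cap, our inference rather than a statement in the paper, is floating-point range: the central binomial coefficient C(N, N/2) overflows double precision once N is much beyond 1000. A sketch, together with the subsampling step described above (stand-in bounds):

```python
import math
import random

# C(1000, 500) ~ 1e299 still fits in a double (max ~1.8e308),
# but C(10000, 5000) ~ 1e3008 does not.
log10_c1000 = (math.lgamma(1001) - 2 * math.lgamma(501)) / math.log(10)
log10_c10000 = (math.lgamma(10001) - 2 * math.lgamma(5001)) / math.log(10)
print(round(log10_c1000), round(log10_c10000))   # 299 vs 3008

# Each table entry uses an approximate B-curve built from 1000 B's drawn
# at random from the 10,000 available.
random.seed(0)
B_all = sorted(random.random() for _ in range(10_000))   # stand-in bounds
B_sub = sorted(random.sample(B_all, 1000))
```

Working with log-binomial coefficients (lgamma), as above, is an alternative way to evaluate (8) at the full N if desired.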

Choice of p-Increment

An important consideration is the choice of the increments of p along which the probability in (8) is evaluated. Certainly, any approximation of p must reside between consecutive B's. Hence, sensible p-increments are fractions of the mean, the median, the first or third quartile, or even max(B). In the following examples, the p-increments are approximately one tenth of one of these quantities.
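The candidate increments can be computed directly from the bounds; the simulated B's below are stand-ins, and the pairing of each summary with a regime of max(X_0) follows the guidance of the preceding sections.

```python
import math
import random
import statistics

random.seed(5)
# Stand-in for the 10,000 upper bounds B (simulated; not the paper's values).
B = sorted(random.lognormvariate(math.log(2.7e-4), 0.4) for _ in range(10_000))

q1, _, q3 = statistics.quantiles(B, n=4)   # quartiles of the B's
candidates = {
    "Q1/10 (max(X0) near T)":  q1 / 10,
    "mean/10 (middle case)":   statistics.fmean(B) / 10,
    "Q3/10 (max(X0) small)":   q3 / 10,
    "max/10 (hard case)":      B[-1] / 10,
}
for name, inc in candidates.items():
    print(f"{name}: {inc:.3g}")
```

In the radon examples below, the increments actually used (0.000018 to 0.00004583) are of exactly this form: roughly one tenth of a quartile, the mean, or max(B).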

Beaver County Radon Tail Probabilities

Radon-222, or simply radon, is a tasteless, colorless and odorless radioactive gas produced in the decay chain of Uranium-238 through Radium-226, both of which are naturally abundant in the soil. Radon is known around the world as a carcinogen, and exposure to it is the leading risk factor for lung cancer among non-smokers. Geological radon exposure takes place mostly through cracks and openings in the ground due to underlying geological formations. Approximately 40 percent of Pennsylvania (PA) homes have radon levels above the US EPA action guideline of 4 pCi/L. Residential radon test levels were collected statewide by the PA Department of Environmental Protection (PADEP) in the period from 1990 to 2007. See Zhang et al. [11] for a study of indoor radon concentrations from Beaver County and its neighboring counties in PA.

In the following examples, ROSF is applied to Beaver County radon data from 1989 to 2017, for various p-increments. There were 7425 radon observations, taken as a population, of which only 2 exceed 200. Hence, with T = 200, we wish to estimate the small probability p = 2/7425 ≈ 0.000269. Throughout the examples, X_0 is a reference random sample chosen without replacement from the 7425 radon observations, and the generated fusion samples are uniform with support covering T. In the tables below, “Down”, “Up”, and “No j change” mean that in the iterative process (9) there was a downward, an upward, or no change in j, respectively. Figure 1 shows how the point moves along the B-curve as a function of the size of max(X_0) relative to T; the figure should be referred to when reading the following examples.

Example 1

Since max(X_0) = 107 is close to T/2, the point is in the “middle” of the B-curve, far removed from both ends. Hence, we use p-increment 0.000018. We observed that the third quartile of the B's was 0.0002686, very close to the true p. From Table 1, the shift from Down to Up occurs at p̂ = 0.0002689389, very close to the true p = 2/7425 ≈ 0.0002694, giving an error of the order of 10^(-7).
Table 1

max(X_0) = 107, p-increment 0.000018

Starting j    Convergence to    Iterations    j change
1000          0.0007009389      3             Down
802           0.0002869389      1             Down
761           0.0002689389      1             Down
757           0.0002689389      1             Down
755           0.0002689389      1             Down
754           0.0002689389      1             Up
751           0.0002689389      1             Up
750           0.0002689389      1             Up
740           0.0002689389      1             Up
738           0.0002689389      1             Up

Example 2

A different reference sample, with max(X_0) = 123.1, was fused again 10,000 times with different independent samples. Since max(X_0) = 123.1, again we have, relative to T, a “middle” point, suggesting a p-increment of one tenth of the mean of the B's. As the order of the mean was 10^(-4), we chose p-increment 0.00002, which is of the same order as Mean(B)/10. From Table 2, the shift from Down to Up occurs at p̂ = 0.0002601254, not far from the true p ≈ 0.0002694, giving an error smaller than 10^(-5).
Table 2

max(X_0) = 123.1, p-increment 0.00002

Starting j    Convergence to    Iterations    j change
800           0.0003401254      18            Down
750           0.0003001254      18            Down
140           0.0002801254      2             Down
135           0.0002601254      1             Down
133           0.0002601254      1             Down
130           0.0002601254      1             Up
122           0.0002601254      1             Up
121           0.0002601254      1             Up
120           0.0002601254      1             Up
112           0.0002601254      1             Up

Example 3

A different reference sample, with max(X_0) = 193.7, was fused again 10,000 times with different independent samples. Since max(X_0) = 193.7 is close to T = 200, we have a point close to the lower end of the B-curve, suggesting a p-increment on the order of one tenth of the first quartile of the 10,000 B's. As the first quartile was 0.0002697, we chose a p-increment of 0.00001. A p-increment of 0.00002 gave identical results. We observe that the first quartile is very close to p. From Table 3, the shift from Down to Up occurs at p̂ = 0.0002600818, not far from the true p ≈ 0.0002694, giving an error smaller than 10^(-5).
Table 3

max(X_0) = 193.7, p-increment 0.00001

Starting j    Convergence to    Iterations    j change
800           0.0003600818      21            Down
600           0.0002600818      19            Down
440           0.0002700818      9             Down
300           0.0002600818      4             Down
246           0.0002600818      1             Down
245           0.0002600818      1             Down
244           0.0002600818      1             Up
243           0.0002600818      1             Up
240           0.0002600818      1             Up
237           0.0002600818      1             Up
222           0.0002500818      1             Up
200           0.0002400818      1             Up

Example 4

A different reference sample, with max(X_0) = 77.9, was fused again 10,000 times with different independent samples. Since max(X_0) = 77.9 is far below T, we have a point close to the upper end of the B-curve, a difficult case, suggesting a p-increment on the order of one tenth of max(B) from the 10,000 B's. As one tenth of max(B) was 0.00004583, we chose a p-increment of 0.00004583. From Table 4, the shift from Down to Up occurs at p̂ = 0.0002286204, not far from the true p ≈ 0.0002694, giving an error of the order of 10^(-5).
Table 4

max(X_0) = 77.9, p-increment 0.00004583

Starting j    Convergence to    Iterations    j change
1000          0.0002744504      2             Down
999           0.0002744504      1             Down
998           0.0002744504      1             Down
997           0.0002286204      2             Down
996           0.0002286204      1             Down
994           0.0002286204      –             No j change
993           0.0002286204      1             Up
992           0.0002286204      1             Up
991           0.0002286204      1             Up
990           0.0002286204      1             Up
989           0.0002286204      1             Up
988           0.0002286204      1             Up

Summary of ROSF Applied to Beaver Radon Data

Table 5 provides our estimates of p from various random radon samples X_0 fused repeatedly with independent computer-generated samples. In all cases, the threshold is T = 200 and the true probability is p = 2/7425 ≈ 0.0002694. Some of the samples are the same, but the p-increments are different, still leading to similar results. The mean and standard deviation of the p̂ in the table are equal to 0.0002606333 and 1.052197e–05, respectively. In general, variance estimates can be obtained by repeating ROSF again and again using different B-curves and different p-increments. Evidently, the gamma tilt is a reasonable choice, as the present radon analysis and many other examples with very diverse tail types indicate.
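The quoted summary statistics can be reproduced directly from the p̂ column of Table 5:

```python
import statistics

# The 14 point estimates from Table 5 (column p-hat).
p_hats = [0.0002286204, 0.0002589389, 0.0002739389, 0.0002689389,
          0.0002689389, 0.0002675389, 0.0002574389, 0.0002637656,
          0.0002601254, 0.0002600310, 0.0002639057, 0.0002565210,
          0.0002600818, 0.0002600818]

p_true = 2 / 7425   # two of the 7425 observations exceed T = 200

print(f"mean = {statistics.fmean(p_hats):.7g}")   # ~0.0002606333
print(f"sd   = {statistics.stdev(p_hats):.7g}")   # ~1.052197e-05
print(f"true = {p_true:.7g}")
```

The sample standard deviation matches the paper's 1.052197e–05, which also confirms the column layout of the reconstructed table.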
Table 5

max(X_0)    p-increment    p̂             Error
77.9        0.00004583     0.0002286204   4.073987e–05
107.0       0.00002000     0.0002589389   1.042137e–05
107.0       0.00002500     0.0002739389   4.578631e–06
107.0       0.00003000     0.0002689389   4.213694e–07
107.0       0.00001800     0.0002689389   4.213694e–07
107.0       0.00002686     0.0002675389   1.821369e–06
107.0       0.00001175     0.0002574389   1.192137e–05
113.7       0.00002200     0.0002637656   5.594700e–06
123.1       0.00002000     0.0002601254   9.234869e–06
125.2       0.00002000     0.0002600310   9.329269e–06
130.7       0.00003000     0.0002639057   5.454600e–06
143.0       0.00002140     0.0002565210   1.283927e–05
193.7       0.00001000     0.0002600818   9.278469e–06
193.7       0.00002000     0.0002600818   9.278469e–06

Discussion

There are numerous situations where the interest is in the prediction of an observable exceeding a large, or even a catastrophically large, threshold level where the data at hand fall short of the threshold. For example, consider the daily rainfall amount in a region where all the diurnal amounts fall short of a high threshold level, say, 10 inches in 24 hours, and yet for risk management it is important to obtain the chance that a future amount exceeds 10 inches in 24 hours, an extreme situation by all accounts. Similar problems concern annual flood levels, daily coronavirus counts, monthly insurance claims, earthquake magnitudes, and so on, where the sample values are below certain high thresholds and the interest is in very small tail probabilities. Furthermore, in many cases, the given data could be only moderately large.

In this paper, it has been shown how to approach such problems by a large number of augmentations or fusions of the given data with computer-generated external samples. From this we obtained a curve, called the B-curve, containing a point whose ordinate was close to the tail probability of interest. Moreover, the magnitude of the largest sample value relative to a given high threshold provided rough guesses as to the true value of the tail probability. The rough guesses were needed for successful applications of our iterative method, which produced accurate estimates of tail probabilities. Indeed, as illustrated in the paper, max(X_0) relative to T provides useful information about the true tail probability p, represented as “the point”, and this fact can be interpreted in terms of under-specification and over-specification of the tail probability under the density ratio model. This clearly is a consequence of the fact that F_B provides information about p. The large number of fusions resulted in a large number of upper bounds B_1, ..., B_N for a tail probability p, from some unknown CDF F_B, where it was assumed that 0 < F_B(p) < 1. The examples in this paper, and many more in Pan et al. [1], indicate that the choice of the (mostly misspecified) tilt function in the density ratio model did not go against that assumption. Clearly, other tilts are possible as long as F_B(p) is bounded away from 0 and 1.

The estimation of very small tail probabilities can also be approached by extreme value methods. A well-known method is referred to as peaks-over-threshold [12, 13], where, as the name suggests, only values above a sufficiently high threshold are used. However, if the sample is not large to begin with, discarding the values deemed not sufficiently large reduces the sample size further and calls into question the applicability of the method. A comparison with ROSF is given in Wang [10] and in Pan et al. [1]. The estimation of tail probabilities from fused residential radon data has been studied recently in Zhang et al. [11, 14] using the density ratio model with variable tilts; there, a given radon sample from a county of interest was fused with radon samples from neighboring counties.
References (partial, as listed in the database record)

1. George AC. The history, development and the present status of the radon measurement programme in the United States of America. Radiat Prot Dosimetry. 2015.

2. Kedem B, Pan L, Zhou W, Coelho CA. Interval estimation of small tail probabilities - applications in food safety. Stat Med. 2016.

Cited by

1. Zhang X, Pyne S, Kedem B. Multivariate Tail Probabilities: Predicting Regional Pertussis Cases in Washington State. Entropy (Basel). 2021.
