| Literature DB >> 35741478 |
Gayan Dharmarathne, Anca M. Hanea, Andrew Robinson.
Abstract
Estimates based on expert judgements of quantities of interest are commonly used to supplement or replace measurements when the latter are too expensive or impossible to obtain. Such estimates are commonly accompanied by information about the uncertainty of the estimate, such as a credible interval. To be considered well-calibrated, an expert's credible intervals should cover the true (but unknown) values a certain percentage of the time, equal to the percentage specified by the expert. To assess expert calibration, so-called calibration questions may be asked in an expert elicitation exercise; these are questions with known answers used to assess and compare experts' performance. A common approach to assessing experts' performance with these questions is to directly compare the stated percentage coverage with the actual coverage. We show that this approach has statistical drawbacks when considered in a rigorous hypothesis testing framework. We generalize the test to an equivalence testing framework and discuss the properties of this new proposal. We show that comparisons made on even a modest number of calibration questions have poor power, which suggests that formal testing of the calibration of experts in an experimental setting may be prohibitively expensive. We contextualise the theoretical findings with a couple of applications and discuss the implications of our findings.
Keywords: credible intervals; equivalence test; experts’ calibration; experts’ hit rates
Year: 2022 PMID: 35741478 PMCID: PMC9222732 DOI: 10.3390/e24060757
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.738
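The direct approach discussed in the abstract, comparing the stated coverage with the observed hit rate on calibration questions, can be framed as an exact binomial test on the number of intervals covering the truth. A minimal sketch, assuming an exact two-sided binomial test at α = 0.05 with the minimum-likelihood p-value (an illustrative construction, not necessarily the one used in the paper):

```python
from math import comb

def binom_pmf(k, n, p):
    """Exact Binomial(n, p) probability mass at k."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def direct_test_calibrated(hits, n, q=0.90, alpha=0.05):
    """Exact two-sided binomial test of H0: true coverage equals q,
    using the minimum-likelihood two-sided p-value. Failing to reject
    is read as 'the expert is well-calibrated' under the direct approach."""
    p_obs = binom_pmf(hits, n, q)
    p_value = sum(binom_pmf(k, n, q) for k in range(n + 1)
                  if binom_pmf(k, n, q) <= p_obs + 1e-12)
    return p_value > alpha

# An expert stating 90% intervals who covers the truth 9 times out of 10
print(direct_test_calibrated(9, 10))  # consistent with 90% calibration
print(direct_test_calibrated(2, 10))  # clearly inconsistent
```

Note that with this framing "well-calibrated" corresponds to a failure to reject the null hypothesis, which is one of the statistical drawbacks the paper raises: with few calibration questions the test accepts almost any expert.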
Coverage probability options and the number of elicited intervals used in simulating data.
| Coverage Probability | Number of Elicited Intervals |
|---|---|
| 80% | 10, 20, 30, 40, 50, 80, 100, 150, 200, 250 |
| 90% | 10, 20, 30, 40, 50, 80, 100, 150, 200, 250 |
Rejection regions of the equivalence test.
| Number of Elicited Intervals | Lower Bound | Upper Bound |
|---|---|---|
| 10 | 9 | 10 |
| 20 | 18 | 19 |
| 30 | 27 | 28 |
| 40 | 36 | 37 |
| 50 | 45 | 46 |
| 80 | 72 | 73 |
| 100 | 90 | 92 |
| 150 | 134 | 138 |
| 200 | 178 | 185 |
| 250 | 222 | 232 |
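Rejection regions of this shape can be derived in spirit from a non-randomized two one-sided tests (TOST) procedure on the binomial hit count. The margin delta = 0.05 and level alpha = 0.05 below are illustrative assumptions, and the paper also considers randomized tests, so these regions need not match the table exactly:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p); exact, adequate for n <= 250."""
    if k < 0:
        return 0.0
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def equivalence_region(n, q, delta=0.05, alpha=0.05):
    """Hit counts for which a non-randomized TOST declares the expert
    calibrated, i.e. rejects H0: |true coverage - q| >= delta."""
    # lower one-sided test of H0: p <= q - delta rejects for large counts
    lo = next((k for k in range(n + 1)
               if 1.0 - binom_cdf(k - 1, n, q - delta) <= alpha), None)
    # upper one-sided test of H0: p >= q + delta rejects for small counts
    hi = next((k for k in range(n, -1, -1)
               if binom_cdf(k, n, q + delta) <= alpha), None)
    if lo is None or hi is None or lo > hi:
        return None  # no hit count can establish calibration at this n
    return lo, hi

def power(n, q, delta=0.05, alpha=0.05):
    """P(declared calibrated) when the expert's true coverage is exactly q."""
    region = equivalence_region(n, q, delta, alpha)
    if region is None:
        return 0.0
    lo, hi = region
    return binom_cdf(hi, n, q) - binom_cdf(lo - 1, n, q)

for n in (10, 30, 100, 250):
    print(n, equivalence_region(n, 0.90), round(power(n, 0.90), 3))
```

Under these illustrative settings, no hit count at all can establish 90% calibration for small n (e.g. n = 10), which mirrors the poor-power finding in the abstract.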
Figure 1. The power of the direct and equivalence tests to correctly identify well-calibrated experts at the true level of coverage of the elicited intervals.
Figure 2. The probabilities that the direct and equivalence tests identify experts as 90% well-calibrated when the true level of coverage of the elicited intervals is less than 90%.
Figure 3. The probability that the direct test identifies experts as 90% well-calibrated when the true level of coverage of the elicited intervals is less than 90%, for a small number of elicited intervals.
Figure 4. The size of the direct and equivalence tests of experts’ calibration when eliciting 90% credible intervals, for a small number of elicited intervals.
Figure 5. The power of the direct and non-randomized equivalence tests to correctly identify well-calibrated experts at the true level of coverage of the elicited intervals.
Figure 6. The probabilities that the direct and non-randomized equivalence tests identify experts as 90% well-calibrated when the true level of coverage of the elicited intervals is less than 90%.
Data from [16] used in the current analysis.
| Expert ID | Num. Elicited Questions | Num. Intervals Covering the Truth |
|---|---|---|
| 52b | 13 | 9 |
| 54h | 12 | 9 |
| 64i | 13 | 10 |
Data from [18] used in the current analysis.
| Expert ID | Num. Elicited Questions | Num. Intervals Covering the Truth |
|---|---|---|
| Exp16 | 16 | 14 |
| Exp21 | 16 | 12 |