Gabriel Recchia, Alexandra L. J. Freeman, David Spiegelhalter.
Abstract
Throughout the COVID-19 pandemic, social and traditional media have disseminated predictions from experts and nonexperts about its expected magnitude. How accurate were the predictions of 'experts' (individuals holding occupations or roles in subject-relevant fields, such as epidemiologists and statisticians) compared with those of the public? We conducted a survey in April 2020 of 140 UK experts and 2,086 UK laypersons; all were asked to make four quantitative predictions about the impact of COVID-19 by 31 Dec 2020. In addition to soliciting point estimates, we asked participants for the lower and upper bounds of a range that they felt had a 75% chance of containing the true answer. Experts exhibited greater accuracy and calibration than laypersons, even when restricting the comparison to a subset of laypersons who scored in the top quartile on a numeracy test. Even so, experts substantially underestimated the ultimate extent of the pandemic, and the mean number of predictions for which the expert intervals contained the actual outcome was only 1.8 (out of 4), suggesting that experts should consider broadening the range of scenarios they consider plausible. Predictions of the public were even more inaccurate and poorly calibrated, suggesting that an important role remains for expert predictions, as long as experts acknowledge their uncertainty.
Year: 2021 PMID: 33951092 PMCID: PMC8099086 DOI: 10.1371/journal.pone.0250935
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Questions asked of participants with corresponding forecast medians, median absolute deviation (MAD), median absolute error (MAE) and median relative error (MRE).
| | Q1 | Q2 | Q3 | Q4 |
|---|---|---|---|---|
| Question | How many people in the country you’re living in do you think will have died from COVID-19 by December 31st 2020? | How many people in the country you’re living in do you think will have been infected by COVID-19 by December 31st 2020? | Out of every 1000 people who will have been infected by the virus worldwide, how many do you think will have died by December 31st 2020 as a result? | Out of every 1000 people who will have been infected by the virus in the country you’re living in, how many do you think will have died by December 31st 2020 as a result? |
| Outcome definition | Total number of “deaths within 28 days of positive test” having a date of death earlier than 1 Jan 2021 | Number of infections implied by dividing the total number of COVID-19 deaths in the UK (left) by the UK infection fatality rate estimated by the Imperial College COVID-19 response team in Oct 2020 | 1000 multiplied by the age-specific infection fatality rates estimated by the Imperial College COVID-19 response team in Oct 2020, weighted by worldwide age distribution | 1000 multiplied by the UK infection fatality rate estimated by the Imperial College COVID-19 response team in Oct 2020 |
| Actual outcome | 75,346 | 6,385,254 | 4.55 | 11.8 |
| Experts: median (MAD) | 30,000 (15,000) | 4,000,000 (3,687,500) | 10 (5) | 9.5 (4.5) |
| High-numeracy nonexperts: median (MAD) | 25,000 (10,000) | 800,000 (700,000) | 30 (20) | 30 (22) |
| All nonexperts: median (MAD) | 20,000 (10,000) | 250,000 (247,000) | 50 (45) | 40 (35) |
| Experts: MAE | 45,346 | 5,585,254 | 5.45 | 6.80 |
| High-numeracy nonexperts: MAE | 55,346 | 6,085,254 | 25.45 | 18.20 |
| All nonexperts: MAE | 55,346 | 6,235,254 | 45.45 | 28.20 |
| Experts: MRE | 2.51 | 3.19 | 1.98 | 2.03 |
| High-numeracy nonexperts: MRE | 3.32 | 7.98 | 5.59 | 3.20 |
| All nonexperts: MRE | 3.77 | 25.54 | 9.19 | 3.98 |
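The per-group summary statistics in the table above can be reproduced from raw point forecasts in a few lines. A minimal sketch, assuming the helper name `forecast_summaries` (hypothetical) and assuming one plausible definition of relative error, namely the forecast-to-actual ratio taken in whichever direction is at least 1, so over- and under-estimates are penalised symmetrically:

```python
import numpy as np

def forecast_summaries(forecasts, actual):
    """Summarise one group's point forecasts for a single question.

    Returns the median forecast, the median absolute deviation (MAD)
    around that median, the median absolute error (MAE) against the
    actual outcome, and the median relative error (MRE).
    """
    f = np.asarray(forecasts, dtype=float)
    median = np.median(f)                    # median forecast
    mad = np.median(np.abs(f - median))      # spread around the group median
    mae = np.median(np.abs(f - actual))      # typical absolute miss
    # Assumed MRE definition: ratio of forecast to actual (or its
    # inverse, whichever is >= 1), then the median across participants.
    mre = np.median(np.maximum(f / actual, actual / f))
    return median, mad, mae, mre
```

The exact definition of "relative error" used by the authors is not stated in this extract; the ratio form above is one common choice.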
Proportions of participants from each group (experts, high-numeracy nonexperts, and all nonexperts) for whom the outcome fell within their own 75% confidence intervals.
| Question | Experts | High-numeracy nonexperts | All nonexperts | | |
|---|---|---|---|---|---|
| 1 | 39/108 (36%) | 78/483 (16%) | 169/1757 (10%) | 22.2 ( | 72.1 ( |
| 2 | 40/100 (40%) | 58/479 (12%) | 133/1737 (8%) | 45.8 ( | 115.9 ( |
| 3 | 41/98 (42%) | 47/466 (10%) | 159/1634 (10%) | 62.0 ( | 93.3 ( |
| 4 | 55/96 (57%) | 129/474 (27%) | 330/1673 (20%) | 33.0 ( | 75.2 ( |
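The per-group proportions above are hit rates: each participant's stated 75% interval either contains the eventual outcome or it does not. A minimal sketch of that check (function name hypothetical):

```python
import numpy as np

def interval_hit_rate(lowers, uppers, actual):
    """Count and proportion of participants whose stated interval
    contains the realised outcome. For a well-calibrated group
    answering a 75%-interval question, the proportion should be
    near 0.75."""
    lo = np.asarray(lowers, dtype=float)
    hi = np.asarray(uppers, dtype=float)
    hits = (lo <= actual) & (actual <= hi)
    return int(hits.sum()), float(hits.mean())
```

The expert hit rates in the table (36-57%) fall well short of the 75% target, which is the sense in which even experts were overconfident.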
Fig 1. Consensus distributions (linear opinion pools) for Q1 (a), Q2 (b), Q3 (c), and Q4 (d). Axes truncated to allow the overall shapes of the distributions to be visible.
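A linear opinion pool, as used for the consensus distributions in Fig 1, is simply a weighted average of the individual forecast densities. A minimal sketch on a shared evaluation grid, assuming equal weights (the function name is illustrative, not the authors' code):

```python
import numpy as np

def linear_opinion_pool(member_densities, weights=None):
    """Combine individual forecast densities into one consensus density.

    member_densities: shape (n_members, n_grid_points), each row a pdf
    evaluated on a common grid. With no weights given, members are
    weighted equally; the result is again a valid density on the grid.
    """
    p = np.asarray(member_densities, dtype=float)
    if weights is None:
        weights = np.full(p.shape[0], 1.0 / p.shape[0])
    return np.asarray(weights) @ p
```

Because the pool is a mixture, it is typically wider than any single member's density, which is why pooling can improve calibration even when individual intervals are too narrow.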
Descriptive and inferential statistics for the sets of 100 approximate continuous ranked probability scores generated from expert and high-numeracy nonexpert consensus distributions.
| Question | Experts: mean | Experts: SD | High-numeracy nonexperts: mean | High-numeracy nonexperts: SD | t | df | p |
|---|---|---|---|---|---|---|---|
| 1 | 24,301 | 289 | 31,301 | 364 | 150.66 | 188.27 | < .001 |
| 2 | 3,210,153 | 63,585 | 3,563,702 | 32,721 | 49.44 | 148.00 | < .001 |
| 3 | 7.44 | 0.17 | 26.60 | 0.60 | 306.35 | 115.06 | < .001 |
| 4 | 3.46 | 0.07 | 20.27 | 0.54 | 310.51 | 102.50 | < .001 |
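The continuous ranked probability score (CRPS) of a forecast distribution can be approximated from samples via the standard identity CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|, where X and X' are independent draws from F. A minimal sketch of that estimator (a generic sample-based approximation, not necessarily the authors' exact procedure):

```python
import numpy as np

def crps_from_samples(samples, outcome):
    """Sample-based approximation of the CRPS of a forecast
    distribution against a realised outcome. Lower is better;
    for a point forecast it reduces to absolute error."""
    x = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(x - outcome))                  # E|X - y|
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))  # 0.5 * E|X - X'|
    return term1 - term2
```

In the table above, the expert consensus distributions earn lower (better) mean CRPS than the high-numeracy nonexpert distributions on all four questions.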
Regressions of calibration and accuracy on gender, age, and expert/nonexpert status.
| Predictor | Calibration | Q1 error | Q2 error | Q3 error | Q4 error |
|---|---|---|---|---|---|
| Expert status | 1.284*** | -352.22*** | -177.61** | -633.91*** | -588.34*** |
| Age | -0.001 | 0.061 | -0.592 | -0.545 | -1.658* |
| Male gender | 0.162*** | -22.22 | -73.68** | -102.21*** | -82.76** |
| R² | 0.145 | 0.013 | 0.006 | 0.053 | 0.045 |
Note. ‘Error’ represents rank-transformed absolute error. Stars represent significance at p < .05 (*), p < .01 (**), p < .001 (***).
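Rank-transforming absolute error before regressing, as in the note above, reduces the influence of a few extreme forecasts on the fitted coefficients. A minimal sketch of the two steps, assuming plain least squares (the simple ranking below breaks ties by position; `scipy.stats.rankdata` would average them properly):

```python
import numpy as np

def rank_transform(values):
    """Replace each value by its rank (1 = smallest)."""
    order = np.argsort(values)
    ranks = np.empty(len(values))
    ranks[order] = np.arange(1, len(values) + 1)
    return ranks

def ols(y, X):
    """Ordinary least squares with an intercept column prepended;
    returns [intercept, slope_1, ...]."""
    X = np.asarray(X, dtype=float)
    if X.ndim == 1:
        X = X[:, None]
    design = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(design, np.asarray(y, dtype=float),
                               rcond=None)
    return beta
```

In the study's setting, `y` would be `rank_transform(abs_error)` for one question and the columns of `X` would be expert status, age, and a male-gender indicator.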