Literature DB >> 33357248

Forecasting the final disease size: comparing calibrations of Bertalanffy-Pütter models.

Norbert Brunner1, Manfred Kühleitner1.   

Abstract

Using monthly data from the 2013–2016 Ebola outbreak in West Africa, we compared two calibrations for data fitting: least-squares (SSE) and weighted least-squares (SWSE), with weights reciprocal to the number of new infections. To compare (in hindsight) forecasts of the final disease size (the actual value was observed at month 28 of the outbreak), we fitted Bertalanffy–Pütter growth models to truncated initial data (first 11, 12, …, 28 months). The growth curves identified the epidemic peak at month 10, and the relative errors of the forecasts (asymptotic limits) were below 10% if 16 or more months were used; for SWSE the relative errors were smaller than for SSE. However, the calibrations differed insofar as for SWSE there were good-fitting models that forecast reasonable upper and lower bounds, while SSE was biased: the forecasts of good-fitting models systematically underestimated the final disease size. Furthermore, for SSE the normal-distribution hypothesis for the fit residuals was refuted, while the analogous hypothesis for SWSE was not refuted. We therefore recommend considering SWSE for epidemic forecasts.

Entities:  

Keywords:  Bertalanffy–Pütter model (BP-model); epidemic trajectory; final disease size; forecast; weighted least-squares

Mesh:

Year:  2020        PMID: 33357248      PMCID: PMC8057487          DOI: 10.1017/S0950268820003039

Source DB:  PubMed          Journal:  Epidemiol Infect        ISSN: 0950-2688            Impact factor:   2.451


Introduction

Epidemiology uses a wide variety of mathematical tools to model the spread of infectious diseases. ‘Mathematical models […] take many forms, depending on the level of biological knowledge of processes involved and data available. Such models also have many different purposes, influencing the level of detail required’ [1]. Examples of the most common model types that use highly aggregated data (infection counts over time) are trend models, as considered in this paper (another example is the GGM of [2]), and compartmental dynamical-systems models (including the classical continuous and deterministic SIR in [3], or stochastic SIR and SEIR in [4] and [5], respectively). By contrast, individual-based simulation models describe the interactions of large numbers of individuals and therefore need detailed information to characterise them [6]. This paper focuses on trend models with sigmoidal (S-shaped) growth curves, as these have been useful for practitioners. For example, the simple logistic growth model correctly predicted the slowing down of the 2013–2016 Ebola outbreak in West Africa [7-11]. In this paper, we considered the more general Bertalanffy–Pütter (BP) differential equation (details in the section ‘Method’) for modelling epidemic trajectories. For an illustration, we fitted BP models to data from the 2013–2016 Ebola outbreak in West Africa, using the method of least-squares (nonlinear regression). In order to assess the practical significance of this model-fitting exercise, we used initial data from the Ebola outbreak to forecast the final disease size. Forecasting the final size of an epidemic is an important task for modelling; underestimating it may lead to a false sense of security.
In previous papers, we have studied the BP equation for modelling the growth of tumours, chickens, dinosaurs and fish [12-15], and we found that it was a versatile tool that resulted in significant improvements in the fit of the model to the data, when compared to previous results in the literature. However, in those papers we also found that the least-squares method underestimated the potential for further growth. We therefore proposed a calibration that, for the considered size-at-age data, provided more realistic asymptotic size estimates. For the Ebola data, we observed the same problem; using ordinary least-squares, the final disease size was underestimated. As to the reason, ordinary least-squares assumes equal variances, while animal sizes and epidemic data displayed size-dependent variances (heteroscedasticity). However, the pattern of heteroscedasticity was different: whereas for smaller/larger animals the variance of size was lower/higher, for epidemic data (total disease counts) the variance was highest at the epidemic peak (most new cases) and low in the initial and final phases. For the Ebola data, we therefore used a weighted least-squares method with a new weight function (details in the section ‘Method’). Using this calibration, prediction of the final disease size became reliable for the late phase of the outbreak (after it had peaked).

Method

Materials

The data and the results of the optimisations were recorded in spreadsheets (Microsoft Excel); see the Supporting information. The computations used Mathematica 12.0 software (Wolfram Research) on commercially available PCs and laptops (Intel i7 core). On these machines, the CPU time for fitting the BP model to the monthly data was about 1 week per dataset (less for SSE); for the weekly data and SWSE, data fitting took 8 weeks. The paper therefore reports the results for the monthly data.

Data

CDC [16] compiled data from three countries of West Africa, based on weekly Ebola reports by the WHO (Fig. 1) that counted the confirmed, probable and suspected infections and fatalities in each country. CDC [16] accumulated the numbers of infections since the start of the outbreak into total case counts.
Fig. 1.

Total weekly and monthly count of Ebola cases in West Africa; blue dots connected with a black line are the values of Table 1 and brown rings are the counts from CDC [16]; plotted using MS Excel.

Table 1.

Total monthly count of Ebola cases in West Africa

| Month | Cases^a | Month | Cases^a | Month | Cases^a | Month | Cases^a |
|---|---|---|---|---|---|---|---|
| 0^b | 1 | 8 | 3052 | 15 | 25 178 | 22 | 28 547 |
| 1 | 59 | 9 | 6553 | 16 | 26 298 | 23 | 28 601 |
| 3 | 120 | 10 | 13 676 | 17 | 27 145 | 24 | 28 604 |
| 4 | 242 | 11 | 16 899 | 18 | 27 540 | 25 | 28 603 |
| 5 | 309 | 12 | 20 171 | 19 | 27 840 | 26 | 28 603 |
| 6 | 599 | 13 | 22 057 | 20 | 28 065 | 27 | 28 610 |
| 7 | 1322 | 14 | 23 694 | 21 | 28 388 | 28 | 28 616 |

^a For each month, the table records the maximum of the total number of cases reported from Guinea, Liberia and Sierra Leone since the beginning of the outbreak.

^b Month 0 is December 2013 (index patient).

Source: Adapted from [16].

Unfortunately, there were some flaws in the data. First, there were false-positive and false-negative diagnoses: initially, Ebola was diagnosed as fatal diarrhoea (December 2013 and January 2014), and at the peak of the outbreak villages reported Ebola cases that were later falsified, but the total counts in the reports were not corrected retrospectively. Second, the published data contained typos, incorrect arithmetic and an obvious outlier on 24 June 2015. (We suppose that it resulted from a typo: 24 472 instead of 27 472.) We therefore aggregated the CDC list into Table 1, which eliminated data uncertainty as far as possible without altering the data. It also reduced random fluctuations. Nevertheless, it remained representative of the raw data (Fig. 1). In addition, we also fitted the BP models to the full set of data from CDC [16] for West Africa. As amongst the 265 data-points (brown rings in Fig. 1) there were contradictory values for 19 April, 3 May and 30 June of 2015, we took the larger counts (reducing the dataset to 262 data-points) and left the data otherwise unaltered (not removing the outlier).
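The aggregation step described above (collapsing weekly cumulative reports to one value per month by taking the monthly maximum of the cumulative count) can be sketched as follows. This is a minimal illustration in Python; the data values below are made up for the example, not the CDC counts.

```python
# Sketch of the aggregation of weekly cumulative reports to monthly maxima.
# A cumulative count should be non-decreasing, so taking the per-month
# maximum also smooths out reporting glitches within a month.

def monthly_maxima(weekly):
    """weekly: iterable of (month_index, cumulative_count) pairs.
    Returns a sorted list of (month_index, maximal cumulative count)."""
    by_month = {}
    for month, count in weekly:
        by_month[month] = max(count, by_month.get(month, 0))
    return sorted(by_month.items())

# Illustrative input: several weekly reports falling into months 0-3.
weekly_reports = [(0, 1), (1, 40), (1, 59), (2, 80), (2, 120), (3, 110), (3, 242)]
print(monthly_maxima(weekly_reports))
# [(0, 1), (1, 59), (2, 120), (3, 242)]
```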

The BP model class

The BP differential equation (1) describes the cumulative count of infected individuals, y(t), as a function of time, t, since the start of an infection. It uses five model parameters that are determined by fitting the model to cumulative epidemiological data: a non-negative exponent pair a < b, non-negative scaling constants p and q, and the initial value y(0) = c > 0:

y′(t) = p · y(t)^a − q · y(t)^b  (1)

Equation (1) appeared originally in Pütter [17]; von Bertalanffy [18, 19] popularised it as a model for animal growth. In epidemiology, and for a ≤ 1, this model has been proposed as the generalised Richards model [20, 21]. It can be solved analytically, although in general not by means of elementary functions [22]. We interpret equation (1) as a model class, where each exponent pair (a, b) defines a unique model BP(a, b) in the BP class of models; BP(a, b) has three free parameters (c, p, q). Equation (1) then includes several trend models that have been used to describe epidemic trajectories [23-25], such as the Brody [26] model of bounded exponential growth BP(0, 1), the Verhulst [27] model of logistic growth BP(1, 2), the model BP(2/3, 1) of von Bertalanffy [18] and the model BP(3/4, 1) of West et al. [28]. The Gompertz [29] model also fits into this scheme: it is the limit case BP(1, 1), with a different differential equation, obtained when b converges to a = 1 from above [30]. Equation (1) also includes several classes of models, such as the generalised Bertalanffy model (b = 1, with a variable) and the Richards [31] model (a = 1, with b variable). However (Fig. 2), when compared to the range of models searched for this paper (yellow area), these named models appear rather exceptional.
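Since equation (1) generally has no elementary closed form, its trajectories can be obtained numerically. The sketch below (a standard fourth-order Runge–Kutta integrator in plain Python; parameter values are illustrative) checks the numerical trajectory of the Verhulst special case BP(1, 2) against the known closed-form logistic solution, and notes the asymptotic limit (p/q)^(1/(b−a)) obtained by setting y′ = 0 in (1).

```python
import math

def bp_rhs(y, a, b, p, q):
    # Bertalanffy-Puetter right-hand side, equation (1): y' = p*y^a - q*y^b
    return p * y**a - q * y**b

def bp_curve(c, a, b, p, q, t_end, dt=0.01):
    """Integrate equation (1) from y(0) = c with classical RK4; returns y(t_end)."""
    y = c
    for _ in range(int(round(t_end / dt))):
        k1 = bp_rhs(y, a, b, p, q)
        k2 = bp_rhs(y + 0.5 * dt * k1, a, b, p, q)
        k3 = bp_rhs(y + 0.5 * dt * k2, a, b, p, q)
        k4 = bp_rhs(y + dt * k3, a, b, p, q)
        y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return y

# Check against the closed-form logistic solution of BP(1, 2):
# y(t) = p / (q + (p/c - q) * exp(-p*t)); asymptotic limit p/q.
c, p, q, t = 1.0, 1.0, 0.001, 10.0       # carrying capacity p/q = 1000
numeric = bp_curve(c, 1, 2, p, q, t)
exact = p / (q + (p / c - q) * math.exp(-p * t))
print(abs(numeric - exact) / exact)      # relative error, tiny for dt = 0.01
```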
Fig. 2.

Named models (blue) and initial search region (yellow) of BP models (plot using Mathematica 12.0): 0 ≤ a ≤ 1.3, a < b ≤ a + 3, step size in both directions 0.01. When needed, the grid was extended.

Some authors have added further assumptions about the parameter values in order to simplify the model (e.g. setting c = 1 in advance for the index case). We do not consider such simplifications.

Model calibration

The (ordinary) least-squares method is common in epidemiology for large case counts, such as the present data [32]. It measures the goodness of fit to the data by means of SSE, the sum of squared errors (fit residuals). In the present context, we sought parameters a, b, c, p and q so that for the solution y(t) of equation (1) the following sum SSE was minimised, whereby (t, y) ranged over the first n total case counts y at time t (here 10 ≤ n ≤ 28):

SSE = Σ (y − y(t))²  (2)

An implicit assumption of the least-squares method is a normal distribution of the errors: in formal terms, SSE is equivalent to the maximum-likelihood estimate under the assumption of normally distributed data with expected value y(t) and time-independent variance s > 0. If this assumption is not satisfied, as for the animal size-at-age data or the outbreak data, other methods of calibration need to be considered. For the animal data we could work well under the assumption of a log-normal distribution, where the variance increases with size. However, for the Ebola data (total case counts) this assumption was problematic, too, as the variance was maximal at an intermediate stage (the epidemic peak). We therefore developed another measure for the goodness of fit, weighted least-squares, which aimed at finding parameters to minimise the following sum of weighted squared errors SWSE:

SWSE = Σ (y − y(t))² / y′(t)  (3)

This weight function tolerates a higher variability in the total count during the epidemic peak. In formal terms, SWSE identifies the maximum-likelihood estimate under the assumption that the data are normally distributed with expected value y(t) and variance s · y′(t), with s a constant. (Thus, the variance of the total counts is assumed to be proportional to the absolute value of the derivative of y. This derivative corresponds to the number of new cases.) In the following subsections, we explain our notation using SSE; the same definitions also apply to SWSE.
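The two criteria (2) and (3) can be sketched as follows; the toy model below is purely illustrative (a linear curve, so its derivative is constant), chosen only so the two sums are easy to verify by hand.

```python
# Minimal sketch of the two calibration criteria, equations (2) and (3).
# SSE sums squared residuals; SWSE divides each squared residual by y'(t),
# the model's rate of new cases at that time.

def sse(data, y):
    # data: iterable of (t, observed cumulative count) pairs
    return sum((obs - y(t)) ** 2 for t, obs in data)

def swse(data, y, dy):
    # dy(t): derivative of the model curve (expected new cases per time unit)
    return sum((obs - y(t)) ** 2 / dy(t) for t, obs in data)

# Toy example: hypothetical "model" y(t) = 100*t, so y'(t) = 100 everywhere.
data = [(1, 90), (2, 210), (3, 290)]
y = lambda t: 100 * t
dy = lambda t: 100.0
print(sse(data, y))        # residuals -10, 10, -10 -> 300
print(swse(data, y, dy))   # 300 / 100 = 3.0
```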

Optimisation

For model class (1), standard optimisation tools (e.g. Mathematica: NonlinearModelFit) may not identify the best-fit parameters (numerical instability). To overcome this difficulty, we used a custom-made variant of the method of simulated annealing [33], which solves the optimisation problem to a prescribed accuracy [12]. Note that using simulated annealing to optimise all five parameters of equation (1) at once may not always come close to the optimal parameters, because the region of nearly optimal parameters may have a peculiar pancake-like shape ([15]; extremely small in one direction and extremely large in others). We therefore confined optimisation to a grid (Fig. 2). For each exponent pair on the grid 0 ≤ a ≤ 1.3 and a < b ≤ a + 3, with spacing 0.01 in both directions, we searched for the best-fit model BP(a, b); i.e. we minimised SSE for the model BP(a, b). Thus, we defined the following function SSE on the grid:

SSE(a, b) = min over c, p, q of the sum (2), where y(t) solves equation (1) for BP(a, b)  (4)

Summarising this notation, for each exponent pair (a, b) on the grid we identified the best-fit model BP(a, b) with certain parameters c, p, q and the corresponding least sum of squared errors SSE(a, b). These values were recorded in a spreadsheet. Finally, the best-fit model had the overall least sum of squared errors (SSE); it minimised the function SSE(a, b) at the optimal exponent pair (a, b). From the above-mentioned spreadsheet we could then read off the other parameters c, p, q that minimised SSE(a, b) for the model BP(a, b). However, if the optimal exponent pair was on the edge of the search grid, then we extended the search grid and continued the optimisation. (Otherwise, if the optimal exponent pair was surrounded by suboptimal ones, we stopped the search for an optimum.)
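The outer loop of this procedure can be sketched as follows. The inner fit (optimising c, p, q for a fixed exponent pair, done by simulated annealing in the paper) is abstracted here as a callback `sse_of(a, b)`, and the objective in the example is a synthetic, illustrative function, not a fit to the Ebola data.

```python
def grid_search(sse_of, a_max=1.3, b_span=3.0, step=0.01):
    """Scan the grid 0 <= a <= a_max, a < b <= a + b_span (spacing `step`)
    and return (least SSE, a, b). In the paper, sse_of(a, b) would be
    SSE(a, b) of formula (4), i.e. the best fit of BP(a, b) over (c, p, q)."""
    best = None
    na, nb = int(round(a_max / step)), int(round(b_span / step))
    for i in range(na + 1):
        a = i * step
        for j in range(1, nb + 1):          # j >= 1 keeps b > a strictly
            b = a + j * step
            s = sse_of(a, b)
            if best is None or s < best[0]:
                best = (s, a, b)
    # If the optimal (a, b) lies on a grid edge, the paper extends the
    # grid in that direction and repeats the search.
    return best

# Synthetic objective with its minimum at the interior point (a, b) = (0.5, 1.5):
toy = lambda a, b: (a - 0.5) ** 2 + (b - 1.5) ** 2
print(grid_search(toy))
```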

Model comparison

For each exponent pair (a, b) of the search grid we identified a best-fitting model BP(a, b), whose fit was given by SSE(a, b) of formula (4). Amongst these models we selected the model with the least SSE. In the literature, there are various alternative criteria for the comparison of models, amongst them the root mean squared error (RMSE) and Akaike's [34] information criterion (AIC). RMSE is the square root of SSE/n and AIC = n · ln (SSE/n) + 2K, where n is the number of data-points (between 10 and 28 for the monthly truncated data), K = 4 is the number of optimised parameters of the model BP(a, b) with a given exponent pair (counting c, p, q and the variance s), and SSE = SSE(a, b). As an additional measure to compare the goodness of fit, we used formula (5) for the relative Akaike weight:

prob = exp((AICmin − AIC)/2) / (1 + exp((AICmin − AIC)/2))  (5)

This formula has been interpreted as the probability, prob, that the (worse) model with the higher AIC would be ‘true’, when compared to the model with the least AIC, AICmin [35, 36]. Thereby (citation from [37], p. 272, referring to the information theory of [38]), ‘true’ means that the model ‘is, in fact, the K-L best model for the data… This latter inference about model selection uncertainty is conditional on both the data and the full set of a priori models [in formula (5): the models with AIC and with AICmin] considered’. Furthermore, as does AIC, the Akaike weight assumes a normal distribution of the (weighted) fit residuals. Using SSE or AIC is only meaningful with reference to a fixed dataset (different models are fitted to the same data), whereas RMSE and prob also make sense if we consider datasets of different sizes, as for the forecasts, where we successively used truncated initial data with 10, 11, …, 28 months. Thereby, we preferred the probabilities for their more intuitive appearance. Note that prob = 50% is the maximal value that can be attained (comparison of the best-fit model with its copy).
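These comparison measures can be sketched directly from their definitions; the SSE values in the example are illustrative, not fitted values.

```python
import math

# Sketch of the model-comparison measures: RMSE, AIC and the relative
# Akaike weight of formula (5), with K = 4 as in the text.

def rmse(sse, n):
    return math.sqrt(sse / n)

def aic(sse, n, k=4):
    return n * math.log(sse / n) + 2 * k

def prob(aic_value, aic_min):
    """Relative Akaike weight, formula (5): the 'probability' that the model
    with the higher AIC is true, compared with the least-AIC model.
    Equals 50% when the two AIC values coincide."""
    w = math.exp((aic_min - aic_value) / 2.0)
    return w / (1.0 + w)

# Illustrative comparison: on the 28-month data (n = 28, same K), a model
# whose SSE is 17% above the minimum has a probability close to 10%.
n, sse_best = 28, 1.0
print(round(prob(aic(1.17 * sse_best, n), aic(sse_best, n)), 3))   # 0.1
```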

Multi-model approach

For each exponent pair (a, b) of the search grid we identified the best-fit growth curve y(t) for the model BP(a, b); its parameters p, q and c were optimised according to formula (4). Thereby, we could observe a high variability in the best-fit exponents: even for exponent pairs (a, b) with notable differences from the optimal exponent pair (a, b) the best-fit growth curve y(t) barely differed from the data. To describe this phenomenon, we used the following terminology: a model BP(a, b) and its exponent pair (a, b) are u-near-optimal with model-uncertainty u, if SSE(a, b) ≤ (1 + u) ⋅ SSE. Figure 3 illustrates this concept by plotting near-optimal exponent pairs for different levels u = 0.17 and u = 0.5 of model-uncertainty (red and blue dots).
Fig. 3.

Optimal exponent-pair (black) for fitting the data of Table 1 with respect to SSE; red and blue dots for 245 and 1330 exponent pairs with up to 17% and 50% higher SSE; exponent-pairs of the Bertalanffy, Gompertz and Verhulst models (cyan) and part of the search grid (yellow).

Using formula (5), we translated the level u of near-optimality into a probability prob. For example, if we wish to consider models (exponent pairs) with a probability of at least 10%, this corresponds to a model uncertainty of at most u = 0.55 for the 10-month data (n = 10) and u = 0.17 for the 28-month data. For each dataset, we then may ask about the forecasts that would be supported by models with prob = 10% or higher, discarding other possible forecasts as unlikely, given the data. Thus, given a model probability, prob, we studied the forecasts that came from the ensemble of growth curves y(t) corresponding to the near-optimal models BP(a, b) with that probability or higher. For example, for each of these y(t) we estimated its final disease size by the asymptotic limit. The upper and lower bounds of these estimates defined a prediction interval that informed about the likely disease size. Note that this is an analysis of model uncertainty, not of data uncertainty, whence the prediction interval is not a confidence interval. (Confidence intervals assess data uncertainty by means of simulations that add random errors to the data and use best-fit functions to compute bounds for the confidence interval. For prediction intervals, different models are fitted to the same data.)
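The translation between the model-uncertainty level u and the probability prob follows from formula (5): for the same data and the same K, the AIC difference between a u-near-optimal model and the best-fit model is n · ln(1 + u). A minimal sketch, reproducing the thresholds quoted in the text:

```python
import math

def prob_from_u(u, n):
    """Probability (formula 5) of a model whose SSE is (1+u) times the least
    SSE on the same n data points; the AIC difference is then n*ln(1+u)."""
    w = math.exp(-n * math.log(1.0 + u) / 2.0)
    return w / (1.0 + w)

def u_from_prob(p, n):
    """Largest model-uncertainty u whose models still reach probability p."""
    delta = 2.0 * math.log((1.0 - p) / p)   # invert formula (5) for the AIC gap
    return math.exp(delta / n) - 1.0

# Thresholds quoted in the text for prob = 10%:
print(round(u_from_prob(0.10, 10), 2))   # 0.55 for the 10-month data
print(round(u_from_prob(0.10, 28), 2))   # 0.17 for the 28-month data
```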

Results

Best fits

Table 2 identifies the optimal parameters of the BP models that were fitted by ordinary least-squares, equation (2) for SSE, to initial segments of the monthly data, i.e. the data of Table 1 for the first n months (10 ≤ n ≤ 28). As slight changes in the parameter values could result in highly suboptimal growth curves, from which standard optimisation tools could not find their way back to the region of near-optimality, certain parameters are given with an accuracy of 30 or more decimals. Table 3 identifies the optimal parameters for the initial segments, using the method of weighted least-squares, formula (3) for SWSE.
Table 2.

Parameters for the best-fit (SSE) model (1) to the data up to the indicated month

| Month^a | a_min | b_min | c_min | p_min | q_min | SSE_min/10^6 | RMSE | Asymptotic limit^c |
|---|---|---|---|---|---|---|---|---|
| 10 | 1.19 | 1.42 | 17.7741 | 0.317228 | 0.0228423 | 0.0153 | 39.09 | 93 647 |
| 11 | 1.02 | 6.6 | 8.45046 | 0.663536 | 1.657 × 10⁻²⁴ | 0.0211 | 43.84 | 16 976 |
| 12 | 1.38 | 1.62 | 7.263 | 0.123849 | 0.0138776 | 1.4188 | 343.85 | 21 006 |
| 13 | 1.29 | 1.35 | 4.84703 | 0.819036 | 0.448672 | 1.9064 | 382.94 | 22 711 |
| 14 | 1.2 | 1.29 | 1.33673 | 1.12273 | 0.452778 | 2.4962 | 422.26 | 24 105 |
| 15 | 1.02 | 1.4 | 0.0781697 | 1.44265 | 0.0306548 | 3.5057 | 483.44 | 25 220 |
| 16 | 1.15 | 1.16 | 0.188038 | 12.6072 | 1.3871 | 4.5167 | 531.31 | 26 333 |
| 17 | 1.1 | 1.25 | 0.35805 | 1.40607 | 0.304888 | 5.7890 | 583.55 | 26 655 |
| 18 | 1.0 | 1.32 | 0.0562717 | 1.68091 | 0.0640025 | 6.3774 | 595.23 | 27 256 |
| 19 | 1.03 | 1.3 | 0.184543 | 1.46154 | 0.0925867 | 6.9879 | 606.45 | 27 416 |
| 20 | 1.02 | 1.29 | 0.159357 | 1.54147 | 0.0972962 | 7.3897 | 607.85 | 27 788 |
| 21 | 0.95 | 1.38 | 0.0186636 | 1.96722 | 0.0241207 | 7.9605 | 615.69 | 27 878 |
| 22 | 1.0 | 1.31 | 0.123787 | 1.60994 | 0.0672919 | 8.3679 | 616.73 | 28 047 |
| 23 | 1.07 | 1.23 | 0.461196 | 1.49859 | 0.290988 | 8.7655 | 617.34 | 28 106 |
| 24 | 1.03 | 1.26 | 0.23758 | 1.54609 | 0.146399 | 8.8929 | 608.72 | 28 240 |
| 25 | 1.02 | 1.27 | 0.217945 | 1.56039 | 0.120352 | 9.0691 | 602.30 | 28 256 |
| 26 | 0.99 | 1.3 | 0.101559 | 1.70456 | 0.0710008 | 9.1356 | 592.76 | 28 362 |
| 27 | 1.03 | 1.25 | 0.229343 | 1.58429 | 0.166104 | 9.2035 | 583.84 | 28 318 |
| 28 | 1.0 | 1.28 | 0.135994 | 1.67323 | 0.0947475 | 9.2829 | 575.79 | 28 414 |

^a This indicates the data from month 0 to the displayed month.

^b The table reports the parameters based on the best-fit grid-point exponent pairs (the overall optima were slightly different).

^c Estimates above the actual count of 28 616 cases in italics.

Table 3.

Best-fit parameters with respect to SWSE for the data up to the indicated month

| Month | a_min | b_min | c_min | p_min | q_min | SWSE_min | RMSE | Asymptotic limit |
|---|---|---|---|---|---|---|---|---|
| 10 | 1.05 | 9.61 | 4.02260 | 0.515153 | 1.08640 × 10⁻³⁶ | 133.934 | 3.65971 | 14 881 |
| 11 | 1.05 | 6.12 | 14.0057 | 0.515379 | 1.83792 × 10⁻²² | 133.997 | 3.49021 | 16 996 |
| 12 | 1.16 | 2.04 | 20.0083 | 0.277716 | 0.0000429172 | 520.806 | 6.58790 | 21 412 |
| 13 | 1.23 | 1.65 | 19.3364 | 0.226581 | 0.00333742 | 668.755 | 7.17235 | 22 986 |
| 14 | 1.21 | 1.52 | 13.7644 | 0.310600 | 0.0135567 | 1055.06 | 8.68110 | 24 392 |
| 15 | 1.13 | 1.45 | 6.16309 | 0.551204 | 0.0213620 | 1785.13 | 10.9091 | 25 791 |
| 16 | 1.02 | 1.53 | 2.36237 | 0.945249 | 0.0052132 | 2605.26 | 12.7604 | 26 811 |
| 17 | 1.04 | 1.35 | 1.19024 | 1.10290 | 0.0463179 | 3139.26 | 13.5890 | 27 622 |
| 18 | 1.00 | 1.39 | 0.763762 | 1.27837 | 0.0235874 | 3425.32 | 13.7948 | 27 931 |
| 19 | 1.06 | 1.32 | 2.04603 | 1.01810 | 0.0709812 | 3537.84 | 13.6456 | 28 096 |
| 20 | 0.99 | 1.46 | 1.54908 | 1.17837 | 0.00954937 | 4010.67 | 14.1610 | 28 157 |
| 21 | 0.96 | 1.36 | 0.202642 | 1.69771 | 0.0280686 | 4329.33 | 14.3582 | 28 451 |
| 22 | 1.00 | 1.25 | 0.196457 | 1.72351 | 0.132543 | 4595.86 | 14.4535 | 28 591 |
| 23 | 1.02 | 1.22 | 0.292313 | 1.72423 | 0.221396 | 4733.78 | 14.3463 | 28 650 |
| 24 | 0.94 | 1.43 | 0.456417 | 1.66857 | 0.0109219 | 5501.11 | 15.1398 | 28 657 |
| 25 | 0.93 | 1.51 | 1.22328 | 1.56729 | 0.00407388 | 6211.63 | 15.7628 | 28 651 |
| 26 | 0.90 | 1.59 | 0.771413 | 1.82595 | 0.00153534 | 6535.00 | 15.8539 | 28 637 |
| 27 | 0.93 | 1.43 | 0.254940 | 1.79865 | 0.0106292 | 5423.32 | 14.1726 | 28 635 |
| 28 | 0.90 | 1.61 | 1.14407 | 1.77449 | 0.00121544 | 6846.48 | 15.6370 | 28 630 |

Notes as in Table 2 (referring to SWSE rather than to SSE).

At first glance, the growth curves for the same data looked similar, regardless of the method of calibration. Figure 4 plots the monthly data and the growth curves fitted to the initial segments by means of ordinary least-squares, and Figure 5 plots the monthly data and the best-fit curves using weighted least-squares. The two figures look alike, and for both methods good forecasts (asymptotic limits in Tables 2 and 3) of the final disease size of 28 616 cases could be obtained from the data truncated at month n, for n between 16 and 28, whereby the forecasts using weighted least-squares were more accurate: the relative error of the forecasts was 1–8% for SSE and 0–6% for SWSE. All growth curves were bounded, whereby for SSE the first curve (data truncated at month 10) exceeded the data and the subsequent curves approached the data from below; all asymptotic limits remained below the actual final disease size. For SWSE the growth curves initially approached the data from below, and starting with month 23 their asymptotic limits were slightly above the final disease size.
Fig. 4.

Monthly data (black dots) with the best-fit growth curves (SSE) for the data until month 10 (red), 11, … 28 (green); month 0 is December 2013. The best-fit parameters are from Table 2 (plotted using Mathematica 12.0).

Fig. 5.

Monthly data (black dots) with the best-fit growth curves (SWSE) for the data until month 10 (red), 11, … 28 (green); month 0 is December 2013. The best-fit parameters are from Table 3 (plotted using Mathematica 12.0).

Table 5.

10% prediction intervals for asymptotic limits (SSE) and probabilities for predicting the month-28 count

| Month^a | Low^b | High^b | Probability^c (%) | Month^a | Low^b | High^b | Probability^c (%) |
|---|---|---|---|---|---|---|---|
| 10 | 29 806 | 2 706 403 | 8 | 20 | 27 165 | 28 217 | 0.75 |
| 11 | 15 721 | | 25.11 | 21 | 27 377 | 28 429 | 1.01 |
| 12 | 19 768 | 23 488 | 0.13 | 22 | 27 613 | 28 480 | 1.4 |
| 13 | 21 571 | 24 705 | 0.03 | 23 | 27 799 | 28 594 | 3.8 |
| 14 | 23 038 | 25 398 | 0.01 | 24 | 27 874 | 28 539 | 3.65 |
| 15 | 24 143 | 26 359 | 0.03 | 25 | 27 940 | 28 557 | 9.19 |
| 16 | 25 191 | 27 366 | 0.17 | 26 | 28 005 | 28 621 | 8 |
| 17 | 25 953 | 27 597 | 0.38 | 27 | 28 098 | 28 630 | 25.11 |
| 18 | 26 423 | 27 903 | 0.55 | 28 | 28 145 | 28 587 | 0.13 |
| 19 | 26 893 | 28 139 | 0.65 | | | | |

^a Truncated data from month 0 to the displayed month.

^b Minimum and maximum of the asymptotic limits for models with 10% probability to be true, based on SSE, with limits above 28 616 cases displayed in italics.

^c Minimum of the maximal probabilities of models BP(a, b), whose trajectories at month 28 were above or below the actual count of 28 616 cases.

The best-fit curves to the full data were analysed in more detail, including the weekly data (details given in the Supporting information). Figure 6 plots the growth curves that were fitted to the full sets of monthly and weekly data, respectively, using the two calibrations. Again, there were only slight differences between the curves obtained with ordinary and weighted least-squares; for SSE and SWSE the asymptotic limits were below and above the final disease size, respectively. Furthermore, the growth curves for the weekly and monthly data that were calibrated by the same method were largely overlapping.
Fig. 6.

Weekly data (blue dots, with correction of three inconsistencies), best-fitting growth curve to these data using SSE (red) and SWSE (green) and best-fitting growth curves fitted to the monthly data using SSE (orange) and SWSE (cyan), whereby at day x we evaluated the growth function at month (x + 84)/30.4, because the daily data started later. The parameters are given in the Supporting information; plotted using Mathematica 12.0.

Despite these similarities, there were obvious differences between the exponent pairs computed using the two calibrations; Figure 7 plots them. Thereby, except for the SSE fit to the 28-month data (Richards' model), all exponent pairs were clearly distinct from the exponent pairs of the named models mentioned in the section ‘Method’.
Fig. 7.

Best-fit exponent pairs for the truncated monthly data using SSE (blue) from Table 2 and SWSE (green) from Table 3. Lines connect exponent pairs for successive data, starting with the exponent pair for the 10-month data (red) and ending with the exponent pair for the 28-month data (black); plotted using Mathematica 12.0.


Testing distribution assumptions

We now explain why, despite the apparent utility of the ordinary least-squares method, we recommend our method of weighted least-squares. One reason was that for the present data the implicit distribution assumption of SWSE, unlike that of SSE, was not outright false: the weights for SWSE were motivated by the observation that the variance of the cumulative count depends on the number of new cases (the derivative of the cumulative count), as it was low at the beginning and end of the outbreak and maximal at the epidemic peak, while SSE assumed equal variances for all data. We checked these distribution assumptions for the full dataset by testing, for SSE, the normal-distribution hypothesis for the residuals and, for SWSE, the normal-distribution hypothesis for the residuals divided by the square root of y′. Table 4 summarises the results. We tested the fits to the full set of monthly data, and for both calibration methods we took the residuals from the best-fit models obtained from additional simulated-annealing steps (parameters given in the Supporting information). We applied a set of the most common distribution-fit tests to the residuals; they relate to different aspects of normality [39]. For ordinary least-squares (SSE), with one exception (skewness), the tests supported the conclusion that at 95% confidence the residuals were not normally distributed (P-values below 5%). Hence, an implicit assumption of this method (and of the AIC) was refuted. For weighted least-squares (SWSE) this difficulty did not occur: none of the tests rejected the assumption of a normal distribution for the residuals divided by the square root of the derivative y′ of the best-fit trajectory, as all tests had sufficiently large P-values (20–70%).
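As an illustration of the kind of test listed in Table 4, the Jarque–Bera statistic combines sample skewness and kurtosis; a minimal pure-Python sketch follows (the actual analysis used Mathematica's distribution-fit tests). It also shows the SWSE residual scaling: each residual is divided by the square root of y′(t) before testing. The sample below is a contrived symmetric three-point dataset, chosen so its moments are easy to verify.

```python
import math

def skew_kurt(xs):
    # Sample skewness and kurtosis from central moments.
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

def jarque_bera(xs):
    """JB = n/6 * (S^2 + (K-3)^2 / 4); large values reject normality
    (5% critical value about 5.99 for the chi-squared limit, 2 df)."""
    s, k = skew_kurt(xs)
    return len(xs) / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)

def weighted_residuals(residuals, new_cases):
    # SWSE scaling: divide each residual by sqrt(y'(t)) before testing.
    return [r / math.sqrt(d) for r, d in zip(residuals, new_cases)]

# Symmetric three-point sample: zero skewness, kurtosis 1.5, so JB is
# large and (correctly) flags a non-normal, platykurtic shape.
sample = [-1.0, 0.0, 1.0] * 100
print(round(jarque_bera(sample), 3))   # 28.125
```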
Table 4.

Testing the normal distribution hypothesis for SSE and SWSE

| Test | SSE statistic | SSE P-value (%) | SWSE statistic | SWSE P-value (%) |
|---|---|---|---|---|
| Anderson and Darling | 0.843113 | 2.9 | 0.395597 | 37.5 |
| Baringhaus and Henze | 0.616057 | 4.4 | 0.203591 | 42.3 |
| Cramér and von Mises | 0.138078 | 3.4 | 0.0561172 | 43.3 |
| Jarque and Bera | 10.6558 | 3.6 | 2.03096 | 24.4 |
| Kolmogorov and Smirnov | 0.183602 | 1.7 | 0.129329 | 26.9 |
| Kuiper | 0.250117 | 2.8 | 0.178515 | 31.9 |
| Mardia combined | 10.6558 | 3.6 | 2.03096 | 24.4 |
| Mardia kurtosis | 2.07855 | 3.8 | 0.370169 | 71.1 |
| Mardia skewness | 1.38398 | 23.9 | 1.16285 | 28.1 |
| Pearson χ² | 12.0 | 3.5 | 6.28571 | 27.9 |
| Shapiro and Wilk | 0.911888 | 2.2 | 0.961703 | 38.2 |
| Watson U² | 0.137992 | 2.4 | 0.0524089 | 45.8 |

Note: Based on the best-fit growth curves y to the 28-month data, distribution-fit tests were applied to the residuals (SSE) and to the residuals divided by the square-root of y′ (SWSE), respectively. The best-fit parameters are described in the text.


Multi-model comparisons

In this section, we outline another drawback of the ordinary least-squares method (SSE): using SSE, good conservative forecasts of the final disease size could not be obtained for the present data from models that fitted the data well. Thus, paradoxically, a prudent forecaster using ordinary least-squares would deliberately use models that are highly unlikely in terms of their probability (Akaike weight) of equation (5). By contrast, for weighted least-squares (SWSE) the prediction intervals for the asymptotic limit contained the final disease size, whereby the prediction intervals used only likely models (high probability prob). Tables 5 and 6 report the 10% prediction intervals for the asymptotic limits, using truncated monthly data and best-fit models obtained by ordinary and weighted least-squares, respectively, and admitting all models with probability prob ≥ 10%. If the data were truncated at month 10, then for both methods of calibration the 10% prediction intervals were useless: for SSE, already the lower estimate was too pessimistic, and for SWSE the interval was unbounded above. For SSE there were only two datasets whose 10% prediction intervals contained the final disease size of 28 616 cases: the data truncated at months 26 and 27. By contrast, using SWSE, for all datasets truncated at any of the months 21–28 the 10% prediction intervals contained the final disease size.
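The construction of these 10% prediction intervals can be sketched as follows: among the per-grid-point results (a probability from formula (5) together with the corresponding asymptotic limit), keep the models at or above the probability threshold and report the extreme limits. The ensemble below is illustrative, not taken from Tables 5 or 6.

```python
def prediction_interval(models, threshold=0.10):
    """models: iterable of (prob, asymptotic_limit) pairs, one per
    near-optimal model BP(a, b). Returns (low, high) over all models whose
    probability reaches the threshold, or None if no model qualifies."""
    limits = [lim for p, lim in models if p >= threshold]
    if not limits:
        return None
    return min(limits), max(limits)

# Illustrative ensemble: only the three models with prob >= 10% contribute;
# the unlikely fourth model is discarded.
ensemble = [(0.50, 28414), (0.22, 28350), (0.11, 28620), (0.04, 29900)]
print(prediction_interval(ensemble))   # (28350, 28620)
```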
Table 6. 10% prediction intervals for asymptotic limits (SWSE) and probabilities for predicting the month-28 count

Month^a  Low^b    High^b   Probability^c (%)    Month^a  Low^b    High^b   Probability^c (%)
10       12 899   ∞        50                   20       28 034   28 426   0.42
11       16 918   17 752   0.003                21       28 304   28 646   14.72
12       20 529   23 309   0.11                 22       28 468   28 691   37.52
13       22 341   23 996   0.01                 23       28 527   28 731   48.25
14       23 959   25 209   0.01                 24       28 584   28 716   41.04
15       25 259   26 672   0.1                  25       28 596   28 694   46.34
16       26 364   27 584   0.33                 26       28 591   28 679   47.42
17       27 183   28 396   2.8                  27       28 594   28 673   45.35
18       27 581   28 426   1.76                 28       28 600   28 664   46.62
19       27 882   28 464   3.79

Notes as for Table 5 (referring to SWSE rather than to SSE).

Notes to Table 5: ^a Truncated data from month 0 to the displayed month. ^b Minimum and maximum of the asymptotic limits for models with at least 10% probability to be true, based on SSE, with limits above 28 616 cases displayed in italics. ^c Minimum of the maximal probabilities of models BP(a, b) whose trajectories at month 28 were above or below the actual count of 28 616 cases. The notes for Table 6 are analogous, referring to SWSE rather than to SSE.

We applied the notion of the prediction interval also to other predictors, such as the forecasted disease size at month 28 (and, in the Supporting information, Tables S1–S3: best-fit model parameters for SSE and SWSE and coordinates of the inflection point). Tables 5 and 6 report these intervals in abbreviated form, giving the least probability such that the actual disease size of 28 616 cases was contained in the prediction interval. For example, the month-10 entry of Table 5 was obtained as follows: the best-fit model was BP(1.07, 1.29), and for its epidemic trajectory y_{1.07, 1.29}(t) the value y(28) was above the actual disease size, while for the model BP(1.19, 1.42) and its best-fitting epidemic trajectory y_{1.19, 1.42}(t) the value y(28) was below the actual disease size (the latter model had the maximal probability among such models); the minimum of these two probabilities was reported in the table. For ordinary least-squares, models with a probability below 10% were needed to enclose the actual disease size in the prediction interval. Specifically, for the full data (month 28), only unlikely trajectories (probability 0.0013, i.e. 0.13%) exceeded the true count at month 28.
Again, for weighted least-squares and all datasets truncated at any of the months 21–28, the 10% prediction intervals contained the final disease size. Furthermore, for the data truncated at any of the months 21–28, all likely growth curves (probability of at least 10% to be true) deviated from the final case count by at most ±2%, and there were likely curves both above and below the final case count. Thus, unlike ordinary least-squares, weighted least-squares showed no bias towards overly optimistic forecasts, and its forecasts were not too pessimistic, either.
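The multi-model step above (model probabilities and a 10% prediction interval for the asymptotic limit) can be sketched as follows. Since equation (5) is not reproduced in this excerpt, the standard least-squares form AIC = n·ln(SSE/n) + 2k is an assumption, and the SSE values, asymptotic limits and parameter count are hypothetical:

```python
# Sketch: Akaike weights from fit SSE values, and the 10% prediction interval
# for the asymptotic limit over the "likely" models (weight >= 10%).
import numpy as np

def akaike_weights(sse, n, k):
    """Model probabilities from SSE via AIC = n*ln(SSE/n) + 2k (assumed form)."""
    aic = n * np.log(np.asarray(sse) / n) + 2 * k
    d = aic - aic.min()
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Hypothetical candidate BP(a, b) models: fit SSE and asymptotic limit of each.
sse = np.array([1.10e6, 1.15e6, 1.30e6, 9.0e6])
limits = np.array([28_300, 28_650, 29_100, 35_000])

w = akaike_weights(sse, n=29, k=5)  # 29 monthly data; hypothetical 5 parameters
likely = w >= 0.10                  # models with at least 10% probability
interval = (int(limits[likely].min()), int(limits[likely].max()))
print(interval)  # -> (28300, 28650)
```

The interval spans the asymptotic limits of the likely models only; the paper's observation is that under SSE this interval tends to sit below the final disease size, while under SWSE it encloses it from month 21 onward.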

Discussion

Our results have several implications for practical applications. With respect to the choice of BP models, Viboud et al. [21] recommended using only exponents a ≤ 1 to model the initial phases of epidemics. However, for most truncated data better fits were achieved with a > 1; such a restriction therefore appears premature. Concerning the two methods (SSE and SWSE) for calibrating the models, we observed that the best-fit BP models obtained by ordinary least-squares generally had asymptotic limits that were too low when compared to the final disease size. For the modelling of epidemics this resulted in overly optimistic forecasts of the final disease size. Better estimates could be obtained by weighted least-squares, whereby different types of data need different weight functions. For epidemic data (total case counts) we recommend using as weight function the reciprocal of the derivative of the growth function; formula (3). This recommendation was supported by an analysis of the (weighted) fit residuals: the normal distribution assumption of the ordinary least-squares method was refuted, while the analogous assumption for weighted least-squares was not. For the present data we concluded that a reasonable prognosis of upper and lower bounds of the final disease size was possible much earlier with this weight function than with ordinary least-squares, namely already at month 21 or later, whereas for ordinary least-squares it remained unclear whether such a prognosis was feasible at all. However (Supporting information, Table S3), as follows from the prediction intervals for the inflection point, the peak of the disease was attained around t ≈ 10 months regardless of the method of calibration. Thus, even with a suitable weight function the prognosis of the final disease size may be possible only at a surprisingly late phase of a disease.
Obviously, no calibration can extract more information about future trajectories than is available from the given data. Forecasts using data from an early stage of an epidemic, prior to its peak (e.g. the 10-month data), remained uninformative also with the new calibration. Thus, precise forecasts of the final size required enough data points beyond the peak of the epidemic to inform about how fast the epidemic slowed down. However, for the present data the proposed weight function had the advantage that its estimates of the final disease size became reliable earlier than for the ordinary least-squares method, which currently is the standard method for such purposes.
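The recommended SWSE calibration can be sketched in Python as follows. The BP equation y′ = p·y^a − q·y^b and the weighting by the reciprocal of y′ follow the paper; everything else (the synthetic `cases` series, the fixed exponent pair, starting values and bounds) is a hypothetical stand-in, not the authors' setup:

```python
# Sketch: weighted least-squares (SWSE) calibration of a BP(a, b) trend model.
# Weights 1/y' are applied by dividing each residual by sqrt(y').
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

months = np.arange(29.0)
cases = 28616.0 / (1.0 + np.exp(-0.5 * (months - 10.0)))  # stand-in counts

A, B = 1.1, 1.3  # one fixed exponent pair; the paper compares many BP(a, b)

def trajectory(theta):
    """Integrate the BP equation y' = p*y^A - q*y^B; return (y, y') monthly."""
    p, q, y0 = theta
    sol = solve_ivp(lambda t, y: p * y**A - q * y**B,
                    (0.0, 28.0), [y0], t_eval=months, rtol=1e-8)
    if not sol.success or sol.y.shape[1] != months.size:
        return None  # integration failed for these trial parameters
    y = sol.y[0]
    return y, p * y**A - q * y**B

def swse_residuals(theta):
    out = trajectory(theta)
    if out is None:
        return np.full(months.size, 1e6)  # penalise failed integrations
    y, dy = out
    return (cases - y) / np.sqrt(np.maximum(dy, 1e-9))  # SWSE weighting

fit = least_squares(swse_residuals, x0=[0.75, 0.10, 190.0],
                    bounds=([0.0, 0.0, 1.0], [5.0, 5.0, 1000.0]))
p, q, y0 = fit.x
```

Repeating this fit over a grid of exponent pairs (a, b) and comparing the resulting models by their probabilities would reproduce the multi-model setup described above; replacing the weighted residuals by plain `cases - y` gives the ordinary SSE calibration for comparison.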

Conclusion

This paper proposes a new calibration of phenomenological models in the context of modelling infectious disease outbreaks. We recommend weighted least-squares using as weights the reciprocals of the derivatives of the growth function (the derivative corresponds to the number of new cases). We tested this calibration by modelling the West African Ebola outbreak of 2013–2016 with the BP model. Although for these data the estimates remained comparable to those of the least-squares method, several tests (distribution of the fit residuals and forecasting intervals for estimating the final disease size) confirmed that the new calibration was superior to ordinary least-squares.
References (25 in total)

1. G. B. West; J. H. Brown; B. J. Enquist. A general model for ontogenetic growth. Nature, 2001-10-11.

2. L. von Bertalanffy. Problems of organic growth. Nature, 1949-01-29.

3. D. Butler. Models overestimate Ebola cases. Nature, 2014-11-06.

4. J. Asher. Forecasting Ebola with a regression transmission model. Epidemics, 2017-02-27.

5. B. Pell; Y. Kuang; C. Viboud; G. Chowell. Using phenomenological models for forecasting the 2015 Ebola challenge. Epidemics, 2016-11-19.

6. R. Bürger; G. Chowell; L. Y. Lara-Díaz. Comparative analysis of phenomenological growth models applied to epidemic outbreaks. Math Biosci Eng, 2019-05-16.

7. T. R. Julian; H. S. K. Vithanage; M. L. Chua; M. Kuroda; A. K. Pitol; P. H. L. Nguyen; R. A. Canales; S. Fujii; H. Harada. High time-resolution simulation of E. coli on hands reveals large variation in microbial exposures amongst Vietnamese farmers using human excreta for agriculture. Sci Total Environ, 2018-04-13.

8. G. Chowell; L. Simonsen; C. Viboud; Y. Kuang. Is West Africa approaching a catastrophic phase or is the 2014 Ebola epidemic slowing down? Different models yield different answers for Liberia. PLoS Curr, 2014-11-20.

9. T. Ganyani; K. Roosa; C. Faes; N. Hens; G. Chowell. Assessing the relationship between epidemic growth scaling and epidemic size: the 2014-16 Ebola epidemic in West Africa. Epidemiol Infect, 2018-10-15.

10. H. Heesterbeek; R. M. Anderson; V. Andreasen; S. Bansal; D. De Angelis; C. Dye; K. T. D. Eames; W. J. Edmunds; S. D. W. Frost; S. Funk; T. D. Hollingsworth; T. House; V. Isham; P. Klepac; J. Lessler; J. O. Lloyd-Smith; C. J. E. Metcalf; D. Mollison; L. Pellis; J. R. C. Pulliam; M. G. Roberts; C. Viboud. Modeling infectious disease dynamics in the complex landscape of global health (review). Science, 2015-03-13.

