
COVID-19 mortality analysis from soft-data multivariate curve regression and machine learning.

Antoni Torres-Signes, María P. Frías, María D. Ruiz-Medina.

Abstract

A multiple-objective space-time forecasting approach is presented, involving cyclical curve log-regression and multivariate time series spatial residual correlation analysis. Specifically, the mean quadratic loss function is minimized in the framework of trigonometric regression, while, in the subsequent spatial residual correlation analysis, maximization of the likelihood allows us to compute the posterior mode in a Bayesian multivariate time series soft-data framework. The presented approach is applied to the analysis of COVID-19 mortality in the first wave affecting the Spanish Communities, from March 8, 2020 until May 13, 2020. An empirical comparative study with Machine Learning (ML) regression, based on random k-fold cross-validation, bootstrap confidence intervals, and probability density estimation, is carried out. This empirical analysis also investigates the performance of ML regression models in hard- and soft-data frameworks. The results could be extrapolated to other counts, countries, and posterior COVID-19 waves. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00477-021-02021-0.
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021.


Keywords:  COVID-19 analysis; Curve regression; Hard-data; Machine learning; Multivariate time series; Soft-data

Year:  2021        PMID: 33897300      PMCID: PMC8053745          DOI: 10.1007/s00477-021-02021-0

Source DB:  PubMed          Journal:  Stoch Environ Res Risk Assess        ISSN: 1436-3240            Impact factor:   3.379


Introduction

Coronavirus disease 2019 (COVID-19) has rapidly spread to many countries since December 2019, when it arose in China (see Sivakumar 2020; Wang et al. 2020; Zhou et al. 2020). The effective allocation of medical resources requires the derivation of predictive techniques describing the spatiotemporal dynamics of COVID-19 (see, e.g., Du et al. 2020; Khan and Atangana 2020; Nishiura et al. 2020; Remuzzi and Remuzzi 2020, just to mention a few). Epidemiological models can contribute to the analysis of the causes, dynamics, and spread of this pandemic (see, e.g., Huppert and Katriel 2013; Keeling and Rohani 2008; Laaroussi et al. 2018, and the references therein). Short-term forecasts can be obtained by adopting the framework of compartmental SIR (susceptible-infectious-recovered) models, based on ordinary differential equations (see, e.g., Angulo et al. 2013; Elhia et al. 2014; Ji et al. 2012; Kermack and McKendrick 1927; Kuznetsov and Piccardi 1994; Milner and Zhao 2008; Pathak et al. 2010; Tornatore et al. 2005; Yu et al. 2009; Zhang et al. 2008). An extensive literature is available, including different versions of compartmental models, such as SIRS (susceptible-infectious-recovered-susceptible; Dushoff et al. 2004), and delay differential equations (see Beretta et al. 2001; McCluskey 2010; Sekiguchi and Ishiwata 2010). Spatial extensions, based on reaction-diffusion models reflecting the spread of an infectious disease over a spatial region, can be found, for instance, in Guin and Mandal (2014) and Webb (1981). SEIRD (susceptible, exposed, infected, recovered, deceased) models, incorporating the spatial spread of the disease with inhomogeneous diffusion terms, have also been analyzed (see Roques and Bonnefon 2016 and Roques et al. 2011). Stochastic versions of SIR-type models intend to cover several limitations regarding uncertainty in the observations and the hidden dynamical epidemic process. Markov chain SIR-based modelling (see Anderson and Britton 2000; Xu et al. 2007), and some recent stochastic formulations involving complex networks (see Volz 2008; Zhou et al. 2006) or drug-resistant influenza (see Chao et al. 2012), constitute some alternatives. A Bayesian hierarchical statistical SIRS model framework is adopted in Aalen et al. (2008); Abboud et al. (2019); Anderson and Britton (2000), and Fleming and Harrington (1991), taking into account the observation error in the counts and the uncertainty in the parameter space. Beyond SIR modeling, the multivariate survival analysis approach offers a suitable modelling framework regarding the random infection, incubation, and recovery periods affecting the containment of COVID-19 (see, e.g., Bolker and Grenfell 1996; Keeling et al. 1997; Pak et al. 2020; Wasiur et al. 2019). In a first stage, most of the above-referred models were adapted and applied to approximate the space-time evolution of COVID-19 incidence and mortality. That is the case, for instance, of the three models presented in Roosa et al. (2020), which were validated with outbreaks of diseases other than COVID-19. Alternative SEIR-type models, involving stochastic components, are formulated in Kucharski et al. (2020). A revised SEIR model has also been proposed in Zhang et al. (2020) (see also He et al. 2020). A θ-SEIHRD model, able to estimate the number of cases, deaths, and hospital beds needed, is introduced in Ivorra et al. (2020), adapted to COVID-19 and based on the Be-CoDiS model (see Ivorra et al. 2015). Due to the low quality of the records available, and the hidden sample information, the most remarkable feature in this research area is the balance between complexity and identifiability of model parameters. Recently, an attempt to simplify modelling strategies applied to COVID-19 data analysis has been presented in Ramos et al. (2020), in terms of a θ-SEIHQRD model.
Mitigation of undersampling is proposed in Langousis and Carsteanu (2020), based on re-scaling of summary statistics characterizing sample properties of the pandemic process, useful between countries with similar levels of health care. Nowadays, ML models have established themselves as serious contenders to classical statistical models in the area of forecasting. Research started in the eighties with the development of the neural network model, and subsequently extended this concept to alternative models, such as support vector machines, decision trees, and others (see, e.g., Alpaydin 2004; Blanquero et al. 2020; Hastie et al. 2001; Mohammady et al. 2021). In general, curve regression techniques based on a function basis, usually in the space of square-integrable functions with respect to a suitable probability measure, allow short- and long-term forecasting. Thus, depending on the choice of the function basis and the probability measure selected, particle and field views can be combined. Note that classical stochastic diffusion models offer a particle rather than a field view (see, e.g., Malesios et al. 2016). Linear regression, multilayer perceptron, and vector autoregression methods have been applied in Sujath et al. (2020a, 2020b) to predict COVID-19 spread, anticipating the potential patterns of COVID-19 effects (see also Section 2 of Sujath et al. (2020a) on related work). Early-stage detection of COVID-19 is addressed in Barstugan et al. (2020), applying machine learning methods to abdominal Computed Tomography images. Chien and Chen (2020) evaluate the association between meteorological factors and COVID-19 spread. They concluded that average temperature, minimum relative humidity, and precipitation were the better predictors, displaying possible non-linear correlations with the COVID-19 variables. These conclusions are crucial in the subsequent machine learning regression based analysis.
This paper presents a multiple-objective space-time forecasting approach, in which curve trigonometric log-regression is combined with multivariate time series spatial residual analysis. In our curve regression model fitting, we are interested in reflecting the cyclical behavior of COVID-19 mortality induced by the hardening or relaxation of the containment measures adopted to mitigate the increase of infections and mortality. The trigonometric basis (sines and cosines) is therefore selected in our spatially heterogeneous curve log-regression model fitting. The ratio of the expected minimized empirical risk to the corresponding expected value of the quadratic loss function at such a minimizer is considered for model selection (see, e.g., Chapelle et al. 2002). Note that this selection procedure provides an agreement between the expected minimum empirical risk and the corresponding expected theoretical loss function value. The penalized factor proposed in Chapelle et al. (2002), applied to our choice of the truncation parameter, determines the dimension of the subspace where our curve regression estimator is approximated at any spatial location. This model selection procedure is asymptotically equivalent to the Akaike correction factor. A robust modification of the Akaike information criterion can be found, for example, in Agostinelli (2001). As an alternative, one can consider a cross-validation criterion for selecting the best subset of explanatory variables (see Takano and Miyashiro 2020, where a mixed-integer optimization approach is proposed in this context). Beyond asymptotic analysis, model selection from finite sample sizes constitutes a challenging topic in our approach. To address this problem, a bootstrap estimator of the ratio between the expected quadratic loss function and the expected training quadratic error, from different sets of explanatory variables, is implemented.
Bootstrap confidence intervals are also provided for the spatial mean of the curve regression predictor, and for the expected training error of the curve regression and of the multivariate time series residual predictors. The bootstrap approximation of the probability distribution of these statistics is also computed. In our multivariate time series analysis of the regression residuals, classical and Bayesian componentwise estimation of the spatial linear correlation is achieved. The presented multiple-objective forecasting approach is applied to the spatiotemporal analysis of COVID-19 mortality in the first wave affecting the Spanish Communities, from March 8, 2020 until May 13, 2020. Our results show a remarkable qualitative agreement with the reported epidemiological data. The spatiotemporal approach presented in this paper fuses generalized random field theory with our multiple-objective space-time forecasting, based on nonlinear parametric regression and Bayesian analysis of the spatiotemporal correlation structure. Regarding the site-specific or specificatory knowledge bases (see Christakos et al. 2002), several information sources can be incorporated in our approach in the description of the hidden epidemic process. In particular, we distinguish here between hard-data, or hard measurements, providing a satisfactory level of accuracy for practical purposes, and soft-data, displaying a non-negligible amount of uncertainty. That is, in this second data category we include missing or imperfect observations, categorical data, and fuzzy inputs (see also Christakos 2000, 2002; Christakos and Hristopulos 1998, and the references therein). In this paper, we consider hard-data sets given by numerical values of our count process at the Spanish Communities analyzed. Our soft-data sample complements the hard measurements in terms of interpolated, smoothed, and spatially projected data.
In particular, spatial correlations between regions are incorporated in terms of soft-data. Additional information about the continuous functional nature of the underlying space-time COVID-19 mortality process is also reflected in our soft-data set. This information helps the implementation of the proposed estimation methodology in the framework of Functional Data Analysis (FDA) techniques. As commented before, recent advances in the spatiotemporal mapping of epidemiological data incorporate ML regression models to improve and help the understanding of general or core knowledge bases. Thus, model fitting is achieved according to epidemiological system laws, population dynamics, and theoretical space-time dependence models (see Christakos 2008, and the references therein). See also Barstugan et al. (2020), Chien and Chen (2020), and Sujath et al. (2020a) in the hard-data context. It is well known that the limited availability of hard-data affects space-time analysis. Hence, the incorporation of soft-data into ML regression models can help this analysis, providing a global view of the available sample information (see, e.g., Christakos et al. 2002). In particular, in our empirical comparative analysis, involving ML regression models and our approach, input hard- and soft-data information is incorporated. Cross-validation, bootstrap confidence intervals, and probability density estimation support our comparative study. Specifically, random k-fold cross-validation first evaluates the performance of the compared regression models from hard- and soft-data, in terms of Symmetric Mean Absolute Percentage Errors (SMAPEs). Bootstrap confidence intervals and probability density estimation of the spatially averaged SMAPEs approximate the distributional characteristics of the random k-fold cross-validation errors. Thus, a complete picture of SMAPEs supports our evaluation of the predictive ability of the regression models tested, from the analyzed hard- and soft-data sets.
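The evaluation scheme just described can be sketched in pure Python (a minimal illustration with hypothetical function names, not the authors' implementation; the SMAPE variant below averages 2|f − a|/(|a| + |f|), expressed in percent, which is one common convention):

```python
import random

def smape(actual, forecast):
    """Symmetric Mean Absolute Percentage Error, in percent."""
    terms = [
        0.0 if a == f == 0 else abs(f - a) / ((abs(a) + abs(f)) / 2)
        for a, f in zip(actual, forecast)
    ]
    return 100.0 * sum(terms) / len(terms)

def k_fold_smape(x, y, fit, k=5, seed=0):
    """Random k-fold CV: fit on k-1 folds, score SMAPE on the held-out fold.

    `fit(xs, ys)` must return a callable model; one SMAPE per fold is returned.
    """
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)          # random fold assignment
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for test in folds:
        train = [i for i in idx if i not in test]
        model = fit([x[i] for i in train], [y[i] for i in train])
        preds = [model(x[i]) for i in test]
        scores.append(smape([y[i] for i in test], preds))
    return scores
```

The k per-fold SMAPEs returned by `k_fold_smape` are the raw material for the bootstrap confidence intervals and density estimates mentioned above.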
From the empirical comparative analysis carried out, we can conclude that the best performance in both the hard- and soft-data categories is, in almost all respects, displayed by Radial Basis Function Neural Networks (RBF) and Gaussian Processes (GP). Both approaches improve when soft-data are incorporated into the regression analysis. Slight differences are observed in the performance of Support Vector Regression (SVR) and Bayesian Neural Networks (BNN). The Multilayer Perceptron (MLP) outperforms the Generalized Regression Neural Network (GRNN), presenting better estimation results when hard-data are analyzed. The sample values and distributional characteristics of the cross-validation SMAPEs for GRNN are similar to those obtained in trigonometric curve regression when spatial residual analysis is achieved in terms of empirical second-order moments. Note that GRNN is also favored by the soft-data category. In this category, BNN and our approach show very similar performance when trigonometric regression is combined with Bayesian multivariate time series residual prediction. Indeed, slightly better bootstrap distributional characteristics of our approach with respect to BNN are observed in the soft-data category. The outline of the paper is the following. The modeling approach is introduced in Sect. 2. Section 3 describes the multiple-objective forecasting methodology. This methodology is applied to the spatiotemporal statistical analysis of COVID-19 mortality in Spain in Sect. 4. The empirical comparative study with ML regression models is given in Sect. 5. Conclusions about our data-driven model ranking can be found in Sect. 6. In the Supplementary Material, a brief introduction to our implementation of ML models from hard- and soft-data is provided. Additional numerical estimation results, based on the complete sample, are also displayed; in particular, the observed and predicted cumulative mortality cases and log-risk curves.

Data model

Let be the basic probability space. Consider the space of square-integrable functions on to be the underlying real separable Hilbert space. In the following, we denote by the Borel -algebra in Let be our spatiotemporal input hard-data process on satisfying for any time The input soft-data process over any spatial bounded set is then defined as where denotes the space of infinitely differentiable functions with compact support contained in D. For each bounded set define Assume that, for any finite positive interval and bounded set where denotes the identity in the second-order moment sense. Let be a family of random counting measures. Given the observation at the finite temporal interval of the input soft-data process over the spatial h-window in D, the conditional probability distribution of the number of random events that occur in is a Poisson probability distribution with mean for every and We refer to as the generalized cumulative mortality risk random process over the interval Hence, the input hard-data process defines the spatiotemporal mortality log-risk process. From the sample values of our input soft-data process, the following observation model is considered in the curve regression model fitting, where, with denoting a function family in H, whose elements have respective compact supports defining the p small-areas where the counts are aggregated, satisfying suitable regularity conditions. For each the vector contains the center and bandwidth parameters, defining the window selected in the analysis of the small-area p. For each represents the unknown parameter vector to be estimated at the p region, and is the open set defining the parameter space, whose closure is a compact set in We assume that is of the form (see, e.g., Ivanov et al. 2015), whose spatial-dependent parameters are given by the temporal scalings and the Fourier coefficients For simplification purposes, we will consider that the scaling parameters are known, and fixed over the P spatial regions.
Also, for where N denotes the truncation parameter, which will be selected according to the penalized factor proposed in Chapelle et al. (2002), as we explain in more detail in Sect. 3. To analyze the spatial correlation between regions, a multivariate autoregressive model is considered for prediction of the regression residual term at each region. In particular, for any in equation (3) is assumed to satisfy the state equation, for where, for any and Here, are assumed to be independent zero-mean Gaussian P-dimensional vectors. For the projection then keeps the temporal linear autocorrelation at each spatial region for and the temporal linear cross-correlation between regions for of the regression error (see Bosq 2000).
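The trigonometric regression component of the model can be illustrated with a minimal Python sketch (our own hypothetical code, under the simplifying assumptions of an evenly spaced time grid and unit scaling parameters; on such a grid the sine/cosine basis is orthogonal, so the least-squares solution reduces to discrete inner products):

```python
import math

def fit_trig(y, n_harmonics):
    """Least-squares Fourier coefficients of y on an evenly spaced grid.

    Returns the intercept a0 and (a_j, b_j) pairs of the model
    y(t) ~ a0 + sum_j a_j cos(2*pi*j*t/T) + b_j sin(2*pi*j*t/T).
    """
    T = len(y)
    a0 = sum(y) / T
    coefs = []
    for j in range(1, n_harmonics + 1):
        # Discrete orthogonality makes these inner products the LS solution.
        aj = 2.0 / T * sum(y[t] * math.cos(2 * math.pi * j * t / T) for t in range(T))
        bj = 2.0 / T * sum(y[t] * math.sin(2 * math.pi * j * t / T) for t in range(T))
        coefs.append((aj, bj))
    return a0, coefs

def predict_trig(a0, coefs, t, T):
    """Evaluate the fitted trigonometric regression curve at time t."""
    val = a0
    for j, (aj, bj) in enumerate(coefs, start=1):
        val += aj * math.cos(2 * math.pi * j * t / T) + bj * math.sin(2 * math.pi * j * t / T)
    return val
```

In the paper, one such curve model is fitted per region, on the log-risk scale, with a truncation parameter N chosen as described in Sect. 3.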

Implementation of the curve regression model and spatial residual analysis

Let be the small-areas where the counts are aggregated, and be the functions with respective compact supports In particular, we denote by the centers respectively allocated at the regions and by the bandwidth parameters providing the associated window sizes. In practice, from the observation model (3), to find in (5) minimizing the expected quadratic loss function, or expected risk, we look for the minimizer of the empirical regression risk. The truncation parameter N is then selected to control the ratio between the expected quadratic loss function at and the expected value of the minimized empirical risk, from the identity, where, for denotes the inverse of the ith eigenvalue of the matrix with being a matrix whose elements are the values of the N trigonometric basis functions selected at the time points Parameter N should be such that Note that, asymptotically, when goes to the identity matrix, and for In equation (8), we have considered the minimized empirical risk, for each spatial region, where Our regression predictor is then computed, for any from the identity (see Theorem 1 in Ivanov et al. (2015) about conditions for the weak-consistency of (10)). The regression residuals, and the empirical nuclear autocovariance and cross-covariance operators, will be considered in the estimation of the spatial linear residual correlation (see Bosq 2000). A truncation parameter k(T) is also considered here to remove the ill-posed nature of this estimation problem. In particular, k(T) must satisfy A suitable choice of k(T) also ensures strong-consistency of the estimator, for (see Bosq 2000). Here, where and denote the empirical eigenvalues and eigenvectors of respectively. In particular, we consider (see Bosq 2000).
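In a drastically simplified, componentwise form (one region, lag one, scalar residuals), the empirical autocorrelation estimator and the associated one-step prediction reduce to the following sketch (hypothetical Python, not the authors' code; the paper's estimator is the operator-valued analogue of this ratio):

```python
def fit_ar1(e):
    """Empirical lag-1 autocorrelation: <e_t, e_{t-1}> / <e_{t-1}, e_{t-1}>."""
    num = sum(e[t] * e[t - 1] for t in range(1, len(e)))
    den = sum(v * v for v in e[:-1])
    return num / den

def plug_in_predict(e, rho):
    """Classical plug-in one-step-ahead prediction of the next residual."""
    return rho * e[-1]
```

In the multivariate setting of the paper, the scalar ratio above is replaced by the cross-covariance operator composed with a regularized inverse of the autocovariance operator, truncated at k(T) empirical eigendirections.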
The classical plug-in predictor is then computed, for each as Under the Gaussian distribution of in the Bayesian estimation of from (6), the likelihood function, defining the objective function, is given by, for each where, for each the beta probability distributions with shape parameters and respectively define the prior probability distributions of the independent random variables Here, for each and for As before, weights the spatial sample information about the p small-area, for As usual, denotes the indicator function on the interval (0, 1), and is the beta function. From (15), the Bayesian predictor is obtained, for as with being computed by maximizing (15) to find the posterior mode (see Bosq and Ruiz-Medina 2014, where Bayesian estimation is introduced in an infinite-dimensional framework). We refer to (16) as the Bayesian plug-in predictor of the residual mortality log-risk process at the p small-area, for In practice, equation (15) is approximated from the computed values of the regression residual process.
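The posterior-mode computation can be illustrated with a deliberately simplified sketch (our own hypothetical code: a single autoregressive coefficient in (0, 1) with a Beta prior and a Gaussian likelihood, maximized here by plain grid search rather than the hybrid optimizer used in the paper):

```python
import math

def posterior_mode(e, alpha=14.0, beta=13.0, sigma2=1.0, grid=2000):
    """Grid-search the posterior mode of an AR coefficient rho in (0, 1).

    Log-posterior = Gaussian AR(1) log-likelihood of the residuals e
    plus the log of a Beta(alpha, beta) prior density on rho.
    """
    best_rho, best_lp = None, -math.inf
    for i in range(1, grid):
        rho = i / grid
        loglik = -sum((e[t] - rho * e[t - 1]) ** 2
                      for t in range(1, len(e))) / (2 * sigma2)
        logprior = (alpha - 1) * math.log(rho) + (beta - 1) * math.log(1 - rho)
        lp = loglik + logprior
        if lp > best_lp:
            best_rho, best_lp = rho, lp
    return best_rho
```

When the residual signal dominates, the mode tracks the likelihood; when the data are weak, the Beta(14, 13) prior (the shape parameters fitted in Sect. 4) pulls the mode toward its own mode, 13/25.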

Statistical analysis of COVID-19 mortality

Our analysis is based on the daily records of COVID-19 mortality reported by the Carlos III Health Institute, from March 8 to May 13, 2020, at the 17 Spanish Communities. We first describe the main steps of the proposed estimation algorithm, referring to the inputs and outputs at the different stages. Tables 1 and 2 below display the parameter estimates and where has been considered, for In these tables and below, the following Spanish Community (SC) codes appear: C1 for Andalucía; C2 for Aragón; C3 for Asturias; C4 for Islas Baleares; C5 for Canarias; C6 for Cantabria; C7 for Castilla La Mancha; C8 for Castilla y León; C9 for Cataluña; C10 for Comunidad Valenciana; C11 for Extremadura; C12 for Galicia; C13 for Comunidad de Madrid; C14 for Murcia; C15 for Navarra; C16 for País Vasco, and C17 for La Rioja.
Table 1

Regression parameter estimates at the 17 Spanish Communities

SC/PE   Â1(·)    Â2(·)    Â3(·)    Â4(·)    Â5(·)    Â6(·)
C1      3.6343  −0.4814  −0.0075  −0.0258   0.0189   0.0193
C2      3.4345  −0.3923   0.0416   0.0265  −0.0709  −0.0572
C3      3.2031  −0.1364  −0.0088   0.0221   0.0430   0.0289
C4      3.1445  −0.1118   0.0041   0.0337   0.0062   0.0072
C5      3.1015  −0.0693  −0.0345   0.0352  −0.0112   0.0003
C6      3.1347  −0.1397   0.0020   0.0300  −0.0061  −0.0002
C7      4.0591  −0.5487  −0.0907   0.0951   0.0992   0.0842
C8      3.8032  −0.5500  −0.1007   0.0633   0.0139   0.0277
C9      4.5095  −0.7435  −0.1134   0.1809   0.2231   0.2026
C10     3.6321  −0.4685  −0.0540   0.0384  −0.0152   0.0011
C11     3.2967  −0.2274  −0.0083   0.0553   0.0250   0.0240
C12     3.3454  −0.2122  −0.0927  −0.0330   0.0724   0.0679
C13     4.8419  −0.6790  −0.2455   0.0311   0.0554   0.0667
C14     3.0941  −0.1037   0.0210   0.0141  −0.0016   0.0041
C15     3.2877  −0.2598  −0.0524   0.0842  −0.0423  −0.0348
C16     3.6870  −0.4302  −0.0086   0.0078  −0.0027  −0.0017
C17     3.2197  −0.2071   0.0162   0.0079   0.0206   0.0110
Table 2

Regression parameter estimates at the 17 Spanish Communities

SC/PE   B̂1(·)   B̂2(·)    B̂3(·)    B̂4(·)    B̂5(·)    B̂6(·)
C1      0       −0.0052  −0.1330  −0.0123   0.0064  −0.0195
C2      0       −0.0367  −0.0998  −0.0462  −0.0343  −0.0107
C3      0       −0.0531  −0.0074  −0.0142  −0.0003   0.0020
C4      0       −0.0074  −0.0284  −0.0151  −0.0092   0.0012
C5      0        0.0433  −0.0438  −0.0116  −0.0118   0.0046
C6      0        0.0018  −0.0174  −0.0068  −0.0089   0.0000
C7      0       −0.0365  −0.2451  −0.1791  −0.0820   0.0026
C8      0        0.0953  −0.2389  −0.0431  −0.0313  −0.0045
C9      0       −0.1587  −0.4054  −0.2269  −0.1010   0.0047
C10     0        0.1118  −0.1579  −0.0458  −0.0418  −0.0220
C11     0        0.0754  −0.1138  −0.0166  −0.0048   0.0072
C12     0       −0.1104  −0.1338   0.1330   0.0761  −0.0017
C13     0        0.4654  −0.1302  −0.1602  −0.1061  −0.0038
C14     0        0.0355  −0.0560   0.0119   0.0025  −0.0044
C15     0       −0.0187  −0.0021  −0.0897  −0.0562   0.0134
C16     0        0.0025  −0.0707  −0.0638  −0.0439  −0.0267
C17     0        0.0389  −0.0270  −0.0174  −0.0006   0.0019
Step 1. Daily records of COVID-19 mortality are accumulated over the entire period at every Spanish Community. The resulting cumulative step curves are interpolated at 265 temporal nodes and cubic B-spline smoothed. Their derivatives and logarithmic transforms are then computed.

Step 2. Our soft-input-data process is obtained from the spatial projection of the outputs in Step 1 onto the compactly supported basis We choose the tensorial product of Daubechies wavelet bases. Here, for whose components respectively provide the order of the Daubechies wavelet functions, the resolution level, and the vector of spatial displacements, according to the area occupied by each Spanish Community (see, e.g., Daubechies 1988).

Step 3. The choice in (8) corresponds to the value 1.1304 of the ratio between the mean quadratic loss function and the expected minimized empirical risk. Hence, 12 coefficients should be estimated. Note that the eigenvalues in (8) are computed from the trigonometric basis.

Step 4. Under the choice made in Step 3, the least-squares estimates of the 12 Fourier coefficients are computed from (7), in terms of the soft-input-data process obtained as output in Step 2.

Step 5. The regression residuals are then calculated from Step 4.

Step 6. The auto- and cross-covariance operators in (11) are computed from the outputs of Step 5. The residual spatial linear correlation matrix is then obtained from (12). The truncation scheme has been adopted, with

Step 7. The residual predictor (14) is computed from Step 6.

Step 8. 100 bootstrap samples are generated from the empirical autocorrelation projections. The fitted bootstrap prior suggests considering a scaled beta probability density with shape parameters 14 and 13.

Step 9. Assuming a Gaussian scenario for our log-regression residuals, our constrained nonlinear multivariate objective function (15) is computed from the prior proposed in Step 8.

Step 10. To maximize the objective function computed in Step 9, we implement a hybrid genetic algorithm, constructed from the MATLAB function 'gaoptimset' with the 'HybridFcn' option, which passes a function handle that continues the optimization after the genetic algorithm terminates. This last function applies quasi-Newton methodology in the optimization procedure, involving an inverse Hessian matrix estimate.

Step 11. The soft-data based Bayesian predictor (16) of the residual COVID-19 mortality log-risk is finally computed from the outputs in Step 10.

Step 12. Our multiple-objective space-time predictor is obtained from Steps 4 and 11, by adding the regression and residual predictors and applying the inverse spatial wavelet transform.

Bootstrap curve confidence intervals at confidence level , based on 1000 bootstrap samples, are computed for the spatial mean, over the 17 Spanish Communities, of the curve regression predictors. Their construction is based on the bias-corrected and accelerated percentile method (); the Normal approximated interval with bootstrapped bias and standard error (); the basic percentile method (), and the bias-corrected percentile method () (see Fig. 1). The minimized regression empirical risk values are displayed in Table 3.
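Two of the bootstrap interval constructions mentioned above, the basic (reverse) percentile and the Normal approximation with bootstrapped bias and standard error, can be sketched in pure Python (a hypothetical illustration, not the authors' MATLAB implementation; the BCa variant is omitted):

```python
import math
import random

def bootstrap_ci(sample, stat, n_boot=1000, level=0.95, seed=0):
    """Basic-percentile and Normal-approximation bootstrap intervals for stat(sample)."""
    rng = random.Random(seed)
    theta = stat(sample)
    # Resample with replacement and sort the bootstrap replicates.
    reps = sorted(
        stat([rng.choice(sample) for _ in sample]) for _ in range(n_boot)
    )
    lo_q = reps[int((1 - level) / 2 * n_boot)]
    hi_q = reps[int((1 + level) / 2 * n_boot) - 1]
    # Basic (reverse) percentile interval.
    basic = (2 * theta - hi_q, 2 * theta - lo_q)
    # Normal interval with bootstrapped bias and standard error.
    mean_rep = sum(reps) / n_boot
    se = math.sqrt(sum((r - mean_rep) ** 2 for r in reps) / (n_boot - 1))
    bias = mean_rep - theta
    z = 1.959963984540054  # 97.5% standard-normal quantile
    normal = (theta - bias - z * se, theta - bias + z * se)
    return basic, normal
```

Applied with `stat` equal to the spatial mean of the curve predictors at each time node, this yields pointwise curve confidence bands of the kind shown in Fig. 1.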
Fig. 1

At the top, the COVID-19 mortality mean cumulative curve in Spain, from March 8, 2020 to May 13, 2020 (continuous red line, 265 temporal nodes), and bootstrap curve confidence intervals: at the left-hand side, (dashed blue lines) and (dashed magenta lines); at the right-hand side, (dashed green lines) and (dashed yellow lines). The plots at the center and bottom reflect the same information for the mean intensity (spatially averaged COVID-19 mortality risk curve) and log-intensity (spatially averaged COVID-19 mortality log-risk curve) curves in Spain. All the bootstrap confidence intervals are computed at confidence level from 1000 bootstrap samples

Table 3

Computed values

L_265(θ̂_265(p)), p = 1,…,17 (values read by rows)

0.0155  0.0259  0.0668  0.0408  0.0927
0.0623  0.1642  0.0883  0.2174  0.0313
0.0559  0.1904  0.0054  0.1602  0.1640
0.0003  0.1238
Figure 2, at the top, displays the 1000 bootstrap sample values of the spatially averaged minimized empirical quadratic risk in the trigonometric regression. Note that the sample mean of these values shows a good performance of the least-squares regression predictor, according to the value obtained. The bootstrap histogram and the corresponding approximation of the probability density function, computed from these values, are also plotted at the bottom of Fig. 2.
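The kernel-type density approximation from bootstrap samples can be sketched as follows (a pure-Python Gaussian kernel density estimator with Silverman's rule-of-thumb bandwidth; an illustration of the generic technique, not the estimator actually used for Fig. 2):

```python
import math

def gaussian_kde(samples, x, bandwidth=None):
    """Kernel density estimate at point x with a Gaussian kernel.

    Default bandwidth: Silverman's rule of thumb, 1.06 * sd * n^(-1/5).
    """
    n = len(samples)
    if bandwidth is None:
        mean = sum(samples) / n
        sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
        bandwidth = 1.06 * sd * n ** (-1 / 5)
    return sum(
        math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples
    ) / (n * bandwidth * math.sqrt(2 * math.pi))
```

Evaluating this estimator on a fine grid over the 1000 bootstrap replicates produces a smooth curve of the kind plotted alongside the bootstrap histogram.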
Fig. 2

1000 bootstrap samples have been generated of the spatially averaged minimum empirical regression risk (SAMERR). The corresponding sample values are displayed at the top. The bootstrap histogram can be found at the bottom-left-hand side. The bootstrap probability density is plotted at the bottom-right-hand-side

Bootstrap confidence intervals for the SAMERR have also been computed from 1000 and 10,000 bootstrap samples. Table 4 displays these intervals, respectively based on the bias corrected and accelerated percentile method ($\mathcal{I}_{1}$); the Normal approximation interval with bootstrapped bias and standard error ($\mathcal{I}_{2}$); the basic percentile method ($\mathcal{I}_{3}$); the bias corrected percentile method ($\mathcal{I}_{4}$); and the Student-based confidence interval ($\mathcal{I}_{5}$).
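Two of the listed interval constructions, the Normal approximation with bootstrapped bias and standard error, and the basic percentile method, can be sketched as follows for a scalar statistic; the sample, the choice of the mean as the statistic, and the 95% level are assumptions made here for demonstration only.

```python
import numpy as np

def bootstrap_cis(sample, stat=np.mean, n_boot=1000, alpha=0.05, seed=0):
    """Normal-approximation and basic percentile bootstrap CIs for a statistic."""
    rng = np.random.default_rng(seed)
    n = sample.size
    theta_hat = stat(sample)
    boot = np.array([stat(rng.choice(sample, size=n, replace=True))
                     for _ in range(n_boot)])
    # Normal approximation with bootstrapped bias and standard error
    bias, se = boot.mean() - theta_hat, boot.std(ddof=1)
    z = 1.959963984540054            # 97.5% standard normal quantile
    normal_ci = (theta_hat - bias - z * se, theta_hat - bias + z * se)
    # Basic percentile method: reflect bootstrap quantiles around theta_hat
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    basic_ci = (2 * theta_hat - hi, 2 * theta_hat - lo)
    return normal_ci, basic_ci

# Hypothetical sample with magnitudes comparable to the SAMERR values
rng = np.random.default_rng(1)
data = rng.normal(loc=0.09, scale=0.03, size=200)
normal_ci, basic_ci = bootstrap_cis(data)
```

The BCa, bias corrected percentile, and Student-based variants differ only in how the bootstrap quantiles are adjusted or studentized before being mapped back to an interval.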
Table 4

Bootstrap confidence intervals for the SAMERR, from S = 1000 and S = 10,000 bootstrap samples

CI/S                S = 1000             S = 10000
$\mathcal{I}_{1}$   [0.0593, 0.1222]     [0.0594, 0.1236]
$\mathcal{I}_{2}$   [0.0564, 0.1196]     [0.0567, 0.1207]
$\mathcal{I}_{3}$   [0.0584, 0.1215]     [0.0579, 0.1217]
$\mathcal{I}_{4}$   [0.0592, 0.1233]     [0.0581, 0.1208]
$\mathcal{I}_{5}$   [0.0484, 0.1281]     [0.0494, 0.1215]
The classical and Bayesian plug-in predictors of the residual COVID-19 mortality log-risk process at each of the Spanish Communities are respectively computed from equations (14) and (16). Given the empirical spectral characteristics observed in the regularized approximation in (12), from the singular value decomposition of the empirical operators in (11), our choice of prior for the projections has been a Beta prior, scaled by factor 1/3, with shape hyper-parameters 14 and 13. The suitability of this data-driven choice, regarding the localization of the mode and the thickness of the tails, is illustrated in Fig. 3. Specifically, the right plot in Fig. 3 displays both the scaled Beta probability density with shape parameters 14 and 13 (red-square line), and the fitted probability density (blue-square line) obtained from the generated bootstrap samples, based on the empirical projections. The observed range of the empirical projections is well fitted, as can be seen from the left plot in Fig. 3.
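The prior density just described, a Beta(14, 13) random variable scaled by the factor 1/3, can be evaluated directly by a change of variables; the grid resolution below is an arbitrary choice for illustration.

```python
import numpy as np
from math import gamma

def scaled_beta_pdf(x, a, b, scale):
    """Density of scale * B where B ~ Beta(a, b), via change of variables."""
    x = np.asarray(x, dtype=float)
    u = x / scale
    pdf = np.zeros_like(x)
    inside = (u > 0) & (u < 1)
    const = gamma(a + b) / (gamma(a) * gamma(b))
    pdf[inside] = const * u[inside] ** (a - 1) * (1 - u[inside]) ** (b - 1) / scale
    return pdf

# Prior from the text: shape parameters 14 and 13, scaling factor 1/3
a, b, scale = 14.0, 13.0, 1.0 / 3.0
grid = np.linspace(1e-6, scale - 1e-6, 2001)
dens = scaled_beta_pdf(grid, a, b, scale)
# Trapezoidal check that the density integrates to one over (0, 1/3)
area = float(np.sum((dens[1:] + dens[:-1]) * np.diff(grid)) / 2.0)
mode = grid[np.argmax(dens)]   # sits at scale*(a-1)/(a+b-2) = 0.52/3
```

The mode at $0.52/3 \approx 0.173$ and the moderate tail thickness are the two features the data-driven choice is matched against in Fig. 3.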
Fig. 3

At the left-hand side, empirical projections of the autocorrelation operator, reflecting temporal autocorrelation and cross-correlation between the 17 Spanish Communities analyzed. At the right-hand side, the considered prior probability density (red squares) of a Beta distributed random variable with shape parameters 14 and 13, scaled by factor 1/3, is compared with the bootstrap fitting of an empirical prior (blue squares)

Bootstrap confidence intervals for the expected training standard error of the multivariate time series classical and Bayesian residual COVID-19 mortality log-risk predictors, based on 1000 bootstrap samples, are displayed in Table 5:
Table 5

Bootstrap confidence intervals for the expected training standard error of the classical and Bayesian residual COVID-19 mortality log-risk predictors

CI/S                Classical            Bayesian
$\mathcal{I}_{1}$   [0.0474, 0.0597]     [0.0173, 0.0228]
$\mathcal{I}_{2}$   [0.0455, 0.0578]     [0.0167, 0.0220]
$\mathcal{I}_{3}$   [0.0463, 0.0588]     [0.0169, 0.0225]
$\mathcal{I}_{4}$   [0.0460, 0.0586]     [0.0172, 0.0226]
$\mathcal{I}_{5}$   [0.0421, 0.0563]     [0.0158, 0.0215]
Maps plotted in Fig. 4 show the observed spatiotemporal evolution of COVID-19 mortality risk, and its prediction from the fitted curve trigonometric regression model and the subsequent classical and Bayesian time series analysis.
Fig. 4

COVID-19 mortality risk maps, from March 8 to May 13, 2020. Observed (left-hand side) and estimated (right-hand side) maps, computed from trigonometric regression, combined with classical (first line) and Bayesian (second line) residual predictors


An empirical comparative study

The ML regression models introduced in the Supplementary Material are applied to COVID-19 mortality analysis, and compared, via random k-fold cross-validation and bootstrap estimators, with the presented multiple objective space-time forecasting approach. We distinguish two categories, respectively referring to the strong-sense (hard-data) and weak-sense (soft-data) definitions of our data set. Random k-fold cross-validation, in terms of Symmetric Mean Absolute Percentage Errors (SMAPEs), evaluates the performance of the compared regression models from hard- and soft-data. Bootstrap confidence intervals, and probability density estimates, of the spatially averaged SMAPEs are also computed. Section 6 provides a data-driven model classification, based on SMAPEs, in the two categories analyzed, from the random k-fold cross-validation and bootstrap estimation procedures applied.

Results from random k-fold cross-validation

After interpolation and cubic B-spline smoothing of our original data set, the logarithmic transform and linear scaling are applied. We held out the first ten points and the last three of each COVID-19 mortality log-risk curve as an out-of-sample set. Our approach is implemented in the second category, from soft-data. In this implementation, we adopt the model selection criterion given in equation (8) (see Chapelle et al. 2002). In the multivariate time series classical and Bayesian prediction, our choice of truncation parameter balances agreement with the separation and decay velocity of the empirical eigenvalues of the autocovariance operator against the control of model complexity according to the sample size. The random fluctuations observed in the k(T) empirical projections of the spatial autocorrelation matrix are also well fitted by our choice of the shape hyperparameters characterizing the prior Beta probability density. Model fitting is evaluated in terms of the Symmetric Mean Absolute Percentage Errors (SMAPEs). We have computed the mean of the SMAPEs obtained at each of the k iterations of the random k-fold cross-validation procedure. This validation technique consists of randomly splitting the functional sample into training and validation samples at each of the k iterations. Model fitting is performed from the training sample, and the target outputs are defined from the validation or testing sample. By running each model ten times and averaging SMAPEs, we remove the fluctuations due to the random initial weights (for the MLP and BNN models), and the differences in parameter estimation across all methods due to the random sample splitting in the random k-fold cross-validation procedure.
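A minimal sketch of this evaluation loop follows; the SMAPE normalization used here (mean absolute difference over the pairwise mean magnitude) and the constant toy predictor are illustrative assumptions, since the paper's exact formula is not reproduced in this extract.

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error (one common normalization)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return np.mean(np.abs(y_true - y_pred) / denom)

def random_kfold_smape(y, fit_predict, k=10, seed=0):
    """Random k-fold CV: shuffle indices, fit on k-1 folds, score the held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(y.size)
    folds = np.array_split(idx, k)
    scores = []
    for j in range(k):
        test = folds[j]
        train = np.concatenate([folds[i] for i in range(k) if i != j])
        y_hat = fit_predict(train, test)   # predictions for the test fold
        scores.append(smape(y[test], y_hat))
    return float(np.mean(scores))

# Toy check: a constant predictor equal to the training mean (hypothetical data)
y = np.linspace(1.0, 2.0, 100)
mean_score = random_kfold_smape(y, lambda tr, te: np.full(te.size, y[tr].mean()))
```

Averaging such scores over ten independent runs, as described above, smooths out the variability induced by the random fold assignment.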
The ten-run based random 10-fold cross-validation SMAPEs are displayed in Table 6 for the six ML techniques tested, GRNN, MLP, SVR, BNN, RBF, and GP, when hard-data are considered (see also Table 3 of the Supplementary Material on random 5-fold cross-validation results). Table 7 provides the ten-run based random 10-fold cross-validation results for the soft-data category (see also Table 4 of the Supplementary Material on random 5-fold cross-validation results). The corresponding cross-validation results of the presented approach from soft-data are displayed in Table 8.
Table 6

Hard-data category. Averaged SMAPEs, based on 10 runs of random 10-fold cross-validation

SC ($\times 10^{-2}$)   GRNN     MLP      SVR      BNN      RBF      GP
C1                      0.1957   0.0777   0.0700   0.0594   0.0543   0.0554
C2                      0.6132   0.1490   0.0663   0.0738   0.0680   0.0654
C3                      0.1556   0.0473   0.0350   0.0303   0.0331   0.0304
C4                      0.0971   0.0342   0.0135   0.0200   0.0182   0.0211
C5                      0.2049   0.0457   0.0318   0.0370   0.0369   0.0372
C6                      0.1572   0.0368   0.0177   0.0234   0.0233   0.0247
C7                      0.4898   0.0698   0.0644   0.0590   0.0616   0.0588
C8                      0.0804   0.0340   0.0171   0.0191   0.0211   0.0177
C9                      0.7258   0.1976   0.0979   0.0812   0.0326   0.0437
C10                     0.2191   0.0704   0.0556   0.0482   0.0471   0.0463
C11                     0.1262   0.0530   0.0310   0.0395   0.0375   0.0355
C12                     0.5228   0.1578   0.1341   0.1282   0.0940   0.0993
C13                     0.3594   0.0647   0.0576   0.0579   0.0533   0.0458
C14                     0.1345   0.0366   0.0209   0.0204   0.0194   0.0207
C15                     0.6080   0.1523   0.1411   0.1141   0.0982   0.1039
C16                     0.2464   0.0889   0.0709   0.0622   0.0568   0.0594
C17                     0.0660   0.0370   0.0148   0.0222   0.0203   0.0227
M.                      0.2942   0.0796   0.0553   0.0527   0.0456   0.0463
T.                      5.0022   1.3528   0.9397   0.8959   0.7757   0.7879
Table 7

Soft-data category. Averaged SMAPEs, based on 10 runs of random 10-fold cross-validation

SC ($\times 10^{-2}$)   GRNN     MLP      SVR      BNN      RBF      GP
C1                      0.1545   0.0983   0.0666   0.0573   0.0234   0.0312
C2                      0.1844   0.1730   0.0660   0.0749   0.0277   0.0301
C3                      0.1029   0.1192   0.0481   0.0452   0.0273   0.0274
C4                      0.0432   0.0286   0.0165   0.0158   0.0124   0.0123
C5                      0.0610   0.0476   0.0258   0.0248   0.0144   0.0149
C6                      0.0260   0.0217   0.0133   0.0140   0.0124   0.0125
C7                      0.3750   0.2026   0.1095   0.0924   0.0307   0.0399
C8                      0.0764   0.0482   0.0305   0.0300   0.0262   0.0187
C9                      0.4894   0.3198   0.1753   0.1212   0.0229   0.0372
C10                     0.1680   0.0815   0.0521   0.0462   0.0252   0.0290
C11                     0.1537   0.0839   0.0436   0.0397   0.0199   0.0219
C12                     0.3689   0.2558   0.1505   0.1249   0.0401   0.0490
C13                     0.2848   0.1582   0.0968   0.0792   0.0240   0.0320
C14                     0.0367   0.0226   0.0120   0.0143   0.0106   0.0104
C15                     0.3618   0.2264   0.1201   0.1227   0.0317   0.0522
C16                     0.1773   0.0835   0.0651   0.0545   0.0264   0.0318
C17                     0.0884   0.0623   0.0210   0.0231   0.0125   0.0136
M.                      0.1854   0.1196   0.0655   0.0577   0.0228   0.0273
T.                      3.1524   2.0333   1.1129   0.9801   0.3877   0.4642
Table 8

Our approach. Averaged SMAPEs, based on 10 runs of random 10-fold cross-validation, for testing trigonometric regression combined with Classical (C.) and Bayesian (B.) residual analysis

SC     C. k10    B. k10
C1     0.0024    0.7106(10)^{-3}
C2     0.0019    0.4003(10)^{-3}
C3     0.0016    0.6797(10)^{-3}
C4     0.0017    0.4367(10)^{-3}
C5     0.0023    0.6530(10)^{-3}
C6     0.0018    0.5854(10)^{-3}
C7     0.0017    0.6341(10)^{-3}
C8     0.0016    0.6593(10)^{-3}
C9     0.0013    0.5979(10)^{-3}
C10    0.0019    0.6954(10)^{-3}
C11    0.0017    0.5444(10)^{-3}
C12    0.0016    0.5016(10)^{-3}
C13    0.0020    0.4832(10)^{-3}
C14    0.0026    0.6544(10)^{-3}
C15    0.0023    0.6616(10)^{-3}
C16    0.0015    0.7134(10)^{-3}
C17    0.0022    0.6781(10)^{-3}
M.     0.0019    0.60524(10)^{-3}
T.     0.0321    0.0103
ML model hyperparameter selection has been achieved by applying random k-fold cross-validation, with the selection made from a suitable set of candidates. Specifically, the optimal numbers of hidden (NH) nodes in the implementation of MLP and BNN have been selected from the candidate sets [0, 1, 3, 5, 7, 9] and [1, 3, 5, 7, 9], respectively. The random cross-validation results in both cases lead to the same choice of the optimal NH value for MLP and for BNN; the latter displays slight differences with respect to the NH values obtained in the random 10-fold cross-validation implementation. In the same way, we have selected the respective spread and bandwidth h parameters in the RBF and GRNN procedures. Thus, after applying random k-fold cross-validation, the optimal values are obtained from the candidate sets [2.5, 5, 7.5, 10, 12.5, 15, 17.5, 20] and [0.05, 0.1, 0.2, 0.3, 0.5, 0.6, 0.7], respectively (see Supplementary Material). Better performance from hard-data is observed in linear SVR; in its implementation, automatic hyperparameter optimization from the fitrsvm MatLab function is applied. From the soft-data category, the best option corresponds to the Gaussian kernel based nonlinear SVR model fitting (applying the same automatic hyperparameter optimization option in the argument of the fitrsvm MatLab function). In the implementation of GP, we follow the same tuning procedure for model selection; in this case, for both categories, we have selected Bayesian cross-validation optimization (in the hyperparameter optimization argument of the fitrgp MatLab function). In all the results displayed, the SMAPE-MEAN (M.) and SMAPE-TOTAL (T.) have been computed as performance measures, for comparing the tested ML models and our approach.
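The candidate-set selection just described can be sketched for the GRNN-style bandwidth as follows; the Nadaraya-Watson smoother, the noiseless toy target, and the 5-fold split are illustrative assumptions standing in for the actual GRNN implementation.

```python
import numpy as np

def nw_predict(x_train, y_train, x_test, h):
    """Nadaraya-Watson kernel regression, the core of a GRNN-style smoother."""
    d2 = (x_test[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * h * h))
    return (w @ y_train) / w.sum(axis=1)

def select_bandwidth(x, y, candidates, k=5, seed=0):
    """Pick the bandwidth with smallest mean k-fold SMAPE over the candidate set."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(x.size), k)

    def cv_smape(h):
        errs = []
        for j in range(k):
            te = folds[j]
            tr = np.concatenate([folds[i] for i in range(k) if i != j])
            y_hat = nw_predict(x[tr], y[tr], x[te], h)
            errs.append(np.mean(np.abs(y[te] - y_hat) /
                                ((np.abs(y[te]) + np.abs(y_hat)) / 2.0)))
        return np.mean(errs)

    return min(candidates, key=cv_smape)

# Smooth toy target: oversmoothing with a large bandwidth should lose clearly
x = np.linspace(0.0, 6.0, 120)
y = 2.0 + np.sin(x)
h_star = select_bandwidth(x, y, candidates=[0.05, 0.1, 0.2, 0.3, 0.5, 0.6, 0.7])
```

The NH, spread, and SVR/GP hyperparameter searches follow the same pattern, with the candidate set and the fitted model swapped accordingly.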

Bootstrap based classification results

For the ML regression models tested, in the hard- and soft-data categories, bootstrap confidence intervals for the spatially averaged SMAPEs, based on 1000 bootstrap samples, are constructed. Our approach requires the soft-data information to be incorporated. As before, the computed bootstrap confidence intervals are respectively based on the bias corrected and accelerated percentile method ($\mathcal{I}_{1}$); the Normal approximation interval with bootstrapped bias and standard error ($\mathcal{I}_{2}$); the basic percentile method ($\mathcal{I}_{3}$); the bias corrected percentile method ($\mathcal{I}_{4}$); and the Student-based confidence interval ($\mathcal{I}_{5}$) (see Tables 9 and 10). The bootstrap histograms and probability densities of the spatially averaged SMAPEs are displayed in Figs. 5 and 6, for the hard-data category, and in Figs. 7, 8 and 9, for the soft-data category. The data-driven performance-based model classification results obtained are discussed in Sect. 6.
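The bootstrap histogram and density construction for the spatially averaged SMAPEs can be sketched as follows; the Gaussian kernel, Silverman's rule-of-thumb bandwidth, and the 17 synthetic per-community scores are illustrative assumptions.

```python
import numpy as np

def bootstrap_density(values, n_boot=1000, grid_size=200, seed=0):
    """Bootstrap the spatial average of per-region scores, then fit a Gaussian KDE."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, float)
    boot = np.array([rng.choice(values, size=values.size, replace=True).mean()
                     for _ in range(n_boot)])
    # Silverman's rule-of-thumb bandwidth for a Gaussian kernel
    h = 1.06 * boot.std(ddof=1) * n_boot ** (-1 / 5)
    grid = np.linspace(boot.min() - 3 * h, boot.max() + 3 * h, grid_size)
    dens = np.exp(-(grid[:, None] - boot[None, :]) ** 2 / (2 * h * h)).sum(axis=1)
    dens /= n_boot * h * np.sqrt(2.0 * np.pi)
    return grid, dens, boot

# 17 hypothetical per-community SMAPEs (magnitudes chosen for illustration only)
scores = 0.05 + 0.02 * np.random.default_rng(2).standard_normal(17)
grid, dens, boot = bootstrap_density(scores)
```

The histogram is simply the binned version of `boot`; the confidence intervals of the previous subsection are then read off the same bootstrap sample.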
Table 9

Hard-data category. Bootstrap confidence intervals for the spatially averaged SMAPEs, from 1000 bootstrap samples

CI/ML               GRNN                            MLP
$\mathcal{I}_{1}$   [2.1(10)^{-3}, 4.1(10)^{-3}]    [0.5(10)^{-3}, 1(10)^{-3}]
$\mathcal{I}_{2}$   [2(10)^{-3}, 3.9(10)^{-3}]      [0.4776(10)^{-3}, 0.9483(10)^{-3}]
$\mathcal{I}_{3}$   [2(10)^{-3}, 4(10)^{-3}]        [0.4746(10)^{-3}, 0.9713(10)^{-3}]
$\mathcal{I}_{4}$   [2(10)^{-3}, 4(10)^{-3}]        [0.5118(10)^{-3}, 0.9878(10)^{-3}]
$\mathcal{I}_{5}$   [1.7(10)^{-3}, 3.9(10)^{-3}]    [0.2780(10)^{-3}, 0.9244(10)^{-3}]
Table 10

Soft-data category. Bootstrap confidence intervals for the spatially averaged SMAPEs, from 1000 bootstrap samples

CI/ML               GRNN                            MLP
$\mathcal{I}_{1}$   [1.3(10)^{-3}, 2.6(10)^{-3}]    [0.6(10)^{-3}, 1.3(10)^{-3}]
$\mathcal{I}_{2}$   [1.3(10)^{-3}, 2.6(10)^{-3}]    [0.6(10)^{-3}, 1.3(10)^{-3}]
$\mathcal{I}_{3}$   [1.3(10)^{-3}, 2.7(10)^{-3}]    [0.6(10)^{-3}, 1.3(10)^{-3}]
$\mathcal{I}_{4}$   [1.3(10)^{-3}, 2.6(10)^{-3}]    [0.7(10)^{-3}, 1.3(10)^{-3}]
$\mathcal{I}_{5}$   [1(10)^{-3}, 2.7(10)^{-3}]      [0.5(10)^{-3}, 1.3(10)^{-3}]
Fig. 5

Hard-data category. From 1000 bootstrap samples, spatially averaged SMAPEs histograms and probability densities are plotted, for GRNN (top), MLP (center), and linear SVR (bottom)

Fig. 6

Hard-data category. From 1000 bootstrap samples, spatially averaged SMAPEs histograms and probability densities are plotted, for BNN (top), RBF (center), and GP (bottom)

Fig. 7

Soft-data category. From 1000 bootstrap samples, spatially averaged SMAPEs histograms and probability densities are plotted, for GRNN (top), MLP (center) and non-linear SVR (bottom)

Fig. 8

Soft-data category. From 1000 bootstrap samples, spatially averaged SMAPEs histograms and probability densities are plotted, for BNN (top), RBF (center) and GP (bottom)

Fig. 9

Soft-data category. From 1000 bootstrap samples, spatially averaged SMAPEs histograms and probability densities are plotted, for trigonometric regression, combined with empirical-moment based classical (top), and Bayesian (bottom) residual prediction


Final comments

One can observe the agreement between the respective performance-based model classification results obtained from random k-fold cross-validation and from bootstrap estimation in Sects. 5.1 and 5.2. In the hard-data category, the best performance is displayed by RBF and GP. Similar bootstrapping characteristics are observed for BNN and SVR, with slightly larger values of the spatially averaged SMAPEs, reflected in the location of the mode in the histograms and probability densities displayed in Figs. 5 and 6. These four regression methodologies show a similar degree of variability in the spatially averaged SMAPE sample values. The bootstrap sample values of the spatially averaged SMAPEs in MLP validation display higher variability than those of RBF, GP, BNN and SVR, and the MLP bootstrapped mode is also slightly shifted to the right. The worst performance corresponds to GRNN (see also Table 6). In the soft-data category, where our approach is incorporated into the empirical comparative study, almost the same empirical ML model ranking holds. Some differences are found in the computed bootstrap confidence intervals, histograms and probability densities. For instance, GRNN seems to be favored by the soft-data category, while MLP displays worse performance in this category; hence, smaller differences between GRNN and MLP are displayed in the soft-data category. A slight improvement of BNN relative to SVR is observed in the soft-data category, with both preserving almost the same performance. RBF and GP display better performance in the soft-data category, with RBF slightly superior to GP in this category (see Table 10 and Fig. 8). The trigonometric regression and multivariate time series residual prediction approach based on empirical moments displays similar results to GRNN, with slightly better performance of GRNN observed in the bootstrap intervals and histograms/probability densities (see Figs. 7 and 9). However, as shown in Figs. 8 and 9, the trigonometric regression and Bayesian residual prediction presents almost the same performance as BNN, with some slightly better probability distribution features of our approach with respect to BNN (see also the bootstrap intervals). Our approach is less affected by the random splitting of the sample in the implementation of the random k-fold cross-validation procedure, since a dynamical spatial residual model is fitted in a second (objective) step. Thus, the proposed multivariate time series classical and Bayesian regression residual modeling fits the short-term spatial linear correlations displayed by the soft-data category. However, the price we pay for increasing model complexity is reflected in the resulting SMAPE-based random k-fold and bootstrap model classification results. The spatial component effect is reflected in Table 6 (hard-data), where spatial heterogeneities in the random 10-fold cross-validation SMAPE errors are observed (see also Table 3 in the Supplementary Material), while Table 7 (see also Table 4 in the Supplementary Material) reveals the benefits obtained from soft-data information in some of the ML regression models tested. Particularly, in this category, possible spatial linear correlations are incorporated into the analysis in terms of soft-data.

Below is the link to the electronic supplementary material. Supplementary material 1 (pdf 576 KB)
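The model comparison above rests on the spatially averaged SMAPE and percentile bootstrap confidence intervals computed from 1000 resamples. A minimal sketch of that validation statistic follows; the region count, synthetic data, and region-level resampling scheme are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent."""
    return 100.0 * np.mean(2.0 * np.abs(y_pred - y_true)
                           / (np.abs(y_true) + np.abs(y_pred)))

# Hypothetical predictions/observations: 17 regions x 30 days
n_regions, T = 17, 30
y_true = rng.uniform(1.0, 100.0, size=(n_regions, T))
y_pred = y_true * rng.uniform(0.8, 1.2, size=(n_regions, T))

# Spatially averaged SMAPE: mean over regions of the per-region SMAPE
region_smape = np.array([smape(y_true[i], y_pred[i])
                         for i in range(n_regions)])
spatial_smape = region_smape.mean()

# Percentile bootstrap (1000 resamples over regions): 95% CI and the
# sample from which histograms/probability densities would be estimated
boot = np.array([rng.choice(region_smape, size=n_regions,
                            replace=True).mean()
                 for _ in range(1000)])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

A histogram or kernel density estimate of `boot` would correspond to the bootstrap distributions compared across the ML models in the figures above.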
References: 29 in total

1.  Impact of vaccination on the spatial correlation and persistence of measles dynamics.

Authors:  B M Bolker; B T Grenfell
Journal:  Proc Natl Acad Sci U S A       Date:  1996-10-29       Impact factor: 11.205

2.  Bifurcation analysis of periodic SEIR and SIR epidemic models.

Authors:  Y A Kuznetsov; C Piccardi
Journal:  J Math Biol       Date:  1994       Impact factor: 2.259

3.  Spatio-temporal modelling of foot-and-mouth disease outbreaks.

Authors:  C Malesios; N Demiris; P Kostoulas; K Dadousis; T Koutroumanidis; Z Abas
Journal:  Epidemiol Infect       Date:  2016-05-06       Impact factor: 4.434

4.  Dating and localizing an invasion from post-introduction data and a coupled reaction-diffusion-absorption model.

Authors:  Candy Abboud; Olivier Bonnefon; Eric Parent; Samuel Soubeyrand
Journal:  J Math Biol       Date:  2019-05-16       Impact factor: 2.259

5.  Mathematical modeling of the spread of the coronavirus disease 2019 (COVID-19) taking into account the undetected infections. The case of China.

Authors:  B Ivorra; M R Ferrández; M Vela-Pérez; A M Ramos
Journal:  Commun Nonlinear Sci Numer Simul       Date:  2020-04-30       Impact factor: 4.260

6.  Risk assessment of the step-by-step return-to-work policy in Beijing following the COVID-19 epidemic peak.

Authors:  Wen-Bin Zhang; Yong Ge; Mengxiao Liu; Peter M Atkinson; Jinfeng Wang; Xining Zhang; Zhaoxing Tian
Journal:  Stoch Environ Res Risk Assess       Date:  2020-11-13       Impact factor: 3.379

7.  Comparative infection modeling and control of COVID-19 transmission patterns in China, South Korea, Italy and Iran.

Authors:  Junyu He; Guangwei Chen; Yutong Jiang; Runjie Jin; Ashton Shortridge; Susana Agusti; Mingjun He; Jiaping Wu; Carlos M Duarte; George Christakos
Journal:  Sci Total Environ       Date:  2020-08-03       Impact factor: 7.963

8.  Serial interval of novel coronavirus (COVID-19) infections.

Authors:  Hiroshi Nishiura; Natalie M Linton; Andrei R Akhmetzhanov
Journal:  Int J Infect Dis       Date:  2020-03-04       Impact factor: 3.623

9.  SIR dynamics in random networks with heterogeneous connectivity.

Authors:  Erik Volz
Journal:  J Math Biol       Date:  2007-08-01       Impact factor: 2.259

10.  Real-time forecasts of the COVID-19 epidemic in China from February 5th to February 24th, 2020.

Authors:  K Roosa; Y Lee; R Luo; A Kirpich; R Rothenberg; J M Hyman; P Yan; G Chowell
Journal:  Infect Dis Model       Date:  2020-02-14
Cited by: 1 in total

1.  A Bayesian machine learning approach for spatio-temporal prediction of COVID-19 cases.

Authors:  Poshan Niraula; Jorge Mateu; Somnath Chaudhuri
Journal:  Stoch Environ Res Risk Assess       Date:  2022-01-25       Impact factor: 3.821

