
Erratum: Author Correction: Machine learning model to project the impact of COVID-19 on US motor gasoline demand.

Shiqi Ou1, Xin He2, Weiqi Ji3, Wei Chen4, Lang Sui2, Yu Gan5, Zifeng Lu5, Zhenhong Lin1, Sili Deng3, Steven Przesmitzki2, Jessey Bouchard2.   

Abstract

[This corrects the article DOI: 10.1038/s41560-020-0662-1.]
© The Author(s), under exclusive licence to Springer Nature Limited 2020.

Keywords:  Energy and society; Energy modelling; Energy supply and demand; SARS-CoV-2

Year:  2020        PMID: 33052987      PMCID: PMC7543031          DOI: 10.1038/s41560-020-00711-7

Source DB:  PubMed          Journal:  Nat Energy        ISSN: 2058-7546            Impact factor:   60.858


Correction to: Nature Energy 10.1038/s41560-020-0662-1, published online 17 July 2020.

In the version of this Article originally published, a comprehensive analysis of the model performance was not provided; thus, to avoid potential confusion over the model validation procedure and to provide a better representation of the model performance, the rolling-window cross-validation and out-of-sample testing results have now been included in the corrected Article and its Supplementary Information.

In the Methods section ‘The Mobility Dynamic Index Forecast Module’, the sentence describing the cross-validation method “In addition, cross validation is adopted to search the optimal network structure and avoid overfitting, in which the datasets are divided into training and test datasets by a ratio of 2:1.” has been changed to “In addition, the rolling-window cross-validation is adopted to search the optimal network structure, which is detailed in Supplementary Note 5. Out-of-sample testing is also performed for the selected neural network structure to estimate the performance of the model in predicting future mobility.”

In the Supplementary Information, the original Supplementary Fig. 10, which used R² to describe the random-split cross-validation results, has been replaced by a corrected version that uses the root mean square error (RMSE) to describe the out-of-sample testing results, and the caption has accordingly been updated to read “Root Mean Square Error (RMSE) of the neural network model with 2 hidden layers and 25 nodes. The data before May 15 were used as the training dataset, and the data between May 25 and May 31 were used as the out-of-sample testing dataset. (a) Google mobility: workplaces; (b) Google mobility: retail and recreation; (c) Google mobility: grocery and pharmacy; (d) Google mobility: parks; (e) Apple Mobility.”

Additionally, the original Supplementary Table 4, which used R² to select the neural network structure, has been replaced by a corrected version that uses the RMSE instead, and its caption has been updated accordingly to read “Rolling-window cross-validation of the neural network model for different combinations of hidden layers and nodes. The data in the table show the Root Mean Square Error (RMSE) of the training dataset and cross-validation dataset. Yellow highlighted text indicates the layers and the nodes are adopted in the neural network in the PODA model.”

Furthermore, discussion of the rolling-window cross-validation and out-of-sample testing results has been added in Supplementary Note 5: the first paragraph, starting “Supplementary Figure 10 compares the historical mobility data with the results predicted by the trained model…”, has been rewritten to read:

“Multiple regularization techniques were adopted to avoid overfitting. We used weight-decaying (equivalent to L2 regularization) to penalize large neural network weights and enforce model parameter sparsity and such to avoid overfitting. We also used mini-batch with Adam optimizer to train the neural network. Mini-batch training can offer a regularizing effect since it adds noise to the learning process. In addition, early stopping was employed to avoid overfitting. The rolling-window cross-validation was performed to study the effect of the number of layers and nodes on the performance of the neural network model. Supplementary Table 4 lists the Root Mean Square Error (RMSE) of the training datasets and cross-validation datasets. For each combination of layer and node, two evaluations were performed with training dataset to be before April 15 and April 29, respectively. For each run, the model was trained using 2/3 of the randomly selected data from the training dataset. The “Validation dataset” listed in Supplementary Table 4 was used for cross-validation. Generally speaking, the neural network models with 1-hidden-layer and 2-hidden-layer achieve better performance than the 3-hidden-layer and 4-hidden-layer models. They are relatively insensitive to the number of nodes. Overall, the neural network models with 1-layer-30-node, 1-layer-35-node, 2-layer-25-node, and 2-layer-30-node are top performers. The 2-layer-25-node neural network is adopted in the PODA model for this work. Supplementary Figure 10 shows the out-of-sample testing of the neural network model with 2 hidden layers and 25 nodes. Data before May 15 was used for model training, and the data between May 25 and May 31 for model testing. The trained model well predicts the future mobility related to “workplaces”, “retail and recreation”, and “grocery and pharmacy”. The relatively poor performance in predicting “Google parks” and “Apple mobility” is due to the high day-to-day variations. There is no obvious over-fitting as the performance in the testing dataset is comparable to the training dataset. Finally, the neural network model was retrained with 2/3 of random-sampled all of available data before June 11 to capture the latest pattern.”

The original and corrected Supplementary Fig. 10 and Table 4 are shown in the Supplementary Information for this correction notice. These corrections have been peer reviewed.
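The validation procedure described in the notice (an expanding training window of past days, validation on the days immediately after it, and RMSE as the score) can be sketched as follows. This is an illustrative example on synthetic data, not the authors' PODA code: the feature construction, regularization strength, and split counts are assumptions; only the 2-hidden-layer, 25-node architecture and the use of L2 weight decay, mini-batch Adam, and RMSE come from the correction text.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Synthetic daily series standing in for the Google/Apple mobility
# inputs (hypothetical: the real model uses engineered mobility features).
rng = np.random.default_rng(0)
n_days = 120
X = rng.normal(size=(n_days, 5))                     # e.g. lagged features
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=n_days)

# Rolling-window cross-validation: each fold trains on an expanding
# window of past days and validates on the block of days right after it,
# so the model is always scored on "future" data it has not seen.
tscv = TimeSeriesSplit(n_splits=4)
for fold, (train_idx, val_idx) in enumerate(tscv.split(X)):
    # 2 hidden layers x 25 nodes as in the corrected article; alpha is
    # scikit-learn's L2 (weight-decay) penalty, solver="adam" trains
    # with mini-batches, matching the regularization setup described.
    model = MLPRegressor(hidden_layer_sizes=(25, 25), alpha=1e-3,
                         solver="adam", batch_size=16,
                         max_iter=2000, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    rmse_train = mean_squared_error(y[train_idx],
                                    model.predict(X[train_idx])) ** 0.5
    rmse_val = mean_squared_error(y[val_idx],
                                  model.predict(X[val_idx])) ** 0.5
    print(f"fold {fold}: train RMSE {rmse_train:.3f}, "
          f"validation RMSE {rmse_val:.3f}")
```

Comparing the training and validation RMSE across folds is what flags overfitting: a model that scores well on the expanding window but poorly on the held-out future days is memorizing rather than generalizing, which is the check the corrected Supplementary Table 4 reports.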
Related articles: 17 in total

1.  Forecasting the evolution of fast-changing transportation networks using machine learning.

Authors:  Weihua Lei; Luiz G A Alves; Luís A Nunes Amaral
Journal:  Nat Commun       Date:  2022-07-22       Impact factor: 17.694

2.  Forecasting oil consumption with attention-based IndRNN optimized by adaptive differential evolution.

Authors:  Binrong Wu; Lin Wang; Sheng-Xiang Lv; Yu-Rong Zeng
Journal:  Appl Intell (Dordr)       Date:  2022-06-24       Impact factor: 5.019

3.  Random-Forest-Bagging Broad Learning System With Applications for COVID-19 Pandemic.

Authors:  Choujun Zhan; Yufan Zheng; Haijun Zhang; Quansi Wen
Journal:  IEEE Internet Things J       Date:  2021-03-17       Impact factor: 10.238

4.  Intelligent system for COVID-19 prognosis: a state-of-the-art survey.

Authors:  Janmenjoy Nayak; Bighnaraj Naik; Paidi Dinesh; Kanithi Vakula; B Kameswara Rao; Weiping Ding; Danilo Pelusi
Journal:  Appl Intell (Dordr)       Date:  2021-01-06       Impact factor: 5.086

5.  [Review] Applications of artificial intelligence in battling against COVID-19: A literature review.

Authors:  Mohammad-H Tayarani N
Journal:  Chaos Solitons Fractals       Date:  2020-10-03       Impact factor: 5.944

6.  Antipyretic Medication for a Feverish Planet.

Authors:  Markus Stoffel; David B Stephenson; Jim M Haywood
Journal:  Earth Syst Environ       Date:  2020-11-02

7.  [Energy demand and CO2 emissions according to COVID-19].

Authors:  Andreas Löschel; Madeline Werthschulte
Journal:  Wirtschaftsdienst       Date:  2021-01-19

8.  A hybrid multi-objective optimizer-based model for daily electricity demand prediction considering COVID-19.

Authors:  Hongfang Lu; Xin Ma; Minda Ma
Journal:  Energy (Oxf)       Date:  2020-12-11       Impact factor: 7.147

9.  Forecasting the U.S. oil markets based on social media information during the COVID-19 pandemic.

Authors:  Binrong Wu; Lin Wang; Sirui Wang; Yu-Rong Zeng
Journal:  Energy (Oxf)       Date:  2021-03-18       Impact factor: 7.147

10.  The Role of Artificial Intelligence in Fighting the COVID-19 Pandemic.

Authors:  Francesco Piccialli; Vincenzo Schiano di Cola; Fabio Giampaolo; Salvatore Cuomo
Journal:  Inf Syst Front       Date:  2021-04-26       Impact factor: 5.261


Beijing Coyote Bioscience Co., Ltd. (北京卡尤迪生物科技股份有限公司) © 2022-2023.