Hafiz Tayyab Rauf, Jiechao Gao, Ahmad Almadhor, Muhammad Arif, Md Tabrez Nafis.
Abstract
The highly infectious COVID-19 pandemic has critically affected the world, confining millions of people to their homes to limit the spread of the disease. Researchers in many fields are working continuously on vaccines and prevention strategies; however, until a vaccine is widely available, an accurate forecast of the outbreak can help control the pandemic. Several machine learning and deep learning approaches are available to forecast confirmed cases, but they lack an optimized temporal component and nonlinearity. To enhance the capability of current forecasting frameworks, we propose an optimized long short-term memory network (LSTM) to forecast COVID-19 cases and reduce the mean absolute error. To optimize the LSTM, we apply the bat algorithm (BA). Furthermore, to tackle the premature convergence and local-minima problems of BA, we propose an enhanced variant of BA. The proposed variant uses a Gaussian adaptive inertia weight to control the individual velocities in the swarm and substitutes the random walk with a Gaussian walk in the local search mechanism. The proposed optimizer compares each individual's personal best solution with the swarm's best and preserves the optimal solution by combining the Gaussian walk. To evaluate the optimized LSTM, we compared it with the non-optimized LSTM, a recurrent neural network (RNN), gated recurrent units (GRU), and other recent state-of-the-art algorithms. The experimental results demonstrate the superiority of the optimized LSTM over recent algorithms, obtaining 99.52% accuracy.
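The abstract describes the enhanced BA only at a high level, so the following is a minimal illustrative sketch, not the authors' exact update rules: a bat-algorithm loop in which a Gaussian adaptive inertia weight (here drawn around a mean that decays over iterations, an assumed form) scales the velocities, and the usual uniform random local walk is replaced by a Gaussian walk around the swarm's best solution. The search bounds, loudness, and pulse-rate values are also assumptions for the demo.

```python
import numpy as np

def enhanced_bat_minimize(f, dim, n_bats=20, iters=100, seed=0):
    """Minimize f over [-5, 5]^dim with a BA variant that uses a
    Gaussian adaptive inertia weight and a Gaussian local walk."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (n_bats, dim))      # bat positions
    v = np.zeros((n_bats, dim))                 # bat velocities
    fit = np.array([f(xi) for xi in x])
    best = x[fit.argmin()].copy()               # swarm-best solution
    loudness, pulse = 0.9, 0.5                  # assumed BA constants
    for t in range(iters):
        # Gaussian adaptive inertia weight: sampled around a mean
        # that decays from 0.9 toward 0.4 as iterations progress
        w = rng.normal(0.9 - 0.5 * t / iters, 0.05, size=(n_bats, 1))
        freq = rng.uniform(0.0, 2.0, (n_bats, 1))   # pulse frequencies
        v = w * v + (x - best) * freq
        cand = np.clip(x + v, lo, hi)
        # Gaussian walk around the swarm best replaces the random walk
        walk = rng.random(n_bats) > pulse
        cand[walk] = np.clip(
            best + rng.normal(0.0, 0.1, (int(walk.sum()), dim)), lo, hi)
        cand_fit = np.array([f(c) for c in cand])
        # accept a candidate only if it improves the bat's fitness
        improve = (cand_fit < fit) & (rng.random(n_bats) < loudness)
        x[improve], fit[improve] = cand[improve], cand_fit[improve]
        if fit.min() < f(best):
            best = x[fit.argmin()].copy()
    return best, float(f(best))

# sanity check on the sphere function
best, val = enhanced_bat_minimize(lambda z: float(np.sum(z * z)), dim=3)
```

The same loop would wrap an LSTM training run by letting `f` return the validation error for a candidate hyperparameter vector.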
Keywords: COVID-19; Gaussian distribution; Gaussian inertia weight; LSTM
Year: 2021 PMID: 34393647 PMCID: PMC8356221 DOI: 10.1007/s00500-021-06075-8
Source DB: PubMed Journal: Soft comput ISSN: 1432-7643 Impact factor: 3.643
Fig. 1 USA COVID-19 cases summary from Feb 2020 to Sep 2020
Comparison of proposed optimized LSTM with other standard deep learning forecasting models
| Model | RMSE | MAPE (%) | Stdev | Prediction interval | Accuracy (%) |
|---|---|---|---|---|---|
| GRU | 1786.613 | 30.01539 | 3261.895 | 6393.313572 | 70 |
| RNN | 531.3041 | 8.817398 | 970.0242 | 1901.247 | 91 |
| LSTM | 751.2309 | 12.12951 | 1371.554 | 2688.245 | 88 |
| Optimized LSTM | 32.99262 | 0.483875 | 60.23602 | 118.0626 | 99.52 |
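The Accuracy column in the table above is consistent with `100 − MAPE` for every row (e.g. optimized LSTM: MAPE 0.48 → accuracy 99.52). A minimal sketch of these metrics, under that assumption:

```python
import numpy as np

def forecast_metrics(actual, predicted):
    """RMSE, MAPE (%), and accuracy (taken as 100 - MAPE, matching
    the comparison table); actual values must be nonzero for MAPE."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = float(np.sqrt(np.mean((actual - predicted) ** 2)))
    mape = float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)
    return {"RMSE": rmse, "MAPE": mape, "Accuracy": 100.0 - mape}

# toy example with made-up case counts
m = forecast_metrics([100, 200, 300], [110, 190, 300])
```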
Kruskal–Wallis test: proposed LSTM vs recent state-of-the-art algorithms
| Model | Median | Ave rank | Z |
|---|---|---|---|
| Adagrad (Wieczorek et al.) | 40.100 | 2.5 | |
| Adam (Wieczorek et al.) | 87.530 | 9.0 | 0.00 |
| Adamax (Wieczorek et al.) | 87.470 | 8.0 | |
| ANN (Alakus and Turkoglu) | 86.900 | 6.0 | |
| CNN (Alakus and Turkoglu) | 87.350 | 7.0 | |
| CNNLSTM (Alakus and Turkoglu) | 92.300 | 13.0 | 0.82 |
| CNNRNN (Alakus and Turkoglu) | 86.240 | 5.0 | |
| Ftrl (Wieczorek et al.) | 40.100 | 2.5 | |
| LSTM-1 (Chimmula and Zhang) | 93.400 | 15.0 | 1.22 |
| LSTM-2 (Chimmula and Zhang) | 92.670 | 14.0 | 1.02 |
| LSTM (Alakus and Turkoglu) | 90.340 | 12.0 | 0.61 |
| LSTM (Wieczorek et al.) | 93.560 | 16.0 | 1.43 |
| NAdam (Wieczorek et al.) | 87.730 | 11.0 | 0.41 |
| RMSprop (Wieczorek et al.) | 87.650 | 10.0 | 0.20 |
| RNN (Alakus and Turkoglu) | 84.000 | 4.0 | |
| SGD (Wieczorek et al.) | 9.800 | 1.0 | |
| Optimized LSTM | 99.520 | 17.0 | 1.63 |
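The "Ave rank" column above uses Kruskal–Wallis-style ranking, where tied values share the average of their ranks (Adagrad and Ftrl both score 40.100 and share rank 2.5 among the 17 models). A minimal sketch of that tie-averaged ranking:

```python
import numpy as np

def average_ranks(values):
    """Rank values 1..N from smallest to largest, averaging ranks
    within groups of tied values (as in Kruskal-Wallis ranking)."""
    values = np.asarray(values, dtype=float)
    ranks = np.empty(len(values))
    ranks[np.argsort(values, kind="stable")] = np.arange(1, len(values) + 1)
    for v in np.unique(values):
        tied = values == v
        ranks[tied] = ranks[tied].mean()  # share the average rank
    return ranks

# toy subset: two tied accuracies receive the averaged rank 1.5
r = average_ranks([40.1, 87.53, 40.1, 99.52])
```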
Fig. 3 Training and validation loss minimization curves using GRU
Fig. 4 Training and validation loss minimization curves using RNN
Fig. 5 Training and validation loss minimization curves using LSTM
Fig. 6 Training and validation loss minimization curves using optimized LSTM
Fig. 7 Convergence of real and forecasted COVID-19 cases through optimized LSTM in the USA
Forecasted COVID-19 cases from the proposed optimized LSTM and other standard deep learning forecasting models
| Date (D/M/YY) | GRU | RNN | LSTM | Optimized LSTM |
|---|---|---|---|---|
| 1/9/20 | 3619310 | 5305265 | 4932695 | 6012715 |
| 2/9/20 | 3506454 | 5304446 | 4903980 | 6045536 |
| 3/9/20 | 3344938 | 5304747 | 4876812 | 6077771 |
| 4/9/20 | 3139538 | 5304128 | 4851221 | 6109418 |
| 5/9/20 | 2912792 | 5303244 | 4827170 | 6140511 |
| 6/9/20 | 2693745 | 5302879 | 4804581 | 6171062 |
| 7/9/20 | 2472994 | 5301480 | 4783279 | 6201055 |
| 8/9/20 | 2310167 | 5301190 | 4763229 | 6230511 |
| 9/9/20 | 2206934 | 5299707 | 4744368 | 6259414 |
| 10/9/20 | 2070085 | 5299569 | 4726624 | 6287779 |
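The forecasts in the table appear to be cumulative case counts (the optimized-LSTM values track the USA's roughly six million confirmed cases in early September 2020), so differencing consecutive days recovers the implied daily new cases. A small sketch using the optimized-LSTM column:

```python
import numpy as np

# Optimized-LSTM cumulative forecasts for 1-10 Sep 2020 (from the table)
opt_lstm = np.array([6012715, 6045536, 6077771, 6109418, 6140511,
                     6171062, 6201055, 6230511, 6259414, 6287779])

# first differences give the implied number of new cases per day
daily_new = np.diff(opt_lstm)
```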
Fig. 8 Predicted cases comparison of optimized LSTM with GRU, RNN, and LSTM
Comparison of proposed optimized LSTM with other variants of LSTM and other deep learning models
| Model | RMSE | MAPE (%) | Accuracy (%) |
|---|---|---|---|
| LSTM (Wieczorek et al.) | – | – | 93.56 |
| NAdam (Wieczorek et al.) | – | – | 87.73 |
| RMSprop (Wieczorek et al.) | – | – | 87.65 |
| Adam (Wieczorek et al.) | – | – | 87.53 |
| Adamax (Wieczorek et al.) | – | – | 87.47 |
| Ftrl (Wieczorek et al.) | – | – | 40.10 |
| Adagrad (Wieczorek et al.) | – | – | 40.10 |
| SGD (Wieczorek et al.) | – | – | 9.80 |
| Scenario 1 (Chowdhury et al.) | 297.89 | 5425 | – |
| Scenario 2 (Chowdhury et al.) | 216.48 | 23.30 | – |
| Scenario 3 (Chowdhury et al.) | 600.61 | 38.06 | – |
| LSTM-1 (Chimmula and Zhang) | 34.83 | – | 93.40 |
| LSTM-2 (Chimmula and Zhang) | 45.70 | – | 92.67 |
| Convolutional LSTM (Arora et al.) | – | 5.05 | – |
| Stacked LSTM (Arora et al.) | – | 4.81 | – |
| Bidirectional LSTM (Arora et al.) | – | 3.22 | – |
| RNN (Alakus and Turkoglu) | – | – | 84.00 |
| LSTM (Alakus and Turkoglu) | – | – | 90.34 |
| CNNRNN (Alakus and Turkoglu) | – | – | 86.24 |
| CNNLSTM (Alakus and Turkoglu) | – | – | 92.30 |
| CNN (Alakus and Turkoglu) | – | – | 87.35 |
| ANN (Alakus and Turkoglu) | – | – | 86.90 |
| Optimized LSTM | 32.99 | 0.48 | 99.52 |
Recent related works with their dataset details and results
| Ref. | Dataset | Model | Results |
|---|---|---|---|
| Wieczorek et al. | Government repositories | NAdam training model | Accuracy above 99% |
| Chowdhury et al. | Bangladesh COVID-19 | Neuro-fuzzy inference system (ANFIS) | Correlation coefficient 0.75, MAPE 4.51, and RMSE 6.55 |
| Dutta et al. | WHO official | CNN and RNN | CNN-LSTM approach outperforms |
| Chimmula and Zhang | Canadian Health Authority dataset | LSTM | Gained highly accurate results |
| Arora et al. | Indian dataset | LSTM | Yields high accuracy |
| Pathan et al. | Patient datasets from different countries | RNN and LSTM | Obtained optimum results |
| Alakus and Turkoglu | Laboratory data | Clinical predictive models | Accuracy of 86.66% and F1-score of 91.89% |
| Tuli et al. | Data by Hannah Ritchie | ML-based improved model | Yields high accuracy |
| Kavadi et al. | Indian dataset | Linear regression model | Outperformed state-of-the-art methods |
| Pinter et al. | Data from Hungary | Multi-layered perceptron–imperialist competitive algorithm (MLP-ICA) | Obtained promising results |
| Prasanth et al. | Google Trends and ECDC data | A hybrid GWO algorithm | Reduced MAPE by 74% |
| Abbasimehr and Paki | Live time-series data | Bayesian optimization-based algorithm | Mean SMAPE of 0.25 |
| Elsheikh et al. | Official data from Saudi Arabia | LSTM and other variants | Obtained highly accurate results |