Muhammad Sajjad, Samee Ullah Khan, Noman Khan, Ijaz Ul Haq, Amin Ullah, Mi Young Lee, Sung Wook Baik.
Abstract
In the current technological era, energy-efficient buildings constitute a significant body of research due to increasing concerns about energy consumption and its environmental impact. Designing an energy-efficient building depends on its layout, including relative compactness, overall area, height, orientation, and distribution of the glazing area. These factors directly influence the cooling load (CL) and heating load (HL) of residential buildings. Accurate prediction of these loads facilitates better management of energy consumption and enhances the living standards of inhabitants. Most traditional machine learning (ML)-based approaches are designed for single-output (SO) prediction, which is tedious because each output requires a separate training process, and the resulting performance is low. In addition, these approaches must capture a high level of nonlinearity between inputs and outputs, and therefore need further enhancement in terms of robustness, predictability, and generalization. To tackle these issues, we propose a novel framework based on the gated recurrent unit (GRU) that reliably predicts CL and HL concurrently. To the best of our knowledge, we are the first to propose a multi-output (MO) sequential learning model combined with utility preprocessing under the umbrella of a unified framework. A comprehensive set of ablation studies on ML and deep learning (DL) techniques over an energy efficiency dataset shows that the proposed model outperforms existing models.
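The MO idea described in the abstract, one model emitting HL and CL together from the eight building-layout attributes, can be sketched as a single GRU step followed by a two-unit linear head. This is a minimal illustration with random placeholder weights and a random input, not the paper's trained model or its exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 8 layout attributes -> hidden state -> two outputs (HL, CL) at once.
n_in, n_hid, n_out = 8, 16, 2
Wz, Uz = rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid))
Wr, Ur = rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid))
Wh, Uh = rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid))
Wo = rng.normal(size=(n_out, n_hid))  # shared head: both loads in one pass

def gru_step(x, h):
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde          # new hidden state

x = rng.random(n_in)                 # one (normalized) building sample
h = gru_step(x, np.zeros(n_hid))
hl_cl = Wo @ h                       # predicted (HL, CL) pair
print(hl_cl.shape)                   # (2,)
```

In contrast, an SO setup would train two such models, one per load, which is the duplicated effort the abstract argues against.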
Keywords: GRU; cooling load; energy consumption; energy efficient building; heating load
Year: 2020 PMID: 33182735 PMCID: PMC7696299 DOI: 10.3390/s20226419
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. A massive amount of energy is consumed in the residential sector because of the various electrical appliances installed there.
Existing approaches for the prediction of HL and CL using the energy efficiency dataset.
| Reference | Learning Strategy | Feature Selection | Evaluation Metrics |
|---|---|---|---|
| Tsanas and Xifara | RF, iteratively reweighted least squares (IRLS) | Mutual information, Spearman rank correlation coefficient | MSE, MAE, MRE |
| Chou and Bui | Fusion method (SVR + ANN), RF, SVR, CART, GLR, CHAID | - | RMSE, MAE, MAPE, R, SI |
| Cheng and Cao | Evolutionary multivariate adaptive regression splines (EMARS) | MARS | RMSE, MAPE, MAE, R² |
| Ahmed et al. | ANN | - | Silhouette score |
| Sonmez et al. | KNN and ANN (ABC, GA) | - | MAE, standard deviation |
| Alam et al. | ANN | ANOVA | RMSE |
| Fei et al. | ANN | - | MSE |
| Regina and Capriles | DT, MLP, RF, SVR | - | MAE, RMSE, MRE, R² |
| Naji et al. | ANFIS | - | RMSE, R, R² |
| Naji et al. | ELM | - | RMSE, R, R² |
| Nilashi et al. | EM and ANFIS | PCA | MAE, MAPE, RMSE |
| Nwulu | ANN | - | RMSE, RRSE, MAE, RAE, R² |
| Duarte et al. | DT, MLP, RF, SVM | - | MAE, RMSE, MAPE, R² |
| Roy et al. | Multivariate adaptive regression splines (MARS), ELM, a hybrid model of MARS and ELM | MARS | RMSE, MAPE, MAE, R², WMAPE, time |
| Kavaklioglu | OLS, PLS | - | RMSE, R² |
| Kumar et al. | ELM, online sequential ELM, bidirectional ELM | - | MAE, RMSE |
| Al-Rakhami et al. | Ensemble learning applying XGBoost | - | RMSE, R², MAE, MAPE |
| Sekhar et al. | DNN, GPR, MPMR | - | VAF, RAAE, RMAE, R², MAPE, NS, RMSE, WMAPE |
Figure 2. The proposed framework for precise prediction of HL and CL from energy efficiency data using a sequential learning model.
Acronyms and their descriptions.
| Acronyms | Description | Acronyms | Description |
|---|---|---|---|
| CL | Cooling load | MAE | Mean absolute error |
| HL | Heating load | RMSE | Root mean square error |
| GRU | Gated recurrent unit | SA | Sensitivity analysis |
| SVR | Support vector regression | SVM | Support vector machine |
| ANN | Artificial neural network | PCA | Principal component analysis |
| MLP | Multilayer perceptron | DNN | Deep neural network |
| ML | Machine learning | SO | Single-output |
| DL | Deep learning | MO | Multi-output |
| RF | Random forest | GPR | Gaussian process regression |
| MSE | Mean square error | GBR | Gradient boost regressor |
| SVM | Support vector machine | DMTs | Decision-making trees |
| rMSE | Relative mean square error | rRMSE | Relative root mean square error |
Figure 3. (a) Visual representation of the actual HL and CL data, where the x-axis shows the number of samples and the y-axis the range of values; (b) overall attributes in the dataset; (c) normalized sample values of HL and CL; (d) normalized values of four attributes.
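The normalization shown in panels (c) and (d) of Figure 3 is presumably per-attribute min-max scaling to [0, 1]; the sketch below shows that transformation on a handful of made-up heating-load values, not rows of the real dataset:

```python
import numpy as np

def min_max(col):
    """Scale a 1-D column to [0, 1] by its own min and max."""
    col = np.asarray(col, dtype=float)
    return (col - col.min()) / (col.max() - col.min())

heating_load = [15.55, 20.84, 28.28, 38.57, 24.77]  # hypothetical HL values
normalized = min_max(heating_load)
print(normalized.min(), normalized.max())           # 0.0 1.0
```

Each attribute or target is scaled independently, so columns with very different ranges (e.g., surface area in m² versus relative compactness) become comparable inputs.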
Figure 4. Difference between SO and MO prediction of HL and CL using the GRU model.
Figure 5. Backpropagation in an MLP architecture with one hidden layer.
Figure 6. The GRU architecture for HL and CL prediction.
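For reference, one common formulation of the GRU cell depicted in Figure 6, with update gate $z_t$, reset gate $r_t$, and candidate state $\tilde{h}_t$ (sign and gating conventions vary slightly across implementations):

```latex
z_t &= \sigma\!\left(W_z x_t + U_z h_{t-1} + b_z\right) \\
r_t &= \sigma\!\left(W_r x_t + U_r h_{t-1} + b_r\right) \\
\tilde{h}_t &= \tanh\!\left(W_h x_t + U_h \left(r_t \odot h_{t-1}\right) + b_h\right) \\
h_t &= \left(1 - z_t\right) \odot h_{t-1} + z_t \odot \tilde{h}_t
```

The update gate $z_t$ controls how much of the previous state is kept, and the reset gate $r_t$ controls how much of it feeds the candidate state; together they let the model capture the nonlinear input-output relations the abstract highlights with fewer parameters than an LSTM.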
Detailed description of the energy efficiency dataset.

| Variable | Building Information | Attribute | Distinct Values | Data Type | Units |
|---|---|---|---|---|---|
| Input | Relative compactness | X1 | 12 | Real | None |
| Input | Surface area | X2 | 12 | Real | m² |
| Input | Wall area | X3 | 7 | Real | m² |
| Input | Roof area | X4 | 4 | Real | m² |
| Input | Overall height | X5 | 2 | Real | m |
| Input | Orientation | X6 | 4 | Integer | None |
| Input | Glazing area | X7 | 4 | Real | None |
| Input | Glazing area distribution | X8 | 6 | Integer | None |
| Output | Heating load | Y1 | 586 | Real | kWh/m² |
| Output | Cooling load | Y2 | 636 | Real | kWh/m² |
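The results that follow are reported under two evaluation protocols, a hold-out split and 10-fold cross-validation. A sketch of how the 768 building configurations of the energy efficiency dataset could be partitioned for each (indices only; the 70/30 hold-out ratio is an assumption, the paper's exact split may differ):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 768
idx = rng.permutation(n_samples)     # shuffle sample indices once

# Hold-out: one fixed train/test partition (70/30 assumed here).
cut = int(0.7 * n_samples)
train_idx, test_idx = idx[:cut], idx[cut:]

# 10-fold: every sample lands in exactly one test fold;
# training on the other nine folds per round.
folds = np.array_split(idx, 10)
print(len(train_idx), len(test_idx), len(folds))  # 537 231 10
```

Hold-out is cheap but depends on one split; 10-fold averages the error over ten disjoint test sets, which is why both are tabulated separately below.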
Experimental results of various ML and DL models for SO prediction using the hold-out method.
Training results.

| Model | HL | | | | CL | | | |
|---|---|---|---|---|---|---|---|---|
| | MAE | MSE | RMSE | rRMSE | MAE | MSE | RMSE | rRMSE |
| SVR | 1.9532 | 1.5241 | 1.2345 | 1.3913 | 2.2143 | 1.6241 | 1.2744 | 1.7471 |
| RF | 2.4310 | 1.8701 | 1.3675 | 1.6714 | 2.4197 | 1.9875 | 1.4097 | 1.9032 |
| XGBoost | 1.8236 | 1.4797 | 1.2164 | 1.5941 | 2.1027 | 1.5579 | 1.2481 | 1.6179 |
| GBR | 2.3142 | 1.6091 | 1.2685 | 1.6721 | 2.3471 | 1.7928 | 1.3389 | 1.8932 |
| MLP | 1.7613 | 0.9781 | 0.9889 | 1.1741 | 1.9897 | 1.0899 | 1.0439 | 1.4869 |
| GRU | 1.3691 | 0.7215 | 0.8494 | 0.9315 | 1.4027 | 0.9791 | 0.9894 | 1.0132 |

Testing results.

| Model | HL | | | | CL | | | |
|---|---|---|---|---|---|---|---|---|
| | MAE | MSE | RMSE | rRMSE | MAE | MSE | RMSE | rRMSE |
| SVR | 0.2855 | 0.1658 | 0.4072 | 0.5833 | 0.5662 | 0.6851 | 0.8277 | 0.9428 |
| RF | 0.3225 | 0.1924 | 0.4386 | 0.5312 | 1.0212 | 2.3355 | 3.7084 | 3.8192 |
| XGBoost | 0.2130 | 0.0911 | 0.3018 | 0.4120 | 0.4167 | 0.3566 | 0.5971 | 0.6580 |
| GBR | 0.3048 | 0.1467 | 0.3830 | 0.5269 | 0.9311 | 0.5971 | 2.7084 | 2.8149 |
| MLP | 0.0853 | 0.0075 | 0.0867 | 0.0988 | 0.0838 | 0.0074 | 0.0858 | 0.0897 |
| GRU | 0.0102 | 0.0003 | 0.0166 | 0.0284 | 0.0167 | 0.0006 | 0.0247 | 0.0368 |
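The tables report per-load error metrics such as MAE, MSE, and RMSE; a minimal sketch of how each is computed (the load values below are illustrative, not taken from the paper):

```python
import numpy as np

def mae(y, p):
    """Mean absolute error: average magnitude of the residuals."""
    return float(np.mean(np.abs(y - p)))

def mse(y, p):
    """Mean square error: average squared residual (penalizes outliers)."""
    return float(np.mean((y - p) ** 2))

def rmse(y, p):
    """Root mean square error: MSE back in the units of the target."""
    return float(np.sqrt(mse(y, p)))

y_true = np.array([20.0, 25.0, 30.0])   # hypothetical heating loads (kWh/m²)
y_pred = np.array([21.0, 24.0, 33.0])
print(mae(y_true, y_pred), mse(y_true, y_pred), rmse(y_true, y_pred))
```

Note that RMSE is the square root of MSE by construction, which is a quick consistency check on any reported (MSE, RMSE) pair.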
Figure 7. Performance of various SVR kernels in the prediction of HL and CL.
Experimental results of various ML and DL models for SO prediction using the 10-fold method.
Training results.

| Model | HL | | | | CL | | | |
|---|---|---|---|---|---|---|---|---|
| | MAE | MSE | RMSE | rRMSE | MAE | MSE | RMSE | rRMSE |
| SVR | 2.0978 | 1.6463 | 1.2830 | 1.4192 | 2.2089 | 1.7574 | 1.3256 | 1.5303 |
| RF | 2.5421 | 1.9943 | 1.4121 | 1.6971 | 2.6532 | 2.0215 | 1.4217 | 1.7082 |
| XGBoost | 1.9347 | 1.5998 | 1.2648 | 1.4023 | 2.0458 | 1.7110 | 1.3080 | 1.5134 |
| GBR | 2.4235 | 1.7497 | 1.3227 | 1.5932 | 2.5346 | 1.8608 | 1.3641 | 1.7043 |
| MLP | 1.8724 | 1.4996 | 1.2245 | 1.4932 | 1.9835 | 1.6107 | 1.2691 | 1.6043 |
| GRU | 1.4802 | 0.9871 | 0.9935 | 1.0210 | 1.5913 | 0.8920 | 0.9444 | 1.1031 |

Testing results.

| Model | HL | | | | CL | | | |
|---|---|---|---|---|---|---|---|---|
| | MAE | MSE | RMSE | rRMSE | MAE | MSE | RMSE | rRMSE |
| SVR | 0.1941 | 0.0431 | 0.2076 | 0.3712 | 0.1830 | 0.0320 | 0.1788 | 0.2823 |
| RF | 0.2916 | 0.0981 | 0.3132 | 0.4312 | 0.2805 | 0.0870 | 0.2949 | 0.5024 |
| XGBoost | 0.1813 | 0.0334 | 0.1827 | 0.2715 | 0.1701 | 0.0231 | 0.1519 | 0.2529 |
| GBR | 0.2712 | 0.0849 | 0.2913 | 0.3108 | 0.2601 | 0.0738 | 0.2716 | 0.3914 |
| MLP | 0.0191 | 0.0091 | 0.0953 | 0.1076 | 0.0189 | 0.0080 | 0.0894 | 0.1289 |
| GRU | 0.0092 | 0.0001 | 0.0100 | 0.0391 | 0.0021 | 0.0001 | 0.0100 | 0.0282 |
Experimental results of various ML and DL models for MO prediction using the hold-out method.
Training results.

| Model | HL | | | | CL | | | |
|---|---|---|---|---|---|---|---|---|
| | MAE | MSE | RMSE | rRMSE | MAE | MSE | RMSE | rRMSE |
| SVR | 0.7831 | 0.5479 | 0.7402 | 0.8922 | 3.5347 | 2.3701 | 1.5395 | 2.6368 |
| RF | 0.9867 | 0.7863 | 0.8867 | 0.9647 | 3.9375 | 2.5561 | 1.5987 | 2.8059 |
| XGBoost | 0.5182 | 0.4841 | 0.6957 | 0.7328 | 3.2439 | 2.1253 | 1.4578 | 2.5278 |
| GBR | 0.6798 | 0.6531 | 0.8081 | 0.9781 | 3.7294 | 2.4321 | 1.5595 | 2.7053 |
| MLP | 0.0953 | 0.0189 | 0.1374 | 0.2579 | 2.9124 | 1.9760 | 1.4057 | 1.9979 |
| GRU | 0.0368 | 0.0015 | 0.0387 | 0.1134 | 1.7519 | 1.0217 | 1.0107 | 1.0901 |

Testing results.

| Model | HL | | | | CL | | | |
|---|---|---|---|---|---|---|---|---|
| | MAE | MSE | RMSE | rRMSE | MAE | MSE | RMSE | rRMSE |
| SVR | 0.6975 | 0.4043 | 0.6358 | 0.7098 | 3.4438 | 2.2903 | 1.5133 | 2.8186 |
| RF | 0.8790 | 0.6901 | 0.8307 | 0.9767 | 3.8466 | 2.4650 | 1.5700 | 3.0077 |
| XGBoost | 0.4791 | 0.3765 | 0.6135 | 0.7452 | 3.1529 | 2.0344 | 1.4263 | 2.7096 |
| GBR | 0.5170 | 0.5536 | 0.7440 | 0.8062 | 3.6385 | 2.3412 | 1.5300 | 2.9071 |
| MLP | 0.3732 | 0.1932 | 0.4395 | 0.5690 | 2.8215 | 1.8851 | 1.3729 | 2.0707 |
| GRU | 0.0062 | 0.0021 | 0.0458 | 0.1574 | 1.6608 | 1.0308 | 1.0152 | 1.0724 |
Figure 8. Experimental results of the proposed model (GRU) for SO and MO prediction using the hold-out and 10-fold methods.
Figure 9. Visualization of prediction results obtained with the proposed model (GRU), where the x-axis indicates the number of samples and the y-axis the actual and predicted load: (a) actual and predicted CL using the SO strategy; (b) actual and predicted HL and CL using the MO strategy.
Comparison of the proposed model (GRU) for HL and CL prediction with state-of-the-art models.
| Method | HL | | | CL | | |
|---|---|---|---|---|---|---|
| | MAE | MSE | RMSE | MAE | MSE | RMSE |
| Tsanas and Xifara | 0.51 | - | - | 1.42 | - | - |
| Chou and Bui | 0.236 | - | 0.346 | 0.89 | - | 1.566 |
| Cheng and Cao | 0.35 | - | 0.47 | 0.71 | - | 1 |
| Sonmez et al. | 0.61 | - | - | 1.25 | - | - |
| Alam et al. | - | - | 0.19 | - | - | 1.42 |
| Regina and Capriles | 0.246 | - | 1.094 | 0.39 | - | 1.284 |
| Nilashi et al. | 0.16 | - | 0.26 | 0.52 | - | 0.81 |
| Nwulu | 0.977 | - | 1.228 | 1.654 | - | 2.111 |
| Duarte et al. | 0.315 | - | 0.223 | 0.565 | - | 0.837 |
| Roy et al. | 0.037 | - | 0.053 | 0.127 | - | 0.195 |
| Kavaklioglu | - | - | 3.16 | - | - | 3.122 |
| Kumar et al. | 0.138 | - | 0.321 | 0.134 | - | 0.646 |
| Al-Rakhami et al. | 0.175 | - | 0.265 | 0.307 | - | 0.47 |
| Sekhar et al. | - | - | 0.059 | - | - | 0.079 |
| Sadeghi et al. | 0.2 | - | 0.263 | 0.485 | - | 0.69 |
| Proposed (hold-out) | 0.0102 | 0.0003 | 0.0166 | 0.0167 | 0.0006 | 0.0247 |
| Proposed (10-fold) | 0.0092 | 0.0001 | 0.0100 | 0.0021 | 0.0001 | 0.0100 |