Rajasekhar Chaganti, Furqan Rustam, Talal Daghriri, Isabel de la Torre Díez, Juan Luis Vidal Mazón, Carmen Lili Rodríguez, Imran Ashraf.
Abstract
Building energy consumption prediction has become an important research problem within the context of sustainable homes and smart cities. Data-driven approaches have been regarded as the most suitable for integration into smart houses. With the wide deployment of IoT sensors, the data generated from these sensors can be used for modeling and forecasting energy consumption patterns. Existing studies lag in prediction accuracy, and various attributes of buildings are not very well studied. This study follows a data-driven approach in this regard. The novelty of the paper lies in the proposed ensemble model, which provides higher performance regarding cooling and heating load prediction. Moreover, the influence of different features on heating and cooling load is investigated. Experiments are performed by considering different features such as glazing area, orientation, height, relative compactness, roof area, surface area, and wall area. Results indicate that relative compactness, surface area, and wall area play a significant role in selecting the appropriate cooling and heating load for a building. The proposed model achieves 0.999 R2 for heating load prediction and 0.997 R2 for cooling load prediction, which is superior to existing state-of-the-art models. The precise prediction of heating and cooling load can help engineers design energy-efficient buildings, especially in the context of future smart homes.
Keywords: Internet of Things; cooling load; energy consumption prediction; smart homes; sustainable homes
Year: 2022 PMID: 36236791 PMCID: PMC9571769 DOI: 10.3390/s22197692
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Comparative analysis of discussed works on heating and cooling load prediction.
| Ref. | Year | Dataset | Model | Target | Evaluation Metrics | Results |
|---|---|---|---|---|---|---|
| [ | 2018 | UCI | ELM, OSELM | HL & CL | MAE, Prediction Time (PT) | RBF—HL: MAE 0.0456, PT 0.0389; CL: MAE 0.0358, PT 0.0348 |
| [ | 2018 | Energy Plus Simulation | ANN | HL & CL | Maximum Deviation | ANN—HL: 3.7%; CL: 3.9% |
| [ | 2018 | Energy Plus Simulation | DL | HL & CL | - | - |
| [ | 2019 | UCI | XGBoost | HL & CL | RMSE | XGBoost—HL: RMSE (kW) 0.265 |
| [ | 2019 | Synthetic Dataset | XGBoost, ANN | HL & CL | RMSE | XGBoost—CL: RMSE 2.95 |
| [ | 2019 | UCI | Octahedric Regression (OR) | HL & CL | MAE, MSE, MAPE | OR—HL: MAE 0.945, MSE 2.289, MAPE 4.182; CL: MAE 1.113, MSE 2.731, MAPE 4.554 |
| [ | 2020 | Energy Plus Simulation | Multi-objective optimisation | HL & CL | RMSE, MAE | HL: RMSE 10.25, MAE 2.54 |
| [ | 2020 | UCI | DNN | HL & CL | NMAE, MAE, NRMSE, RMSE | DNN—HL: NMAE 0.018, MAE 0.2, NRMSE 0.025, RMSE 0.263 |
| [ | 2020 | Heat Exchange Stations in Anyang City data | Temporal convolutional neural network (TCN) | HL | MAE, RMSE, MAPE, Accuracy | TCN—MAE 0.102, RMSE 0.129, MAPE 0.021, Accuracy 0.979 |
| [ | 2020 | UCI | Gated Recurrent Unit (GRU) | HL & CL | MAE, MSE, RMSE, MAPE | GRU—HL: MAE 1.3691, MSE 0.7215, RMSE 0.8494, MAPE 0.9315; CL: MAE 1.4027, MSE 0.9791, RMSE 0.9894, MAPE 1.0132 |
| [ | 2020 | Office Building in Tianjin China dataset | K-means clustering, Discrete Wavelet Transform (DWT) | CL | - | - |
| [ | 2022 | UK Ministry of Housing dataset | DT, SVM, Gradient Boosting (GB) and other ML models | Energy Efficiency | Accuracy | GB—Accuracy 0.67 |
| [ | 2022 | UCI | Shuffled complex evolution (SCE)-multi-layer perceptron (MLP) | CL | RMSE, MAE | SCE-MLP—RMSE 2.5943, MAE 0.8124 |
Figure 1. The architecture of the proposed 3RF model for HL and CL prediction.
Figure 2. The architecture of the energy efficiency data processing pipeline.
Machine learning model hyperparameter settings.
| Model | Hyperparameters | Tuning Range |
|---|---|---|
| MLP | random_state=1, max_iter=500 | random_state={1 to 10}, max_iter={100 to 1000} |
| KNN | n_neighbors=3, weights='uniform' | n_neighbors={1 to 5}, weights='uniform' |
| LR | Default | Default |
| RF | n_estimators=300, max_depth=10 | n_estimators={50 to 500}, max_depth={2 to 50} |
| 3RF | RF+RF+RF | 2RF, 3RF, 4RF, 5RF |
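The "3RF = RF+RF+RF" entry suggests an ensemble of three random forests. A minimal sketch, assuming the three forests are combined by averaging their predictions (e.g. via scikit-learn's `VotingRegressor`); the combination rule, seeds, and synthetic data below are illustrative, not taken from the paper:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor

# Stand-in data with 8 features, mirroring the dataset's 8 input attributes.
X, y = make_regression(n_samples=300, n_features=8, noise=0.1, random_state=0)

# Three forests with different seeds; the paper tunes n_estimators=300 and
# max_depth=10 -- smaller n_estimators here just keeps the sketch fast.
forests = [
    (f"rf{i}", RandomForestRegressor(n_estimators=50, max_depth=10, random_state=i))
    for i in range(3)
]
model = VotingRegressor(estimators=forests)  # averages the three predictions
model.fit(X, y)
print(round(model.score(X, y), 3))  # training-set R^2
```

Varying the number of forests (2RF, 3RF, 4RF, 5RF in the tuning range) only changes the length of the `forests` list.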
Description of dataset features.
| Feature | Combinations | Value Range | Unit | Description |
|---|---|---|---|---|
| Input features | | | | |
| RelativeCompactness | 12 | 0.68–0.98 | - | The volume-to-surface ratio compared with the most compact shape of the same volume |
| SurfaceArea | 12 | 514–808 | m² | The total area occupied by the building |
| WallArea | 7 | 245–416 | m² | Total area of the exterior building walls, including all openings |
| RoofArea | 4 | 110–220 | m² | The surface area of the roof of the building |
| OverallHeight | 2 | 3.5–7 | m | Overall height from the lowest point of the conditioned space to the highest point |
| Orientation | 4 | 2–5 | - | The direction the building faces |
| GlazingArea | 4 | 0–0.4 | m² | The total area occupied by windows in the building |
| GlazingAreaDistribution | 6 | 0–5 | - | The direction of the glazing area covered in the building |
| Output features | | | | |
| HeatingLoad | - | 6–43 | kWh/m² | The amount of heat added to an area to maintain the temperature within an acceptable range |
| CoolingLoad | - | 10–48 | kWh/m² | The amount of latent and sensible heat removed from an area to maintain an acceptable temperature |
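The feature table above can be captured as a simple range specification for sanity-checking candidate building configurations. A stdlib-only sketch; the feature names and the `in_range` helper are illustrative, not part of the paper:

```python
# Value ranges transcribed from the dataset description table.
FEATURES = {
    "RelativeCompactness": (0.68, 0.98),
    "SurfaceArea": (514, 808),         # m^2
    "WallArea": (245, 416),            # m^2
    "RoofArea": (110, 220),            # m^2
    "OverallHeight": (3.5, 7),         # m
    "Orientation": (2, 5),
    "GlazingArea": (0, 0.4),
    "GlazingAreaDistribution": (0, 5),
}

def in_range(sample):
    """Check a candidate building configuration against the table's ranges."""
    return all(lo <= sample[name] <= hi for name, (lo, hi) in FEATURES.items())

building = {"RelativeCompactness": 0.82, "SurfaceArea": 612, "WallArea": 318,
            "RoofArea": 147, "OverallHeight": 7, "Orientation": 3,
            "GlazingArea": 0.1, "GlazingAreaDistribution": 2}
print(in_range(building))  # → True
```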
Figure 3. Cooling load features from machine learning models.
Cooling load prediction using machine learning models.
| Model | MSE | MAE | RMSE | R² |
|---|---|---|---|---|
| MLP | 21.835 | 3.475 | 4.672 | 0.777 |
| KNN | 2.946 | 1.351 | 1.716 | 0.966 |
| LR | 9.652 | 2.220 | 3.106 | 0.887 |
| RF | 3.424 | 1.101 | 1.850 | 0.959 |
| GAM | 3.471 | 1.385 | 1.863 | 0.964 |
| 3RF | 0.515 | 0.526 | 0.675 | 0.997 |
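The error metrics in these tables are related: RMSE is the square root of MSE, and R² is one minus the ratio of residual to total variance (which is how the column order above can be verified, e.g. 1.716² ≈ 2.946 for KNN). A self-contained, purely illustrative sketch of the computation:

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R^2 for a set of predictions."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)                # RMSE is always sqrt(MSE)
    mean_y = sum(y_true) / n
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1 - (n * mse) / ss_tot          # 1 - SS_res / SS_tot
    return mae, mse, rmse, r2

# Perfect predictions give zero error and R^2 = 1.
print(regression_metrics([6.0, 20.0, 43.0], [6.0, 20.0, 43.0]))  # → (0.0, 0.0, 0.0, 1.0)
```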
Cooling load prediction results using 10-fold cross-validation for machine learning models.
| Model | Mean | Standard Deviation |
|---|---|---|
| MLP | 0.76 | ±0.12 |
| KNN | 0.91 | ±0.08 |
| LR | 0.91 | ±0.08 |
| RF | 0.91 | ±0.08 |
| GAM | 0.90 | ±0.08 |
| 3RF | 0.96 | ±0.03 |
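10-fold cross-validation splits the data into ten folds, trains on nine, evaluates on the held-out tenth, and averages the ten scores. A minimal stdlib sketch of the fold bookkeeping (the UCI energy-efficiency dataset has 768 samples; shuffling and the model itself are omitted):

```python
def kfold_indices(n_samples, k=10):
    """Yield (train, test) index lists for k roughly equal folds."""
    base, extra = divmod(n_samples, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

folds = list(kfold_indices(768, k=10))
print(len(folds), len(folds[0][1]))  # → 10 77
```

Each sample appears in exactly one test fold, so the ten test sets partition the data.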
Figure 4. Cooling load features using the proposed 3RF model.
Figure 5. Heating load features using machine learning models.
Heating load prediction performance using machine learning models.
| Model | MSE | MAE | RMSE | R² |
|---|---|---|---|---|
| MLP | 14.241 | 2.885 | 3.773 | 0.849 |
| KNN | 5.185 | 1.729 | 2.277 | 0.950 |
| LR | 10.260 | 2.353 | 3.203 | 0.896 |
| RF | 0.369 | 0.372 | 0.607 | 0.996 |
| GAM | 1.060 | 0.749 | 1.029 | 0.990 |
| 3RF | 0.521 | 0.548 | 0.722 | 0.998 |
Heating load results using 10-fold cross-validation for machine learning models.
| Model | Mean | Standard Deviation |
|---|---|---|
| MLP | 0.70 | - |
| KNN | 0.85 | - |
| LR | 0.89 | - |
| RF | 0.92 | - |
| GAM | 0.91 | - |
| 3RF | 0.95 | - |
Fold-wise cross-validation accuracy for 3RF model.
| Fold | Heating Load | Cooling Load |
|---|---|---|
| 1 | 0.724 | 0.862 |
| 2 | 0.975 | 0.982 |
| 3 | 0.976 | 0.958 |
| 4 | 0.973 | 0.976 |
| 5 | 0.981 | 0.989 |
| 6 | 0.974 | 0.966 |
| 7 | 0.978 | 0.982 |
| 8 | 0.976 | 0.968 |
| 9 | 0.973 | 0.956 |
| 10 | 0.985 | 0.975 |
| Mean | 0.95 | 0.96 |
Figure 6. Heating load prediction using the best-performing RF features.
Figure 7. Performance comparison of machine learning models.
Accuracy and computational cost (time) for different variants of RF.
| Model | Heating Load R² | Heating Load Time (s) | Cooling Load R² | Cooling Load Time (s) |
|---|---|---|---|---|
| 2RF | 0.996 | 0.59 | 0.996 | 0.747 |
| 3RF | 0.998 | 0.61 | 0.997 | 0.83 |
| 4RF | 0.997 | 1.34 | 0.997 | 1.97 |
| 5RF | 0.997 | 1.67 | 0.997 | 1.87 |
Hyperparameters used for the deep learning models for HL and CL prediction.
| Parameters | CNN | LSTM | CNN-LSTM |
|---|---|---|---|
| Conv1D | - | 1 | 1 |
| MaxPooling1D | - | Yes | Yes |
| Poolsize | - | 4 | 4 |
| Dense | 1 | 1 | 1 |
| Dropout | 0.2 | - | - |
| Activation | ReLU | ReLU | ReLU |
| Batchsize | 128 | 128 | 64 |
| Optimizer | Adam | Adam | Adam |
| Loss | MAE | MAE | MAE |
| Number of Units | 128 | 128 | 128 |
| Epochs | 2000 | 2000 | 2000 |
| Flatten | Yes | Yes | No |
Figure 8. Training and validation loss of deep learning models for cooling load.
Error statistics of deep learning models for cooling load.
| Model | MSE | MAE | RMSE | R² |
|---|---|---|---|---|
| LSTM | 5.82 | 1.92 | 2.41 | 0.93 |
| CNN | 7.23 | 2.03 | 2.69 | 0.92 |
| CNN-LSTM | 8.33 | 2.18 | 2.89 | 0.90 |
Figure 9. Training and validation loss of deep learning models for heating load.
Performance of deep learning models for the heating load.
| Model | MSE | MAE | RMSE | R² |
|---|---|---|---|---|
| LSTM | 4.95 | 1.73 | 2.22 | 0.95 |
| CNN | 9.23 | 2.17 | 3.04 | 0.90 |
| CNN-LSTM | 11.21 | 2.33 | 3.35 | 0.87 |
Results using 10-fold cross-validation for deep learning models.
| Model | Heating Load Mean | Heating Load Std. Dev. | Cooling Load Mean | Cooling Load Std. Dev. |
|---|---|---|---|---|
| LSTM | 0.90 | - | 0.88 | - |
| CNN | 0.89 | - | 0.85 | - |
| CNN-LSTM | 0.89 | - | 0.87 | - |
Comparison with state-of-the-art approaches (metric: R²).
| Authors | Year | Model | HL | CL |
|---|---|---|---|---|
| [ | 2014 | Ensemble model | 0.998 | 0.986 |
| [ | 2014 | Ensemble model | 0.998 | 0.990 |
| [ | 2017 | RF | 0.998 | 0.991 |
| [ | 2018 | Component based | 0.848 | 0.983 |
| [ | 2019 | XGBoost | 0.98 | 0.94 |
| [ | 2020 | SCE-MLP | - | 0.922 |
| Current study | 2022 | 3RF | 0.999 | 0.997 |