Mengran Zhou, Shuai Shao, Xu Wang, Ziwei Zhu, Feng Hu.
Abstract
Commercial load is an essential demand-side resource. Monitoring commercial loads helps commercial customers understand their energy usage and improve energy efficiency, and it helps electric utilities develop demand-side management strategies that keep the power system operating stably. However, existing non-intrusive methods cannot monitor multiple commercial loads simultaneously and do not account for the high correlation and severe imbalance among commercial loads. This paper therefore proposes a deep learning-based non-intrusive commercial load monitoring method to solve these problems. The method takes the total power signal of a commercial building as input and directly determines the state and power consumption of several specific appliances. Its key elements are a new neural network structure called TTRNet and a new loss function called MLFL. TTRNet is a multi-label classification model that learns correlation information autonomously through its network structure. MLFL is a loss function designed for multi-label classification tasks that addresses the imbalance problem and improves monitoring accuracy for challenging loads. To validate the proposed method, experiments are performed separately in seen and unseen scenarios using a public dataset. In the seen scenario, the method achieves an average F1 score of 0.957, which is 7.77% better than existing multi-label classification methods; in the unseen scenario, the average F1 score is 0.904, which is 1.92% better than existing methods. The experimental results show that the proposed method is both effective and practical.
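The abstract describes MLFL only at a high level; since the ablation study lists a focal-loss component, a per-label binary focal loss sketches the kind of imbalance-aware objective described. This is a generic formulation, not necessarily the paper's exact MLFL; the function name and the α/γ defaults are illustrative assumptions:

```python
import numpy as np

def multilabel_focal_loss(y_true, y_prob, gamma=2.0, alpha=0.25):
    """Per-label binary focal loss, averaged over samples and labels.

    Labels the model already classifies confidently contribute little;
    hard, misclassified labels dominate the loss, which counteracts the
    imbalance between rarely-ON and mostly-ON appliances.
    """
    eps = 1e-7
    p = np.clip(y_prob, eps, 1.0 - eps)
    pt = np.where(y_true == 1, p, 1.0 - p)         # prob. assigned to the true label
    w = np.where(y_true == 1, alpha, 1.0 - alpha)  # class-balance weight
    return float(np.mean(-w * (1.0 - pt) ** gamma * np.log(pt)))
```

With `gamma=0` and `alpha=0.5` this reduces, up to a constant factor, to mean binary cross-entropy, which makes the focusing effect of `gamma` easy to isolate in experiments.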
Keywords: commercial load; correlation; deep learning; imbalance; multi-label classification; non-intrusive load monitoring
Year: 2022 PMID: 35890929 PMCID: PMC9320136 DOI: 10.3390/s22145250
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Comparison of Transformer-based NILM methods.
| Reference | Name | Publication | Dataset | Task | Main Components | Loss |
|---|---|---|---|---|---|---|
| Lin et al. | MA-net | August 2020 | REDD | r and c | Encoders | MSE |
| Yue et al. | BERT4NILM | November 2020 | REDD | r and c | Encoders | MSE |
| Yue et al. | ELTransformer | March 2022 | REDD | r and c | Encoders | MSE |
| Sykiotis et al. | ELECTRIcity | April 2022 | REDD | r and c | Encoders | MSE |
Figure 1. The TTRNet model architecture.
Figure 2. A Transformer block of the TTRNet.
Figure 3. The Temporal Pooling module.
Figure 4. The RethinkNet module.
Figure 5. Smart meter installation locations in the Academic Block.
Attribute information for electrical load data collected from the Academic Block.
| Dataset Properties | Value |
|---|---|
| Number of main meters | 1 |
| Number of sub-meters | 8 |
| Sampling interval | 30 s |
| Sampling range | 1 June 2014–1 July 2014 |
The main meter data information of commercial buildings in the synthetic dataset.
| Building | Actual Location | Main Meter Load Composition | Main Meter Measurement Composition |
|---|---|---|---|
| Building1 | Academic Block | Total load | SM1 |
| Building2 | Academic Block | Total load-Lifts load | SM1-SM6 |
| Building3 | Academic Block | Total load-Light load | SM1-SM7 |
| Building4 | Academic Block | Total load-Socket1 load | SM1-SM8 |
| Building5 | Academic Block | Total load-Socket2 load | SM1-SM9 |
Parameter information for obtaining the activation state of the AHU device.
| Parameter | AHU0 | AHU1 | AHU2 | AHU5 |
|---|---|---|---|---|
| Max. power limit (W) | 5000 | 4500 | 4500 | 12,000 |
| Active power threshold (W) | 500 | 450 | 450 | 1200 |
| Min. OFF duration | 15 | 15 | 15 | 15 |
| Min. ON duration | 15 | 15 | 15 | 15 |
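The parameters above follow the usual NILM activation-extraction recipe: a sample is ON when its power exceeds the threshold, and ON runs or OFF gaps shorter than the minimum durations are smoothed away. A minimal sketch under that interpretation (the function name and the order of the two smoothing passes are assumptions, and the durations are taken to be in samples):

```python
def activation_states(power, threshold, min_off, min_on):
    """Binary ON/OFF states from a power series (watts).

    1. Threshold each sample.
    2. Close OFF gaps shorter than min_off samples between ON periods.
    3. Drop ON runs shorter than min_on samples.
    """
    state = [1 if p >= threshold else 0 for p in power]

    def runs(s):
        # list of (value, start, end) runs of equal values
        out, start = [], 0
        for i in range(1, len(s) + 1):
            if i == len(s) or s[i] != s[start]:
                out.append((s[start], start, i))
                start = i
        return out

    # close short OFF gaps that lie between two ON periods
    for val, a, b in runs(state):
        if val == 0 and b - a < min_off and a > 0 and b < len(state):
            state[a:b] = [1] * (b - a)
    # remove spurious short ON runs
    for val, a, b in runs(state):
        if val == 1 and b - a < min_on:
            state[a:b] = [0] * (b - a)
    return state
```

If the durations are indeed counted in samples, 15 samples at the 30 s interval above correspond to a 7.5 min minimum ON/OFF period.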
Dataset partitioning for the seen and unseen scenarios.
| Scenario | Split | Building 1 | Building 2 | Building 3 | Building 4 | Building 5 |
|---|---|---|---|---|---|---|
| seen | Training (%) | 70 | 70 | 70 | 70 | 70 |
| seen | Validation (%) | 15 | - | - | - | - |
| seen | Testing (%) | 15 | - | - | - | - |
| unseen | Training (%) | 70 | - | - | - | 70 |
| unseen | Validation (%) | 15 | - | - | - | 15 |
| unseen | Testing (%) | - | 100 | - | - | - |
Performance of the method in the seen scenario, including the average and 90% interval of each load over twenty experiments and the average of all loads.
| Metric | AHU0 | (90% interval) | AHU1 | (90% interval) | AHU2 | (90% interval) | AHU5 | (90% interval) | Avg |
|---|---|---|---|---|---|---|---|---|---|
| F1 | 0.892 | (0.817,0.944) | 0.951 | (0.865,0.996) | 0.992 | (0.991,0.993) | 0.991 | (0.983,0.994) | 0.957 |
| Precision | 0.986 | (0.964,1.000) | 0.978 | (0.884,1.000) | 0.998 | (0.997,0.999) | 0.990 | (0.973,0.995) | 0.988 |
| Recall | 0.819 | (0.691,0.912) | 0.928 | (0.836,0.996) | 0.986 | (0.983,0.989) | 0.992 | (0.987,0.994) | 0.931 |
| Accuracy | 0.938 | (0.901,0.965) | 0.979 | (0.948,0.998) | 0.994 | (0.993,0.995) | 0.994 | (0.988,0.996) | 0.976 |
| MCC | 0.859 | (0.777,0.920) | 0.940 | (0.845,0.995) | 0.988 | (0.985,0.990) | 0.986 | (0.974,0.991) | 0.943 |
| MAE | 185.19 | (121.69,270.67) | 142.76 | (108.08,220.30) | 80.41 | (77.92,83.34) | 380.09 | (365.31,438.53) | 197.13 |
| SAE | −0.148 | (−0.292,−0.037) | −0.059 | (−0.174,0.071) | −0.057 | (−0.061,−0.053) | 0.070 | (0.060,0.090) | −0.049 |
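MAE and SAE in the tables are the standard NILM regression metrics: MAE is the mean absolute error of the estimated appliance power, and SAE compares the total estimated energy with the total actual energy. The signed SAE variant sketched here (negative values indicate underestimation) matches the negative entries in the tables; some NILM papers instead define SAE with an absolute value. The function names are illustrative:

```python
def mae(y_true, y_pred):
    """Mean absolute error of the estimated power (W)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def sae(y_true, y_pred):
    """Signed aggregate error: relative error of total estimated energy.

    Negative values mean the total energy was underestimated.
    """
    return (sum(y_pred) - sum(y_true)) / sum(y_true)
```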
Figure 6. Comparison of the output state and the real state in the seen scenario.
Performance of the method in the unseen scenario, including the average and 90% interval of each load over twenty experiments and the average of all loads.
| Metric | AHU0 | (90% interval) | AHU1 | (90% interval) | AHU2 | (90% interval) | AHU5 | (90% interval) | Avg |
|---|---|---|---|---|---|---|---|---|---|
| F1 | 0.864 | (0.854,0.874) | 0.827 | (0.812,0.843) | 0.940 | (0.938,0.942) | 0.983 | (0.982,0.985) | 0.904 |
| Precision | 0.910 | (0.877,0.926) | 0.871 | (0.829,0.916) | 0.955 | (0.951,0.960) | 0.989 | (0.983,0.995) | 0.932 |
| Recall | 0.823 | (0.791,0.862) | 0.788 | (0.743,0.829) | 0.926 | (0.917,0.931) | 0.978 | (0.971,0.985) | 0.878 |
| Accuracy | 0.883 | (0.877,0.889) | 0.867 | (0.857,0.880) | 0.941 | (0.940,0.943) | 0.984 | (0.982,0.985) | 0.919 |
| MCC | 0.765 | (0.754,0.776) | 0.722 | (0.701,0.749) | 0.883 | (0.880,0.887) | 0.967 | (0.964,0.971) | 0.834 |
| MAE | 359.17 | (345.36,371.95) | 515.57 | (470.77,551.56) | 266.09 | (260.46,271.35) | 592.14 | (577.86,617.02) | 433.24 |
| SAE | −0.095 | (−0.148,−0.015) | −0.094 | (−0.169,0.001) | −0.030 | (−0.045,−0.021) | −0.012 | (−0.024,−0.001) | −0.058 |
Figure 7. Comparison of the instantaneous power estimates and the actual power in the unseen scenario.
Ablation studies of different components of TTRNet in the seen scenarios.
| Model | Transformer | Temporal Pooling | RethinkNet | Focal Loss | AHU0 | AHU1 | AHU2 | AHU5 | Avg |
|---|---|---|---|---|---|---|---|---|---|
| Baseline | − | √ | − | − | | | | | |
| ModelA | − | − | − | − | | | | | |
| ModelB | √ | √ | − | − | | | | | |
| ModelC | − | √ | √ | − | | | | | |
| ModelD | − | √ | √ | √ | | | | | |
| ModelE | √ | √ | √ | − | | | | | |
| TTRNet | √ | √ | √ | √ | | | | | |
Performance comparison of different multi-label classification NILM methods in seen scenarios. The number in bold is the largest of the three model comparisons.
| Device | Model | F1 score | Precision | Recall | Accuracy | MCC | MAE | SAE |
|---|---|---|---|---|---|---|---|---|
| AHU0 | CNN | 0.871 | 0.783 | 0.982 | 0.907 | 0.812 | 273.71 | 0.286 |
| AHU0 | TP-NILM | 0.875 | | | | | | 0.160 |
| AHU0 | TTRNet | 0.892 | 0.986 | 0.819 | 0.938 | 0.859 | 185.19 | −0.148 |
| AHU1 | CNN | 0.683 | 0.529 | | 0.800 | 0.612 | 702.30 | 0.836 |
| AHU1 | TP-NILM | 0.772 | 0.650 | 0.959 | 0.872 | 0.716 | 484.95 | 0.485 |
| AHU1 | TTRNet | 0.951 | 0.978 | 0.928 | 0.979 | 0.940 | 142.76 | −0.059 |
| AHU2 | CNN | 0.871 | 0.781 | 0.985 | 0.894 | 0.798 | 422.14 | 0.206 |
| AHU2 | TP-NILM | 0.923 | 0.873 | 0.984 | 0.938 | 0.879 | 272.84 | 0.088 |
| AHU2 | TTRNet | 0.992 | 0.998 | 0.986 | 0.994 | 0.988 | 80.41 | −0.057 |
| AHU5 | CNN | 0.880 | 0.795 | 0.985 | 0.902 | 0.813 | 1235.75 | 0.325 |
| AHU5 | TP-NILM | 0.930 | 0.874 | | 0.945 | 0.891 | 847.91 | 0.223 |
| AHU5 | TTRNet | 0.991 | 0.990 | 0.992 | 0.994 | 0.986 | 380.09 | 0.070 |
Performance comparison of different multi-label classification NILM methods in unseen scenarios. The number in bold is the largest of the three model comparisons.
| Device | Model | F1 score | Precision | Recall | Accuracy | MCC | MAE | SAE |
|---|---|---|---|---|---|---|---|---|
| AHU0 | CNN | 0.862 | 0.834 | 0.894 | 0.871 | 0.744 | 393.80 | 0.072 |
| AHU0 | TP-NILM | 0.873 | | | | | | |
| AHU0 | TTRNet | 0.864 | 0.910 | 0.823 | 0.883 | 0.765 | 359.17 | −0.095 |
| AHU1 | CNN | 0.763 | 0.735 | 0.793 | 0.801 | 0.593 | 692.26 | 0.080 |
| AHU1 | TP-NILM | 0.822 | 0.831 | | 0.858 | 0.704 | 537.67 | |
| AHU1 | TTRNet | 0.827 | 0.871 | 0.788 | 0.867 | 0.722 | 515.57 | −0.094 |
| AHU2 | CNN | 0.862 | 0.835 | 0.891 | 0.858 | 0.718 | 551.67 | 0.067 |
| AHU2 | TP-NILM | 0.903 | 0.927 | 0.881 | 0.906 | 0.813 | 384.15 | −0.048 |
| AHU2 | TTRNet | 0.940 | 0.955 | 0.926 | 0.941 | 0.883 | 266.09 | −0.030 |
| AHU5 | CNN | 0.901 | 0.882 | 0.921 | 0.900 | 0.801 | 1371.48 | 0.044 |
| AHU5 | TP-NILM | 0.933 | 0.935 | 0.933 | 0.934 | 0.869 | 1055.06 | |
| AHU5 | TTRNet | 0.983 | 0.989 | 0.978 | 0.984 | 0.967 | 592.14 | −0.012 |