Cheong-Hwan Hur, Han-Eum Lee, Young-Joo Kim, Sang-Gil Kang.
Abstract
Nonintrusive load monitoring (NILM) is a technology that analyzes the load consumption and usage of individual appliances from the total load. NILM is becoming increasingly important because residential and commercial power consumption accounts for about 60% of global energy consumption. Deep neural network-based NILM studies have increased rapidly as hardware computation costs have decreased. A significant amount of labeled data is required to train deep neural networks; however, installing smart meters on every appliance of every household for data collection would incur costs that grow geometrically with the number of households. It is therefore urgent to detect whether an appliance is in use from the total load alone, without installing separate smart meters. In other words, domain adaptation research, which can cope with the huge complexity of the data and generalize information across diverse environments, has become a major challenge for NILM. In this research, we optimize domain adaptation by employing techniques such as robust knowledge distillation based on a teacher-student structure, reduced complexity of the feature distribution based on gkMMD, TCN-based feature extraction, and pseudo-labeling-based domain stabilization. In the experiments, we down-sample the UK-DALE and REDD datasets to match a real environment, then verify the proposed model in various cases and discuss the results.
Keywords: appliance usage classification; domain adaptation; nonintrusive load monitoring; pseudo labeling; semi-supervised learning; transfer learning
Year: 2022 PMID: 35957392 PMCID: PMC9371079 DOI: 10.3390/s22155838
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
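The abstract refers to gkMMD (Gaussian-kernel maximum mean discrepancy) for reducing the complexity of the feature distribution. As a minimal sketch of the underlying quantity, not the paper's implementation, the squared MMD between source- and target-domain feature batches under a Gaussian kernel can be estimated as below; the bandwidth `sigma`, batch shapes, and function names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=4.0):
    """Pairwise Gaussian (RBF) kernel matrix between the rows of a and b."""
    sq_dists = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-sq_dists / (2 * sigma**2))

def gk_mmd2(source, target, sigma=4.0):
    """Biased estimate of squared MMD between two feature batches."""
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2 * k_st

rng = np.random.default_rng(0)
# Matching feature distributions give a near-zero MMD; a mean shift
# (a toy stand-in for domain shift) gives a clearly larger value.
same = gk_mmd2(rng.normal(0, 1, (64, 8)), rng.normal(0, 1, (64, 8)))
shifted = gk_mmd2(rng.normal(0, 1, (64, 8)), rng.normal(3, 1, (64, 8)))
```

Minimizing such a term over extracted features pulls the source and target feature distributions together, which is the usual role of an MMD loss in domain adaptation.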
Figure 1A detailed overall configuration diagram of the proposed semi-supervised domain adaptation for multi-label classification on nonintrusive load monitoring.
Figure 2Step-by-step flowchart of the proposed method.
Figure 3Power Usage and ON Thresholds for House 1 and House 2 in the UK-DALE dataset.
Figure 4Power usage and ON thresholds for House 1 and House 3 in the REDD dataset.
ON thresholds and the number of ON events in the UK-DALE and REDD datasets.
| Appliance | UK-DALE House 1 Threshold (W) | UK-DALE House 1 ON Events | UK-DALE House 2 Threshold (W) | UK-DALE House 2 ON Events | REDD House 1 Threshold (W) | REDD House 1 ON Events | REDD House 3 Threshold (W) | REDD House 3 ON Events |
|---|---|---|---|---|---|---|---|---|
| DW | 2000 | 4431 | 1800 | 3236 | 1000 | 6712 | 650 | 2934 |
| FG | 250 | 2441 | 400 | 5291 | 400 | 2944 | 350 | 3344 |
| KT | 2200 | 4495 | 2000 | 1694 | - | - | - | - |
| MV | 1400 | 1242 | 1200 | 4218 | 1200 | 4809 | 1600 | 1327 |
| WM | 1800 | 4980 | 1500 | 1524 | 2500 | 4796 | 2200 | 5764 |
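The event counts above can, in principle, be reproduced by thresholding each appliance's power trace. A hedged sketch, assuming an ON event is a rising edge above the threshold; the paper's exact event rules (e.g. a minimum ON duration or debouncing) may differ:

```python
import numpy as np

def count_on_events(power, threshold):
    """Count ON events as upward threshold crossings in a power trace.

    This counts every rising edge; the paper's exact event definition
    is an assumption here and may include extra filtering.
    """
    on = np.asarray(power) >= threshold
    # A rising edge is an ON sample whose predecessor was OFF.
    rising = on[1:] & ~on[:-1]
    return int(rising.sum()) + int(on[0])

# Toy kettle trace (watts) against the 2200 W UK-DALE House 1 threshold:
trace = [0, 50, 2500, 2600, 100, 0, 2400, 2300, 0]
count_on_events(trace, 2200)  # → 2
```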
Training parameters.
| Parameter Description | Value |
|---|---|
| Number of TCN blocks | 8 (TN), 5 (SN) |
| Number of filters in each TCN block | 128 (TN), 64 (SN) |
| Filter size | 3 |
| Number of fully connected layers | 5 (TN), 3 (SN), 2 (domain classifier) |
| Dilation factor | - |
| Activation function | ReLU |
| Dropout probability | 0.1 |
| Maximum number of epochs | 200 |
| Minimum number of early-stopping epochs | 4 |
| Mini-batch size | 512 |
| Learning rate | 3 × 10⁻³ |
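The dilation-factor cell was lost in the source record. Assuming the common TCN scheme of doubling the dilation per block (1, 2, 4, ...), which is an assumption rather than a value from the paper, the receptive field implied by the block counts and filter size above can be computed as:

```python
def tcn_receptive_field(num_blocks, kernel_size, convs_per_block=1):
    """Receptive field of stacked dilated causal convolutions with
    dilation doubling per block (1, 2, 4, ...), one conv per block
    assumed; real TCN blocks often stack two convs per block."""
    dilations = [2**i for i in range(num_blocks)]
    return 1 + convs_per_block * (kernel_size - 1) * sum(dilations)

# Teacher network (TN): 8 blocks, filter size 3 → 511 input samples.
tcn_receptive_field(8, 3)
# Student network (SN): 5 blocks, filter size 3 → 63 input samples.
tcn_receptive_field(5, 3)
```

This illustrates why dilated convolutions are attractive for load traces: the receptive field grows exponentially with depth while the parameter count grows only linearly.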
F1 score comparison of domain adaptation within the same dataset.
| Appliance | Method | UK-DALE | UK-DALE | REDD | REDD |
|---|---|---|---|---|---|
| DW | Baseline | 0.781 | 0.805 | | |
| | TCN-DA | 0.832 | 0.827 | | |
| | gkMMD-DA | 0.778 | 0.793 | | |
| | TS-DA | 0.812 | 0.826 | | |
| | PL-DA | 0.787 | 0.811 | | |
| | Ours | 0.822 | 0.832 | | |
| | Improvement | 5.25% | 3.35% | | |
| FG | Baseline | 0.833 | 0.834 | 0.817 | 0.818 |
| | TCN-DA | 0.842 | 0.841 | 0.829 | 0.840 |
| | gkMMD-DA | 0.837 | 0.836 | 0.819 | 0.819 |
| | TS-DA | 0.850 | 0.853 | 0.824 | 0.827 |
| | PL-DA | 0.834 | 0.845 | 0.818 | 0.819 |
| | Ours | 0.875 | 0.872 | 0.843 | 0.852 |
| | Improvement | 5.04% | 4.56% | 3.18% | 4.16% |
| KT | Baseline | 0.761 | 0.832 | - | - |
| | TCN-DA | 0.811 | 0.839 | - | - |
| | gkMMD-DA | 0.753 | 0.820 | - | - |
| | TS-DA | 0.807 | 0.835 | - | - |
| | PL-DA | 0.770 | 0.833 | - | - |
| | Ours | 0.817 | 0.868 | - | - |
| | Improvement | 7.36% | 4.33% | - | - |
| MV | Baseline | 0.742 | 0.791 | 0.793 | 0.790 |
| | TCN-DA | 0.751 | 0.798 | 0.806 | 0.721 |
| | gkMMD-DA | 0.746 | 0.795 | 0.797 | 0.774 |
| | TS-DA | 0.753 | 0.803 | 0.804 | 0.798 |
| | PL-DA | 0.744 | 0.796 | 0.794 | 0.793 |
| | Ours | 0.774 | 0.812 | 0.814 | 0.818 |
| | Improvement | 4.31% | 2.65% | 2.65% | 3.54% |
| WM | Baseline | 0.615 | 0.611 | 0.841 | 0.782 |
| | TCN-DA | 0.725 | 0.708 | 0.844 | 0.799 |
| | gkMMD-DA | 0.623 | 0.625 | 0.842 | 0.786 |
| | TS-DA | 0.668 | 0.653 | 0.832 | 0.783 |
| | PL-DA | 0.623 | 0.615 | 0.843 | 0.783 |
| | Ours | 0.736 | 0.713 | 0.870 | 0.832 |
| | Improvement | 19.67% | 16.69% | 3.45% | 6.39% |
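The Improvement rows are consistent with the relative F1 gain of Ours over Baseline. A short sketch that reproduces, for example, the 5.25% DW figure and the 19.67% WM figure; the `f1_score` helper is illustrative and not part of the paper's code:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def improvement_pct(ours, baseline):
    """Relative F1 gain of the proposed model over the baseline, in %."""
    return round((ours / baseline - 1) * 100, 2)

improvement_pct(0.822, 0.781)  # → 5.25  (DW, first UK-DALE column)
improvement_pct(0.736, 0.615)  # → 19.67 (WM, first UK-DALE column)
```

Note that the improvement is reported relative to the baseline, not relative to the strongest single-technique ablation (e.g. TCN-DA).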
F1 score comparison of TCN + gkMMD domain adaptation within the same dataset.
| Appliance | UK-DALE | UK-DALE | REDD | REDD |
|---|---|---|---|---|
| DW | 0.823 | 0.828 | | |
| FG | 0.857 | 0.854 | 0.834 | 0.847 |
| KT | 0.813 | 0.841 | - | - |
| MV | 0.762 | 0.805 | 0.809 | 0.764 |
| WM | 0.730 | 0.709 | 0.852 | 0.815 |
F1 score comparison of domain adaptation between different datasets.
| Appliance | Method | | |
|---|---|---|---|
| DW | Baseline | 0.741 | 0.712 |
| | TCN-DA | 0.779 | 0.737 |
| | gkMMD-DA | 0.736 | 0.713 |
| | TS-DA | 0.770 | 0.745 |
| | PL-DA | 0.747 | 0.714 |
| | Ours | 0.778 | 0.747 |
| | Improvement | 4.99% | 4.92% |
| FG | Baseline | 0.786 | 0.764 |
| | TCN-DA | 0.794 | 0.787 |
| | gkMMD-DA | 0.787 | 0.769 |
| | TS-DA | 0.800 | 0.772 |
| | PL-DA | 0.787 | 0.770 |
| | Ours | 0.821 | 0.797 |
| | Improvement | 4.45% | 4.32% |
| MV | Baseline | 0.719 | 0.739 |
| | TCN-DA | 0.726 | 0.716 |
| | gkMMD-DA | 0.719 | 0.746 |
| | TS-DA | 0.729 | 0.749 |
| | PL-DA | 0.717 | 0.743 |
| | Ours | 0.742 | 0.763 |
| | Improvement | 3.2% | 3.25% |
| WM | Baseline | 0.563 | 0.758 |
| | TCN-DA | 0.669 | 0.773 |
| | gkMMD-DA | 0.573 | 0.766 |
| | TS-DA | 0.610 | 0.758 |
| | PL-DA | 0.568 | 0.763 |
| | Ours | 0.672 | 0.769 |
| | Improvement | 19.36% | 1.45% |
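The pseudo-labeling-based domain stabilization mentioned in the abstract is commonly realized by keeping only confident model predictions on unlabeled target-domain data. A minimal multi-label sketch; the thresholds and array shapes are assumptions rather than the paper's values:

```python
import numpy as np

def pseudo_label(probs, hi=0.9, lo=0.1):
    """Assign multi-label pseudo-labels only where the model is confident.

    probs: (n_windows, n_appliances) predicted ON probabilities for
    unlabeled target-domain windows. Returns labels in {1, 0, -1},
    where -1 marks cells too uncertain to train on. The hi/lo
    confidence thresholds are assumptions, not the paper's values.
    """
    labels = np.full(probs.shape, -1, dtype=int)
    labels[probs >= hi] = 1   # confidently ON
    labels[probs <= lo] = 0   # confidently OFF
    return labels

probs = np.array([[0.97, 0.05, 0.50],
                  [0.30, 0.95, 0.02]])
pseudo_label(probs)
# rows: [1, 0, -1] and [-1, 1, 0] — the 0.50 and 0.30 cells are skipped.
```

Uncertain cells are simply masked out of the loss, so early, noisy target-domain predictions do not destabilize training.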