| Literature DB >> 34206718 |
Youness Arjoune, Sai Peri, Niroop Sugunaraj, Avhishek Biswas, Debanjan Sadhukhan, Prakash Ranganathan.
Abstract
Heat loss quantification (HLQ) is an essential step in improving a building's thermal performance and optimizing its energy usage. While this problem is well studied in the literature, most existing studies are either qualitative or minimally data-driven quantitative studies that rely on localized building envelope points and are thus not suitable for automated solutions in energy audit applications. This work attempts to fill this knowledge gap by utilizing intensive thermal data (more than 100,000 images), constituting a relatively new area of analysis in energy audit applications. Specifically, we demonstrate a novel process using deep-learning methods to segment more than 100,000 thermal images collected from an unmanned aerial system (UAS). Quantifying the heat loss for a building envelope requires multiple stages of computation: object detection (using Mask R-CNN/Faster R-CNN), surface temperature estimation (using two clustering methods), and finally calculation of the overall heat transfer coefficient (the U-value). The proposed model was applied to eleven academic campuses across the state of North Dakota. The preliminary findings indicate that Mask R-CNN outperformed the other instance segmentation models, with an mIoU of 73% for facades, 55% for windows, 67% for roofs, 24% for doors, and 11% for HVAC units. Two clustering methods, namely K-means and threshold-based clustering (TBC), were deployed to estimate surface temperatures, with TBC providing more consistent estimates than K-means across all times of the day. Our analysis demonstrated that thermal efficiency depends not only on the accurate acquisition of thermal images but also on other factors, such as the building geometry and seasonal weather parameters, including the outside/inside building temperatures, wind, time of day, and indoor heating/cooling conditions.
Finally, the resultant U-values of various building envelopes were compared with recommendations from the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) building standards.
Keywords: ASHRAE; Mask R-CNN; U-value; UASs; clustering; deep learning; heat loss quantification; instance segmentation; mean average precision; thermal imagery
Year: 2021 PMID: 34206718 PMCID: PMC8271532 DOI: 10.3390/s21134375
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
State-of-the-art computer vision techniques.
| Computer Vision Task | Model | Structure/Backbone | Metrics | Comments |
|---|---|---|---|---|
| Detection | R-CNN | Selective Search algorithm + SVM | mAP of 62% on PASCAL VOC 2012 | Slow because of the large number of region proposals (2000); 47 s/test image |
| Detection | Fast R-CNN | Selective Search algorithm + FC layers | mAP of 39.3%@0.5 on MS COCO and 66% on PASCAL VOC 2012 | Fast compared with R-CNN (0.32 s/test image); trains the very deep VGG16 network 9× faster than R-CNN and is 213× faster at test time |
| Detection | Faster R-CNN | Region Proposal Network + ROI pooling + FC layers | mAP of 42.7% on MS COCO and 78.8% on PASCAL VOC 2012 | Removes the Selective Search algorithm |
| Detection | SSD | ResNet101 + FC layers | mAP of 31.2% | Runs at 125 ms |
| Real-Time Object Detection | YOLO | Single regression from image pixels to class bounding boxes (Darknet-53) | YOLOv3-320: mAP 28.2%; YOLOv3-416: mAP 31%; YOLOv3-608: mAP 33% | Supports real time (up to 45 FPS for YOLOv3-320) |
| Semantic Segmentation | DeconvNet | ConvNet (VGG-16) concatenated with DeconvNet | mAP of 70% on PASCAL VOC 2012 | |
| Semantic Segmentation | FCN | Introduces skip connections to fuse feature layers of different scales | Graph-FCN achieves mIoU of 65.91% on PASCAL VOC; FCN-32 achieves mIoU of 36.64% on PASCAL-Context | |
| Semantic Segmentation | ParseNet | Improved FCN | ParseNet baseline achieves mIoU of 67.3% and ParseNet achieves 69.8% on PASCAL VOC | |
| Semantic Segmentation | DeepLab | Atrous convolution for dense feature extraction + atrous spatial pyramid pooling + fully connected conditional random fields | DeepLab achieves mIoU of 64.96% and DeepLab-LargeFOV achieves 65.82% on PASCAL VOC 2012 | Objective function is optimized in all layers with respect to the weights by SGD; the ImageNet classifier in the last layer is replaced with one matching the number of target classes |
| Instance Segmentation | EncNet | ResNet + Context Encoding Module | mIoU of 52.6% on PASCAL-Context | Introduces very little extra computation over the original FCN network; the context encoder is lightweight |
| Instance Segmentation | Mask R-CNN | RPN + ROIAlign + mask head + ResNet101 | mAP of 39.8% (keypoints) and 63.1% (masks) on MS COCO | Replaces ROI pooling with ROIAlign in the Faster R-CNN architecture and adds an FCN branch for segmentation |
| Panoptic Segmentation | Panoptic segmentation | Unified semantic and instance segmentation | | Unique evaluation methodology |
Figure 1. Illustration of the Mask R-CNN architecture with an input image at each stage of detection.
State-of-the-art image clustering techniques.
| Clustering Method | Performance |
|---|---|
| K-means/DBSCAN | K = 15 was optimal for accuracy and computation time; further segmentation by DBSCAN yielded 136 clusters for precision. |
| K-means/Threshold | K = 4 was optimal and yielded a 99.7% accuracy rate in hotspot detection for an ensemble ML model called Voting (Naive Bayes + REPTree). |
| Dual Clustering Scheme | Precision and recall averaged 80% and 58.3%, respectively, over 10 sets of images. |
| IFS/Fuzzy C-means | Average segmentation and Dice scores of 99% for images corrupted with varying levels of noise. |
| IP-MS | Averaged 1.4 s per sample image versus 2.3 s for the K-means algorithm; higher accuracy than K-means in terms of blue color intensity representations. |
| DEMP-k (Directly Estimated Misclassification Probabilities) | Tested on digit recognition; gives correct classification. |
Figure 2. Data-driven approach for thermal performance [4].
Dataset breakdowns.
| Datasets | # of Facades | # of Windows | # of Roofs | # of HVACs | # of Doors | Total Images |
|---|---|---|---|---|---|---|
| Dataset 1 | 2060 | 1109 | 634 | 343 | 100 | 2562 |
| Dataset 2 | 10,190 | 13,987 | 1894 | 0 | 126 | 10,971 |
| Dataset 3 | 2576 | 5207 | 492 | 2085 | 282 | 2541 |
| Dataset 4 | 26,217 | 18,684 | 11,747 | 1616 | 6448 | 26,152 |
| Test Dataset | 207 | 176 | 95 | 38 | 43 | 213 |
| Total | 41,250 | 39,163 | 14,862 | 4082 | 6999 | 42,439 |
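As a quick consistency check, the Total row of the dataset table can be recomputed from the individual dataset counts (all values taken verbatim from the table):

```python
# Sanity check: the Total row of the dataset breakdown equals the sum of
# Datasets 1-4 plus the test dataset, for every column.
facades = [2060, 10_190, 2576, 26_217, 207]
windows = [1109, 13_987, 5207, 18_684, 176]
roofs   = [634, 1894, 492, 11_747, 95]
hvacs   = [343, 0, 2085, 1616, 38]
doors   = [100, 126, 282, 6448, 43]
images  = [2562, 10_971, 2541, 26_152, 213]

totals = {name: sum(col) for name, col in [
    ("facades", facades), ("windows", windows), ("roofs", roofs),
    ("hvacs", hvacs), ("doors", doors), ("images", images)]}
print(totals)  # every value matches the table's Total row
```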
Figure 3. Flowchart of thermal image clustering using K-means and threshold-based clustering.
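The threshold-based clustering (TBC) branch of this flow can be illustrated with a minimal sketch. The cut-off rule below (mean plus one standard deviation of the pixel temperatures) is an assumed example criterion, not necessarily the paper's exact rule:

```python
import statistics

# Illustrative sketch (not the authors' exact pipeline): threshold-based
# clustering splits thermal-image pixels into "hotspot" and "background"
# sets. The cut-off (mean + n_sigma * std) is an assumed example criterion.
def threshold_hotspots(temps_k, n_sigma=1.0):
    cutoff = statistics.mean(temps_k) + n_sigma * statistics.pstdev(temps_k)
    hotspots = [t for t in temps_k if t > cutoff]
    return hotspots, cutoff

# Hypothetical pixel temperatures (K) for a small patch of a facade.
temps = [268.1, 268.3, 268.2, 268.4, 274.9, 275.2]
hotspots, cutoff = threshold_hotspots(temps)
hotspot_avg = sum(hotspots) / len(hotspots)  # mean hotspot temperature
```

The hotspot mean would then feed the surface-temperature stage of the pipeline.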
Figure 4. Calculated U-values based on the variation of c.
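For context, a common exterior-thermography estimate of the U-value combines radiative and convective surface losses, normalized by the indoor-outdoor temperature difference. The sketch below is illustrative only: the emissivity (0.95) and convective coefficient h_c (3.8 W/(m²·K)) are assumed values, and the paper's exact formulation (including the role of the constant c) may differ:

```python
# Illustrative U-value estimate from exterior thermography. Emissivity and
# h_c are assumed values; the paper's exact formulation may differ.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(t_surface, t_in, t_out, emissivity=0.95, h_c=3.8):
    """U-value in W/(m^2 K): radiative + convective losses at the exterior
    surface, divided by the indoor-outdoor temperature difference."""
    radiative = emissivity * SIGMA * (t_surface**4 - t_out**4)
    convective = h_c * (t_surface - t_out)
    return (radiative + convective) / (t_in - t_out)

# Sample temperatures (K) drawn from the Twamley morning table below:
# surface ~268.5, thermocouple (indoor) 281.05, external air 266.15.
u_wall = u_value(268.5, 281.05, 266.15)
```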
Confusion matrix.
| | Predicted Positive (1) | Predicted Negative (0) |
|---|---|---|
| Actual Positive (1) | TP | FN |
| Actual Negative (0) | FP | TN |
Figure 5. Computing the Intersection over Union (IoU), calculated by dividing the area of overlap between the bounding boxes by the area of their union.
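The IoU computation illustrated in the figure can be written directly for axis-aligned bounding boxes:

```python
# IoU for two axis-aligned bounding boxes given as (x1, y1, x2, y2):
# area of overlap divided by area of union, as in the figure.
def box_iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

iou = box_iou((0, 0, 10, 10), (5, 5, 15, 15))  # overlap 25, union 175
```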
Hyperparameters of the best-performing model at each training session (Mask R-CNN).
| Training Dataset | Learning Rate | Epochs | Training Time |
|---|---|---|---|
| DS1 | 0.001 | 75 | 7 h 35 m |
| DS2 | 0.0001 | 100 | 30 h 38 m |
| DS3 | 0.00001 | 150 | 21 h 47 m |
| DS4 | 0.0001 | 200 | 288 h 35 m |
The average precision of the computer vision algorithms trained and tested on OGI thermal images, at three different threshold values, for five classes: windows, facades, roofs, HVACs, and doors.
| Classes | AP (threshold 1) | AP (threshold 2) | AP (threshold 3) | Intersection | Union | mIoU |
|---|---|---|---|---|---|---|
| Window | 0.39 | 0.39 | 0.18 | 12,648.28 | 27,679.36 | 0.45 |
| Facades | 0.34 | 0.31 | 0.12 | 39,487.61 | 118,876.24 | 0.33 |
| Roof | 0.41 | 0.31 | 0.07 | 20,042.73 | 55,555.64 | 0.36 |
| HVAC | 0.27 | 0.27 | 0.09 | 192.43 | 1085.39 | 0.17 |
| Door | 0.06 | 0.06 | 0 | 414.12 | 6955.88 | 0.05 |
| **Faster R-CNN Inception ResNetV2 (overall)** | 0.29 | 0.27 | 0.09 | 15,503,247 | 44,762,491 | 0.34 |
| Window | 0.26 | 0.10 | 0.03 | 10,011.70 | 28,303.41 | 0.35 |
| Facades | 0.41 | 0.32 | 0.05 | 29,035.47 | 126,352.91 | 0.22 |
| Roof | 0.38 | 0.26 | 0.13 | 16,002.26 | 59,677.54 | 0.26 |
| HVAC | 0.09 | 0 | 0 | 314.15 | 1927.03 | 0.16 |
| Door | 0.35 | 0.26 | 0 | 674.93 | 6983.51 | 0.09 |
| **Faster R-CNN Inception V2 (overall)** | 0.30 | 0.19 | 0.04 | 11,936,209 | 47,551,061 | 0.25 |
| Window | 0.28 | 0.19 | 0.12 | 10,326.40 | 29,569.77 | 0.34 |
| Facades | 0.31 | 0.26 | 0.07 | 49,439.15 | 129,971.74 | 0.38 |
| Roof | 0.42 | 0.16 | 0.012 | 16,804.96 | 53,316.74 | 0.31 |
| HVAC | 0.23 | 0 | 0.012 | 117.57 | 1011.42 | 0.11 |
| Door | 0 | 0 | 0 | 411.18 | 8357.96 | 0.04 |
| **Faster R-CNN ResNet 50 (overall)** | 0.25 | 0.20 | 0.07 | 16,422,148 | 47,334,488 | 0.34 |
| Window | 0.70 | 0.69 | 0.44 | 21,545.78 | 39,170.45 | 0.55 |
| Facade | 0.81 | 0.79 | 0.67 | 131,982.81 | 179,617.92 | 0.73 |
| Roof | 0.67 | 0.67 | 0.67 | 53,879.35 | 80,260.40 | 0.67 |
| HVAC | 0.27 | 0.27 | 0.18 | 508.23 | 4501.80 | 0.11 |
| Door | 0.67 | 0.67 | 0.68 | 2815.73 | 11,665.11 | 0.24 |
| **Mask R-CNN (overall)** | 0.62 | 0.62 | 0.53 | 14,961,967 | 22,380,315 | 0.66 |
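The per-class mIoU column for the Mask R-CNN rows can be reproduced directly from the reported Intersection and Union pixel counts:

```python
# Recomputing the per-class mIoU column for the Mask R-CNN rows from the
# Intersection and Union pixel counts reported in the table above.
mask_rcnn = {
    "Window": (21_545.78, 39_170.45),
    "Facade": (131_982.81, 179_617.92),
    "Roof": (53_879.35, 80_260.40),
    "HVAC": (508.23, 4_501.80),
    "Door": (2_815.73, 11_665.11),
}
iou = {cls: round(i / u, 2) for cls, (i, u) in mask_rcnn.items()}
print(iou)  # reproduces the reported 0.55, 0.73, 0.67, 0.11, 0.24
```

These are the same values quoted in the abstract (73% facades, 55% windows, 67% roofs, 24% doors, 11% HVAC).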
Figure 6. Examples of building images segmented using Mask R-CNN trained on the heat loss dataset.
Figure 7. Raw (a–d) and hotspot (e–l) images for two particular windows of Museum and Twamley over the morning, afternoon, and evening.
Hotspot pixel overlapping (%) for windows and walls.
| Object of Interest | Windows | Windows | Windows | Windows | Walls | Walls | Walls | Walls |
|---|---|---|---|---|---|---|---|---|
| | 88.2% | 64.2% | 86% | 93% | 69.8% | 76.8% | 79% | 70% |
| | 82.8% | 71.8% | 43% | 93.4% | 71.4% | 42.1% | 72% | 40.4% |
| | 77.9% | 73.1% | 82% | 43% | 72.8% | 64.2% | 34% | 35% |
Figure 8. The K-means optimal cluster evaluation and temperature results.
Performance of K-means and Threshold-Based Clustering.
| Object of Interest | Walls (Museum) | Walls (Museum) | Walls (Twamley) | Walls (Twamley) | Windows (Museum) | Windows (Museum) | Windows (Twamley) | Windows (Twamley) | |
|---|---|---|---|---|---|---|---|---|---|
| | 197,186 | 197,186 | 186,601 | 186,601 | 36,167 | 366,167 | 17,926 | 17,926 | 280.35 K |
| | 3.72 | 4.71 | 2.84 | 4.15 | 5.18 | 8.42 | 3.56 | 3.39 | |
| | 7241 | 9266 | 5340 | 8165 | 2022 | 3560 | 4383 | 4098 | |
| | 269.71 | 270.08 | 270.09 | 270.15 | 268.91 | 268.8 | 268.72 | 268.71 | |
| | 268.48 | 268.48 | 268.7 | 268.7 | 268.04 | 268.04 | 268.17 | 268.17 | |
| | 200,325 | 200,325 | 183,642 | 183,642 | 42,572 | 42,572 | 24,204 | 24,204 | 273.25 K |
| | 1.89 | 6.25 | 1.19 | 5.15 | 5.43 | 7.37 | 4.31 | 1.48 | |
| | 3340 | 12,526 | 3410 | 9973 | 2626 | 2802 | 999 | 417 | |
| | 277.27 | 276.98 | 279.24 | 278.81 | 274.36 | 274.32 | 273.66 | 274.14 | |
| | 275.48 | 275.48 | 276.54 | 276.54 | 273.31 | 273.31 | 272.21 | 272.21 | |
| | 202,409 | 202,409 | 177,711 | 177,711 | 138,911 | 139,811 | 33,185 | 33,185 | 276.25 K |
| | 2.99 | 5.66 | 1.19 | 5.76 | 4.7 | 6.22 | 2.18 | 7.31 | |
| | 6221 | 11,560 | 2026 | 9984 | 7023 | 9900 | 775 | 1384 | |
| | 294.28 | 294.12 | 308.58 | 321.25 | 281.7 | 281.32 | 319.68 | 317.33 | |
| | 283.9 | 283.9 | 303.46 | 303.46 | 278.86 | 278.86 | 304.2 | 304.2 | |
Museum U-value estimation (morning) on 17 March 2020.
| Building Elements | # of Images | Max Surface Temp. (K) | Min Surface Temp. (K) | Avg Surface Temp. (K) | Thermocouple Temperature | External Air Temperature | U-Value (W/m²·K) | U-Value (W/m²·K) | U-Value (W/m²·K) | U-Value (W/m²·K) | ASHRAE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| | 19 | 272.39 | 267.36 | 268.22 | 268.45 K | 266.15 K | 0.73 | 1.98 | 1.96 | 1.53 | 1.98 |
| | 321 | 277.27 | 266.15 | 268.5 | 268.45 K | 266.15 K | 1.53 | 3.50 | 3.46 | 2.83 | 1.98 |
| | 435 | 278.8 | 265.85 | 268.53 | 268.45 K | 266.15 K | 1.41 | 2.59 | 2.55 | 2.15 | 0.48 |
| | 11 | 269.55 | 266.5 | 267.25 | 268.45 K | 266.15 K | 0.68 | 3.46 | 3.4 | 2.27 | 0.22 |
Twamley U-value estimation (morning), 17 March 2020.
| Building Elements | # of Images | Max Surface Temp. (K) | Min Surface Temp. (K) | Avg Surface Temp. (K) | Thermocouple Temperature | External Air Temperature | U-Value (W/m²·K) | U-Value (W/m²·K) | U-Value (W/m²·K) | U-Value (W/m²·K) | ASHRAE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| | 150 | 271 | 267 | 268.3 | 281.05 K | 266.15 K | 1.13 | 3.52 | 3.46 | 2.66 | 5.39 |
| | 500 | 279.25 | 266.83 | 268.25 | 281.05 K | 266.15 K | 1.36 | 3.46 | 3.46 | 2.78 | 5.39 |
| | 45 | 281 | 266.08 | 269 | 281.05 K | 266.15 K | 1.79 | 2.61 | 2.55 | 2.32 | 0.48 |
| | 11 | 270.3 | 266.7 | 267.4 | 281.05 K | 266.15 K | 0.73 | 3.46 | 3.46 | 2.55 | 0.22 |
The error, precision, and deviation for U-value estimations.
| Building | Object of Interest | Error (%) | Precision (%) | Deviation | ASHRAE Standard |
|---|---|---|---|---|---|
| Twamley | Wall | 383.3 | 15.0 | ±0.35 | 0.48 |
| Twamley | Window | 48.4 | 33.3 | ±0.93 | 5.39 |
| Museum | Wall | 347.9 | 24.5 | ±0.53 | 0.48 |
| Museum | Window | 43.0 | 30.6 | ±0.87 | 1.98 |
| Museum | Window 1 | 22.7 | 36.6 | ±0.56 | 1.98 |
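The Error (%) column appears consistent with the relative deviation of the estimated U-value from the ASHRAE reference. A quick check, using estimated U-values taken from the U-value tables above and assuming error = |U_est − U_ASHRAE| / U_ASHRAE × 100 (an inferred formula, not one stated in the source):

```python
# Inferred error formula, checked against two rows of the tables above.
def error_pct(u_est, u_ashrae):
    return abs(u_est - u_ashrae) / u_ashrae * 100

# Twamley wall: estimated 2.32 vs ASHRAE 0.48 -> ~383.3%, as reported.
e_wall = error_pct(2.32, 0.48)
# Museum window: estimated 2.83 vs ASHRAE 1.98 -> ~42.9%, reported as 43.0%.
e_window = error_pct(2.83, 1.98)
```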