| Literature DB >> 35009641 |
Muksimova Shakhnoza1, Umirzakova Sabina1, Mardieva Sevara2, Young-Im Cho3.
Abstract
A fire is an extraordinary event that can damage property and have a notable effect on people's lives. However, the early detection of smoke and fire has been identified as a challenge in many recent studies, and different solutions have been proposed to detect fire events in time and avoid human casualties. As a solution, we used an affordable visual detection system; this method is potentially effective because fires are recognized at an early stage. In most developed countries, CCTV surveillance systems are installed in almost every public location to take periodic images of a specific area. However, cameras operate under different types of ambient light and experience occlusions, distortions of view, and changes in the resulting images caused by different camera angles and the different seasons of the year, all of which affect the accuracy of currently established models. To address these problems, we developed an approach based on an attention feature map used in a capsule network designed to classify fire and smoke locations at different distances outdoors, given only a single image of fire and smoke as input. The proposed model was designed to address two main limitations of the base capsule network, namely its input size and the analysis of large images, and to compensate for the absence of a deep network by using an attention-based approach that improves the classification of fire and smoke. In terms of practicality, our method is comparable with prior strategies based on machine learning and deep learning methods. We trained and tested the proposed model using our datasets collected from different sources. The results indicate a high classification accuracy in comparison with other modern architectures, and show that the proposed approach is robust and stable for the classification of images from outdoor CCTV cameras with different viewpoints in the presence of smoke and fire.
Keywords: artificial intelligence; attention feature map; capsule network; classification; deep learning; fire detection; smoke detection
Year: 2021 PMID: 35009641 PMCID: PMC8747306 DOI: 10.3390/s22010098
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1Capsule network architecture.
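The record does not reproduce the capsule network equations, but capsule networks as introduced by Sabour et al. (2017) conventionally use a "squash" nonlinearity that maps a capsule's output vector to a length in [0, 1) while preserving its direction, so that vector length can represent the probability that an entity (here, fire or smoke) is present. A minimal sketch in plain Python (the function name and list-based vector representation are illustrative, not taken from the paper):

```python
import math

def squash(s, eps=1e-8):
    """CapsNet squash nonlinearity:
    v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    Shrinks short vectors toward zero and long vectors toward
    unit length, without changing their direction."""
    norm_sq = sum(x * x for x in s)
    norm = math.sqrt(norm_sq) + eps  # eps guards against division by zero
    scale = norm_sq / (1.0 + norm_sq)
    return [scale * x / norm for x in s]
```

For example, the vector (3, 4) has length 5, so its squashed length is 25/26 ≈ 0.96, while its 3:4 direction is unchanged.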
Figure 2Architecture of the proposed classification model.
Figure 3Attention feature map.
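The record does not specify the exact form of the attention feature map, but attention mechanisms of this kind typically compute a per-location score, pass it through a sigmoid, and use the result to reweight the feature map so that fire- and smoke-relevant regions are emphasized. A hypothetical, illustrative sketch in plain Python (the function names and the element-wise gating scheme are assumptions for illustration, not the paper's definition):

```python
import math

def sigmoid(x):
    """Logistic sigmoid, mapping any real score into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def apply_attention(feature_map, score_map):
    """Element-wise attention gating: each feature value is scaled by
    the sigmoid of its attention score, so high-scoring locations
    (candidate fire/smoke regions) pass through nearly unchanged while
    low-scoring locations are suppressed."""
    return [[f * sigmoid(s) for f, s in zip(frow, srow)]
            for frow, srow in zip(feature_map, score_map)]
```

With a zero score the gate is exactly 0.5, so a feature value of 2.0 becomes 1.0; strongly positive scores let features pass almost unattenuated.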
Dataset information.
| Class | Smoke | Fire | Negative | Total |
|---|---|---|---|---|
| Dataset | 4000 | 4000 | 4000 | 12,000 |
Figure 4Example images of the three classes from the datasets used to train the capsule network.
Hardware and software specifications.
| Technology | Description |
|---|---|
| Programming language | Python 3.7 |
| OS | Windows |
| Deep Learning library | PyTorch |
| CPU | Intel® Core™ i7-9750H |
| GPU | GeForce GTX 1660 Ti |
| RAM | 16 GB |
| CUDA | 10.1 |
Comparison of results for different capsule network architectures (SE: sensitivity; SP: specificity; MCC: Matthews correlation coefficient; A: accuracy).
| Capsule Network Architecture | SE | SP | MCC | A (%) |
|---|---|---|---|---|
| Original CapsNet | 80.4% | 86.7% | 0.673 | 84.1% |
| FC+FC | 82.6% | 86.7% | 0.694 | 85.0% |
| Conv+FC | 82.6% | 84.6% | 0.687 | 84.6% |
| Conv+FC+FC | 84.5% | 85.3% | 0.693 | 84.9% |
| Conv+Conv+FC+FC | 99.0% | 99.7% | 0.884 | 99.4% |
| Conv+Conv+Conv+FC+FC | 81.9% | 86.9% | 0.685 | 84.9% |
Comparison of smoke and fire classification performance for different methods trained on the same dataset.
| Model | SE | SP | MCC | A (%) | AUC |
|---|---|---|---|---|---|
| Our Model | 91.8% | 92.9% | 0.850 | 92.4% | 0.955 |
| CNN | 87.0% | 85.0% | 0.715 | 85.9% | 0.933 |
| MLP | 82.4% | 86.4% | 0.687 | 84.7% | 0.920 |
| DBN | 72.2% | 80.8% | 0.533 | 80.8% | 0.903 |
| SVM | 90.7% | 84.4% | 0.743 | 87.1% | 0.933 |
| k-NN | 69.4% | 96.6% | 0.703 | 85.1% | 0.928 |
| Logistic regression | 88.8% | 83.7% | 0.710 | 85.5% | 0.858 |
| LightGBM | 79.6% | 82.3% | 0.617 | 81.2% | 0.810 |

| Model | SE | SP | MCC | A (%) | AUC |
|---|---|---|---|---|---|
| Our Model | 88.9% | 71.4% | 0.554 | 76.7% | 0.806 |
| CNN | 94.4% | 52.4% | 0.441 | 65.0% | 0.725 |
| MLP | 88.9% | 57.1% | 0.426 | 66.7% | 0.707 |
| DBN | 88.9% | 52.4% | 0.386 | 63.3% | 0.683 |
| SVM | 88.9% | 52.4% | 0.386 | 63.3% | 0.660 |
| k-NN | 77.8% | 52.4% | 0.279 | 60.0% | 0.624 |
| Logistic regression | 83.3% | 52.4% | 0.332 | 61.7% | 0.623 |
| LightGBM | 61.1% | 59.5% | 0.190 | 60.0% | 0.609 |
For the proposed model, the MCC was 0.850 and the overall accuracy of prediction (A) was 92.4%.
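The metrics reported throughout the tables (SE, SP, MCC, A) follow the standard definitions from a binary confusion matrix; a minimal sketch of those definitions in plain Python (the function name is illustrative):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics:
    SE (sensitivity) = TP / (TP + FN)
    SP (specificity) = TN / (TN + FP)
    A  (accuracy)    = (TP + TN) / total
    MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = ((tp * tn) - (fp * fn)) / denom if denom else 0.0
    return se, sp, mcc, acc
```

For instance, 90 true positives, 10 false negatives, 95 true negatives, and 5 false positives give SE = 0.90, SP = 0.95, A = 0.925, and MCC ≈ 0.851.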
Figure 5AUC scores of the capsule network.
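The AUC values in the comparison table can be computed without an explicit ROC curve via the Mann-Whitney U statistic: the AUC equals the probability that a randomly chosen positive example (fire/smoke) receives a higher score than a randomly chosen negative one. A minimal sketch in plain Python (the function name and O(P·N) pairwise formulation are illustrative, not the paper's implementation):

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, counting ties
    between scores as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A classifier that ranks every positive above every negative scores 1.0; one that assigns identical scores to everything scores 0.5, matching the AUC's chance level.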
Figure 6Results of attention feature map using CapsNet for fire classification: (a) smoke classification results, (b) fire classification results, and (c) negative image results.