Qing An, Xijiang Chen, Junqian Zhang, Ruizhe Shi, Yuanjun Yang, Wei Huang.
Abstract
Accurate fire identification helps to control fires. Traditional fire detection methods are mainly based on temperature or smoke detectors, which are susceptible to damage and to interference from the outside environment. Meanwhile, most current deep learning methods discriminate dynamic fire poorly and lose detection precision when a fire changes. Therefore, we propose a dynamic convolution YOLOv5 fire detection method for video sequences. Our method first uses the K-means++ algorithm to optimize anchor box clustering, which significantly reduces the classification error rate. Then, dynamic convolution is introduced into the convolution layers of YOLOv5. Finally, the neck and head networks of YOLOv5 are pruned to improve detection speed. Experimental results verify that the proposed dynamic convolution YOLOv5 fire detection method outperforms YOLOv5 in recall, precision and F1-score. In particular, compared with three other deep learning methods, the precision of the proposed algorithm is improved by 13.7%, 10.8% and 6.1%, respectively, while the F1-score is improved by 15.8%, 12% and 3.8%, respectively. The method described in this paper is applicable not only to short-range indoor fire identification but also to long-range outdoor fire detection.
Keywords: YOLOv5; deep learning; detection; dynamic convolution
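The first step described in the abstract, anchor box clustering with K-means++, can be sketched as below. This is a minimal illustration under assumptions, not the authors' code: the function names are hypothetical, and the 1 − IoU distance on (width, height) pairs is the convention commonly used for YOLO anchor clustering.

```python
import numpy as np

def iou_wh(box, boxes):
    """IoU between one (w, h) box and many, treating all boxes as co-centered."""
    inter = np.minimum(box[0], boxes[:, 0]) * np.minimum(box[1], boxes[:, 1])
    union = box[0] * box[1] + boxes[:, 0] * boxes[:, 1] - inter
    return inter / union

def kmeanspp_anchors(wh, k=9, iters=100, seed=0):
    """Cluster (w, h) pairs into k anchors: K-means++ seeding, 1 - IoU distance."""
    rng = np.random.default_rng(seed)
    centers = [wh[rng.integers(len(wh))]]
    for _ in range(k - 1):
        # K-means++: pick the next seed with probability proportional to the
        # squared distance to its nearest existing center.
        d = np.min([1 - iou_wh(c, wh) for c in centers], axis=0)
        centers.append(wh[rng.choice(len(wh), p=d ** 2 / np.sum(d ** 2))])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.stack([1 - iou_wh(c, wh) for c in centers])  # (k, n) distances
        assign = np.argmin(d, axis=0)
        for j in range(k):
            if np.any(assign == j):  # skip empty clusters
                centers[j] = wh[assign == j].mean(axis=0)
    return centers[np.argsort(centers.prod(axis=1))]  # sort by area (YOLO convention)
```

The returned anchors would then replace the default YOLOv5 anchors before training.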
Year: 2022 PMID: 35458913 PMCID: PMC9025736 DOI: 10.3390/s22082929
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Static and dynamic convolution.
Figure 2. Improved YOLOv5 convolution layers with dynamic convolution.
Figure 3. Dynamic convolution YOLOv5 network model structure.
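Figures 1–3 illustrate replacing static convolution with dynamic convolution, in the usual sense of attention-weighted aggregation of K parallel kernels. A minimal PyTorch-style sketch (an assumption about the layer's structure, not the paper's exact module; the class name and hyperparameters are hypothetical) might look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Dynamic convolution: K candidate kernels mixed per-sample by attention."""
    def __init__(self, in_ch, out_ch, k=3, K=4, reduction=4):
        super().__init__()
        self.K, self.in_ch, self.out_ch, self.k = K, in_ch, out_ch, k
        # K candidate kernels and biases
        self.weight = nn.Parameter(torch.randn(K, out_ch, in_ch, k, k) * 0.02)
        self.bias = nn.Parameter(torch.zeros(K, out_ch))
        # attention branch: GAP -> FC -> ReLU -> FC, softmax over the K kernels
        hidden = max(in_ch // reduction, 4)
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, K),
        )

    def forward(self, x):
        B = x.size(0)
        pi = F.softmax(self.att(x), dim=1)  # (B, K) per-sample kernel weights
        # aggregate one kernel per sample: (B, out_ch, in_ch, k, k)
        w = torch.einsum('bk,koihw->boihw', pi, self.weight)
        b = torch.einsum('bk,ko->bo', pi, self.bias)
        # apply per-sample kernels via the grouped-conv trick
        x = x.reshape(1, B * self.in_ch, *x.shape[2:])
        w = w.reshape(B * self.out_ch, self.in_ch, self.k, self.k)
        y = F.conv2d(x, w, b.reshape(-1), padding=self.k // 2, groups=B)
        return y.reshape(B, self.out_ch, *y.shape[2:])
```

Unlike a static convolution, the effective kernel here depends on the input, which is what gives the layer extra capacity to track a changing fire at roughly the cost of one convolution plus a tiny attention branch.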
Experimental dataset.
| Dataset | Fire Images | Non-Fire Images | Total |
|---|---|---|---|
| Train set | 8054 | 6046 | 14,100 |
| Validation set | 2033 | 1521 | 3554 |
| Test set | 5150 | 3130 | 8280 |
| Total | 15,237 | 10,697 | 25,934 |
Figure 4. Training dataset.
Four possible outcomes of fire identification.
| | Negative | Positive |
|---|---|---|
| False | False Negative (FN) | False Positive (FP) |
| True | True Negative (TN) | True Positive (TP) |
Figure 5. Fire detection results of YOLOv5 and the dynamic convolution YOLOv5. (a) YOLOv5. (b) Dynamic convolution YOLOv5.
Comparison results of the dynamic convolution experiment.
| Method | P | R | Acc | F1-Score | Detection Time (ms) |
|---|---|---|---|---|---|
| YOLOv5 | 89.7% | 97.4% | 91.5% | 46.7% | 26 |
| Dynamic convolution YOLOv5 | 96.4% | 99.0% | 96.8% | 49.3% | 29 |
Comparison results before and after pruning.
| Method | P | R | Acc | F1-Score | Model Size (MB) | Detection Time (ms) |
|---|---|---|---|---|---|---|
| YOLOv5 | 89.7% | 97.4% | 91.5% | 46.7% | 13.7 | 26 |
| Pruned YOLOv5 | 88.2% | 96.6% | 89.7% | 44.3% | 10.8 | 13 |
Figure 6. Comparison of the change curves of the object box loss value and the object category confidence.
Figure 7. Changes in recall, precision and mAP of YOLOv5 and the proposed method.
Figure 8. Indoor fire detection.
Figure 9. Outdoor fire detection.
Figure 10. Fire detection at a long distance.
Figure 11. Fire detection at a short distance.
Figure 12. Multi-objective fire detection.
Figure 13. Fire detection of different methods on rainy days.
Detection results of different algorithms.
| Different Methods | TP | TN | FP | FN |
|---|---|---|---|---|
| Fast-RCNN | 4314 | 2290 | 840 | 836 |
| SSD | 4501 | 2391 | 739 | 649 |
| Faster-RCNN | 4972 | 2417 | 713 | 178 |
| Cascade R-CNN | 5003 | 2447 | 683 | 147 |
| YOLOv5 | 5020 | 2554 | 576 | 130 |
| Proposed Method | 5100 | 2873 | 257 | 50 |
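The TP/TN/FP/FN counts above determine the four metrics reported for each method. A quick check with the standard definitions (a sketch; the helper name is hypothetical) reproduces the precision, recall and accuracy of the proposed method. Note that the F1 values tabulated in this record appear to correspond to PR/(P+R), i.e. half the conventional 2PR/(P+R).

```python
def detection_metrics(tp, tn, fp, fn):
    """Standard metrics from confusion-matrix counts."""
    p = tp / (tp + fp)                    # precision
    r = tp / (tp + fn)                    # recall
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    f1 = 2 * p * r / (p + r)              # conventional F1
    return p, r, acc, f1

# Proposed method row: TP=5100, TN=2873, FP=257, FN=50
p, r, acc, f1 = detection_metrics(5100, 2873, 257, 50)
print(f"P={p:.1%} R={r:.1%} Acc={acc:.1%} F1={f1:.1%}")
# P, R and Acc match the tabulated 95.2%, 99.0% and 96.3%;
# the tabulated 48.5% F1 matches p*r/(p+r), i.e. half the value printed here.
```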
Four evaluation metrics of the different methods.
| Different Methods | P (%) | R (%) | Acc (%) | F1-Score (%) |
|---|---|---|---|---|
| Fast-RCNN | 83.7 | 83.8 | 79.8 | 41.9 |
| SSD | 85.9 | 87.3 | 83.2 | 43.3 |
| Faster-RCNN | 87.5 | 96.5 | 89.2 | 44.8 |
| Cascade R-CNN | 88.1 | 97.1 | 90.1 | 45.6 |
| YOLOv5 | 89.7 | 97.4 | 91.5 | 46.7 |
| Proposed method | 95.2 | 99.0 | 96.3 | 48.5 |
Figure 14. Processing time of each image by different methods.