Bharathi Gopal, Anandharaj Ganesan.
Abstract
The ongoing COVID-19 coronavirus infection has caused a severe catastrophe through its deadly spread. Despite the rollout of vaccines, the severity of the infection has not diminished; it has grown stronger and more destructive. Social distancing therefore remains the primary means of protecting ourselves from infection. Although social distancing has been practiced for a long time, in most places it is not followed effectively, and it is very difficult to verify manually at all times whether people are complying. We therefore introduce a newly developed deep-learning framework that automatically identifies whether people maintain social distancing, using remote-sensing top-view images. First, the context of the image, which includes information about the environment, is detected. Our detection model recognizes individuals using bounding boxes. A centroid is then determined for every detected bounding box, and the pairwise distances between the detected centroids are computed using the Euclidean distance. A violation threshold is established to evaluate whether a measured distance falls below the minimum social-distance limit. We use an Improved Single Shot Detector (SSD) model to detect persons in an image. Experiments are carried out on remote-sensing images widely collected from various environments. A variety of performance metrics are compared against deep-learning object-detection baselines to evaluate the efficiency of the proposed model. The results show that our proposed model outperforms these baselines in recognizing and detecting persons.
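The distance-checking stage described in the abstract (centroids of detected boxes, pairwise Euclidean distances, violation threshold) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the box coordinates and threshold value are hypothetical.

```python
import math

def centroid(box):
    """Centre point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def find_violations(boxes, min_distance):
    """Return index pairs of boxes whose centroids lie closer than min_distance."""
    centres = [centroid(b) for b in boxes]
    violations = []
    for i in range(len(centres)):
        for j in range(i + 1, len(centres)):
            d = math.dist(centres[i], centres[j])  # Euclidean distance
            if d < min_distance:
                violations.append((i, j))
    return violations

# Hypothetical detections in pixel coordinates (x1, y1, x2, y2).
boxes = [(10, 10, 30, 60), (35, 12, 55, 62), (200, 15, 220, 65)]
print(find_violations(boxes, min_distance=50))  # → [(0, 1)]
```

In a deployed system the pixel threshold would be calibrated to a real-world distance from the overhead camera geometry; here it is an arbitrary constant.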
Keywords: Bounding box; Centroid; Deep learning; Euclidean distance; SSD; Threshold
Year: 2022 PMID: 35035588 PMCID: PMC8749912 DOI: 10.1007/s12145-021-00758-4
Source DB: PubMed Journal: Earth Sci Inform ISSN: 1865-0473 Impact factor: 2.705
Fig. 1 Significance of maintaining social distancing
Fig. 2 Effect of following social distancing
Analysis of various techniques used in social-distancing monitoring systems, based on the literature review
| Existing solutions | Methods/techniques | Accuracy |
|---|---|---|
| Dalal and Triggs | Texture-based schemes using SVM | Acc = 94.37, Sen = 95.97, Spe = 96.63, Pre = 97.83, F1-score = 97.37 |
| Leibe et al. | Trajectory-estimation-based method using CNN | Acc = 97.03, Sen = 96.05, Spe = 100 |
| Andriluka et al. | Tracklet-based detectors using RCNN | Acc = 96, Sen = 100, Spe = 94 |
| Eshel and Moses | Crowd detection using faster RCNN | Acc = 74, Sen = 79.1, Spe = 62.4 |
| Su et al. | People-counting model based on vision using CNN with SSD | Acc = 89.89, Sen = 92.04, Spe = 88.98, AUC = 94.37 |
| Punn et al. | Human detection and tracking with Deepsort using the YOLOv3 model | Acc = 93, AUC = 97, Pre = 95, F1-score = 93 |
| Ramadass et al. | Autonomous drone-based model using the YOLOv3 model | Acc = 96.35, Spe = 95.30, Sen = 100, Pre = 95.34 |
| Pouw et al. | Framework for physical distancing and crowd management using faster-RCNN and SSD | Acc = 70.82, Sen = 69.81, Spe = 92.92, Pre = 68.78, F1-score = 67.12 |
Feature-extraction performance on the ImageNet challenge (collected from the literature)
| Model | Accuracy (a) | Parameters (p, millions) | Ratio (a×100/p) |
|---|---|---|---|
| VGG-16 (Simonyan and Zisserman) | 0.70 | 16 M | 4.72 |
| ResNet-101 (He et al.) | 0.75 | 43.6 M | 1.77 |
| Inception v2 (Szegedy et al.) | 0.73 | 11 M | 7.39 |
| Inception v3 (Szegedy et al.) | 0.77 | 23 M | 3.57 |
| ResNet v2 (Szegedy et al.) | 0.79 | 55 M | 1.47 |
Hyper-parameters used to generate the bounding boxes
| Detection model | Vector size (p) | Aspect ratio | Bounding boxes | Intersection over union (IoU) |
|---|---|---|---|---|
| Faster RCNN (Punn et al.) | [0.26, 0.6, 1.0] | [0.6, 1.1, 2.0] | 9 | 0.7 |
| YOLO v3 (Punn et al.) | [0.26, 0.6, 1.0] | [0.4, 0.6, 1.0] | 9 | 0.6 |
| SSD (Punn et al.) | [0.2, 0.56, 0.94] | [0.4, 0.6, 1.0] | 9 | 0.5 |
| Improved SSD | [0.2, 0.57, 0.94] | [0.4, 0.6, 1.0] | 9 | 0.5 |
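A sketch of how scale and aspect-ratio hyper-parameters like those above translate into default (prior) box shapes, following the common SSD convention (w = s·√a, h = s/√a). The exact box layout of the Improved SSD is not specified here, so this follows the standard convention rather than the paper's code; the values are taken from the Improved SSD row.

```python
import math

def default_boxes(scales, aspect_ratios):
    """Generate (w, h) default-box shapes, normalised to the image size,
    one per (scale, aspect ratio) pair: w = s*sqrt(a), h = s/sqrt(a)."""
    shapes = []
    for s in scales:
        for a in aspect_ratios:
            shapes.append((s * math.sqrt(a), s / math.sqrt(a)))
    return shapes

# Improved SSD row of the table above.
shapes = default_boxes([0.2, 0.57, 0.94], [0.4, 0.6, 1.0])
print(len(shapes))  # 3 scales x 3 aspect ratios = 9 boxes, matching the table
```

Each (w, h) pair is tiled over every feature-map cell at prediction time; the IoU column above is the overlap threshold used to match these default boxes to ground-truth persons during training.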
Fig. 3 Improved single shot detector
Fig. 4 Proposed flow of the social-distance monitoring framework using the overhead position
Fig. 5 Comparison of testing accuracy between improved SSD with and without transfer learning
Testing accuracy between improved SSD with and without transfer learning
| Model | Testing accuracy |
|---|---|
| Improved SSD without transfer learning | 93.4% |
| Improved SSD with transfer learning | 95.3% |
Testing accuracy, precision, and recall of Improved SSD with transfer learning versus other state-of-the-art approaches (collected from the literature)
| Model | Accuracy | Precision | Recall | F-measure |
|---|---|---|---|---|
| Fast-RCNN (pre-trained) (Imran et al.) | 90% | 80% | 66% | 72.33% |
| Faster-RCNN (pre-trained) (Imran et al.) | 92% | 80% | 70% | 74.67% |
| Mask-RCNN (pre-trained) (Imran et al.) | 92% | 82% | 70% | 75.53% |
| YOLOv3 (pre-trained) (Imran et al.) | 92% | 84% | 78% | 80.89% |
| Improved SSD (trained on the overhead-position dataset) | 95.3% | 86.28% | 79.73% | 82.87% |
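The F-measure column is consistent with the standard harmonic mean of precision and recall, which can be verified directly from the table's percentages (agreement is up to last-digit rounding):

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall, in percent."""
    return 2 * precision * recall / (precision + recall)

# YOLOv3 row: precision 84%, recall 78%.
print(round(f_measure(84, 78), 2))   # → 80.89, as reported
# Improved SSD row: precision 86.28%, recall 79.73%.
print(f_measure(86.28, 79.73))       # ≈ 82.88, matching the reported 82.87% up to rounding
```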
Fig. 6 Comparison of performance evaluation between improved SSD with transfer learning and other state-of-the-art approaches
Fig. 7 Front- and side-view tested sample input
Fig. 8 a, b & c Overhead-position tested sample input images
Fig. 9 Detecting bounding boxes
Fig. 10 Calculating centroids over an image
Fig. 11 Applying Euclidean distance over every bounding box along with its centroid point