| Literature DB >> 35190751 |
Tahir Hussain, Dostdar Hussain, Israr Hussain, Hussain AlSalman, Saddam Hussain, Syed Sajid Ullah, Suheer Al-Hadhrami.
Abstract
Internet of Things (IoT) with deep learning (DL) is growing rapidly and plays a significant role in many applications, including medical and healthcare systems. It can provide enhanced, touchless authentication, which is especially valuable during outbreaks of infectious diseases such as coronavirus disease 2019 (COVID-19). Although a number of security systems are available, they suffer from one or more issues, such as identity fraud, loss of keys and passwords, or the spread of disease through touch-based authentication tools. To overcome these issues, an IoT-based intelligent medical authentication control system using DL models is proposed to effectively enhance the security of medical and healthcare facilities. This work applies IoT with DL models to recognize human faces for authentication in smart medical control systems. A Raspberry Pi (RPi) acts as the main controller in the system because of its low cost. The smart control system drives the smart door locks through the RPi's general-purpose input/output (GPIO) pins, which also improves antitheft protection. For user authentication, a camera module captures the face image, which is compared with database images to grant access. The proposed approach performs face detection with the Haar cascade technique, while face recognition comprises two steps. The first step is facial feature extraction, carried out with pretrained CNN models (ResNet-50 and VGG-16) together with the local binary pattern histogram (LBPH) algorithm. The second step is classification with a support vector machine (SVM) classifier. Only a face classified as genuine unlocks the door; otherwise, the door remains locked, and the system sends a notification e-mail with the detected face images to the home or medical facility and stores the detected person's name and detection time in an SQL database.
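The Haar cascade detector mentioned above evaluates simple rectangular contrast features very quickly by precomputing an integral image. A minimal illustrative sketch of that mechanism (not the authors' code; the image values and window coordinates are made up):

```python
# Haar-like "edge" feature evaluated via an integral image, the trick
# that lets a cascade test thousands of windows in real time.

def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle (x0,y0)-(x1,y1), in O(1)."""
    total = ii[y1][x1]
    if x0 > 0: total -= ii[y1][x0 - 1]
    if y0 > 0: total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0: total += ii[y0 - 1][x0 - 1]
    return total

def haar_edge_feature(ii, x, y, w, h):
    """Two-rectangle Haar feature: left-half sum minus right-half sum."""
    half = w // 2
    left = rect_sum(ii, x, y, x + half - 1, y + h - 1)
    right = rect_sum(ii, x + half, y, x + w - 1, y + h - 1)
    return left - right
```

In practice this is what OpenCV's `cv2.CascadeClassifier` does internally; a deployment on the RPi would simply call its `detectMultiScale` on each camera frame rather than hand-rolling the features.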
A comparative study shows that the approach achieves 99.56% accuracy, outperforming several related methods.
Year: 2022 PMID: 35190751 PMCID: PMC8858039 DOI: 10.1155/2022/5137513
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.238
Related work.
| Verification technique | Adopted device | Input | Reference no. |
|---|---|---|---|
| Password verification | Keypad | Needed | [ |
| Near-field communication (NFC) verification and password comparison | Android phone | Needed | [ |
| Smartphone-based network authentication | Android phone | Needed | [ |
| Taking images for face recognition authentication | Android phone | Needed | [ |
| Real-time face recognition | Not needed | Not needed | Proposed method |
Figure 1Block diagram architecture.
Figure 2USB web camera.
Figure 3Raspberry Pi 4 model B+ control module.
Figure 4E-mail notification.
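Figure 4 shows the notification e-mail the system sends when an unrecognized face is detected. A minimal sketch of composing such an alert with Python's standard `email` library (the addresses, subject wording, and SMTP server are placeholders, not values from the paper):

```python
from email.message import EmailMessage

def build_alert(detected_name, timestamp, face_jpeg_bytes):
    """Build the alert message with the captured face image attached."""
    msg = EmailMessage()
    msg["From"] = "door-system@example.com"   # placeholder sender
    msg["To"] = "owner@example.com"           # placeholder recipient
    msg["Subject"] = f"Unrecognized visitor at {timestamp}"
    msg.set_content(f"Detected: {detected_name} at {timestamp}. Image attached.")
    msg.add_attachment(face_jpeg_bytes, maintype="image",
                       subtype="jpeg", filename="visitor.jpg")
    return msg

# Sending would then be, e.g.:
#   import smtplib
#   with smtplib.SMTP_SSL("smtp.example.com") as s:
#       s.login(user, password)
#       s.send_message(build_alert(name, ts, jpeg))
```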
Figure 5Haar features.
Figure 6Feature extraction.
Figure 7Face recognition work flow.
Figure 8The detected frontal and profile faces.
Figure 9The illumination variation.
Figure 10Database face images.
Figure 11ResNet-50 model with SVM.
Figure 12VGG-16 model with SVM.
Figure 13ReLU operation.
Figure 14System work flow.
Figure 15Converting grayscale to decimal.
Figure 16Local binary pattern histogram for face description.
Figure 17Flow chart of LBPH algorithm.
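Figures 15-17 outline the LBPH descriptor: each pixel's 3x3 neighborhood is thresholded against the center pixel and read clockwise as an 8-bit code, and the codes are collected into a 256-bin histogram describing the face. A minimal sketch of that computation (an illustration of the standard LBP operator, not the authors' implementation):

```python
# Clockwise neighbor offsets starting at the top-left pixel.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, y, x):
    """8-bit LBP code: bit i is set if neighbor i >= the center pixel."""
    center = img[y][x]
    code = 0
    for i, (dy, dx) in enumerate(OFFSETS):
        if img[y + dy][x + dx] >= center:
            code |= 1 << (7 - i)
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

def chi_square(h1, h2):
    """Chi-square distance, a common way to compare LBP histograms."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)
```

Recognition then reduces to comparing a probe histogram against the stored gallery histograms and taking the nearest match; the paper pairs this descriptor with an SVM classifier for the final genuine/impostor decision.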
Machines used for testing the model.
| Machine | Operating system | Processor | Main memory |
|---|---|---|---|
| Intel | Windows 10 | Core i7 (6th gen) | 8 GB |
| Raspberry Pi 4 | Raspbian | ARM Cortex A72 | 4 GB |
Figure 18Testing time graph of pretrained CNN and LBPH model.
Figure 19Response time for recognition of a person.
Figure 20Model accuracy.
Model performance.
| Method | Training time | Accuracy |
|---|---|---|
| ResNet-50 | 1.205 hours | 99.56% |
| VGG-16 | 1.346 hours | 98.49% |
| LBPH | 0.189 hours | 98.47% |
Figure 21Model loss.
A comparative study of different methods.
| Method | Face recognition accuracy |
|---|---|
| Eigenfaces [ | 36.19% |
| HOG [ | 66.45% |
| Laplacian faces [ | 65.07% |
| OpenFace [ | 75.72% |
| Dense U-Nets [ | 81.43% |
| RetinaFace [ | 83.87% |
| Hierarchical network | 87.36% |
| ResNet-50, VGG-16, and LBPH (our model) | 99.56%, 98.49%, 98.47% |
Figure 22Model recognizing faces.