Shuhua Liu, Yu Song, Mengyu Zhang, Jianwei Zhao, Shihao Yang, Kun Hou.
Abstract
In this study, a Kinect sensor was used to acquire infrared radiation (IR) images for liveness detection. The proposed IR-based liveness detection method can defend against face spoofs. Face pictures were acquired by a Kinect camera and converted into IR images, and a deep neural network performed feature extraction and classification to distinguish real individuals from face spoofs. Because the IR images collected by the Kinect camera carry depth information, the IR pixels of live faces show an evident hierarchical structure, whereas those of photos or videos do not. Accordingly, the two types of IR images were learned by the deep network to determine whether an image came from a live individual. In a cross-database comparison with other liveness detection algorithms, our recognition accuracy of 99.8% was the highest. FaceNet is a face recognition model that is robust to occlusion, blur, illumination, and pose. We combined the proposed liveness detection with the FaceNet model for identity authentication, and we proposed two improved ways to run the FaceNet model to make the approach more practical. Experimental results showed that the combination of the proposed liveness detection and the improved face recognition achieved good recognition performance and can be used for identity authentication.
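The two-stage flow described in the abstract (liveness check on the Kinect IR image first, then FaceNet identity matching) could be sketched as below. This is a minimal illustration, not the authors' code: `liveness_cnn`, `facenet`, and `gallery` are hypothetical stand-ins for the trained liveness classifier, the FaceNet embedder, and a store of enrolled embeddings.

```python
import numpy as np

def authenticate(ir_image, face_image, liveness_cnn, facenet, gallery):
    """Sketch of two-stage identity authentication.

    `liveness_cnn` is assumed to classify an IR image as "live" or "spoof";
    `facenet` is assumed to return a unit-norm embedding vector;
    `gallery` is assumed to map person IDs to enrolled embeddings.
    """
    # Stage 1: liveness detection on the Kinect IR image rejects photo/video spoofs.
    if liveness_cnn.predict(ir_image) != "live":
        return "spoof rejected"
    # Stage 2: embed the face and return the most similar enrolled identity.
    emb = facenet.embed(face_image)
    return max(gallery, key=lambda pid: float(np.dot(emb, gallery[pid])))
```

A spoofed input never reaches the recognition stage, which is the point of running liveness detection first.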
Keywords: FaceNet; Kinect camera; deep learning; infrared radiation; liveness detection
Year: 2019 PMID: 31683560 PMCID: PMC6864603 DOI: 10.3390/s19214733
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1 Identity authentication framework based on liveness detection and FaceNet. CNN, convolutional neural network; IR, infrared radiation; MTCNN, multitask cascaded CNN.
Figure 2 Training process of liveness detection.
Figure 3 CNN structure.
Figure 4 FaceNet structure.
Figure 5 Function of the triplet loss.
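Figure 5 depicts FaceNet's triplet loss, which pulls an anchor embedding toward a positive sample (same identity) and pushes it away from a negative sample (different identity) by at least a margin. A minimal numpy version of the standard formulation (not the authors' implementation; the margin value is illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: max(||a-p||^2 - ||a-n||^2 + margin, 0)."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to same identity
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to different identity
    return max(d_pos - d_neg + margin, 0.0)
```

The loss is zero once the negative is farther from the anchor than the positive by the margin, so training focuses on triplets that still violate the constraint.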
Figure 6 Face recognition process by IFaceNet.
Liveness detection dataset, called NenuLD.
| | Training Images | Validation Images | Test Images |
|---|---|---|---|
| Real faces | 8400 | 1050 | 1050 |
| Spoof faces | 6030 | 753 | 753 |
| Total | 14,430 | 1803 | 1803 |
Figure 7 IR pictures collected with a Kinect camera.
Figure 8 Spoof pictures.
Figure 9 Positive samples.
Figure 10 Integrated test.
Comparison of liveness detection with cross-databases.
| | Replay-Attack ERR (%) | CASIA ERR (%) | NenuLD ERR (%) |
|---|---|---|---|
| DOG (baseline) [ ] | - | 17.0 | - |
| DLTP [ ] | 7.13 | 7.02 | - |
| Deep Learning [ ] | 6.1 | 7.3 | - |
| DPCNN [ ] | 2.9 | 4.5 | - |
| Proposed method | - | - | |
Comparison of FaceNet and IFaceNet.
| | People in Dataset | Strangers |
|---|---|---|
| FaceNet | Outputs correct results with 99.7% recognition rate | Outputs the ID with maximal similarity (error) |
| IFaceNet | Outputs correct results with 99.7% recognition rate | Outputs “unknown” (correct) |
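The table above shows the open-set behavior that distinguishes IFaceNet: plain FaceNet always reports the most similar enrolled ID, so a stranger is misidentified, whereas IFaceNet can answer "unknown". One common way to obtain this behavior is a similarity threshold on the best match; the sketch below assumes unit-norm embeddings, and the threshold value is hypothetical, not taken from the paper.

```python
import numpy as np

def ifacenet_identify(embedding, gallery, threshold=0.75):
    """Open-set identification sketch: return the best-matching enrolled ID,
    or "unknown" when even the best similarity falls below `threshold`.
    `gallery` is assumed to map person IDs to unit-norm embeddings."""
    best_id, best_sim = None, -np.inf
    for pid, ref in gallery.items():
        sim = float(np.dot(embedding, ref))  # cosine similarity for unit vectors
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return best_id if best_sim >= threshold else "unknown"
```

With this check, an enrolled person is still recognized normally, while a stranger whose embedding is far from every gallery entry is rejected rather than mislabeled.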
Accuracy of the proposed algorithm.
| Accuracy of Liveness Detection | Accuracy of Face Recognition | Total Accuracy |
|---|---|---|
| 99.8% | 99.7% | 99.5% |
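The total accuracy in the table is consistent with treating the two stages as independent checks that must both succeed:

```python
# Combined accuracy of the cascaded system: both stages must succeed.
total = 0.998 * 0.997
assert round(total, 3) == 0.995
```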