Dat Tien Nguyen, Tuyen Danh Pham, Young Won Lee, Kang Ryoung Park.
Abstract
Iris recognition systems have been used in high-security-level applications because of their high recognition rate and the distinctiveness of iris patterns. However, as reported by recent studies, an iris recognition system can be fooled by the use of artificial iris patterns and lead to a reduction in its security level. The accuracies of previous presentation attack detection research are limited because they used only features extracted from global iris region image. To overcome this problem, we propose a new presentation attack detection method for iris recognition by combining features extracted from both local and global iris regions, using convolutional neural networks and support vector machines based on a near-infrared (NIR) light camera sensor. The detection results using each kind of image features are fused, based on two fusion methods of feature level and score level to enhance the detection ability of each kind of image features. Through extensive experiments using two popular public datasets (LivDet-Iris-2017 Warsaw and Notre Dame Contact Lens Detection 2015) and their fusion, we validate the efficiency of our proposed method by providing smaller detection errors than those produced by previous studies.Entities:
Keywords: NIR camera sensor; deep learning; iris recognition; presentation attack detection; support vector machines
Year: 2018 PMID: 30096832 PMCID: PMC6111611 DOI: 10.3390/s18082601
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Summary of previous studies on iris presentation attack detection (iPAD) compared to our proposed method.

| Category | Method | Strength | Weakness |
|---|---|---|---|
| Using image features extracted from the entire (global) iris region image | Handcrafted image features extracted from the entire iris region image […] | Easy to implement; feature extractors are designed by experts | Detection accuracy is only fair because the feature extraction method is predesigned |
| | Learning-based method, i.e., a CNN […] | Learns efficient image features from a large number of training samples | Captures only information from the global (entire) iris image; training and testing take longer than with handcrafted image features |
| | Combination of deep and handcrafted image features […] | Enhances detection performance by using both handcrafted and deep image features | Captures only image features from the global iris image; more complex than using only deep or only handcrafted features |
| Using image features extracted from multiple local patches of the normalized iris image | Extracts overlapped local patches of the iris region and classifies each patch as real or presentation attack with a CNN […] | Extracts rich information from overlapped image patches; uses a learning-based method (CNN) for feature extraction and classification | Long processing time because of the multiple patches; the CNN is relatively shallow; does not consider detailed information along the pupil and iris boundaries |
| Combining features extracted from both local and global iris regions (proposed method) | Extracts features from the inner and outer local iris regions in polar coordinates with a CNN; extracts features from the global (entire) iris region in Cartesian coordinates; combines the detection results with a fusion rule | Captures information from both local and global image regions; higher detection accuracy than using only global features, especially under cross-sensor or cross-manufacturer conditions | Processing time is longer than when using only global features |
Figure 1. Overview flowchart of our proposed method for iPAD: (a) feature-level fusion ("nD-Feature Vector" denotes an n-dimensional feature vector), and (b) score-level fusion.
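The two fusion pipelines in Figure 1 can be summarized in a short sketch. The Python fragment below (scikit-learn based; the PCA dimensionality, kernel choice, and weighted-sum rule are illustrative assumptions, not values taken from the paper) shows how deep features from the inner, outer, and entire iris regions could be fused at the feature level (concatenation, then PCA, then one SVM) or at the score level (one SVM per region, then a weighted sum of scores):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# X_inner, X_outer, X_entire: (n_samples, 4096) deep-feature arrays from the
# three region CNNs; y: 1 = real, 0 = presentation attack. All hypothetical.

def feature_level_fusion(X_inner, X_outer, X_entire, y):
    # Concatenate into one nD feature vector, reduce with PCA, train one SVM.
    X = np.concatenate([X_inner, X_outer, X_entire], axis=1)
    pca = PCA(n_components=500)              # component count is an assumption
    svm = SVC(kernel="rbf", probability=True).fit(pca.fit_transform(X), y)
    return pca, svm

def score_level_fusion(models, X_list, weights):
    # One (pca, svm) pair per region; combine per-region "real" probabilities
    # with a weighted-sum rule (weights would be tuned on training data).
    scores = [svm.predict_proba(pca.transform(X))[:, 1]
              for (pca, svm), X in zip(models, X_list)]
    return sum(w * s for w, s in zip(weights, scores))
```

At test time, comparing the fused score against a threshold yields the real/attack decision.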
Figure 2. Example detection result of the iris detection method: (a) a near-infrared (NIR) iris image, and (b) the detection result for the image in (a).
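As a rough stand-in for the pupil/iris boundary localization step illustrated in Figure 2, the sketch below finds a circular boundary in an NIR eye image with OpenCV's Hough circle transform; this is an illustrative substitute, not necessarily the detector used in the paper, and all parameter values are guesses:

```python
import cv2
import numpy as np

def detect_pupil(nir_gray):
    # Smooth to suppress eyelash noise, then search for a circular boundary.
    blurred = cv2.medianBlur(nir_gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=200, param1=100, param2=30,
                               minRadius=20, maxRadius=150)
    if circles is None:
        return None                      # detection failed
    cx, cy, r = np.round(circles[0, 0]).astype(int)
    return (cx, cy), r                   # circle center and radius
```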
Figure 3. Definition of the local and global iris regions: (a) inner and outer local iris regions (two donut shapes between the three red circles with radii R1, R2, and R3), and (b) the entire iris region (rectangular box).
Figure 4. Normalization of the inner and outer iris regions: (a) normalization of an iris region from Cartesian to polar coordinates, (b) the normalized inner iris region of Figure 3a, and (c) the normalized outer iris region of Figure 3a.
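A minimal sketch of the Cartesian-to-polar unwrapping in Figure 4, assuming a shared circle center: sampling the band between R1 and R2 yields the normalized inner region, and the band between R2 and R3 yields the outer one (the output rectangle size is an assumption, not a value from the paper):

```python
import numpy as np

def unwrap_band(img, center, r_in, r_out, height=64, width=256):
    # Sample the donut between radii r_in and r_out (e.g., R1..R2 for the
    # inner local region, R2..R3 for the outer one) onto a polar rectangle.
    cx, cy = center
    thetas = np.linspace(0, 2 * np.pi, width, endpoint=False)
    radii = np.linspace(r_in, r_out, height)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]                   # (height, width) normalized band
```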
Figure 5. Example results of the Retinex method: (a) a normal-illumination gray iris image (leftmost) and its Retinex filtering results (center and rightmost), and (b) a low-illumination gray iris image (leftmost) and its Retinex filtering results (center and rightmost).
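The Retinex images in Figure 5 can be reproduced with classic single-scale Retinex, i.e., the log image minus the log of a Gaussian-smoothed illumination estimate; a minimal sketch follows (the normalization back to 8-bit is an implementation choice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(gray, sigma):
    # log(image) minus log of its Gaussian-blurred illumination estimate;
    # sigma in {10, 15, 20} corresponds to the channels in Figures 5 and 6.
    img = gray.astype(np.float64) + 1.0              # +1 avoids log(0)
    r = np.log(img) - np.log(gaussian_filter(img, sigma) + 1.0)
    r = (r - r.min()) / (np.ptp(r) + 1e-12)          # stretch to [0, 1]
    return (255.0 * r).astype(np.uint8)              # back to 8-bit
```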
Description of convolutional neural network (CNN) architecture used in our iPAD study.
| Operation Layer | Number of Filters | Size of Each Filter | Stride Value | Padding Value | Size of Output Image |
|---|---|---|---|---|---|
| Image input | - | - | - | - | 224 × 224 × 3 |
| Convolution | 64 | 3 × 3 × 3 | 1 × 1 | 1 × 1 | 224 × 224 × 64 |
| ReLU | - | - | - | - | 224 × 224 × 64 |
| Max pooling | 1 | 2 × 2 | 2 × 2 | 0 | 112 × 112 × 64 |
| Convolution | 128 | 3 × 3 × 64 | 1 × 1 | 1 × 1 | 112 × 112 × 128 |
| ReLU | - | - | - | - | 112 × 112 × 128 |
| Max pooling | 1 | 2 × 2 | 2 × 2 | 0 | 56 × 56 × 128 |
| Convolution | 256 | 3 × 3 × 128 | 1 × 1 | 1 × 1 | 56 × 56 × 256 |
| ReLU | - | - | - | - | 56 × 56 × 256 |
| Max pooling | 1 | 2 × 2 | 2 × 2 | 0 | 28 × 28 × 256 |
| Convolution | 512 | 3 × 3 × 256 | 1 × 1 | 1 × 1 | 28 × 28 × 512 |
| ReLU | - | - | - | - | 28 × 28 × 512 |
| Max pooling | 1 | 2 × 2 | 2 × 2 | 0 | 14 × 14 × 512 |
| Convolution | 512 | 3 × 3 × 512 | 1 × 1 | 1 × 1 | 14 × 14 × 512 |
| ReLU | - | - | - | - | 14 × 14 × 512 |
| Max pooling | 1 | 2 × 2 | 2 × 2 | 0 | 7 × 7 × 512 |
| Fully connected | - | - | - | - | 4096 |
| ReLU | - | - | - | - | 4096 |
| Dropout (rate = 0.5) | - | - | - | - | 4096 |
| Fully connected | - | - | - | - | 4096 |
| ReLU | - | - | - | - | 4096 |
| Dropout (rate = 0.5) | - | - | - | - | 4096 |
| Fully connected | - | - | - | - | 2 |
| Softmax | - | - | - | - | 2 |
| Classification | - | - | - | - | 2 (Real / Presentation Attack) |
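For reference, the table above can be transcribed into a VGG-style PyTorch model as follows; this is a sketch derived from the listed layer sizes, not the authors' released code:

```python
import torch.nn as nn

def make_ipad_cnn():
    # Five conv+ReLU+pool blocks (64, 128, 256, 512, 512 filters), then two
    # FC-4096 layers with dropout 0.5 and a 2-way output (real vs. attack),
    # matching the architecture table above for a 224 x 224 x 3 input.
    layers, in_ch = [], 3
    for out_ch in (64, 128, 256, 512, 512):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
                   nn.ReLU(inplace=True),
                   nn.MaxPool2d(kernel_size=2, stride=2)]
        in_ch = out_ch
    classifier = nn.Sequential(
        nn.Flatten(),                          # 512 x 7 x 7 -> 25088
        nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(4096, 2))                    # softmax applied by the loss
    return nn.Sequential(*layers, classifier)
```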
Figure 6. Demonstration of the three-channel input image to the CNN: (a) three-channel gray image, (b) three-channel Retinex image with sigma values of 10, 15, and 20, and (c) three-channel fusion of one gray and two Retinex images with sigma values of 10 and 15.
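Building option (c) of Figure 6 is then simple channel stacking, reusing the `single_scale_retinex` helper sketched above:

```python
import numpy as np

def three_channel_input(gray):
    # Gray + Retinex(sigma=10) + Retinex(sigma=15), stacked depth-wise into a
    # (H, W, 3) array matching option (c) in Figure 6.
    return np.dstack([gray,
                      single_scale_retinex(gray, 10),
                      single_scale_retinex(gray, 15)])
```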
Description of Warsaw-2017 and NDCLD-2015 datasets.
| Dataset | Number of Real Images | Number of Attack Images | Total | Image Data Collection Method |
|---|---|---|---|---|
| Warsaw-2017 | 5168 | 6845 | 12,013 | Recaptured printouts of iris patterns on paper |
| NDCLD-2015 | 4875 | 2425 | 7300 | Iris images captured while textured contact lenses are worn |
Description of Warsaw-2017 dataset in our experiment (with augmentation of training dataset).
| Dataset | Training: Real | Training: Attack | Training: Total | Test-Known: Real | Test-Known: Attack | Test-Known: Total | Test-Unknown: Real | Test-Unknown: Attack | Test-Unknown: Total |
|---|---|---|---|---|---|---|---|---|---|
| Original dataset | 1844 | 2669 | 4513 | 974 | 2016 | 2990 | 2350 | 2160 | 4510 |
| Augmented dataset | 27,660 (1844 × 15) | 24,021 (2669 × 9) | 51,681 | 974 | 2016 | 2990 | 2350 | 2160 | 4510 |
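The table reports 15× (real) and 9× (attack) augmentation multipliers, but the exact augmentation scheme is not restated here; small image translations are one common way to obtain such multipliers. A purely illustrative sketch producing nine shifted copies per image:

```python
import numpy as np

def shift_augment(img, max_shift=7):
    # Nine translated copies via a 3 x 3 grid of (dy, dx) offsets; max_shift
    # and the use of translation at all are assumptions for illustration.
    copies = []
    for dy in (-max_shift, 0, max_shift):
        for dx in (-max_shift, 0, max_shift):
            copies.append(np.roll(np.roll(img, dy, axis=0), dx, axis=1))
    return copies
```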
(a) Detection errors (attack presentation classification error rate (APCER), bona fide presentation classification error rate (BPCER), and average classification error rate (ACER)) of iPAD based on the CNN method for classification using the Warsaw-2017 dataset and three different kinds of input images (unit: %); (b) detection errors (APCER, BPCER, and ACER) of iPAD based on the SVM method for classification using the Warsaw-2017 dataset and three different kinds of input images (unit: %).
(a) CNN-based classification. Column groups give APCER / BPCER / ACER for three-channel gray images (Gray), three-channel Retinex images (Retinex), and three-channel fusion of gray and Retinex images (Fusion).

| Test Dataset | Approach | Gray APCER | Gray BPCER | Gray ACER | Retinex APCER | Retinex BPCER | Retinex ACER | Fusion APCER | Fusion BPCER | Fusion ACER |
|---|---|---|---|---|---|---|---|---|---|---|
| Test-known dataset | Using Inner Iris Region | 0.103 | 0.099 | 0.101 | 0.103 | 0.000 | 0.051 | 0.000 | 0.000 | 0.000 |
| | Using Outer Iris Region | 0.000 | 0.050 | 0.025 | 0.000 | 0.100 | 0.050 | 0.000 | 0.000 | 0.000 |
| | Using Entire Iris Region | 0.000 | 0.050 | 0.025 | 0.000 | 0.100 | 0.050 | 0.000 | 0.148 | 0.074 |
| Test-unknown dataset | Using Inner Iris Region | 0.170 | 0.278 | 0.224 | 1.021 | 1.482 | 1.251 | 2.128 | 0.092 | 1.110 |
| | Using Outer Iris Region | 5.617 | 0.046 | 2.832 | 1.830 | 3.750 | 2.790 | 15.106 | 0.694 | 7.900 |
| | Using Entire Iris Region | 0.298 | 0.324 | 0.311 | 0.894 | 0.556 | 0.725 | 0.638 | 0.602 | 0.620 |

(b) SVM-based classification (same column layout).

| Test Dataset | Approach | Gray APCER | Gray BPCER | Gray ACER | Retinex APCER | Retinex BPCER | Retinex ACER | Fusion APCER | Fusion BPCER | Fusion ACER |
|---|---|---|---|---|---|---|---|---|---|---|
| Test-known dataset | Using Inner Iris Region | 0.103 | 0.198 | 0.151 | 0.103 | 0.000 | 0.051 | 0.000 | 0.050 | 0.025 |
| | Using Outer Iris Region | 0.000 | 0.000 | 0.000 | 0.000 | 0.100 | 0.050 | 0.000 | 0.000 | 0.000 |
| | Using Entire Iris Region | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | Using Feature Level Fusion Approach | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | Using Score Level Fusion Approach | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| Test-unknown dataset | Using Inner Iris Region | 0.213 | 0.324 | 0.268 | 4.596 | 2.130 | 3.363 | 0.085 | 0.509 | 0.297 |
| | Using Outer Iris Region | 0.638 | 0.787 | 0.713 | 0.383 | 4.444 | 2.414 | 2.383 | 4.259 | 3.321 |
| | Using Entire Iris Region | 0.809 | 0.370 | 0.589 | 0.809 | 0.833 | 0.821 | 0.681 | 0.139 | 0.410 |
| | Using Feature Level Fusion Approach | 0.213 | 0.093 | 0.153 | 0.383 | 0.278 | 0.330 | 0.170 | 0.000 | 0.085 |
| | Using Score Level Fusion Approach | 0.128 | 0.046 | 0.087 | 0.213 | 0.232 | 0.222 | 0.000 | 0.046 | 0.023 |
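The APCER, BPCER, and ACER values in these tables follow the ISO/IEC 30107-3 definitions; a minimal sketch of how they are computed from classifier scores (the score sign convention and percentage scaling are assumptions):

```python
import numpy as np

def pad_metrics(scores, labels, threshold=0.5):
    # APCER: attack presentations wrongly accepted as bona fide.
    # BPCER: bona fide presentations wrongly rejected as attacks.
    # ACER:  average of the two. labels: 1 = real, 0 = attack; a higher
    # score is assumed to mean "more likely real".
    scores, labels = np.asarray(scores), np.asarray(labels)
    apcer = np.mean(scores[labels == 0] >= threshold)
    bpcer = np.mean(scores[labels == 1] < threshold)
    return 100 * apcer, 100 * bpcer, 100 * (apcer + bpcer) / 2
```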
Figure 7. Detection error trade-off (DET) curves of the iPAD systems, according to the best detection accuracy presented in Table 5b, using three-channel fusion of gray and Retinex images for iPAD.
Description of training and testing datasets with the NDCLD-2015 dataset using LivDet-Iris-2017 division method.
| Dataset | Training: Real | Training: Attack | Training: Total | Test-Known: Real | Test-Known: Attack | Test-Known: Total | Test-Unknown: Real | Test-Unknown: Attack | Test-Unknown: Total |
|---|---|---|---|---|---|---|---|---|---|
| Original NDCLD-2015 dataset | 600 | 600 | 1200 | 900 | 900 | 1800 | 900 | 900 | 1800 |
| Augmented dataset | 29,400 (600 × 49) | 29,400 (600 × 49) | 58,800 | 900 | 900 | 1800 | 900 | 900 | 1800 |
(a) Detection errors (APCER, BPCER, and ACER) of iPAD based on the CNN method for classification using the NDCLD-2015 dataset with the LivDet-Iris-2017 division method and three kinds of input images (unit: %); (b) detection errors (APCER, BPCER, and ACER) of iPAD based on the SVM method for classification using the NDCLD-2015 dataset with the LivDet-Iris-2017 division method and three kinds of input images (unit: %).
(a) CNN-based classification. Column groups give APCER / BPCER / ACER for three-channel gray images (Gray), three-channel Retinex images (Retinex), and three-channel fusion of gray and Retinex images (Fusion).

| Test Dataset | Approach | Gray APCER | Gray BPCER | Gray ACER | Retinex APCER | Retinex BPCER | Retinex ACER | Fusion APCER | Fusion BPCER | Fusion ACER |
|---|---|---|---|---|---|---|---|---|---|---|
| Test-known dataset | Using Inner Iris Region | 0.056 | 0.389 | 0.222 | 0.167 | 0.333 | 0.250 | 0.167 | 0.278 | 0.222 |
| | Using Outer Iris Region | 0.000 | 0.278 | 0.139 | 0.056 | 0.111 | 0.083 | 0.000 | 0.222 | 0.111 |
| | Using Entire Iris Region | 0.000 | 0.278 | 0.139 | 0.000 | 0.167 | 0.083 | 0.056 | 0.056 | 0.056 |
| Test-unknown dataset | Using Inner Iris Region | 1.278 | 11.889 | 6.583 | 0.444 | 11.722 | 6.083 | 0.333 | 13.278 | 6.806 |
| | Using Outer Iris Region | 0.056 | 32.222 | 16.139 | 0.278 | 24.944 | 12.611 | 0.222 | 23.889 | 12.056 |
| | Using Entire Iris Region | 0.389 | 11.722 | 6.056 | 0.222 | 10.556 | 5.389 | 0.222 | 13.611 | 6.917 |

(b) SVM-based classification (same column layout).

| Test Dataset | Approach | Gray APCER | Gray BPCER | Gray ACER | Retinex APCER | Retinex BPCER | Retinex ACER | Fusion APCER | Fusion BPCER | Fusion ACER |
|---|---|---|---|---|---|---|---|---|---|---|
| Test-known dataset | Using Inner Iris Region | 0.167 | 0.111 | 0.139 | 0.056 | 0.389 | 0.222 | 0.167 | 0.111 | 0.139 |
| | Using Outer Iris Region | 0.000 | 0.278 | 0.139 | 0.222 | 0.000 | 0.111 | 0.000 | 0.167 | 0.083 |
| | Using Entire Iris Region | 0.000 | 0.278 | 0.139 | 0.111 | 0.000 | 0.056 | 0.000 | 0.111 | 0.056 |
| | Using Feature Level Fusion Approach | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | Using Score Level Fusion Approach | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| Test-unknown dataset | Using Inner Iris Region | 2.167 | 8.556 | 5.361 | 2.278 | 3.500 | 2.889 | 2.722 | 3.278 | 3.000 |
| | Using Outer Iris Region | 3.611 | 10.389 | 7.000 | 5.167 | 5.500 | 5.333 | 5.611 | 7.667 | 6.639 |
| | Using Entire Iris Region | 1.333 | 2.389 | 1.861 | 1.556 | 2.833 | 2.194 | 1.389 | 2.111 | 1.750 |
| | Using Feature Level Fusion Approach | 0.778 | 2.667 | 1.722 | 0.333 | 0.889 | 0.611 | 0.333 | 0.833 | 0.583 |
| | Using Score Level Fusion Approach | 1.722 | 1.833 | 1.778 | 0.944 | 0.833 | 0.889 | 0.556 | 1.000 | 0.778 |
Figure 8. DET curves of the iPAD systems according to the best detection accuracy in Table 7b, using three-channel fusion of gray and Retinex images for iPAD.
(a) Detection errors (APCER, BPCER, and ACER) of iPAD based on CNN method for classification using NDCLD-2015 dataset with our first division method and three kinds of input images (unit: %); (b) Detection errors (APCER, BPCER, and ACER) of iPAD based on SVM method for classification using NDCLD-2015 dataset with our first division method and three kinds of input images (unit: %).
(a) CNN-based classification. Column groups give APCER / BPCER / ACER for three-channel gray images (Gray), three-channel Retinex images (Retinex), and three-channel fusion of gray and Retinex images (Fusion).

| Test Dataset | Approach | Gray APCER | Gray BPCER | Gray ACER | Retinex APCER | Retinex BPCER | Retinex ACER | Fusion APCER | Fusion BPCER | Fusion ACER |
|---|---|---|---|---|---|---|---|---|---|---|
| Test-known dataset | Using Inner Iris Region | 0.389 | 0.056 | 0.222 | 0.111 | 0.389 | 0.250 | 0.278 | 0.389 | 0.333 |
| | Using Outer Iris Region | 0.000 | 0.167 | 0.083 | 0.000 | 0.056 | 0.028 | 0.000 | 0.056 | 0.028 |
| | Using Entire Iris Region | 0.000 | 0.056 | 0.028 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| Test-unknown dataset | Using Inner Iris Region | 1.278 | 9.778 | 5.528 | 0.389 | 10.778 | 5.583 | 0.889 | 10.667 | 5.778 |
| | Using Outer Iris Region | 0.111 | 36.611 | 18.361 | 0.111 | 24.944 | 12.528 | 0.278 | 31.389 | 15.833 |
| | Using Entire Iris Region | 0.111 | 24.667 | 12.389 | 0.278 | 19.444 | 9.861 | 0.556 | 12.944 | 6.750 |

(b) SVM-based classification (same column layout).

| Test Dataset | Approach | Gray APCER | Gray BPCER | Gray ACER | Retinex APCER | Retinex BPCER | Retinex ACER | Fusion APCER | Fusion BPCER | Fusion ACER |
|---|---|---|---|---|---|---|---|---|---|---|
| Test-known dataset | Using Inner Iris Region | 0.111 | 0.444 | 0.278 | 0.000 | 0.056 | 0.028 | 0.222 | 0.278 | 0.250 |
| | Using Outer Iris Region | 0.000 | 0.167 | 0.083 | 0.000 | 0.000 | 0.000 | 0.056 | 0.000 | 0.028 |
| | Using Entire Iris Region | 0.000 | 0.000 | 0.000 | 0.000 | 0.056 | 0.028 | 0.000 | 0.000 | 0.000 |
| | Using Feature Level Fusion Approach | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | Using Score Level Fusion Approach | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| Test-unknown dataset | Using Inner Iris Region | 3.167 | 4.944 | 4.056 | 2.667 | 2.833 | 2.750 | 2.278 | 3.889 | 3.083 |
| | Using Outer Iris Region | 2.778 | 14.000 | 8.389 | 2.444 | 7.333 | 4.889 | 3.833 | 7.556 | 5.694 |
| | Using Entire Iris Region | 1.944 | 3.389 | 2.667 | 2.000 | 4.333 | 3.167 | 1.333 | 2.278 | 1.806 |
| | Using Feature Level Fusion Approach | 1.222 | 1.778 | 1.500 | 0.389 | 0.611 | 0.500 | 1.056 | 0.833 | 0.944 |
| | Using Score Level Fusion Approach | 1.556 | 2.167 | 1.861 | 1.167 | 0.833 | 1.000 | 0.722 | 0.778 | 0.750 |
Description of training and testing datasets of NDCLD-2015 dataset using our second division method.
| Dataset | Training: Real | Training: Attack | Training: Total | Testing: Real | Testing: Attack | Testing: Total |
|---|---|---|---|---|---|---|
| Original entire NDCLD-2015 (1st fold) | 2340 | 1068 | 3408 | 2535 | 1357 | 3892 |
| Augmented dataset (1st fold) | 28,080 (2340 × 12) | 26,700 (1068 × 25) | 54,780 | 2535 | 1357 | 3892 |
| Original entire NDCLD-2015 (2nd fold) | 2535 | 1357 | 3892 | 2340 | 1068 | 3408 |
| Augmented dataset (2nd fold) | 30,420 (2535 × 12) | 33,925 (1357 × 25) | 64,345 | 2340 | 1068 | 3408 |
(a) Detection errors (APCER, BPCER, and ACER) of iPAD based on the CNN method for classification using the NDCLD-2015 dataset with our second division method and three kinds of input images (unit: %); (b) detection errors (APCER, BPCER, and ACER) of iPAD based on the SVM method for classification using the NDCLD-2015 dataset with our second division method and three kinds of input images (unit: %).
(a) CNN-based classification. Column groups give APCER / BPCER / ACER for three-channel gray images (Gray), three-channel Retinex images (Retinex), and three-channel fusion of gray and Retinex images (Fusion).

| Approach | Gray APCER | Gray BPCER | Gray ACER | Retinex APCER | Retinex BPCER | Retinex ACER | Fusion APCER | Fusion BPCER | Fusion ACER |
|---|---|---|---|---|---|---|---|---|---|
| Using Inner Iris Region | 4.088 | 31.212 | 17.650 | 3.322 | 35.895 | 19.608 | 3.831 | 34.090 | 18.961 |
| Using Outer Iris Region | 1.851 | 3.502 | 2.676 | 1.921 | 2.766 | 2.344 | 1.767 | 3.461 | 2.614 |
| Using Entire Iris Region | 1.606 | 6.120 | 3.863 | 1.501 | 7.845 | 4.673 | 1.522 | 4.418 | 2.970 |

(b) SVM-based classification (same column layout).

| Approach | Gray APCER | Gray BPCER | Gray ACER | Retinex APCER | Retinex BPCER | Retinex ACER | Fusion APCER | Fusion BPCER | Fusion ACER |
|---|---|---|---|---|---|---|---|---|---|
| Using Inner Iris Region | 6.581 | 13.810 | 10.195 | 6.003 | 25.649 | 15.826 | 5.360 | 19.749 | 12.555 |
| Using Outer Iris Region | 2.581 | 1.666 | 2.123 | 2.175 | 0.883 | 1.529 | 2.180 | 1.706 | 1.943 |
| Using Entire Iris Region | 1.907 | 1.204 | 1.555 | 1.898 | 1.646 | 1.772 | 2.079 | 0.596 | 1.337 |
| Using Feature Level Fusion Approach | 1.481 | 0.823 | 1.152 | 1.777 | 0.140 | 0.959 | 1.649 | 0.281 | 0.965 |
| Using Score Level Fusion Approach | 1.731 | 0.599 | 1.165 | 1.884 | 0.094 | 0.989 | 1.800 | 0.214 | 1.007 |
Description of training and testing datasets of fusion of Warsaw-2017 and NDCLD-2015 datasets.
| Training: Warsaw-2017 | Training: NDCLD-2015 | Training: Total | Test-Known: Warsaw-2017 | Test-Known: NDCLD-2015 | Test-Known: Total | Test-Unknown: Warsaw-2017 | Test-Unknown: NDCLD-2015 | Test-Unknown: Total |
|---|---|---|---|---|---|---|---|---|
| 51,681 | 58,800 | 110,481 | 2990 | 1800 | 4790 | 4510 | 1800 | 6310 |
(a) Detection errors (APCER, BPCER, and ACER) of iPAD based on CNN method for classification using fusion of Warsaw-2017 and NDCLD-2015 datasets and three kinds of input images (unit: %); (b) Detection errors (APCER, BPCER, and ACER) of iPAD based on SVM method for classification using fusion of Warsaw-2017 and NDCLD-2015 datasets and three kinds of input images (unit: %).
(a) CNN-based classification. Column groups give APCER / BPCER / ACER for three-channel gray images (Gray), three-channel Retinex images (Retinex), and three-channel fusion of gray and Retinex images (Fusion).

| Test Dataset | Approach | Gray APCER | Gray BPCER | Gray ACER | Retinex APCER | Retinex BPCER | Retinex ACER | Fusion APCER | Fusion BPCER | Fusion ACER |
|---|---|---|---|---|---|---|---|---|---|---|
| Test-known dataset | Using Inner Iris Region | 0.160 | 0.034 | 0.097 | 0.053 | 0.206 | 0.130 | 0.000 | 0.171 | 0.085 |
| | Using Outer Iris Region | 0.053 | 0.034 | 0.044 | 0.053 | 0.069 | 0.061 | 0.053 | 0.034 | 0.044 |
| | Using Entire Iris Region | 0.000 | 0.034 | 0.017 | 0.107 | 0.034 | 0.071 | 0.053 | 0.034 | 0.044 |
| Test-unknown dataset | Using Inner Iris Region | 0.585 | 4.020 | 2.302 | 2.062 | 4.575 | 3.318 | 1.292 | 4.412 | 2.852 |
| | Using Outer Iris Region | 3.692 | 14.183 | 8.934 | 3.292 | 10.458 | 6.875 | 5.108 | 11.765 | 8.436 |
| | Using Entire Iris Region | 0.923 | 2.386 | 1.654 | 0.800 | 3.726 | 2.263 | 0.431 | 5.621 | 3.026 |

(b) SVM-based classification (same column layout).

| Test Dataset | Approach | Gray APCER | Gray BPCER | Gray ACER | Retinex APCER | Retinex BPCER | Retinex ACER | Fusion APCER | Fusion BPCER | Fusion ACER |
|---|---|---|---|---|---|---|---|---|---|---|
| Test-known dataset | Using Inner Iris Region | 0.053 | 0.034 | 0.044 | 0.267 | 0.343 | 0.305 | 0.000 | 0.172 | 0.086 |
| | Using Outer Iris Region | 0.000 | 0.069 | 0.034 | 0.053 | 0.000 | 0.027 | 0.053 | 0.000 | 0.027 |
| | Using Entire Iris Region | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | Using Feature Level Fusion Approach | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | Using Score Level Fusion Approach | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| Test-unknown dataset | Using Inner Iris Region | 0.339 | 4.935 | 2.637 | 3.877 | 3.595 | 3.736 | 2.339 | 3.105 | 2.722 |
| | Using Outer Iris Region | 4.246 | 9.510 | 6.878 | 4.246 | 7.353 | 5.800 | 4.831 | 7.811 | 6.321 |
| | Using Entire Iris Region | 1.662 | 1.536 | 1.599 | 2.154 | 1.144 | 1.649 | 1.815 | 2.222 | 2.019 |
| | Using Feature Level Fusion Approach | 1.231 | 1.438 | 1.334 | 1.200 | 1.111 | 1.156 | 0.862 | 0.556 | 0.709 |
| | Using Score Level Fusion Approach | 0.400 | 2.386 | 1.393 | 1.015 | 2.712 | 1.864 | 1.354 | 2.418 | 1.886 |
Figure 9. DET curves of the iPAD systems according to the best detection accuracy presented in Table 12b, using three-channel fusion of gray and Retinex images for iPAD.
The processing time of our proposed iPAD method (unit: ms).
| Pupil and Iris Boundary Detection | Inner and Outer Region Image Extraction | Retinex Filtering | Deep Feature Extraction | Feature Selection by PCA | Classification by SVM | Total |
|---|---|---|---|---|---|---|
| 22.500 | 3.776 | 0.011 | 58.615 | 0.0001 | 0.00002 | 84.90212 |
Comparison of detection errors (APCER, BPCER, and ACER) between the proposed method and previous methods using the Warsaw-2017 and NDCLD-2015 datasets (unit: %).
| Method | Warsaw-2017 APCER | Warsaw-2017 BPCER | Warsaw-2017 ACER | NDCLD-2015 APCER | NDCLD-2015 BPCER | NDCLD-2015 ACER |
|---|---|---|---|---|---|---|
| CASIA method […] | 3.40 | 8.60 | 6.00 | 11.33 | 7.56 | 9.45 |
| Anon1 method […] | 6.11 | 5.51 | 5.81 | 7.78 | 0.28 | 4.03 |
| UNINA method […] | 0.05 | 14.77 | 7.41 | 25.44 | 0.33 | 12.89 |
| CNN-based method […] | 0.198 | 0.327 | 0.263 | 1.250 | 5.945 | 3.598 |
| MLBP-based method […] | 0.154 | 0.285 | 0.224 | 4.056 | 7.806 | 5.931 |
| Feature Level Fusion of CNN and MLBP Features […] | 0.154 | 0.131 | 0.142 | 1.167 | 3.028 | 2.098 |
| Score Level Fusion of CNN and MLBP Features […] | 0.000 | 0.032 | 0.016 | 1.389 | 4.500 | 2.945 |
| Our proposed method | 0.000 | 0.032 | 0.016 | 0.167 | 0.417 | 0.292 |