Diaa Salama AbdELminaam1,2, Abdulrhman M Almansori1, Mohamed Taha3, Elsayed Badr4,5. 1. Department of Information Systems, Faculty of Computers and Artificial Intelligence, Benha University, Benha City, Egypt. 2. Department of Computer Science, Faculty of Computers and Informatics, Misr International University, Cairo, Egypt. 3. Department of Computer Science, Faculty of Computers and Artificial Intelligence, Benha University, Benha City, Egypt. 4. Department of Scientific Computing, Faculty of Computers and Artificial Intelligence, Benha University, Benha City, Egypt. 5. Department of Computer Science, Higher Technological Institute, 10th of Ramadan City, Egypt.
Abstract
The development of biometric applications, such as facial recognition (FR), has recently become important in smart cities. Many scientists and engineers around the world have focused on establishing increasingly robust and accurate algorithms and methods for these types of systems and their applications in everyday life. FR is a developing technology with multiple real-time applications. The goal of this paper is to develop a complete FR system using transfer learning in fog computing and cloud computing. The developed system uses deep convolutional neural networks (DCNN) because of their dominant representational power; however, conditions including occlusion, expression, illumination, and pose can affect deep FR performance. The DCNN is used to extract relevant facial features, which allow faces to be compared efficiently. The system can be trained to recognize a set of people and to learn via an online method, integrating the new people it processes and improving its predictions on the ones it already knows. The proposed recognition method was tested with three standard machine learning algorithms (Decision Tree (DT), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM)). The proposed system has been evaluated using three datasets of face images (SDUMLA-HMT, 113, and CASIA) via the performance metrics of accuracy, precision, sensitivity, specificity, and time. The experimental results show that the proposed method achieves superiority over the other algorithms according to all parameters. The suggested algorithm results in higher accuracy (99.06%), higher precision (99.12%), higher recall (99.07%), and higher specificity (99.10%) than the comparison algorithms.
1. Introduction
The face is considered the most critical part of the human body. Research shows that even a face can speak, conveying different expressions for different emotions. It plays a crucial role in interacting with people in society. It conveys people's identity and thus can be used as a key for security solutions in many organizations. The facial recognition (FR) system is increasingly trending across the world as an extraordinarily safe and reliable security technology. It is gaining significant importance and attention from thousands of corporate and government organizations because of its high level of security and reliability [1-3]. Moreover, the FR system provides vast benefits compared to other biometric security solutions such as palmprints and fingerprints. The system captures the biometric measurements of a person from a specific distance without interacting with the person. In crime-deterrent applications, this system can help many organizations identify a person who has any kind of criminal record or other legal issues. Thus, this technology is becoming essential for numerous residential buildings and corporate organizations. The technique is based on the ability to recognize a human face and then compare its different features with previously recorded faces. This capability also increases the importance of the system and enables it to be widely used across the world. It is developed with user-friendly features and operations that involve different nodal points of the face. There are approximately 80 to 90 unique nodal points on a face. From these nodal points, the FR system measures significant aspects including the distance between the eyes, the length of the jawline, the shape of the cheekbones, and the depth of the eyes. These measurements are encoded as a code called the faceprint, which represents the identity of the face in the computer database.
With the introduction of the latest technology, systems based on 2D graphics are now available in 3D, which makes them more accurate and more reliable. Biometrics is defined as the science and technology of measuring and statistically analyzing biological data. Biometrics are measurable behavioral and/or physiological characteristics that can be used to verify individual identity. For each individual, a unique biometric can be used for verification. Biometric systems are used in increasingly many fields, such as prison security, secured access, and forensics. Biometric systems recognize individuals for authentication by utilizing different biological features such as the face, hand geometry, iris, retina, and fingerprints. The FR system is a more natural biometric information process and tolerates more variation than any other method. Thus, FR has become a prominent topic in computer science related to biometrics and machine learning [4, 5]. Machine learning is a computer science field that gives computers the capability to learn without explicit programming. The main focus of machine learning is providing algorithms that are trained to perform a task; machine learning is related to the fields of computational statistics and mathematical optimization. Machine learning includes multiple methods such as reinforcement learning, supervised learning, semi-supervised learning, and unsupervised learning [6]. Machine learning can be applied to many tasks that people think only they can do, such as playing games, learning subjects, and recognition [6].
Most machine learning algorithms consume a massive amount of resources, so it is better to perform their tasks in a distributed environment such as cloud computing, fog computing, or edge computing. Cloud computing is based on the sharing of many resources, including services, applications, storage, servers, and networks, to achieve economies of scale and consistency, thereby maximizing the efficiency of the shared resources. Fog computing provides many services at the network edge, such as data storage, computing, data provision, and application services for end users connected to the network edge [7]. These environments reduce the total amount of resource usage, speed up the completion of tasks, and reduce costs via pay-per-use. The main goals of this paper are to build a deep FR system using transfer learning in fog computing. This system is based on modern techniques of deep convolutional neural networks (DCNN) and machine learning. The proposed methods are able to capture the biometric measurements of a person from a specific distance, for crime-deterrent purposes, without interacting with the person. Thus, the proposed methods can help many organizations identify a person with any kind of criminal record or other legal issues. The remainder of the paper is organized as follows. Section 2 presents related work on FR techniques and applications. Section 3 presents the components of traditional FR: face processing, deep feature extraction, face matching by deep features, machine learning, K-nearest neighbors (KNN), support vector machines (SVM), DCNN, the computing framework, fog computing, and cloud computing. Section 4 explains the proposed FR system using transfer learning in fog computing. Section 5 presents the experimental results. Section 6 provides the conclusion with the outcomes of the proposed system.
2. Literature review
Due to the significant development of machine learning, computing environments, and recognition systems, many researchers have worked on pattern recognition and identification via different biometrics using various model-building strategies. Some recent works on FR systems are surveyed here in brief. Singh, D et al. [8] proposed a COVID-19 disease classification model to classify infected patients from chest CT images. A convolutional neural network (CNN) is used to classify COVID-19 patients as infected (+ve) or not (−ve). Additionally, the initial parameters of the CNN are tuned using multi-objective differential evolution (MODE). The results show that the proposed CNN model outperforms competitive models, i.e., ANN, ANFIS, and CNN models, in terms of accuracy, F-measure, sensitivity, specificity, and Kappa statistics by 1.9789%, 2.0928%, 1.8262%, 1.6827%, and 1.9276%, respectively. Schiller, D et al. [9] proposed a novel transfer learning approach for automatic emotion recognition (AER) across various modalities. The proposed model, used for facial expression recognition, utilizes saliency maps to transfer knowledge from an arbitrary source to a target network by mostly "hiding" non-relevant information. The method is independent of the employed model since the knowledge is transferred solely via augmentation of the input data. The evaluation showed that the new model was able to adapt to the new domain faster when forced to focus on the parts of the input considered relevant. Prakash, R et al. [10] proposed an automated face recognition method using a convolutional neural network (CNN) with a transfer learning approach. The CNN is initialized with weights learned from the pre-trained VGG-16 model. The extracted features are fed to a fully connected layer with softmax activation for classification.
Two publicly available databases of face images, Yale and AT&T, are used to test the performance of the proposed method. Face recognition accuracy of 100% is achieved for the AT&T database and 96.5% for the Yale database. The results show that face recognition using a CNN with transfer learning gives better classification accuracy than the PCA method. Deng et al. [11] proposed an additive angular margin loss (ArcFace) for face recognition. ArcFace has a clear geometric interpretation because of its exact correspondence to geodesic distance on a hypersphere. They also presented a broad experimental evaluation against FR methods on ten FR datasets and showed that ArcFace consistently beats the state of the art and can be implemented with negligible computational overhead. The verification performance of open-sourced FR models on the LFW, CALFW, and CPLFW datasets reached 99.82%, 95.45%, and 92.08%, respectively [11]. Wang et al. [12] proposed a large margin cosine loss (LMCL) by reformulating the SoftMax loss as a cosine loss: L2-normalizing both the features and the weight vectors removes radial variations, and a cosine margin term maximizes the decision margin in angular space. They achieved maximal between-class variance and minimal intra-class variance via cosine decision-margin maximization and normalization. They referred to their model, trained with LMCL, as CosFace. They based their experiments on the Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge datasets and confirmed the efficiency of their approach, achieving 99.33%, 96.1%, 77.11%, and 89.88% accuracy on the LFW, YTF, MF1 Rank1, and MF1 Veri benchmarks, respectively [12]. Tran et al. [13] proposed a disentangled representation learning-generative adversarial network (DR-GAN) with three distinct developments.
First, the encoder-decoder structure of the generator allows DR-GAN to learn a representation that is both discriminative and generative, including image synthesis. Second, the representation is disentangled from other face variations, for example, through the pose code given to the decoder and pose estimation in the discriminator. Third, DR-GAN can accept one or multiple images as input and produce one unified representation along with an arbitrary number of synthesized images. They tested their network using the Multi-PIE database. They compared their method with face recognition techniques on Multi-PIE, CFP, and IJB-A and achieved average face verification accuracy with greater than tenfold standard deviation. They accomplished comparable performance on frontal-frontal verification, with an improvement of approximately 1.4% on frontal-profile verification [13]. Masi et al. [14] proposed domain-specific data augmentation to increase training data sizes for face recognition systems. They presented techniques to enrich realistic datasets with important facial variations by manipulating the faces in the datasets while matching query images posed by standard convolutional neural networks. They tested their framework against the LFW and IJB-A benchmarks and Janus CS2 on a large number of downloaded images. They followed the standard protocol for unrestricted, labeled outside data and reported a mean classification accuracy of 100% equal error rate [14]. Ding and Tao [15] proposed a comprehensive framework based on convolutional neural networks (CNN) to overcome the difficulties faced in video-based face recognition (VFR). The CNN learns blur-robust features by using training data comprising artificially blurred data and still images. They proposed a trunk-branch ensemble CNN model (TBE-CNN) to make CNN features robust to pose variations and occlusions. TBE-CNN extracts information from whole face images and regions chosen around facial components.
TBE-CNN shares the middle- and low-level convolutional layers between the branch and trunk networks. They proposed an improved triplet loss function to enhance the discriminative power of the representations learned by TBE-CNN. TBE-CNN was tested on three video face databases: YouTube Faces, COX Face, and PaSC [15]. Al-Waisy et al. [16] proposed a multimodal deep learning system that depends on local feature representation for face recognition. They combined the advantages of local handcrafted feature descriptors with a DBN to address face recognition in unconstrained conditions. They proposed a multimodal local feature extraction approach based on combining the advantages of the fractal dimension with the curvelet transform, which they called the curvelet–fractal approach. The main motivation of this approach is that the curvelet transform can capture the fundamental facial structure, while the fractal dimension captures the texture descriptors of face images. They also proposed a multimodal deep face recognition (MDFR) approach that adds feature representation by training a DBN on local feature representations. They compared the outcomes of the proposed MDFR approach with the curvelet–fractal approach on four face datasets: the LFW, CAS-PEAL-R1, FERET, and SDUMLA-HMT databases. The results of their approaches outperformed other methodologies, including WPCA, DBN, and LBP, achieving new results on the four datasets [16]. Sivalingam et al. [17] proposed an efficient partial face detection strategy using an AlexNet CNN to detect emotions from images of half-faces. They identified the key points and concentrated on textural features. They proposed an AlexNet CNN strategy to discriminatively match the two extracted local features, using both the textural and geometrical information of local features for matching.
The similarity of two faces was determined by the distance between the aligned features. They tested their approach on four widely used face datasets and demonstrated the effectiveness and limitations of their proposed method [17]. Jonnathann et al. [18] presented a comparison between deep learning and conventional machine learning methods (for example, artificial neural networks, extreme learning machines, SVM, optimum-path forest, and KNN). For facial biometric recognition, they concentrated on CNNs. They used three datasets: AR Face, YALE, and SDUMLA-HMT [19]. Further research on FR can be found in [20-23].
Ethics Statement
All participants provided written informed consent and an appropriate photographic release; the individuals shown consented to the publication of their images.
3. Material and methods
3.1 Traditional facial recognition components
The whole system comprises three modules, as shown in Fig 1.
Fig 1
Deep FR system with face detector and alignment.
In the beginning, the face detector is applied to videos or images to detect faces. The landmark detector then aligns each face so that it is normalized and can be recognized with the best match. Finally, the aligned face images are fed into the FR module. Before an image is input into the FR module, it is screened with face anti-spoofing, after which recognition is performed. Fig 1 illustrates the modus operandi of the FR module, where the face is first detected and deep features are then evaluated based on their conformity with the face via the following equation:

M[F(P(I1)), F(P(I2))]

where M indicates the face matching algorithm, which is used to calculate the degree of similarity; F refers to the feature extraction encoding identity information; P is the face-processing stage handling occlusions, expressions, illuminations, and pose; and I1 and I2 are the two face images being compared.
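The matching step can be sketched in code. The following is a minimal illustration assuming the deep features F(P(I)) have already been extracted by the DCNN; the function names and the 0.5 threshold are illustrative, not part of the described system.

```python
import math

def cosine_similarity(f1, f2):
    # Degree of similarity M between two deep feature vectors.
    dot = sum(a * b for a, b in zip(f1, f2))
    norm1 = math.sqrt(sum(a * a for a in f1))
    norm2 = math.sqrt(sum(b * b for b in f2))
    return dot / (norm1 * norm2)

def faces_match(f1, f2, threshold=0.5):
    # Two faces are declared the same identity when the similarity
    # of their deep features exceeds the (illustrative) threshold.
    return cosine_similarity(f1, f2) >= threshold
```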
3.1.1 Face processing
Deep learning approaches are commonly used because of their dominant representational power; Ghazi and Ekenel [24] showed that conditions including occlusions, expressions, illuminations, and pose can affect deep FR performance. One of the main challenges in FR applications is handling variation; in this paper, we summarize the deep face-processing methods for pose, as similar techniques can address the other variations. The face-processing techniques are categorized as "one-to-many augmentation" and "many-to-one normalization" [24].
"One-to-many augmentation": many images with pose variations are generated from a single image, which helps deep networks learn pose-robust representations.
"Many-to-one normalization": the canonical view of a face image is recovered from non-frontal-view images, after which FR is performed as under controlled conditions.
3.1.2 Deep feature extraction: Network architecture
The network architectures used for deep feature extraction are inspired by the success of ImageNet [25] and typical CNN architectures such as SENet, ResNet, GoogleNet, and VGGNet, which serve as baseline models in FR as full or partial implementations [26-30]. In addition to the mainstream methods, novel architecture designs are still explored in FR to improve efficiency. Additionally, with backbone networks as basic blocks, FR methods can be implemented as assembled networks, possibly with multiple tasks or multiple inputs, where each network is related to one type of input or one type of task; higher performance is attained after the results of the assembled networks are combined [30].
Loss function. SoftMax loss is used as a supervision signal, and it improves the variation in the features. For FR, where intra-class variations may be larger than inter-class variations, SoftMax loss loses its effectiveness. The main alternatives are:
Euclidean-distance-based loss: intra-class variance is compressed and inter-class variance enlarged based on the Euclidean distance.
Angular/cosine-margin-based loss: discriminative learning of facial features is performed according to angular similarity, enforcing large angular/cosine separability between the learned features.
SoftMax loss and its variations: performance is enhanced by using SoftMax loss or a modification of it.
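As a concrete illustration of the angular/cosine-margin idea, the sketch below applies an ArcFace-style additive angular margin to a target-class cosine score. The scale s = 64 and margin m = 0.5 are common choices in the literature, not values taken from this paper, and the function name is ours.

```python
import math

def margin_logit(cos_theta, margin=0.5, scale=64.0):
    # Recover the angle theta from its cosine, add the margin m for
    # the target class only, then rescale: s * cos(theta + m).
    # A larger m forces larger angular separability between classes.
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    return scale * math.cos(theta + margin)
```

At training time this modified logit replaces the target-class entry before the SoftMax, penalizing small angular margins; non-target logits are left as s·cos θ.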
3.1.3 Face matching by deep features
After training the deep networks on massive data with an appropriate loss function, the deep feature representation is obtained by passing each test image through the networks. The L2 distance or cosine distance is most commonly used to compute feature similarity; for identification and verification tasks, nearest neighbor (NN) search and threshold comparison are used, respectively. Many other methods are used to process the deep features and compute facial matching with high accuracy, such as the sparse representation-based classifier (SRC) and metric learning. FR is a well-developed form of object classification, and face-processing methods can also handle variations in pose, expression, and occlusion. There are many new, more complicated kinds of FR related to conditions present in the real world, such as cross-pose FR, cross-age FR, and video FR. Sometimes, more realistic datasets are constructed to simulate scenes from reality.
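For the identification task, the nearest-neighbor comparison described above can be sketched as follows; the gallery structure and all names are illustrative assumptions, not code from the paper.

```python
def identify(probe_feature, gallery):
    # gallery: dict mapping identity name -> enrolled deep feature vector.
    # Returns the identity whose feature is nearest under L2 distance.
    def l2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(gallery, key=lambda name: l2(gallery[name], probe_feature))
```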
3.2 Machine learning
Machine learning developed from computational learning theory and pattern recognition. A learning algorithm uses a set of samples, called a training set, as input. In general, there are two main categories of learning: supervised and unsupervised. The objective of supervised learning is to learn to predict the proper output vector for any input vector. Classification tasks are applications in which the target label is one of a finite number of discrete categories. Defining the unsupervised learning objective is more challenging; a primary objective is to find sensible clusters of similar samples within the input data, called clustering.
3.2.1 K-nearest neighbors
In KNN, the label of any new data point is determined by finding the K training points closest to it in feature space. A distance measure such as the Euclidean distance, L1 norm, angle, Mahalanobis distance, or Hamming distance is used to find the K nearest neighbors of the new data point. For problem formulation, we represent the new data point (input vector) as x, its K nearest neighbors as Nk(x), the class label predicted for x as y, and a class variable as a discrete random variable t. Moreover, 1(s) denotes the indicator function: if s is true, 1(s) = 1; otherwise, 1(s) = 0. The classification task takes the form

y = argmax_c Σ_{xi ∈ Nk(x)} 1(ti = c)

KNN must store the entire training set, and this is one of the limitations that make KNN challenging to apply to large datasets.
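The majority-vote rule above can be sketched in a few lines; this is an illustrative implementation using the Euclidean distance (any of the metrics mentioned in the text could be swapped in), not code from the paper.

```python
from collections import Counter

def knn_predict(x, training_set, k=3):
    # training_set: list of (feature_vector, label) pairs.
    def euclidean(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    # N_k(x): the k training points nearest to x.
    neighbours = sorted(training_set, key=lambda s: euclidean(s[0], x))[:k]
    # y = argmax_c sum over N_k(x) of 1(t_i = c): a majority vote.
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```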
3.2.2 Support vector machine
SVMs are non-probabilistic binary classifiers that aim to find the hyperplane separating the two classes of the training set with the maximum margin. The predicted label of a new data point is determined by the side of the hyperplane on which it falls [31]. We begin with the linear SVM, which finds a hyperplane that is a linear function of the input variables. For problem formulation, we denote the normal vector of the hyperplane as w and the parameter controlling the offset of the hyperplane from the origin along w as b. Moreover, so that SVMs can handle outliers in the data, we introduce a slack variable ξi for every training point xi, giving the distance by which this training point violates the margin in units of |w|. The binary linear classification task is defined in the following form:

minimize over w, b, ξ: (1/2)||w||² + C Σ_{i=1}^{n} ξi
subject to yi(w·xi + b) ≥ 1 − ξi and ξi ≥ 0 for i = 1, …, n,

where the parameter C > 0 indicates how heavily a violation is punished [32, 33]. Although we use the L1 norm for the penalty term Σ_{i=1}^{n} ξi, other penalty terms exist, such as the L2 norm, and should be chosen with respect to the needs of the application. Moreover, C is a hyperparameter that can be chosen via cross-validation or Bayesian optimization. An important property of the SVM is that the resulting classifier uses only a few of the training points, known as support vectors, to classify a new data point. SVMs can perform nonlinear classification, finding a hyperplane that is a nonlinear function of the input variables by mapping the input to a high-dimensional feature space, in addition to linear classification. SVMs can also perform multiclass classification in addition to binary classification [34]. SVMs are among the best off-the-shelf supervised learning models: they work effectively with high-dimensional datasets and are memory-efficient because prediction employs only the support vectors. SVMs are useful in several real-world systems, including protein classification, image classification, and handwritten character recognition.
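The soft-margin objective above can be minimized with a simple per-sample subgradient method. The following is a minimal sketch under the stated L1 hinge penalty, with illustrative learning rate and epoch count; it is not the solver used in practice (dedicated QP or SMO solvers are standard).

```python
def train_linear_svm(data, C=1.0, lr=0.01, epochs=500):
    # Per-sample subgradient descent on the primal objective:
    #   (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w . x_i + b))
    # data: list of (x, y) pairs with labels y in {-1, +1}.
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:
                # Point violates the margin: take a hinge subgradient step.
                w = [wi - lr * (wi - C * y * xi) for wi, xi in zip(w, x)]
                b += lr * C * y
            else:
                # Only the regularizer contributes: shrink w slightly.
                w = [wi - lr * wi for wi in w]
    return w, b

def svm_predict(w, b, x):
    # Sign of the decision function w . x + b.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```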
3.3 Computing framework
The recognition system has different parts, and the computing framework is one of the essential parts for processing data. The best-known computing frameworks are cloud and fog computing. An FR application can utilize a framework chosen based on the processing location and the application. In some applications, data must be processed immediately after acquisition; in others, instant processing is not required. Fog computing is a network architecture that supports the instant processing of data [35].
3.3.1 Fog computing
Fog computing is engineered to relay and transmit information from the datacenter task to servers at the network edge. The fog computing architecture runs on these edge servers and provides networking, storage space, limited computing, and logical-intelligence data filtering for the datacenters. This structure is used in fields such as military and e-health applications [36, 37].
3.3.2 Cloud computing
To obtain accessible data, data are sent to the datacenter for analysis and processing. A significant amount of time and effort is expended to transfer and process data in this type of architecture, indicating that it is not sufficient for working with big data. Big data processing increases the cloud server's CPU usage [38]. There are various types of cloud computing, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), and Mobile Backend as a Service (MBaaS) [39]. Big data applications such as FR require a method and design that distribute computing to process big data in a fast and repetitive way [40, 41]. Data are divided into packages, and each package is assigned to a different computer for processing. A move from the cloud to fog or distributed computing aims at 1) a reduction in network load, 2) an increase in data processing speed, 3) a decrease in CPU usage, 4) a decrease in energy consumption, and 5) higher data volume processing.
4. Proposed facial recognition system
4.1 Traditional deep convolutional neural networks
Krizhevsky et al. [11] developed AlexNet for the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [34]. The input image has a width (W), height (H), and depth (D) of 227×227×3, where D = 3 accounts for the colors red, green, and blue. The first convolutional layer filters the input color image with 96 kernels (K) of size 11×11 (F) and a four-pixel stride (S). The stride is the distance between the receptive field centers of neighboring neurons in the kernel map. The formula ((W − F + 2P)/S) + 1 is employed to compute the output size of a convolutional layer, where P refers to the number of padded pixels, which can be as low as zero.
The convolutional layer output volume size is ((227 − 11 + 0)/4) + 1 = 55. The input to the second convolutional layer therefore has a size of 55×55×(number of filters), and the number of filters in this layer is 256. Since the work of these layers is distributed over 2 GPUs, the load of each layer is divided by 2. The next convolutional layer is followed by a pooling layer, in which each feature map is reduced in dimensionality while important features are retained. The pooling operation can be max, sum, average, etc.; a max-pooling layer is employed in AlexNet. A total of 256 filters form the input of this layer, each of size 5×5 with a stride of two pixels. When two GPUs are used, the work is divided into 55/2×55/2×256/2 ≈ 27×27×128 inputs for each GPU. The normalized output of the second convolutional layer is connected to the third layer, which has 384 kernels of size 3×3. The fourth convolutional layer has 384 kernels of size 3×3 divided over the 2 GPUs, so the load of each GPU is 3×3×192. The fifth convolutional layer has 256 kernels of size 3×3 divided over the 2 GPUs, so each GPU has a load of 3×3×128. The last three convolutional layers are created without pooling or normalization layers. The outputs of these three layers are delivered as the input to two fully connected layers, each with 4096 neurons. Fig 2 illustrates the architecture used in AlexNet to classify different classes with ImageNet as the training dataset [34]. DCNNs can learn features hierarchically. A DCNN increases image classification accuracy, especially on large datasets [42]. Since the implementation of a DCNN requires a large number of images to attain high classification rates, an insufficient number of color images among the subjects' identification images creates an extra challenge for recognition systems [35, 36].
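The layer-size arithmetic above follows directly from the formula ((W − F + 2P)/S) + 1; a quick helper makes this concrete (the function name is ours, and integer division assumes the stride divides evenly, as in AlexNet's first layer):

```python
def conv_output_size(w, f, p, s):
    # ((W - F + 2P) / S) + 1: output spatial size of a convolutional
    # layer with input width w, filter size f, padding p, and stride s.
    return (w - f + 2 * p) // s + 1
```

For AlexNet's first layer, conv_output_size(227, 11, 0, 4) gives 55, matching the text.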
A DCNN consists of neural networks with convolutional layers that perform feature extraction and classification on images [37]. The difference between the test information and the original data used to train the DCNN is minimized by using a training set with different sizes or scales but the same features; a deep network extracts and classifies such features well [43]. Therefore, a DCNN is well suited to the recognition and classification tasks. The AlexNet architecture is shown in Fig 2.
Fig 2
AlexNet architecture.
4.2 Fundamentals of transfer learning
The core knowledge of transfer learning (TL) appears in Fig 3. TL reuses a relatively complex and successful pretrained model, trained on a large data source such as ImageNet, a large visual database developed for visual object recognition research [41]. ImageNet contains over 14,000,000 manually annotated images, of which one million are furnished with bounding boxes, and covers more than 20,000 categories [11]. Ordinarily, pretrained models are trained on a subset of ImageNet with 1,000 classes. The learned knowledge is then "transferred" to relatively simpler tasks (e.g., classifying alcoholism versus non-alcoholism) that offer only a limited amount of private data. Two attributes make this transfer beneficial [44]: (i) the success of the pretrained model spares the user from tedious hyperparameter tuning on the new task; and (ii) the early layers of a pretrained model can serve as feature extractors that capture low-level features, for example, edges, tints, shades, and textures. Conventional TL retrains the new layers [13]: first, the pretrained model is loaded, and then the whole neural network structure is retrained. Critically, the global learning rate is fixed; the transferred layers are given a low learning-rate factor, while the newly added layers are given a high factor.
Fig 3
Core knowledge of transfer learning.
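The retraining scheme just described, reusing frozen pretrained layers as a feature extractor while training only the newly added layers, can be illustrated with a minimal NumPy sketch (the toy data, dimensions, and learning rate are our own illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" early layers, kept frozen: a fixed random projection stands in
# for the convolutional feature extractor of a network like AlexNet.
W_frozen = rng.normal(size=(20, 8))

def extract_features(x):
    return np.tanh(x @ W_frozen)  # frozen feature extractor, never updated

# Toy two-class data; the label depends on the first input dimension.
X = rng.normal(size=(100, 20))
y = (X[:, 0] > 0).astype(float)

# New head: only these weights are trained (logistic regression head).
w = np.zeros(8)
b = 0.0
lr = 0.5
feats = extract_features(X)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= lr * feats.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean(((1.0 / (1.0 + np.exp(-(feats @ w + b)))) > 0.5) == (y == 1))
print(f"training accuracy of the new head: {acc:.2f}")
```

In a real TL setup the frozen layers would carry pretrained ImageNet weights, and the transferred layers would receive a small (rather than zero) learning-rate factor.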
4.3 Adaptive deep convolutional neural networks (the proposed face recognition system)
The proposed system consists of three essential stages: preprocessing, feature extraction, and recognition and identification. In preprocessing, the system begins by capturing images, which must contain a human face as the subject of enrollment. Each image is passed to the face detector module, which detects the human face and segments it as the region of interest (ROI). The obtained ROI continues through the preprocessing steps and is resized to a predefined size for alignment purposes. In feature extraction, the preprocessed ROI is processed to extract a feature vector using the modified version of AlexNet; the extracted vector represents the significant details of the associated image. Finally, recognition and identification determine which enrolled subject in the system's database a feature vector belongs to. Each new feature vector represents either a new subject or an already registered subject: for the feature vector of an already registered subject, the system returns the associated ID; for the feature vector of a new subject, the system adds a new record to the connected database. Fig 4 illustrates the overall view of the proposed face recognition system.
Fig 4
The general overall view of the proposed face recognition system.
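The enrollment-or-identification decision described above can be sketched as follows. This is a hedged toy sketch: the similarity measure, threshold, and class names are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class FaceDatabase:
    """Toy enrollment/recognition logic over extracted feature vectors."""
    def __init__(self, threshold=0.9):
        self.records = {}          # subject ID -> stored feature vector
        self.threshold = threshold
        self.next_id = 1

    def identify_or_enroll(self, feature):
        # Compare against all enrolled subjects; take the best match.
        best_id, best_sim = None, -1.0
        for sid, stored in self.records.items():
            sim = cosine_similarity(feature, stored)
            if sim > best_sim:
                best_id, best_sim = sid, sim
        if best_id is not None and best_sim >= self.threshold:
            return best_id, False          # recognized: return associated ID
        new_id = self.next_id              # new subject: add a record
        self.next_id += 1
        self.records[new_id] = feature
        return new_id, True

db = FaceDatabase()
v = np.array([1.0, 0.0, 0.2])
print(db.identify_or_enroll(v))          # enrolls the first subject
print(db.identify_or_enroll(v + 0.01))   # near-identical vector is recognized
```

A production system would compare against feature vectors produced by the trained network rather than raw arrays, but the branch between "return ID" and "insert record" is the same.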
The system performs the following steps on the face images to obtain the distinctive features of each face.
Pre-processing Phase:
Ethics Statement: All participants provided written informed consent and an appropriate photographic release; the individuals shown have given consent to publish their images.
In the preprocessing step, as shown in Fig 5, the system begins by ensuring that the input image is an RGB image and aligning the images to the same size. Then, the face detection step is performed using a well-known face detection mechanism, the Viola-Jones detection approach. The popularity of Viola-Jones detection stems from its ability to work well in real time while achieving high accuracy. To detect faces in a specific image, this face detector scans the input image with detection windows of different sizes.
Fig 5
Block diagram of the proposed biometric system (images from dataset published in [18]).
In this phase, the decision of whether a window contains a face is made. Haar-like filters are applied to face window candidates to derive simple local features. In Haar-like filters, the feature values are obtained easily by finding the difference between the total light intensities of the pixel regions. The region of interest is then segmented by cropping and resizing the face image to 227×227, as shown in Fig 6.
Fig 6
Face images before and after preprocessing (images from dataset published in [18]).
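The sum-difference computation behind Haar-like features is typically made O(1) per rectangle with an integral image. A small sketch (our own illustrative function names), assuming a vertical two-rectangle feature:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: cumulative sums over rows and columns."""
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered from the integral image in O(1)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rectangle(img, r0, c0, h, w):
    """Vertical two-rectangle Haar-like feature: left-half sum minus right-half sum."""
    ii = integral_image(img)
    left = region_sum(ii, r0, c0, r0 + h, c0 + w // 2)
    right = region_sum(ii, r0, c0 + w // 2, r0 + h, c0 + w)
    return left - right

img = np.zeros((6, 6))
img[:, :3] = 1.0  # bright left half, dark right half
print(haar_two_rectangle(img, 0, 0, 6, 6))  # strong edge response
```

Viola-Jones evaluates many such features over detection windows and combines weak classifiers with AdaBoost; the snippet shows only the per-feature arithmetic.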
Features Extraction using the Pre-trained Alex Network: The accessible dataset size is inadequate for training a new deep model from scratch, which is in any case infeasible given the large number of training images required. To maintain objectivity in this experiment, we applied transfer learning to the pretrained AlexNet architecture in three distinct ways. First, we needed to alter the structure. The last fully connected layer (FCL) was updated, since the original FCLs were created to perform 1,000-way classification. Twenty arbitrarily chosen classes were inspected: scale, barber chair, lorikeet, miniature poodle, Maltese dog, tabby cat, beer bottle, workstation, necktie, trombone, crash helmet, cucumber, letterbox, pomegranate, Appenzeller, muzzle, snow leopard, mountain bike, lock, and diamondback. We observed that none of them were related to face recognition; thus, we could not directly apply AlexNet as the feature extractor, and fine-tuning was essential. Since the number of output neurons (1,000) in conventional AlexNet is not equal to the number of classes in our task, we also had to alter the corresponding softmax layer and classification layer, as indicated in Fig 7.
Fig 7
The schema of the modified AlexNet, where (#S) is the number of subjects in the dataset used during training.
In our transfer learning scheme, we used a new randomly initialized fully connected layer whose size equals the number of subjects in the dataset(s) used, a softmax layer, and a new classification layer with the same number of candidates. Fig 8 shows various available activation functions; we used softmax, since the decision depends on the maximum score among multiple outputs. Next, we set the training options; three properties were checked before training. First, the overall number of training epochs ought to be small for transfer learning; we set it to 6. Second, the global learning rate was set to a small value of 10−4 to slow learning down, since the early layers of this neural network were pretrained. Third, the learning rate of the new layers was set to several times that of the transferred layers, since the transferred layers come with pretrained weights while the new layers have randomly initialized weights. Finally, we varied the number of transferred layers and tried various settings. AlexNet comprises five convolutional layers (CL1, CL2, CL3, CL4, and CL5) and three fully connected layers (FCL6, FCL7, and FCL8).
Fig 8
Different types of activation functions for classification.
The pseudocode of the proposed algorithm is shown in Algorithm 1. It starts from the original AlexNet architecture and the image dataset of the subjects enrolled in the recognition system. For each image in the dataset, the subject's face is detected using Viola-Jones detection. The resulting face dataset is used for transfer learning: we adapt the architecture of AlexNet, train the altered architecture on the face dataset, and use the trained model for feature extraction, updating the corresponding softmax and classification layers as indicated in the pseudocode (Algorithm 1).

Algorithm 1: Transfer Learning using the AlexNet model
Input ← original AlexNet Net, ImageFaceSet imds
Output ← modified trained AlexNet FNet, features FSet
1. Begin
2. // Preprocess face image(s) in imds
3. For i = 1: length(imds)
4.   img ← read(imds, i)
5.   face ← detectFace(img)
6.   img ← resize(face, [227, 227])
7.   save(imds, i, img)
8. End for
9. // Adapt AlexNet structure
10. FLayers ← Net.Layers(1:END-3)
11. FLayers.append(new Fully-connected layer)
12. FLayers.append(new SoftMax layer)
13. FLayers.append(new Classification layer)
14. // Train FNet using options
15. Options.set(SolverOptimizer ← stochastic gradient descent with momentum)
16. Options.set(InitialLearnRate ← 1e-3)
17. Options.set(LearnRateSchedule ← Piecewise)
18. Options.set(MiniBatchSize ← 32)
19. Options.set(MaxEpochs ← 6)
20. FNet ← trainNetwork(FLayers, imds, Options)
21. // Use FNet to extract features
22. FSet ← empty
23. For j = 1: length(imds)
24.   img ← read(imds, j)
25.   F ← extract(FNet, img, 'FC7')
26.   FSet ← FSet ∪ F
27. End for
28. End

Face recognition Phase using Fog and Cloud Computing: Fig 9 shows the fog computing face recognition framework. Fog systems comprise client devices, fog nodes/servers, and cloud computing environments. The general differences from the conventional cloud computing process are as follows:
Fig 9
General block diagram of the fog computing FR system.
A cloud computing center oversees and controls numerous fog nodes/servers.
Fog nodes/servers, situated at the edge of the network between the network center and the client, have a specific acquisition device that can perform preprocessing and feature extraction tasks and can communicate biometric data securely with the client devices and the cloud node.
User devices are heterogeneous and include smartphones, personal computers (PCs), hubs, and other networkable terminals.

There are multiple reasons behind this communication design:

From the viewpoint of recognition efficiency, if all FR information is sent to a central node, the network communication cost increases, since all information must be sent to and processed by the cloud server; the computational load on the cloud server also increases.
From the viewpoint of recognition security, the cloud center, as the focal hub of the whole system, will become a target for attacks. If the focal hub is breached, information acquired from the fog nodes/servers becomes vulnerable.
Face recognition datasets are required for training if a neural network is used for recognition. Preparing datasets is normally time consuming and will greatly increase the training time if the training is carried out only by the nodes, risking the training quality.

Since the connection between a fog node and client devices is very inconsistent, we propose a general architecture for cloud-based face recognition frameworks. This design exploits the processing ability and storage capacity of fog nodes/servers and cloud servers. It incorporates preprocessing, feature extraction, face recognition, and recognition-based security, and is partitioned into six layers according to the data flow of the fog architecture:

User equipment layer: The FC/MEC client devices are heterogeneous, including PCs and smart terminals.
These devices may access various fog nodes/servers through various protocols.
Network layer: This layer connects services through various fog architecture protocols. It obtains information transmitted from the user equipment layer and compresses and transmits the information.
Data processing layer: The essential task of this layer is to preprocess the image(s) sent from the client hardware, including information cleaning, filtering, and preprocessing. This task is performed on fog nodes.
Extraction layer: After the image(s) are preprocessed, the extraction layer uses the adapted AlexNet to extract the features.
Analysis layer: This layer communicates through the cloud. Its primary task is to classify the extracted feature vectors received from fog nodes/servers. It can match data among registered clients and produces responses to requests.
Management layer: The management in the cloud server is, for the most part, responsible for (1) the decisions and responses of the face recognition framework and (2) the information and logs of the fog nodes/servers, which can be stored to facilitate recognition and authentication.

As shown in Fig 11, the recognition classifier of the analysis layer is the most significant piece of the framework for data processing. It determines the cloud server's response and thus guarantees the validity of the framework; accordingly, our work centers on recognition and authentication. Classifiers on fog nodes/servers can use their local computation and storage capacity for recognition. However, much of the data cannot be processed or stored because of the limited computation and storage capacity of fog nodes/servers. Moreover, as mentioned, deploying classifiers on fog nodes/servers alone cannot meet the needs of the whole system.
The cloud server has a greater storage capacity than fog nodes/servers; therefore, it can store and process many training sets. It can send training sets to fog nodes/servers progressively for training, so that different fog nodes/servers receive appropriate sets.
Fig 11
Fog computing network for the face recognition scheme.
Fig 12 shows face images of SDUMLA-HMT subjects under different conditions as a dataset example.
Fig 12
Face images of SDUMLA-HMT subjects under different conditions as a dataset example [18].
5. Experimental results
In this section, we provide the results we obtained in the experiments. Some of these results will be presented as graphs, which present the relation between the performance and some of the parameters previously mentioned.
5.1 Runtime environment
The proposed recognition system was implemented and developed using MatlabR2018a on a PC with an Intel Core i7 CPU running at 2.2 GHz and Windows 10 Professional 64-bit edition. The proposed system is based on the dataset SDUMLA-HMT, which is available online for free.
5.2 Dataset(s)
SDUMLA-HMT is a publicly available database that was used to evaluate the proposed system. The SDUMLA-HMT database was collected in 2010 by Shandong University, Jinan, China. It consists of five subdatabases—face, iris, finger vein, fingerprint, and gait—and contains 106 subjects (61 males and 45 females) with ages ranging between 17 and 31 years. In this work, we have used the face and iris databases only [19]. The face database was built using seven digital cameras. Each camera was used to capture the face of every subject with different poses (three images), different expressions (four images), different accessories (one image with a hat and one image with glasses), and different illumination conditions (three images). The face database thus consists of 106×7×(3+4+2+3) = 8,904 images. All face images are 640×480 pixels and are stored in BMP format. Some face images of subject number 69 under different conditions are shown in Fig 12 [19].
5.3 Performance measure
Researchers have recently focused on enhancing face recognition systems in terms of accuracy metrics, often regardless of the latest technologies and computing environments. Today, cloud computing and fog computing are available to enhance the performance of face recognition and decrease time complexity; the proposed framework considers both issues. The classifier performance evaluator carries out various performance measures and classifies the FR outcomes as true positive (TP), false negative (FN), false positive (FP), and true negative (TN). Precision is the most interesting and sensitive measure for wide-range comparison of the individual classifiers and the proposed system. The parameters can be defined as follows:
where:
True Negative (TN): negative tuples that were correctly labeled by the classifier.
True Positive (TP): positive tuples that were correctly labeled by the classifier.
False Positive (FP): negative tuples that were incorrectly labeled as positive.
False Negative (FN): positive tuples that were mislabeled as negative.
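The evaluation metrics reported in the experiments follow directly from these counts. The standard formulas (accuracy, precision, recall/sensitivity, specificity) can be expressed as a small sketch with illustrative counts of our own choosing:

```python
def confusion_metrics(tp, tn, fp, fn):
    """Standard metrics computed from the confusion-matrix counts defined above."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "recall":      tp / (tp + fn),   # also called sensitivity
        "specificity": tn / (tn + fp),
    }

# Illustrative counts, not taken from the paper's experiments.
m = confusion_metrics(tp=90, tn=85, fp=5, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```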
5.4 Results & discussion
A set of experiments was performed to evaluate the proposed system in terms of the evaluation criteria. All experiments start by loading the color images from the data source and passing them to the segmentation step. According to the pretrained AlexNet, the input image size cannot exceed 227×227 with an image depth of 3. Therefore, after segmentation, we performed a check step to guarantee the appropriateness of the image size; a resizing process to 227×227×3 (width, height, depth) is applied if the image exceeds this limit. The main parameters and ratios are presented in Table 2.
Table 2
Parameter settings used in the experiments.

Parameter            Value
Feature Vector Size  4096
Mini Batch Size      32
Training ratio       80%
Testing ratio        20%
The experimental outcomes of the developed FR system and its comparison with various other techniques are presented below. It has been noted that the proposed algorithm outperformed most of its peers, especially in terms of precision.
5.4.1 Recognition time results
Fig 13 compares the four algorithms, decision tree (DT), the KNN classifier, SVM, and the proposed DCNN powered by the pretrained AlexNet classifier, in terms of two parameters: observations/sec and recognition time in seconds per observation.
Fig 13
Recognition time of the proposed FR system and individual classifiers.
The results show that the proposed DCNN is superior to the other machine learning algorithms in terms of observations/sec and recognition time.
5.4.2 Precision results
Fig 14 shows the precision of the four algorithms using the three datasets SDUMLA-HMT, 113, and CASIA.
Fig 14
Precision of our proposed system and the three comparison systems.
The results show that the proposed DCNN is superior to the other machine learning algorithms in terms of precision for the 2nd and 3rd datasets, and achieves, together with SVM, the best results on the 1st dataset.
5.4.3 Recall results
Fig 15 shows the recall of the four algorithms using the three datasets SDUMLA-HMT, 113, and CASIA.
Fig 15
Recall of the proposed system and the three comparison systems.
The results show that the proposed DCNN is superior to the other machine learning algorithms in terms of recall.
5.4.4 Accuracy results
Fig 16 displays the accuracy of the four algorithms using the three datasets SDUMLA-HMT, 113, and CASIA.
Fig 16
Accuracy of our proposed system and the three comparison systems.
The results show that the proposed DCNN is superior to the other machine learning algorithms in terms of accuracy.
5.4.5 Specificity results
Fig 17 displays the specificity of our proposed system compared with the three comparison algorithms using the three datasets SDUMLA-HMT, 113, and CASIA.
Fig 17
The specificity of the proposed system and the three comparison systems.
Table 3 shows the average results for precision, recall, accuracy, and specificity of the four algorithms using the three datasets SDUMLA-HMT, 113, and CASIA.
Table 3
Average results of our proposed system and the three comparison systems.

Algorithm        Precision   Recall    Accuracy   Specificity
DT               96.30%      94.54%    94.96%     95.36%
SVM              98.02%      96.50%    97.17%     97.90%
KNN              96.22%      96.77%    96.30%     95.64%
Adaptive DCNN    99.12%      99.07%    99.06%     99.10%
Fig 18 displays the data documented in Table 3, representing the average precision, recall, accuracy, and specificity of the four algorithms over the three datasets SDUMLA-HMT, 113, and CASIA.
Fig 18
Average results of our proposed system and the three comparison systems.
Table 4 compares the accuracy rates of three of the developed classifiers against the same classifiers developed by Jonnathann et al. [15] on the same dataset, without considering feature extraction methods.
Table 4
Comparative accuracy details of KNN, SVM and DCNN using the SDUMLA dataset.

Classifier   Jonnathann et al.   Our Proposed
KNN          74.83               94.8
SVM          86.41               97.5
DCNN         97.26               99.4
Fig 19 shows the data documented in Table 4. It is noticeable that the proposed system achieves the highest accuracy with KNN, SVM, and DCNN.
Fig 19
Comparative evaluation of the proposed FR system vs recent literature.
6. Conclusion
FR is a more natural biometric information process than other proposed systems, and it must address more variation than any other method; solving the FR problem in a reasonable time requires an efficient optimization method. FR may face many difficulties and challenges in terms of the input image, such as different facial expressions, subjects wearing hats or glasses, and varying brightness levels. This study is based on an adaptive version of a recent DCNN architecture, AlexNet. This paper proposed a deep FR learning method using TL in fog computing. The proposed DCNN algorithm is based on a set of steps to process the face images to obtain the distinctive features of the face; these steps are divided into preprocessing, face detection, and feature extraction. The proposed method improves the solution by adjusting the parameters to search for the final optimal solution. In this study, the proposed algorithm and other popular machine learning algorithms, including the DT, KNN, and SVM algorithms, were tested on three standard benchmark datasets, characterized by various numbers of images of male and female subjects, to demonstrate the efficiency and effectiveness of the proposed DCNN in solving the FR problem. The results demonstrated the effectiveness of the DCNN algorithm in achieving the optimal solution (i.e., the best accuracy) with reasonable accuracy, recall, precision, and specificity compared to the other algorithms. At the same time, the proposed DCNN achieved the best accuracy compared with Jonnathann et al. [15], reaching 99.4% versus 97.26%.
The suggested algorithm results in higher accuracy (99.06%), higher precision (99.12%), higher recall (99.07%), and higher specificity (99.10%) than the comparison algorithms. Based on the experimental results and performance analysis of various test images (i.e., 30 images), the proposed algorithm can effectively locate an optimal solution within a reasonable time compared with other popular algorithms. In the future, we plan to improve this algorithm in two ways: first, by comparing the proposed algorithm with different recent metaheuristic algorithms and testing the methods with the remaining instances from each dataset; and second, by applying the proposed algorithm to real-life FR problems in a specific domain.
You should upload this letter as a separate file labeled 'Response to Reviewers'.A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocolsWe look forward to receiving your revised manuscript.Kind regards,Dilbag SinghAcademic EditorPLOS ONEJournal Requirements:When submitting your revision, we need you to address these additional requirements.1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf2.We note that Figure(s) in your submission contain copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. 
For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:1. You may seek permission from the original copyright holder of Figure(s) to publish the content specifically under the CC BY 4.0 license.We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”2. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. 
If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.3. Please ensure that you refer to Figure 12 in your text as, if accepted, production will need this reference to link the reader to the figure.4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.5.We note that Figure [1, 4, 5, 11 and 12] includes an image of a patient / participant in the study.As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. Please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: “The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details”.If you are unable to obtain consent from the subject of the photograph, you will need to remove the figure and any other textual identifying information or case descriptions for this individual.[Note: HTML markup is below. Please do not edit.]Reviewers' comments:Reviewer's Responses to QuestionsComments to the Author1. 
Is the manuscript technically sound, and do the data support the conclusions?The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.Reviewer #1: YesReviewer #2: YesReviewer #3: NoReviewer #4: Yes**********2. Has the statistical analysis been performed appropriately and rigorously?Reviewer #1: YesReviewer #2: YesReviewer #3: NoReviewer #4: No**********3. Have the authors made all data underlying the findings in their manuscript fully available?The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.Reviewer #1: NoReviewer #2: YesReviewer #3: NoReviewer #4: Yes**********4. Is the manuscript presented in an intelligible fashion and written in standard English?PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1: YesReviewer #2: YesReviewer #3: NoReviewer #4: Yes**********5. Review Comments to the AuthorPlease use the space provided to explain your answers to the questions above. 
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this paper, the authors address a deep FR system using TL in fog computing. The problem addressed has great significance, and a technical contribution is present. Minor changes are recommended:
1. The main objective achieved needs more supporting evidence.
2. More details are required about the pre-processing performed.
3. Very little information is provided about the feature extraction.
4. The security of the system needs quantitative parameter support.
5. Very little literature on fog computing is reviewed; more related and recent work (2019, 2020) on the problem should be incorporated.
6. The related work can be extended by including the following papers:
(a) Schiller, D., Huber, T., Dietz, M., & André, E. (2020). Relevance-based data masking: a model-agnostic transfer learning approach for facial expression recognition.
(b) Prakash, R. M., Thenmoezhi, N., & Gayathri, M. (2019, November). Face Recognition with Convolutional Neural Network and Transfer Learning. In 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT) (pp. 861-864). IEEE.
(c) Singh, D., Kumar, V., Vaishali, & Kaur, M. (2020). Classification of COVID-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks. European Journal of Clinical Microbiology & Infectious Diseases, 1-11.

Reviewer #2: The main aim of the proposed work is to present a face recognition task using transfer learning. The evaluation has been done using datasets, and better classification results have been achieved. The paper presents the results and analysis very well. A very few grammatical errors may be checked for the final presentation.

Reviewer #3:
1. The quality of some figures is very poor.
2.
There are a number of grammatical mistakes and typographical errors in the manuscript, such as: "As such, face recognition or authentication", "area of research..", "is still mostly an unexplored", "environments", "Therefore, the main".
3. The abstract is very poorly written and organized, with a number of mistakes in it. It should be concise and clear for better understanding.
4. The authors have poorly organized the paper. No sections and subsections are marked properly.
5. The paper seems to be a review paper rather than a research paper. The authors have added unnecessary details to the manuscript.
6. First of all, why do the authors mention Table 2 in the related work? Secondly, descriptions and definitions of the parameters and symbols of Table 2 are not given.
7. The authors should define the parameter settings of each technique, including the proposed one.
8. The current comparisons with competitive models are limited. Consider more effective techniques.
9. Significant analyses are completely missing.
10. Use either tables or graphs for comparative analysis. Both together create chaos.

Reviewer #4: The following suggestions need to be incorporated before resubmitting the manuscript:
1. There are many grammatical and spelling mistakes throughout the manuscript which need to be corrected.
2. The abstract should mention the machine learning algorithms used in this work.
3. There is no clear statement of the contributions of the paper.
4. The use of very short sentences such as "Then, recognition is performed" must be avoided.
5.
Discussion of related work on machine learning approaches should be extended with the following papers, which recently came to my attention because they proved to be successful in various applications:
N-semble: neural network based ensemble approach
Deep Transfer Learning based Classification Model for COVID-19 Disease
An Expert Approach for Data Flow Prediction: Case Study of Wireless Sensor Networks
Computed tomography reconstruction on distributed storage using hybrid regularization approach
Machine learning for computer and cyber security: principle, algorithms, and practices
6. In Table 2, what do the parameters TP, FN, P, N, and TN stand for? It is much better practice to explain these in paragraph form and then add the formulas.
7. Correct the headings "Materials and Methods" and "Results and Discussions". Take care of the typos in the manuscript.
8. Instead of "our proposed system", it is better practice to use "the proposed system". The accuracy of the proposed system in Table 6 for the CNN model comes out to be 100%. In real-world systems this is impossible; kindly justify the value.
9. The conclusion should also include the future perspective of this work.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No
Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments".
If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

5 Sep 2020

Firstly, according to the Editor's request, the authors confirm that all images in the paper were created by the authors themselves and have not previously been copyrighted. The authors also confirm that the individuals pictured in Fig 1, Fig 5, Fig 6, Fig 11, and Fig 12 have provided written informed consent (as outlined in the PLOS consent form) to publish their images alongside the manuscript.

Response letter on reviewers
PONE-D-20-15335
Deep face recognition using computational intelligence algorithms (Deep Face Recognition System)

To: PLOS ONE Editor
Re: Response to reviewers

Dear Editor,
Thank you for allowing us to resubmit our manuscript after addressing the reviewers' comments. We are uploading:
(a) our point-by-point responses to the comments (below) (response to reviewers),
(b) an updated manuscript with changes highlighted in yellow, and
(c) a clean updated manuscript without highlights (PDF main document).

Best regards,
Sincerely,
Dr. Diaa Salama Abd Elminaam
Information Systems Department, Faculty of Computers and Informatics, Benha University, Benha City, Egypt
+201019511000
Diaa.salama@fci.bu.edu.eg
________________________________________
First, I would like to thank the Editor for these valuable comments, which improve my paper.
Second, I replied to every comment, as shown below.

According to Journal Requirements
________________________________________
Concern #1: Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf
Response: I considered this point and improved it. The paper was re-edited and formatted by a third-party language-polishing service (https://www.aje.com/c/ieee). The editing certificate is attached in the Supplementary Materials. In addition, the manuscript had already been revised through the same service before the first submission.
Author response: Done.
________________________________________
Concern #2: We note that figure(s) in your submission contain copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright. We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:
1.
You may seek permission from the original copyright holder of the figure(s) to publish the content specifically under the CC BY 4.0 license. We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:
"I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form."
Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. In the figure caption of the copyrighted figure, please include the following text: "Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year]."
Response: You are right. I considered this point, removed all copyrighted figures, added new figures, and obtained written permission from the copyright holders to publish these figures.
Author action: The appropriate changes were made, such as to Figures 1 and 12.
________________________________________
Concern #3: Please ensure that you refer to Figure 12 in your text as, if accepted, production will need this reference to link the reader to the figure.
Response: You are right.
I considered this point.
Author action: Done; the appropriate change was made.
________________________________________
Concern #4: We note that Figures 1, 4, 5, 11 and 12 include an image of a patient/participant in the study. As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. Please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: "The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details".
Response: You are right. I considered this point, removed all copyrighted figures, added new pictures, and obtained written permission from the copyright holders to publish these figures.
Author action: The appropriate changes were made, such as to Figures 1 and 12.
________________________________________
Reviewer Requirements (1st reviewer)
First, I would like to thank the reviewer for these valuable comments, which improve my paper. Second, I replied to every comment, as shown below.
Reviewer #1: First, I would like to thank the first reviewer for their recommendation ("Minor changes are recommended") of the manuscript.
________________________________________
Reviewer #1, Concern #1: The main objective achieved needs some more supporting evidence.
Response: I considered this point and improved it.
I revised it and concentrated on the main objective.
Author response: This text was revised. An explanation has been provided.
________________________________________
Reviewer #1, Concern #2: More details are required about the pre-processing performed.
Response: You are right. I have added more details about the pre-processing step in the Materials and Methods section, Section 4.3 (Adaptive Deep Convolutional Neural Networks), which lists the strategic parameters of each step and the associated values. (The general overall view of the proposed face recognition system is shown in Fig 4.)
Author action: The appropriate change was made.
________________________________________
Reviewer #1, Concern #3: Very little information is present about the feature extraction.
Response: You are right. I have added more details about the feature extraction step in the Materials and Methods section, Section 4.3 (Adaptive Deep Convolutional Neural Networks), which lists the strategic parameters of each step and the associated values. (The general overall view of the proposed face recognition system is shown in Fig 4.)
Author action: The appropriate change was made.
________________________________________
Reviewer #1, Concern #4: The security of the system needs quantitative parameter support.
Response: I considered this point and improved it.
Author action: An explanation has been provided.
________________________________________
Reviewer #1, Concern #5: Very little literature on fog computing is reviewed; more related and recent work (2019, 2020) on the problem should be incorporated.
Response: I revised the literature review on fog computing.
I considered this point and improved it to raise the quality of the paper as much as possible.
Author action: We updated the manuscript and added an explanation of the literature reviewed.
________________________________________
Reviewer #1, Concern #6: The related work can be extended by including the following papers:
(a) Schiller, D., Huber, T., Dietz, M., & André, E. (2020). Relevance-based data masking: a model-agnostic transfer learning approach for facial expression recognition.
(b) Prakash, R. M., Thenmoezhi, N., & Gayathri, M. (2019, November). Face Recognition with Convolutional Neural Network and Transfer Learning. In 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT) (pp. 861-864). IEEE.
(c) Singh, D., Kumar, V., Vaishali, & Kaur, M. (2020). Classification of COVID-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks. European Journal of Clinical Microbiology & Infectious Diseases, 1-11.
Response: I considered this point, revised the related work, and included these papers (References 8, 9, and 10), which improve the quality of my paper as much as possible. The reference section was reordered, and the recommended references were added.
Author action: The appropriate change was made.
________________________________________
Actually, I gained a lot from these comments and worked on them to improve the quality of my paper as much as possible. Thank you.
________________________________________
Reviewer Requirements (2nd reviewer)
Reviewer #2: First, I would like to thank the second reviewer for their positive assessment of the manuscript ("The main aim of the proposed work is to present a face recognition task using transfer learning. The evaluation has been done using datasets, and better classification results have been achieved. The paper presents the results and analysis very well."
) of the manuscript.
________________________________________
Reviewer #2, Concern #1: A very few grammatical errors may be checked for the final presentation.
Response: I considered this point and improved it. Although the paper had already been revised three times by a third-party language-polishing service, we considered this point and sent it again to the service (https://www.aje.com/c/ieee). The editing certificate is attached in the Supplementary Materials. In addition, the manuscript was revised through the same service before the first submission. Based on your comments about the editing and grammar, we have again submitted the paper to the third-party service for language editing. The editing certificates are as follows: Final Editing Certificate, 1st Editing Certificate, 2nd Editing Certificate, 3rd Editing Certificate.
Author response: This text was revised.
Author action: Done.
________________________________________
Actually, I gained a lot from these comments and worked on them to improve the quality of my paper as much as possible. Thank you.
________________________________________
Reviewer Requirements (3rd reviewer)
Reviewer #3, Concern #1: The quality of some figures is very poor.
Response: I considered this point and improved the quality of the figures to make them clearer for better understanding.
Author action: The appropriate change was made.
________________________________________
Reviewer #3, Concern #2: There are a number of grammatical mistakes and typographical errors in the manuscript, such as: "As such, face recognition or authentication", "area of research..", "is still mostly an unexplored", "environments", "Therefore, the main".
Response: I considered this point and improved it.
Although the paper had already been revised three times by a third-party language-polishing service, we considered this point and sent it again to the service (https://www.aje.com/c/ieee). The editing certificate is attached in the Supplementary Materials. In addition, the manuscript was revised through the same service before the first submission. Based on your comments about the editing and grammar, we have again submitted the paper to the third-party service for language editing. The editing certificates are as follows: Final Editing Certificate, 1st Editing Certificate, 2nd Editing Certificate, 3rd Editing Certificate.
Author response: This text was revised.
Author action: Done.
________________________________________
Reviewer #3, Concern #3: The abstract is very poorly written and organized, with a number of mistakes in it. It should be concise and clear for better understanding.
Response: You are right. I considered this point and improved it. The abstract has been completely rewritten and re-edited.
Author action: The appropriate change was made.
________________________________________
Reviewer #3, Concern #4: The authors have poorly organized the paper. No sections and subsections are marked properly.
Response: I considered this point and improved it. The paper was reformatted by a third-party language-polishing service (https://www.aje.com/c/ieee) to follow the PLOS ONE journal format. Also, all sections and subsections are now marked.
Author action: The appropriate change was made.
________________________________________
Reviewer #3, Concern #5: The paper seems to be a review paper rather than a research paper.
The authors have added unnecessary details to the manuscript.
Response: I considered this point and removed all unnecessary material. I revised the manuscript and kept only the relevant details.
Author action: The appropriate change was made.
________________________________________
Reviewer #3, Concern #6: First of all, why do the authors mention Table 2 in the related work? Secondly, descriptions and definitions of the parameters and symbols of Table 2 are not given.
Response: You are right. I reviewed the paper and found that Table 2 was mentioned in the related work section by mistake (it should be in the Results and Discussion section). I therefore revised the related work and converted the parameters and symbols of Table 2 into equation form.
Author action: The appropriate change was made.
________________________________________
Reviewer #3, Concern #7: The authors should define the parameter settings of each technique, including the proposed one.
Response: The Materials and Methods section (Section 4.3, Adaptive Deep Convolutional Neural Networks) was completely rewritten and lists the strategic parameters of each step and the associated values. (The general overall view of the proposed face recognition system is shown in Fig 4.) Table 2 shows the parameter settings used in the experiments.
Author action: The appropriate change was made.
________________________________________
Reviewer #3, Concern #8: The current comparisons with competitive models are limited. Consider more effective techniques.
Response: I considered this point.
Author action: The appropriate change was made.
________________________________________
Reviewer #3, Concern #9: Significant analyses are completely missing.
Response: You are right. I considered this point and improved it. The methodology section (Section 4.3) was completely updated.
Author action: The appropriate change was made.
________________________________________
Reviewer #3, Concern #10: Use either tables or graphs for comparative analysis.
Both together create chaos.
Response: I considered this point and improved it.
Author action: The appropriate change was made.
________________________________________
Actually, I gained a lot from these comments and worked on them to improve the quality of my paper as much as possible. Thank you.
________________________________________
Reviewer Requirements (4th reviewer)
________________________________________
Reviewer #4, Concern #1: There are many grammatical and spelling mistakes throughout the manuscript which need to be corrected.
Response: I considered this point and improved it. Although the paper had already been revised three times by a third-party language-polishing service, we considered this point and sent it again to the service (https://www.aje.com/c/ieee). The editing certificate is attached in the Supplementary Materials. In addition, the manuscript was revised through the same service before the first submission. Based on your comments about the editing and grammar, we have again submitted the paper to the third-party service for language editing. The editing certificates are as follows: Final Editing Certificate, 1st Editing Certificate, 2nd Editing Certificate, 3rd Editing Certificate.
Author action: Done.
________________________________________
Reviewer #4, Concern #2: The abstract should mention the machine learning algorithms used in this work.
Response: You are right. I considered this point and completely revised the abstract.
Author response: This text was revised. An explanation has been provided.
________________________________________
Reviewer #4, Concern #3: There is no clear statement of the contributions of the paper.
Response: You are right. I considered this point and improved it. I revised the paper thoroughly.
Author response: This text was revised.
An explanation has been provided.
________________________________________
Reviewer #4, Concern #4: The use of very short sentences such as "Then, recognition is performed" must be avoided.
Response: I considered this point and improved it. Although the paper had already been revised three times by a third-party language-polishing service, we considered this point and sent it again to the service (https://www.aje.com/c/ieee).
________________________________________
Reviewer #4, Concern #5: Discussion of related work on machine learning approaches should be extended with the following papers, which recently came to my attention because they proved to be successful in various applications:
N-semble: neural network based ensemble approach
Deep Transfer Learning based Classification Model for COVID-19 Disease
An Expert Approach for Data Flow Prediction: Case Study of Wireless Sensor Networks
Computed tomography reconstruction on distributed storage using hybrid regularization approach
Machine learning for computer and cyber security: principle, algorithms, and practices
Response: I revised the literature review and included these respected papers, which improve the quality of my paper as much as possible.
Author response: Done.
________________________________________
Reviewer #4, Concern #6: In Table 2, what do the parameters such as TP, FN, P, N, and TN stand for? It is much better practice to explain these in paragraph form and then add the formulas.
Response: I considered this point and converted the Table 2 parameters into Equations 5, 6, 7, and 8, where TP, FN, and TN denote true positives, false negatives, and true negatives, and P and N denote the total numbers of positive and negative samples.
Author response: Done.
________________________________________
Reviewer #4, Concern #7: Correct the headings "Materials and Methods" and "Results and Discussions".
Take care of the typos in the manuscript.
Response: I considered this point and revised the paper.
Author response: Done.
________________________________________
Reviewer #4, Concern #8: Instead of "our proposed system", it is better practice to use "the proposed system". The accuracy of the proposed system in Table 6 for the CNN model comes out to be 100%. In real-world systems this is impossible; kindly justify the value.
Response: I considered this point and improved it. I checked the results and found a bug in my work, so I repeated all the experiments to verify the correctness of the results.
Author action: The appropriate change was made.
________________________________________
Reviewer #4, Concern #9: The conclusion should also include the future perspective of this work.
Response: I considered this point and improved it.
Author action: The appropriate change was made.
________________________________________
Actually, I gained a lot from these comments and worked on them to improve the quality of my paper as much as possible. Thank you.

Response letter
PONE-D-20-15335
Deep face recognition using computational intelligence algorithms (Deep Face Recognition System)

To: PLOS ONE Editor
Re: Response to reviewers

Dear Editor,
Thank you for allowing us to resubmit our manuscript after addressing the Editor's comments. We are uploading:
(a) our point-by-point responses to the comments (below) (response to Editor).

Best regards,
Sincerely,
Dr. Diaa Salama Abd Elminaam
Information Systems Department, Faculty of Computers and Informatics, Benha University, Benha City, Egypt
+201019511000
Diaa.salama@fci.bu.edu.eg
________________________________________
First, I would like to thank the editor for these valuable comments, which improve my paper.
Second, I replied to every comment, as shown below.
________________________________________
Editor Concern #1: We note the following figures contain images of faces: Fig 1, Fig 5, Fig 6, Fig 11, and Fig 12. Additionally, we note the following figures may contain copyrighted images: Fig 9 and Fig 10.
1) Please disclose whether or not the participants shown in Fig 1, Fig 5, Fig 6, Fig 11, and Fig 12 consented to having their images published under the Creative Commons Attribution (CC BY) license and signed the PLOS Consent Form for Publication in a PLOS Journal (https://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). If the participants completed the consent form, please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: "The individual pictured in Fig _____ has provided written informed consent (as outlined in PLOS consent form) to publish their image alongside the manuscript".
Response: I considered this point and improved it. The individuals pictured in Fig 1, Fig 5, Fig 6, Fig 11, and Fig 12 have provided written informed consent (as outlined in the PLOS consent form) to publish their images alongside the manuscript. The consent forms are as follows:
the consent form for Abd Elrahman Almansori (Fig 1, Fig 5, Fig 6, Fig 11, Fig 12);
the consent form for Faris Noori (Fig 6, Fig 12);
the consent form for Khaled Alrahidi (Fig 6, Fig 12);
the consent form for Mohamed Ahussani (Fig 6, Fig 12).
________________________________________
Editor Concern #2: Please explain where the authors obtained the images in Fig 1, Fig 5, Fig 6, Fig 9, Fig 10, Fig 11, and Fig 12 in your submission, or whether the authors created the images themselves.
Response: The authors created the images in Fig 1, Fig 5, Fig 6, Fig 9, Fig 10, Fig 11, and Fig 12 in the submission themselves.
The authors did not obtain these images from any external source.
________________________________________
Editor Concern #3: 4) If any of the images in the above-mentioned figures have been previously copyrighted, PLOS ONE is unable to publish them, as all content is published under the Creative Commons Attribution (CC BY) 4.0 license. To seek permission from the copyright owner to publish these figures under the Creative Commons Attribution License (CCAL), CC BY 4.0, please contact them with the following text and the PLOS ONE Request for Permission form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf):
"I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license."
Please upload the granted permission to the manuscript as a Supporting Information file. In the figure caption of the copyrighted figure, please include the following text: "Republished from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year]."
Please note that RightsLink permission forms often impose use restrictions that are incompatible with our CC BY 4.0 license, and we are therefore unable to accept these permissions. For this reason, we strongly recommend contacting copyright holders with the PLOS ONE Request for Permission form. If you are unable to obtain permission from the original copyright holder, please either remove the figure or supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information.
If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image used in the study and is therefore for illustrative purposes only.
Response: The authors confirm that the figures mentioned above have not been previously copyrighted.

Submitted filename: Response to Reviewers 2.docx

30 Oct 2020

A deep facial recognition system using computational intelligent algorithms
PONE-D-20-15335R1

Dear Dr. Salama,
We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Seyedali Mirjalili
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)
Reviewer #2: All the questions have been answered well. The manuscript is more presentable and improved in terms of discussion and analysis.

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose "no", your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

12 Nov 2020

PONE-D-20-15335R1
A deep facial recognition system using computational intelligent algorithms

Dear Dr. Salama AbdELminaam:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication.
For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Prof. Seyedali Mirjalili
Academic Editor
PLOS ONE
Table 1. Different network architectures of FR.

Network Architecture    Settings
Backbone Network        Mainstream architectures: AlexNet, VGGNet, GoogLeNet, or SENet
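In the system described in this paper, a backbone network such as those listed above extracts a feature (embedding) vector per face, and faces are then compared by measuring the similarity of their embeddings. The following is a minimal sketch of such a comparison, assuming the embeddings have already been computed by the backbone; the function names and the 0.8 decision threshold are illustrative, not the authors' actual values.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_same_person(emb1, emb2, threshold=0.8):
    """Declare a match when embedding similarity reaches the threshold
    (the threshold here is a placeholder, not from the paper)."""
    return cosine_similarity(emb1, emb2) >= threshold
```

Identical embeddings yield a similarity of 1.0, orthogonal ones 0.0; in practice the threshold would be tuned on a validation set against the accuracy, precision, and recall metrics the paper reports.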