Anandbabu Gopatoti1,2, Vijayalakshmi P1. 1Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India. 2Anna University, Chennai, Tamil Nadu, India.
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has had a severe influence on population health all over the world. Various methods have been developed for detecting COVID-19, and diagnosis from radiology and radiography images is one of the most effective procedures for identifying affected patients. Therefore, a robust and effective multi-local texture features (MLTF)-based feature extraction approach and an Improved Weed Sea-based DeepNet (IWS-based DeepNet) approach are proposed for detecting COVID-19 at an earlier stage. The developed IWS-based DeepNet detects COVID-19 by optimizing the structure of the Deep Convolutional Neural Network (Deep CNN). The IWS is devised by incorporating Improved Invasive Weed Optimization (IIWO) and Sea Lion Optimization (SLnO). The noises present in the input chest x-ray (CXR) image are discarded using Region of Interest (RoI) extraction by an adaptive thresholding technique. For feature extraction, the proposed MLTF is newly developed by considering various texture features for extracting the best features. Finally, COVID-19 detection is performed using the proposed IWS-based DeepNet. The proposed technique achieved effective performance in terms of True Positive Rate (TPR), True Negative Rate (TNR), and accuracy, with maximum values of 0.933, 0.890, and 0.919, respectively.
Coronavirus spreads very quickly from person to person, and the outbreak has had overwhelming effects in numerous fields across the world. Hence, it is very important to identify COVID-19 disease as fast as possible in order to control the spread of the virus. The COVID-19 outbreak is a respiratory infection caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It was first detected in late 2019 in the city of Wuhan, and in the first month of 2020 COVID-19 spread all over the world through human-to-human transmission.
Coronavirus affects animals, but the virus can be transmitted to humans due to its zoonotic nature.
The widespread occurrence of COVID-19 has been of great concern to the World Health Organization, since no effectual treatment has been found.
The biological composition of COVID-19 consists of positive-sense single-stranded RNA, and the disease is more complicated to diagnose because of the virus's ability to mutate.
Moreover, COVID-19 may cause severe problems for patients, including difficulty in breathing, sudden cardiac arrest, multiorgan failure, and pneumonia.
Medical professionals across the world are conducting demanding research to discover a robust treatment for COVID-19 disease. The distinctive characteristics of COVID-19 are fatigue, fever, muscle pain, cough, breathing problems, and headache.
The virus can aggravate the demise of individuals with weakened immune systems.
COVID-19 is transmitted from one person to another through direct interaction between people. In general, healthy people can be infected through mucous, breath, or hand contact with persons having COVID-19.
Moreover, real-time reverse transcription-polymerase chain reaction (RT-PCR) is the most widespread test approach presently utilized for diagnosing COVID-19. In addition, chest radiological imaging techniques, namely computed tomography (CT) and x-ray, play fundamental roles in diagnosing and treating COVID-19 at an early stage.
CXR images have greater potential in monitoring and examining various types of lung diseases, like infiltration, pneumonia, atelectasis, hernia, and tuberculosis. Consequently, CXR images may also be utilized in the COVID‐19 detection.
A chest radiology image‐driven detection model has numerous benefits over various conventional techniques due to its less cost, less radiation dose, capability to analyze several cases at the same time, and ease of access.
It can be very supportive for hospitals with no or an inadequate amount of testing resources and testing kits. The automatic assessment of CXR images can be carried out using various deep learning-enabled methods, which may speed up the analysis. Deep learning
is currently the most popular research topic in artificial intelligence (AI); it promotes the construction of end-to-end architectures that achieve encouraging results from input data without requiring manual feature extraction. In addition, deep learning approaches have been applied to various problems such as breast cancer classification, brain tumor identification, skin cancer detection, arrhythmia detection, lung cancer classification, fundus image segmentation, and pneumonia prediction using CXR images. In Reference 9, a deep learning approach was introduced for diagnosing COVID-19 automatically. The introduced approach is an end-to-end architecture that does not use any separate feature extraction method and works directly on unprocessed CXR images. In References 22 and 23, a CNN model was devised for diagnosing COVID-19-affected individuals using chest CT imaging, and a gated bidirectional CNN (GCNN) was employed for detecting COVID-19.
Based on in‐depth investigation, it is found that deep learning methods achieve better outcomes in classifying the COVID‐19 disease from lung CT imaging.
Recovery from the virus causing COVID-19 is very much dependent on the immunity of the affected host, and prevention relies on personal hygiene and social distancing. The major goal of this research article is to devise an efficient MLTF + IWS-enabled DeepNet method for detecting COVID-19 disease in people. Initially, the CXR image collected from the dataset is given as input to the preprocessing component to get rid of the noises. Here, the preprocessing is performed by RoI extraction based on an adaptive thresholding technique. Then, the lung lobe segmentation process is performed using the SegNet model. From the interested lobe regions, the proposed MLTF is formed from effective texture features, namely the local optimal oriented pattern (LOOP), local gradient hexa pattern (LGHP), local binary pattern (LBP), local directional pattern (LDP), local ternary pattern (LTP), local directional ternary pattern (LDTP), local gradient pattern (LGP), and harmonic mean local gradient pattern (HLGP). Finally, COVID-19 detection is performed using the proposed IWS-based DeepNet. Here, a Deep CNN classifier is accomplished to detect COVID-19, where the structural optimization of the Deep CNN classifier is done using the IWS algorithm.
The key contributions of this article are as follows:
Proposed MLTF from interested lobe regions: The proposed MLTF is formed from the interested lobe regions by considering texture features, such as LOOP, LGHP, LBP, LDP, LTP, LDTP, LGP, and HLGP, in order to improve the detection process.
Proposed IWS-based DeepNet: An effective COVID-19 detection technique is devised using the IWS-based DeepNet. Here, COVID-19 detection is performed using a Deep CNN classifier, and the developed optimization algorithm, named IWS, is exploited for optimizing the structural layers of the Deep CNN classifier.
Moreover, the IWS algorithm is designed by the integration of IIWO and SLnO. The remainder of the article is organized as follows: the various existing COVID-19 detection methods are reviewed in Section 2, the proposed technique for COVID-19 detection is portrayed in Section 3, the implementation outcomes are elucidated in Section 4, and Section 5 presents the conclusion.
LITERATURE SURVEY
This section describes the different existing COVID-19 detection approaches with their advantages and drawbacks. Minaee et al.
designed a deep learning framework to predict COVID-19 from CXR images. This method had lesser computational complexity and memory utilization. However, a large quantity of clean labeled COVID-19 images is required for improving the detection accuracy. Shankar and Perumal
developed a robust hand-crafted and deep learning feature-driven fusion method to classify COVID-19. Here, the fusion model was employed for fusing handcrafted LBP features with deep learning features. The training time of this method was very low; meanwhile, the technique did not explore other effective classification techniques for better performance. Wang et al.
introduced a deep CNN technique for identifying COVID-19 cases from CXR images. This approach effectively improved the screening procedure for making a precise and accurate analysis. However, this technique failed to determine the risk status of patients, and it also failed to evaluate the hospitalization period for managing the patient population. Waheed et al.
introduced an auxiliary classifier generative adversarial network (ACGAN) for identifying COVID-19. Here, a CNN-driven approach was employed for the detection process. This method achieved an improved performance rate, but it failed to improve the synthetic CXR image quality. Demir
introduced a deep long short term memory (deep LSTM) approach for detecting COVID-19 cases automatically. Besides, the marker-controlled watershed and the Sobel gradient segmentation processes were applied to raw images to improve the preprocessing. This approach effectively reduced the computational time. However, this approach failed to evaluate the performance of the technique on a more consistent and challenging dataset. Togacar et al.
developed a support vector machine (SVM) for COVID-19 detection. This technique minimized the image interferences in the dataset and hence offered proficient features through a stacking method. Meanwhile, this approach failed to work with various sophisticated deep learning methods with the correct depth into the samples for attaining better outcomes. Singh et al.
designed a hybrid social group optimization (HSGO) algorithm to identify COVID-19 disease. Features were extracted from CXR images, and the appropriate features were chosen by the hybrid social group optimization algorithm. This method had minimum computational cost and attained superior outcomes on the equipped dataset. On the other hand, this technique failed to consider a larger amount of training data. Ismael and Şengür
developed a CNN technique for detecting COVID‐19 by considering CXR images. Here, the SVM classifier was utilized with different kernel functions, such as quadratic, linear, cubic, and Gaussian for classifying the deep features. This method obtained higher classification accuracy. However, the major challenge lies in developing graphical user interface (GUI) to support radiologists in detecting COVID‐19.
PROPOSED MLTF + IWS‐BASED DEEPNET METHOD FOR COVID‐19 DETECTION
This section elucidates the proposed MLTF + IWS-based DeepNet method to detect COVID-19. The developed COVID-19 detection using CXR images includes four stages, namely preprocessing, lung lobe segmentation, feature extraction, and COVID-19 detection. At first, the input CXR image is given to the preprocessing phase. In the preprocessing stage, Region of Interest (RoI) extraction is done using an adaptive thresholding mechanism in order to eliminate the noises and redundant pixels from the input image. After preprocessing, lung lobe segmentation is carried out based on SegNet.
After the segmentation of lung lobes, feature extraction is performed using the proposed MLTF, which is newly developed by combining various texture features, namely LOOP, LGHP, LBP, LDP, LTP, LDTP, LGP, and HLGP, from the interested lobe regions. Finally, the IWS-based DeepNet is developed for detecting COVID-19 by optimizing the structure of the deep convolutional neural network (Deep CNN). For structure optimization, the IWS algorithm is used, which is newly designed by integrating IIWO and SLnO.
The schematic diagram of COVID‐19 detection process based on proposed MLTF and optimized IWS‐based DeepNet using CXR images is depicted in Figure 1.
FIGURE 1
Schematic representation of proposed COVID‐19 detection approach
Input image acquisition
The input image considered for COVID-19 detection is the CXR image, which is acquired from the DeepCovid dataset. Consider that the dataset contains a number of input CXR images, as given in Equation (1):

$$\mathcal{D} = \{J_1, J_2, \ldots, J_z, \ldots, J_n\}; \quad 1 \le z \le n \tag{1}$$

where $\mathcal{D}$ is the dataset, $J$ implies an input CXR image, $n$ specifies the total number of images, and $J_z$ represents the zth CXR image. The input CXR image is acquired from the database and presented to the preprocessing module for discarding falsifications from the input image.
Preprocessing by RoI extraction based adaptive thresholding technique
The noises present in the image degrade the visual quality and produce inaccurate results. Hence, preprocessing is done to discard the falsifications or noises in the image. Here, the input CXR image is given to the preprocessing module, where it is preprocessed by the RoI extraction based adaptive thresholding technique. The adaptive thresholding strategy removes the noises in the image and separates the RoI from the input CXR image.
Adaptive thresholding technique: The input CXR image is given to the preprocessing module, where the image is preprocessed using the adaptive thresholding strategy for eliminating the noises in the image. Image thresholding segments the image on the basis of specific image characteristics, such as the pixels and their intensity levels. In the thresholding process, the image is represented in a binary configuration by classifying each pixel as dark or light. Here, adaptive thresholding is utilized to overcome the variations produced as a result of illumination. This strategy utilizes a varying threshold value for every pixel in the image, which makes it effective against illumination variations. In the adaptive thresholding procedure, the integral image is computed initially, and thereafter the average value is measured over the window of every pixel using the integral image. If a pixel value is less than the average of its neighbors, the pixel is set to black; otherwise it is set to white. The major advantages of the thresholding strategy are that it offers uncomplicated, accurate, and clean results. The outcome of the adaptive thresholding strategy is the noise-free preprocessed CXR image, which is fed to the lung lobe segmentation phase.
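The integral-image procedure described above can be sketched as follows. This is an illustrative implementation, not the paper's exact code; the window size and the ratio applied to the local mean are hypothetical parameters.

```python
import numpy as np

def adaptive_threshold(image, window=15, ratio=0.85):
    """Integral-image adaptive thresholding (illustrative sketch).

    Each pixel is compared against the mean of its local window; pixels
    darker than `ratio` times the local mean become 0 (black), the rest
    255 (white). `window` and `ratio` are assumed values for this sketch.
    """
    img = image.astype(np.float64)
    h, w = img.shape
    # Integral image padded with a zero row/column so window sums are easy.
    integral = np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    half = window // 2
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(h):
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        for x in range(w):
            x0, x1 = max(0, x - half), min(w, x + half + 1)
            area = (y1 - y0) * (x1 - x0)
            # Sum over the window via four integral-image lookups.
            total = (integral[y1, x1] - integral[y0, x1]
                     - integral[y1, x0] + integral[y0, x0])
            if img[y, x] * area < total * ratio:
                out[y, x] = 0      # darker than local mean: set black
            else:
                out[y, x] = 255    # otherwise set white
    return out
```

Because the threshold adapts per pixel, a locally dark region is separated from its bright surroundings even under uneven illumination.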
Lung lobe segmentation using SegNet
The preprocessed image is presented to the segmentation phase, which is performed using SegNet for segmenting the lobes from the preprocessed CXR image. In the COVID-19 detection, the SegNet model is employed to produce the CXR image segments considered for detection.
SegNet: The SegNet model consists of an encoder-decoder network with a pixel-wise classification layer. Hence, the SegNet model generates lobe segments from the preprocessed CXR image. The architectural representation of SegNet is presented in Figure 2. Here, the encoder has convolutional layers for performing segmentation, and the training procedure is initialized with weights trained for detection. Every encoder layer has a corresponding decoder layer. The output produced by the decoder is subjected to a multiclass soft-max classifier for producing class probabilities for every pixel. In the encoder network, each encoder convolves the input with a filter bank to form a set of feature maps. After that, an element-wise rectified linear nonlinearity (ReLU) is applied, and the output is subsampled. ReLU ensures the efficiency of the process and works quickly when handling large-scale networks. Moreover, a pooling layer effectively reduces the complexity, and it has no biases or weights to be trained, as the input is processed by combining the local regions of the filter response. An appropriate decoder then upsamples the feature maps using the stored max-pooling indices, and the feature maps are convolved with the trained decoder filter bank for creating dense feature maps. The output of the final decoder is presented to the trainable soft-max classifier, which classifies every pixel independently, producing one probability per class. Consequently, the predicted segmentation assigns to each pixel the class with maximum probability. Hence, SegNet has the capability to produce the segments.
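The final SegNet step, per-pixel soft-max followed by taking the most probable class, can be sketched as below. This is a minimal NumPy illustration of that classification head only, not of the full encoder-decoder network.

```python
import numpy as np

def pixelwise_softmax(logits):
    """Pixel-wise soft-max over class maps, as in SegNet's final layer.

    `logits` has shape (num_classes, H, W); the result holds, for every
    pixel, a probability distribution over the classes.
    """
    shifted = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=0, keepdims=True)

def predict_segments(logits):
    """Predicted segmentation: the class with maximum probability per pixel."""
    return pixelwise_softmax(logits).argmax(axis=0)
```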
FIGURE 2
SegNet architecture
Let the segmented lobes of the input image be expressed as

$$\mathcal{S} = \{S_1, S_2, \ldots, S_e, \ldots, S_k\}; \quad 1 \le e \le k \tag{2}$$

where $k$ denotes the overall number of image segments and $S_e$ denotes the eth image segment. The lobe segmentation output $\mathcal{S}$ is presented as input to the feature extraction module.
Proposed MLTF from interested lobe regions
The lobe segmentation output is given as input to the feature extraction phase for obtaining the proposed MLTF from the interested lobe regions. The proposed MLTF is generated by extracting significant texture features, namely LOOP, LGHP, LBP, LDP, LTP, LDTP, LGP, and HLGP, from the interested lobe regions. Figure 3 depicts the illustration of the proposed MLTF.
FIGURE 3
Representation of proposed MLTF from interested lobe regions
The various effective texture features, namely LOOP, LGHP, LBP, LDP, LTP, LDTP, LGP, and HLGP, are explained below.
LOOP
This feature illustrates the nonlinear incorporation of LDP and LBP. Let the image intensity at the center pixel $(x_c, y_c)$ be $i_c$, and let the intensity of the pth pixel in its 3x3 neighborhood be $i_p$ ($p = 0, \ldots, 7$). The LOOP value of the pixel is calculated as

$$\mathrm{LOOP}(x_c, y_c) = \sum_{p=0}^{7} s(i_p - i_c)\, 2^{w_p}, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & \text{otherwise} \end{cases}$$

where the exponent $w_p$ is the rank of the Kirsch edge response in the pth neighbor's direction. Weighting the bits by these ranks encodes rotation invariance into the main formulation. The resulting LOOP feature is denoted $F_{\mathrm{LOOP}}$.
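A sketch of the LOOP code for a single 3x3 patch follows, assuming the standard eight Kirsch masks and an ascending rank order for the exponents; the exact rank convention of the original LOOP descriptor may differ.

```python
import numpy as np

# Eight Kirsch masks, one per neighbour direction (E, NE, N, NW, W, SW, S, SE).
KIRSCH = [np.array(m, dtype=float) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],    # E
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],    # NE
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],    # N
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],    # NW
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],    # W
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],    # SW
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],    # S
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],    # SE
)]

# Offsets of the eight neighbours, in the same direction order as KIRSCH.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def loop_value(patch):
    """LOOP code of the centre pixel of a 3x3 patch (illustrative sketch).

    Each neighbour's bit s(i_p - i_c) is weighted by 2**w_p, where w_p is
    the rank of the Kirsch edge response in that neighbour's direction;
    ranking the weights by edge strength is what makes the code rotation
    invariant.
    """
    center = patch[1, 1]
    responses = np.array([(m * patch).sum() for m in KIRSCH])
    ranks = responses.argsort().argsort()  # w_p: 0..7 by ascending response
    code = 0
    for p, (dy, dx) in enumerate(OFFSETS):
        if patch[1 + dy, 1 + dx] >= center:
            code += 2 ** int(ranks[p])
    return code
```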
LGHP
The LGHP feature finds the modifications in the original pixels in a higher-order derivative space. Here, four first-order directional derivatives of the image are computed at an arbitrary reference point in the considered directions and at a given distance, each derivative being the difference between the reference pixel and its neighbor in that direction. The second-order pattern at a given distance is then obtained by applying an encoding function to the pairwise derivatives and concatenating the encoded patterns, where the encoding function compares the signs of the two derivatives at a point. In this way, LGHP encodes six binary patterns of nine bits each at a specific distance for the various direction pairs, and these six patterns are transformed into their corresponding decimal values to generate six LGHP matrices. The resulting LGHP feature is denoted $F_{\mathrm{LGHP}}$.
LBP
LBP is one of the most common descriptor features; it captures the local intensity variations of an image and has a good discriminative character. Let $i_c$ be the image intensity at the center pixel and let $i_p$ ($p = 0, \ldots, P-1$) be the intensities of its neighborhood pixels. The LBP value of the pixel is given as

$$\mathrm{LBP} = \sum_{p=0}^{P-1} s(i_p - i_c)\, 2^{p}, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & \text{otherwise} \end{cases}$$

Here, the resulting LBP feature is denoted $F_{\mathrm{LBP}}$.
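The LBP computation for one 3x3 patch can be sketched as below; the clockwise neighbour ordering is an arbitrary but fixed convention of this sketch.

```python
import numpy as np

def lbp_value(patch):
    """LBP code of the centre pixel of a 3x3 patch.

    Each of the 8 neighbours contributes bit 1 when its intensity is at
    least the centre intensity; bits are weighted 2**p clockwise from the
    top-left neighbour.
    """
    center = patch[1, 1]
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for p, (dy, dx) in enumerate(offsets):
        if patch[1 + dy, 1 + dx] >= center:
            code |= 1 << p
    return code
```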
LDP
The LDP feature computes the edge responses of a pixel's neighborhood with the eight Kirsch masks and encodes the most prominent directional responses, which makes it less susceptible to noise than conventional LBP operators; hence the LDP feature is used in the COVID-19 detection process. The directional bit responses are obtained by convolving the neighborhood at each pixel location with the Kirsch masks and thresholding them against the center directional response. The LDP feature output is denoted as $F_{\mathrm{LDP}}$.
LTP
The LTP-based feature is a generalization of the LBP patch-wise texture feature extractor. The LBP feature is formulated as

$$\mathrm{LBP}_{P,R} = \sum_{p=0}^{P-1} s(i_p - i_c)\, 2^{p}$$

where $i_p$ signifies a neighboring pixel, $i_c$ denotes the gray-scale value of the center pixel, $R$ specifies the neighborhood radius, and $P$ denotes the overall number of neighbors. Here, the function $s(x)$ equals 1 when $x \ge 0$ and 0 otherwise. The LTP operator is computed by modifying this step function into a three-valued one:

$$s'(i_p, i_c, \tau) = \begin{cases} 1, & i_p \ge i_c + \tau \\ 0, & |i_p - i_c| < \tau \\ -1, & i_p \le i_c - \tau \end{cases}$$

where $\tau$ represents the threshold value used to identify sharpness. The output of the LTP feature is denoted as $F_{\mathrm{LTP}}$.
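The three-valued LTP pattern is conventionally split into an "upper" code (the +1 bits) and a "lower" code (the -1 bits), which the sketch below illustrates; the threshold value `tau` is an assumed parameter, not one stated in the paper.

```python
import numpy as np

def ltp_codes(patch, tau=5):
    """Upper/lower LTP codes of the centre pixel of a 3x3 patch.

    The step function is three-valued: +1 when a neighbour exceeds the
    centre by at least `tau`, -1 when it is below by at least `tau`,
    and 0 inside the +/-tau dead zone.
    """
    center = patch[1, 1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = lower = 0
    for p, (dy, dx) in enumerate(offsets):
        diff = patch[1 + dy, 1 + dx] - center
        if diff >= tau:
            upper |= 1 << p    # neighbour clearly brighter
        elif diff <= -tau:
            lower |= 1 << p    # neighbour clearly darker
    return upper, lower
```

The dead zone around the centre intensity is what makes LTP less sensitive to small noise fluctuations than plain LBP.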
LDTP
LDTP is a foremost texture function for extracting texture information along the principal axes in every neighborhood. Since the LDTP feature contains this principal information, it is used for COVID-19 detection with improved performance. To obtain the LDTP feature, the eight absolute edge response values are calculated with the Kirsch masks as

$$R_\delta = |T \ast M_\delta|, \qquad \delta = 0, \ldots, 7$$

where $T$ denotes the image, $M_\delta$ signifies the $\delta$th Kirsch mask, and $\ast$ is the convolution operation. The output of the LDTP feature is denoted as $F_{\mathrm{LDTP}}$.
LGP
In LGP, the center pixel's region is characterized by gradient values rather than raw intensities. In general, LGP exploits the gradient values of the eight neighboring pixels, where each gradient is the absolute difference between the center intensity and the corresponding neighbor's value. A locally adaptive threshold is then calculated as the average of the gradient values of the adjoining pixels, and each neighbor is assigned the value "1" if its gradient is greater than the threshold, or "0" otherwise; concatenating these bits yields the LGP code of the center pixel. Because the pattern is built from gradient dissimilarities against a local threshold, LGP extracts invariant patterns and is not affected by local color or illumination variations. The LGP codes can be accumulated into histograms whose dimension is determined by the number of bins. The LGP feature output is represented as $F_{\mathrm{LGP}}$.
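The gradient-plus-adaptive-threshold idea can be sketched for one 3x3 patch as follows; this is an illustrative implementation with the same neighbour ordering as the earlier sketches.

```python
import numpy as np

def lgp_value(patch):
    """LGP code of the centre pixel of a 3x3 patch (illustrative sketch).

    The gradient of each neighbour is its absolute difference from the
    centre; the threshold is the mean of the eight gradients, so the code
    is invariant to global intensity shifts.
    """
    center = patch[1, 1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    grads = np.array([abs(patch[1 + dy, 1 + dx] - center)
                      for dy, dx in offsets], dtype=float)
    threshold = grads.mean()          # locally adapted threshold
    code = 0
    for p, g in enumerate(grads):
        if g > threshold:             # gradient exceeds the local mean
            code |= 1 << p
    return code
```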
HLGP
HLGP is a descriptor feature in which the harmonic mean (HM) is utilized as an adaptive threshold for encoding the gradient values. Because its threshold is adaptive, HLGP has the capability to reduce the outlier dilemma. The harmonic mean of the neighborhood gradient values is calculated as

$$\mathrm{HM} = \frac{P}{\sum_{p=0}^{P-1} \frac{1}{g_p}}$$

where $g_p$ denotes the gradient value of the pth neighbor, obtained as the difference between that neighbor and the center pixel, and $P$ denotes the total number of neighbors. Afterwards, every neighborhood gradient value is compared with the threshold $\mathrm{HM}$: a binary value of 0 is assigned if the gradient is less than $\mathrm{HM}$, otherwise the binary value is fixed as 1. Thereafter, the binary numbers are concatenated in a fixed direction, and the resulting binary code is converted into its corresponding decimal value. The HLGP feature is denoted as $F_{\mathrm{HLGP}}$.

Here, all the various texture features, namely LOOP, LGHP, LBP, LDP, LTP, LDTP, LGP, and HLGP, are individually processed to obtain binary values for each texture feature. Thereafter, the obtained binary values of the different texture features are combined through an XOR operation, which forms 8-bit pixel values. After generating the 8-bit pixel values, each is converted to the decimal value corresponding to its binary code, which replaces the feature value at that location in order to form the final proposed MLTF feature, represented as $F_{\mathrm{MLTF}}$. The proposed MLTF thus obtained is presented as input to the COVID-19 detection module.
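The fusion step above, XOR-ing the per-descriptor 8-bit codes into one MLTF map, can be sketched as follows. The function and its inputs are illustrative; the paper does not give an explicit fusion algorithm beyond the XOR description.

```python
import numpy as np

def fuse_mltf(feature_maps):
    """Fuse per-descriptor 8-bit code maps into the MLTF map by bitwise XOR.

    `feature_maps` is a list of equally shaped uint8 arrays, one per
    descriptor (e.g. LOOP, LGHP, LBP, ...). XOR-ing them yields an 8-bit
    code per pixel whose decimal value replaces that location.
    """
    fused = np.zeros_like(feature_maps[0], dtype=np.uint8)
    for fmap in feature_maps:
        fused ^= fmap.astype(np.uint8)   # combine the binary codes bitwise
    return fused
```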
COVID‐19 detection using proposed IWS‐based DeepNet
The feature extraction output is given as input to the detection phase, where the detection is performed using the IWS-based DeepNet. Here, the deep CNN classifier is employed for COVID-19 detection, and the structural optimization of the deep CNN is carried out using the developed IWS algorithm, which is designed by combining IIWO and SLnO. The deep CNN architecture and its structural optimization are explained below.
Structure of Deep CNN
Deep CNN is a kind of deep neural network whose working procedure is the same as that of a CNN but with several hidden layers. The main motivations for using a deep CNN are that it possesses effective self-learning, fault-tolerance ability, and a layer-by-layer extraction procedure, and it has the potential to operate with inadequate data. Figure 4 presents the architecture of the deep CNN classifier.
FIGURE 4
Architectural representation of DeepCNN
Input layer: The proposed MLTF output obtained from the interested lobe regions is presented as input for the COVID-19 detection process.
Convolutional layer: This layer includes a number of convolution kernels that share weights, such that each set of weight-sharing connections forms a filter, which is then convolved with the input feature. The feature maps are created by performing convolution across the images; that is, the interconnection of neurons with trainable weights and the convolution of the trained weights with the input create the feature maps. The output generated by a convolution layer is obtained by convolving each feature map of the previous layer with the trained filters and summing the results, where the network weights are optimized by the proposed IWS.
Maximum pooling layer: The pooling layer exploits an activation function to assist simplicity and effectiveness while managing a massive number of network connections; it helps to minimize the difficulty of the COVID-19 detection process.
Dropout layer: Dropout is a regularization method that prevents overfitting by randomly discarding the values of some units.
Fully connected layer (FC): The input presented to the FC layer is the output generated from the convolutional and pooling layers, and the FC layer is utilized for combining the various kinds of local information by weighting each feature-map element of the previous layer.
Moreover, the structural optimization is done to optimize the number of convolutional layers, activation layers, maximum pooling layers, and dropout layers. The output obtained at this phase is the detection result of the deep CNN.
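The three core layer operations described above (convolution, ReLU activation, and max pooling) can be sketched in plain NumPy as follows; this illustrates the forward computations only, with none of the trained weights or the exact layer counts of the paper's network.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D sliding-window filtering of a single-channel image.

    Note: like most CNN frameworks, this computes cross-correlation
    (the kernel is not flipped).
    """
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (image[y:y + kh, x:x + kw] * kernel).sum()
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling that keeps the strongest response per block."""
    h, w = feature_map.shape
    h, w = h // size * size, w // size * size
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))
```

A feature map is produced by `max_pool(relu(conv2d(image, kernel)))`, which mirrors the convolution-activation-pooling sequence of each Deep CNN stage.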
Proposed IWS algorithm for structure optimization of Deep CNN
This section elaborates the structural optimization of the Deep CNN classifier using the proposed IWS algorithm for COVID-19 detection. Here, the IWS is designed by integrating SLnO and IIWO. The SLnO algorithm is motivated by the hunting characteristics of sea lions; in particular, it is inspired by the way sea lions use their whiskers to find prey. This algorithm avoids premature convergence and enables better exploration capability so that the optimal location can be identified. Meanwhile, IIWO is an intelligent population-based algorithm that performs global searches quickly in such a way that the population diversity is enhanced. Furthermore, the incorporation of SLnO and IIWO improves the overall system performance and achieves the global best solution by minimizing the computational problems. The algorithmic steps of the developed IWS are explained below.
Step 1: Initialization. The preliminary phase is to initialize the set of candidate solutions (search agents) of the population.
Step 2: Solution encoding representation. The solution vector determines the best solution; its dimensions represent the number of convolution layers, the type of activation function, the number of maximum pooling layers, and the number of dropout layers. Figure 5 depicts the solution encoding of the developed IWS algorithm.
FIGURE 5
Solution encoding of proposed IWS‐based DeepNet
Step 3: Fitness calculation. It is used for computing the optimum solution, such that the solution with the least error between the classifier output and the target output is regarded as the optimal solution; this error value is the fitness measure.
Step 4: Detecting and tracking phase. Based on the SLnO algorithm, the sea lions use their whiskers for predicting the size, location, and shape of the prey. The sea lion then moves toward the target prey.
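The fitness evaluation described in Step 3 can be sketched as below. Mean squared error is an assumption of this sketch; the paper only states that the solution with the least error between classifier output and target is selected.

```python
import numpy as np

def fitness(predicted, target):
    """Error between classifier output and target labels (lower is better).

    Sketched here as mean squared error; the IWS search keeps the
    candidate with the smallest value of this measure.
    """
    predicted = np.asarray(predicted, dtype=float)
    target = np.asarray(target, dtype=float)
    return ((predicted - target) ** 2).mean()
```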
By incorporating SLnO with IIWO, the global convergence characteristics can be improved and the optimal solution can be achieved: the standard IIWO position update (Equation (36)) is substituted into the SLnO target-prey update (Equation (33)) to obtain the hybrid IWS update. In this hybrid update, a random vector within the range [0, 1] scales the distance between the position vector of the target prey and the position vector of the sea lion; a parameter linearly decreases from 2 to 0 over the iterations; a chaotic mapping, the standard deviation, and the nonlinear modulation index come from the IIWO part; and the position of the new weed is attracted toward the optimal weed present in the overall population.
Step 5: Vocalization phase. The sea lions interact with each other through vocalizations. When a sea lion finds prey, it calls the remaining sea lions to surround and harass the prey, and this behavior is expressed through the sound speed of the sea lion leader.
Here, the sound speed of the sea lion leader is expressed in terms of the speed of sound in water and the speed of sound in air, the latter of which is denoted in Equation (45).
Step 6: Attacking phase (exploitation phase). For modeling the hunting characteristics of sea lions, two different behaviors are introduced: (1) dwindling encircling and (2) circle updating. (a) Dwindling encircling: this behavior depends on the linearly decreasing parameter, which assists the sea lion leader in progressing along the track of the prey and surrounding it. (b) Circle updating: the sea lions hunt a bait ball of fishes starting from its edges; here the update uses the absolute distance between the finest solution and the search agent together with a random number in [-1, 1].
Step 7: Searching for prey (exploration phase). In the exploitation phase, the sea lion's position is updated with respect to the optimal search agent, whereas in the exploration phase the search agent's location is updated in accordance with an arbitrarily selected sea lion from the present population.
Step 8: Evaluation of solution feasibility. In this step, the solution with the best fitness value is regarded as the finest solution.

Step 9: Termination. Steps 1 to 8 are repeated until the finest solution is achieved. The pseudocode of the proposed IWS algorithm is presented in Algorithm 1.

Algorithm 1: Pseudocode of the proposed IWS algorithm
1  Input: Solution set, maximal iterations
2  Output: Optimal solution
3  Begin
4    Initialize the solution
5    Select the control parameters
6    Compute the fitness for every search agent by Equation (29)
7    Record the best candidate search agent with optimal fitness
8    If the maximum iteration is not reached
9      Compute SP_leader using Equation (43)
10     If SP_leader < 0.25
11       If |C| < 1
12         Update the position of the current search agent by Equation (46)
13       Else
14         Select a random search agent
15         Update the position of the current search agent by Equation (30)
16       End if
17       Update the position of the current search agent by Equation (48)
18     End if
19     If the search agent does not belong to the search space
20       Go to step 9
21     Else
22       Calculate the fitness of the search agent using Equation (29)
23       Update the best solution if a better one exists
24       Return the optimal solution
25     End if
26   End if
27 End
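The hybrid search loop described in the steps above can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the fitness function, the SP_leader threshold of 0.25, the |C| < 1 switch, and the IIWO-style dispersal term follow the standard SLnO and IWO formulations, and the paper's equation-specific details are simplified.

```python
import numpy as np

def iws_optimize(fitness, dim, bounds, pop_size=20, t_max=50, seed=0):
    """Minimal sketch of the hybrid IWS (IIWO + SLnO) search loop (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))       # initialize the solution set
    fit = np.array([fitness(p) for p in pop])
    best = pop[fit.argmin()].copy()                  # best search agent so far

    for t in range(t_max):
        C = 2 - 2 * t / t_max                        # decreases linearly from 2 to 0
        sigma = ((t_max - t) / t_max) ** 3           # IIWO-style dispersal term
        sp_leader = rng.random()                     # stand-in for the vocalization value
        for i in range(pop_size):
            if sp_leader < 0.25:
                if abs(C) < 1:
                    # encircling: move toward the best agent, perturbed by dispersal
                    dist = abs(2 * rng.random() * best - pop[i])
                    pop[i] = best - C * dist + sigma * rng.standard_normal(dim)
                else:
                    # exploration: follow a randomly chosen sea lion
                    rnd = pop[rng.integers(pop_size)]
                    dist = abs(2 * rng.random() * rnd - pop[i])
                    pop[i] = rnd - C * dist
            else:
                # circle updating (attacking) around the prey (best agent)
                m = rng.uniform(-1, 1)
                pop[i] = abs(best - pop[i]) * np.cos(2 * np.pi * m) + best
            pop[i] = np.clip(pop[i], lo, hi)         # keep agents inside the search space
        fit = np.array([fitness(p) for p in pop])
        if fit.min() < fitness(best):                # keep only improving solutions
            best = pop[fit.argmin()].copy()
    return best

# Example: minimize the sphere function on [-5, 5]^2
best = iws_optimize(lambda x: float((x ** 2).sum()), dim=2, bounds=(-5, 5))
```

Because the best agent is replaced only when a strictly better fitness appears, the loop is monotone in the best fitness, mirroring Step 8's feasibility check.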
RESULTS AND DISCUSSION
The experimental analysis of the developed MLTF + IWS-based DeepNet, based on the evaluation metrics, is described in this section.
Experimental set‐up
The developed MLTF + IWS-based DeepNet is implemented in Python on a system with 4 GB RAM, Windows 10 OS, and an Intel i3 processor.
Dataset description
The dataset utilized for detecting COVID-19 is the DeepCovid dataset.
This dataset contains 5000 CXR images with binary labels for COVID-19 detection and serves as a standard benchmark for the research community. A board-certified radiologist labels the CXR images for the COVID-19 class, so that only images with a clear indication are used for the testing operations. In accordance with the radiologist's recommendation, only anterior-posterior images are considered for predicting COVID-19, as lateral images are not appropriate for this purpose. The anterior-posterior CXR images are evaluated, and images without any minute radiographic indication of COVID-19 are discarded from the dataset.
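A loading-and-filtering step of this kind might look as follows in Python. The CSV layout and the `view` metadata column are hypothetical, since the file organization of the DeepCovid dataset is not specified here; the sketch only illustrates keeping anterior-posterior records with binary labels.

```python
import csv

def load_ap_images(metadata_csv):
    """Keep only anterior-posterior (AP) CXR records with a binary COVID-19 label.

    Expects rows like: filename,label,view  (hypothetical schema), and
    discards lateral views, which are not appropriate for detection.
    """
    records = []
    with open(metadata_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["view"].upper() == "AP":
                records.append((row["filename"], int(row["label"])))
    return records
```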
Experimental outcomes
The implementation outcomes of the developed MLTF + IWS-based DeepNet approach for COVID-19 detection are presented in Figure 6. Figure 6A presents the original CXR image, Figure 6B shows the preprocessed RoI image, and Figure 6C specifies the lobe-segmented image. In addition, the extracted LOOP feature image is portrayed in Figure 6D, the extracted LGHP feature image is shown in Figure 6E, Figure 6F indicates the extracted LBP feature image, Figure 6G represents the extracted LDP feature image, and the extracted LTP feature image is presented in Figure 6H. The extracted LGP feature image is portrayed in Figure 6I, Figure 6J specifies the extracted HLGP feature image, and Figure 6K shows the detected output image.
This section illustrates the performance assessment of the proposed MLTF + IWS-based DeepNet technique using the holdout method. Figure 7 shows the performance of the developed technique in terms of the evaluation measures TPR, TNR, and accuracy. The assessment based on TPR is portrayed in Figure 7A. For 70% training data, the MLTF + IWS-based DeepNet computed TPR values of 0.897, 0.898, 0.899, and 0.901 at iterations 5, 10, 15, and 20, respectively. Figure 7B presents the analysis based on TNR. For 80% training data, the proposed MLTF + IWS-based DeepNet attained TNR values of 0.865, 0.866, 0.867, and 0.868 at iterations 5, 10, 15, and 20. The analysis with respect to accuracy is portrayed in Figure 7C. For 60% training data, the developed MLTF + IWS-based DeepNet obtained accuracy values of 0.864, 0.865, 0.866, and 0.867 at iterations 5, 10, 15, and 20. The performance assessment based on ROC is presented in Figure 7D. When the FPR value is 2, the developed MLTF + IWS-based DeepNet attained TPR values of 0.821, 0.832, 0.843, and 0.854 at iterations 5, 10, 15, and 20.
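For reference, TPR, TNR, and accuracy follow directly from the confusion-matrix counts; a minimal Python helper (not the authors' code) is:

```python
def detection_metrics(tp, tn, fp, fn):
    """Compute TPR (sensitivity), TNR (specificity), and accuracy from counts."""
    tpr = tp / (tp + fn)                   # fraction of COVID-19 cases correctly detected
    tnr = tn / (tn + fp)                   # fraction of non-COVID cases correctly rejected
    acc = (tp + tn) / (tp + tn + fp + fn)  # overall fraction of correct decisions
    return tpr, tnr, acc

# Example with illustrative counts:
# detection_metrics(90, 85, 15, 10) -> (0.9, 0.85, 0.875)
```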
FIGURE 7
Performance assessment of the developed technique using training data (A) TPR, (B) TNR, (C) accuracy, and (D) ROC
Comparative methods
The various comparative methods exploited for the COVID-19 assessment against the developed technique are deep transfer learning, multimodal covid network-III (MMCOVID-NET-III), Bayesian optimization-based CNN, Deep CNN, Gayathri et al., auxiliary GAN, deep LSTM, and randomly initialized CNN (RND-CNN).
Comparative analysis
The comparative assessment of the developed technique using the holdout method and cross validation is explained in this section.
Analysis using holdout method
Figure 8 explains the comparative assessment of the developed technique with respect to the TPR, TNR, and accuracy metrics. Figure 8A portrays the assessment in terms of TPR. For 60% training data, the proposed MLTF + IWS-based DeepNet measured a TPR of 0.890, while deep transfer learning, MMCOVID-NET-III, Bayesian optimization-based CNN, Deep CNN, Gayathri et al., auxiliary GAN, deep LSTM, and RND-CNN measured TPR values of 0.722, 0.745, 0.765, 0.790, 0.814, 0.834, 0.851, and 0.875, respectively. The analysis using the TNR metric is presented in Figure 8B. For 70% training data, deep transfer learning, MMCOVID-NET-III, Bayesian optimization-based CNN, Deep CNN, Gayathri et al., auxiliary GAN, deep LSTM, RND-CNN, and the proposed MLTF + IWS-based DeepNet achieved TNR values of 0.697, 0.705, 0.714, 0.745, 0.763, 0.780, 0.824, 0.833, and 0.867, respectively. Figure 8C illustrates the assessment based on accuracy. For 80% training data, the existing methods, namely deep transfer learning, MMCOVID-NET-III, Bayesian optimization-based CNN, Deep CNN, Gayathri et al., auxiliary GAN, deep LSTM, and RND-CNN, measured accuracy values of 0.723, 0.733, 0.754, 0.770, 0.799, 0.812, 0.853, and 0.875, whereas the developed MLTF + IWS-based DeepNet measured an accuracy of 0.901. Figure 8D portrays the analysis based on ROC. When the FPR value is 2, the proposed MLTF + IWS-based DeepNet measured a TPR of 0.856, while deep transfer learning, MMCOVID-NET-III, Bayesian optimization-based CNN, Deep CNN, Gayathri et al., auxiliary GAN, deep LSTM, and RND-CNN measured TPR values of 0.753, 0.760, 0.766, 0.771, 0.799, 0.823, 0.853, and 0.834, respectively.
FIGURE 8
Analysis of developed technique considering training data (A) TPR, (B) TNR, (C) accuracy, and (D) ROC
Analysis using cross validation
Figure 9 presents the comparative assessment of the developed technique with respect to the performance measures TPR, TNR, and accuracy. Figure 9A depicts the assessment based on TPR. For a K-fold value of 5, the TPR computed by deep transfer learning is 0.752, MMCOVID-NET-III is 0.771, Bayesian optimization-based CNN is 0.792, Deep CNN is 0.813, Gayathri et al. is 0.825, auxiliary GAN is 0.860, deep LSTM is 0.873, RND-CNN is 0.883, and the proposed MLTF + IWS-based DeepNet is 0.922. The analysis using the TNR measure is presented in Figure 9B. For a K-fold value of 6, the developed MLTF + IWS-based DeepNet computed a TNR of 0.875, while the existing methods, namely deep transfer learning, MMCOVID-NET-III, Bayesian optimization-based CNN, Deep CNN, Gayathri et al., auxiliary GAN, deep LSTM, and RND-CNN, measured TNR values of 0.704, 0.737, 0.741, 0.758, 0.775, 0.808, 0.842, and 0.851, respectively. Figure 9C depicts the assessment considering the accuracy metric. For a K-fold value of 7, the accuracy obtained by deep transfer learning is 0.734, MMCOVID-NET-III is 0.754, Bayesian optimization-based CNN is 0.763, Deep CNN is 0.780, Gayathri et al. is 0.801, auxiliary GAN is 0.823, deep LSTM is 0.863, and RND-CNN is 0.895, whereas the proposed MLTF + IWS-based DeepNet achieves 0.913.
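The K-fold protocol used above can be sketched in Python; the splitting is written out by hand for clarity, and the `evaluate` callback (which would train the detector and return a fold accuracy) is a stand-in, not the authors' pipeline.

```python
def kfold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k contiguous folds (sizes differ by at most 1)."""
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def cross_validate(evaluate, n_samples, k):
    """Average a fold-level metric over k train/test splits.

    `evaluate(train_idx, test_idx)` is assumed to train a model on the
    training indices and return its score on the held-out fold.
    """
    folds = kfold_indices(n_samples, k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(evaluate(train_idx, test_idx))
    return sum(scores) / k
```

Each of the k folds serves exactly once as the test set, and the reported value is the mean of the k fold scores, matching the K-fold values of 5 to 8 used in the assessment.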
FIGURE 9
Analysis of the developed method considering K‐fold (A) TPR, (B) TNR, and (C) accuracy
Comparative discussion
This section elaborates the comparative discussion of the developed technique against the various existing techniques. Table 1 portrays the comparison at 90% training data and a K-fold value of 8. For 90% training data, the TPR computed by deep transfer learning is 0.769, MMCOVID-NET-III is 0.786, Bayesian optimization-based CNN is 0.799, Deep CNN is 0.837, Gayathri et al. is 0.854, auxiliary GAN is 0.878, deep LSTM is 0.900, RND-CNN is 0.905, and the proposed MLTF + IWS-based DeepNet is 0.933. For a K-fold value of 8, the developed MLTF + IWS-based DeepNet measured a TNR of 0.890, whereas deep transfer learning, MMCOVID-NET-III, Bayesian optimization-based CNN, Deep CNN, Gayathri et al., auxiliary GAN, deep LSTM, and RND-CNN measured TNR values of 0.714, 0.752, 0.765, 0.770, 0.799, 0.818, 0.855, and 0.875, respectively. For 90% training data, the accuracy measured by deep transfer learning, MMCOVID-NET-III, Bayesian optimization-based CNN, Deep CNN, Gayathri et al., auxiliary GAN, deep LSTM, RND-CNN, and the proposed MLTF + IWS-based DeepNet is 0.745, 0.754, 0.765, 0.791, 0.814, 0.832, 0.877, 0.888, and 0.919, respectively. Thus, Table 1 clearly illustrates that the developed technique achieved a maximal TPR of 0.933 using training data, a maximal TNR of 0.890 using K-fold, and a maximal accuracy of 0.919 using training data.
TABLE 1 Comparative discussion of the developed technique

| Method | Hold-out TPR | Hold-out TNR | Hold-out accuracy | Cross-validation TPR | Cross-validation TNR | Cross-validation accuracy |
| --- | --- | --- | --- | --- | --- | --- |
| Deep transfer learning | 0.769 | 0.712 | 0.745 | 0.768 | 0.714 | 0.738 |
| MMCOVID-NET-III | 0.786 | 0.724 | 0.754 | 0.791 | 0.752 | 0.765 |
| Bayesian optimization-based CNN | 0.799 | 0.754 | 0.765 | 0.804 | 0.765 | 0.770 |
| Deep CNN | 0.837 | 0.779 | 0.791 | 0.827 | 0.770 | 0.784 |
| Gayathri et al. | 0.854 | 0.785 | 0.814 | 0.853 | 0.799 | 0.807 |
| Auxiliary GAN | 0.878 | 0.800 | 0.832 | 0.874 | 0.818 | 0.829 |
| Deep LSTM | 0.900 | 0.856 | 0.877 | 0.891 | 0.855 | 0.868 |
| RND-CNN | 0.905 | 0.865 | 0.888 | 0.914 | 0.875 | 0.900 |
| Proposed MLTF + IWS-based DeepNet | 0.933 | 0.888 | 0.919 | 0.931 | 0.890 | 0.918 |
CONCLUSION
This research presents an effective technique to detect COVID-19 disease using the MLTF + IWS-based DeepNet based on CXR images. The performance of the developed method is analyzed using the performance metrics TPR, TNR, and accuracy. The proposed technique attained better performance, with a highest TPR of 0.933, a highest TNR of 0.890, and a highest accuracy of 0.919. The proposed technique offers efficient and acceptable detection performance for COVID-19 cases. However, the detection accuracy of the proposed method needs further improvement. In the future, more details, such as the vaccination plan and effective deep learning classifiers, will be considered to enhance the detection process.