Literature DB >> 35945987

Jaya-tunicate swarm algorithm based generative adversarial network for COVID-19 prediction with chest computed tomography images.

Palanivel Rajan Doraiswami1, Velliangiri Sarveshwaran2, Iwin Thanakumar Joseph Swamidason3, Sona Chandra Devadass Sorna4.   

Abstract

The novel coronavirus (COVID-19) emerged recently as a respiratory syndrome. Chest computed tomography (CT) scanning is a significant technology for monitoring and predicting COVID-19, yet predicting COVID-19 in patients at an early stage remains an open challenge for the research community. Therefore, an effective prediction mechanism named the Jaya-tunicate swarm algorithm driven generative adversarial network (Jaya-TSA with GAN) is proposed in this research to identify patients with COVID-19 infections. The developed Jaya-TSA is an incorporation of the Jaya algorithm with the tunicate swarm algorithm (TSA). Lung lobes are segmented using Bayesian fuzzy clustering, which effectively finds the boundary regions of the lung lobes. Based on the extracted features, COVID-19 prediction is accomplished using a GAN, whose optimal solution is obtained by training it with the proposed Jaya-TSA with respect to a fitness measure. The dimensionality of the features is reduced by extracting only the optimal features, which increases the speed of the training process. Moreover, the developed Jaya-TSA based GAN attained outstanding effectiveness, achieving a specificity of 0.8857, an accuracy of 0.8727, and a sensitivity of 0.85 when varying the training data.
© 2022 John Wiley & Sons, Ltd.

Keywords:  COVID‐19; Jaya algorithm; computed tomography (CT); generative adversarial network (GAN); tunicate swarm algorithm (TSA)

Year:  2022        PMID: 35945987      PMCID: PMC9353441          DOI: 10.1002/cpe.7211

Source DB:  PubMed          Journal:  Concurr Comput        ISSN: 1532-0626            Impact factor:   1.831


INTRODUCTION

Computed tomography (CT) is a tool used to diagnose lung diseases such as pneumonia. The CT procedure has a faster turnaround time than the molecular diagnostic tests performed in an ordinary laboratory, offers comprehensive information relevant to pathology, and provides quantitative measurement of lung lesion dimensions. Radiography and CT have emerged as essential players in the preliminary screening and diagnosis of COVID‐19. However, the shortage of radiologists and the overwhelming number of patients result in higher false positives. A computer-aided system is therefore required to find infected cases, examine patients, and conduct virus surveillance. CT images are used to find the presence and distribution of parenchymal abnormalities, such as ground‐glass opacity (GGO), which is defined as a misty increase in lung attenuation without obscuration of the underlying vessels. GGO with interlobular septal thickening or an intralobular reticular arrangement, or GGO with consolidation, is termed a region of opacification; GGO may also appear with reticulation, air bronchogram, lymphadenopathy (lymph nodes larger than 1 cm in diameter), and pleural effusion. A CT-based artificial intelligence (AI) scheme can provide early diagnosis for monitoring, planning, and treatment. Typical CT images help to screen suspected cases, although the images of different pneumonias are similar and overlap with other inflammatory lung diseases. Thoracic CT is a tool used for assessing patients with respiratory symptoms and is also used to test for COVID‐19. The first COVID‐19 case was reported in China at the end of 2019; thereafter, COVID‐19 turned into a worldwide outbreak.
COVID‐19 was declared an international public health emergency by the World Health Organization (WHO) on March 11, 2020. As of May 26, 2020, more than 200 countries or territories were affected by COVID‐19, with around 5.6 million confirmed cases and 348,000 deaths, and both the confirmed and death counts climbed quickly worldwide. The COVID‐19 pandemic has taken more than 100,000 human lives and cost the world economy several trillion dollars. Early and accurate technologies to screen patients and monitor treatment progress remain a challenging task in controlling this crisis. Characteristics of the disease include respiratory symptoms, cough, pneumonia, dyspnea, and fever. The rapidly increasing number of deaths and confirmed cases has overburdened medical systems worldwide. Reverse‐transcription polymerase chain reaction (RT‐PCR) analysis is mainly used for identifying COVID‐19, but it is time-consuming, frequently taking days to return patient results. Screening large numbers of cases for treatment and successful isolation is a high priority for controlling the spread of COVID‐19 worldwide. The pathogenic laboratory test is considered the gold standard, but it can produce false-negative results and is time-consuming, so an accurate diagnostic method is urgently needed to combat the disease. Convolutional neural networks (CNNs) are favored for identifying COVID‐19 infections in chest CT images. A three-dimensional (3D) CNN has been used for classifying pulmonary blood vessels in CT images. In Reference 14, a deep CNN was used to categorize interstitial lung disease from CT images. In Reference 15, knowledge‐based collaborative deep learning was used to classify lung nodules as malignant or benign, achieving better accuracy in lung nodule classification.
A supervised discriminative learning model was used in Reference 16 to detect pulmonary fissures, and a multi-view CNN has been used for detecting pulmonary nodules in CT images. Moreover, adversarial networks have been utilized to segment abdominal CT images. A 3D CNN was used in Reference 17 for detecting pulmonary nodules in CT images. Automatic diagnostic systems built on AI technologies increase the efficiency of CT-based diagnosis by reading patient CT images and delivering a diagnosis result, and such AI-driven methods have shown promising performance in COVID‐19 prediction. A particle swarm driven adaptive neuro‐fuzzy inference system (ANFIS) was designed in Reference 18 to enhance the classification rate, and a gated bidirectional CNN has been used to sort out patients infected with COVID‐19. A recurrent CNN was designed in Reference 19 for classifying stenosis and coronary artery plaque in CT, and a 3D CNN has been used for fusing multimodality information to perform tumor segmentation in CT images. This research models a prediction method based on the developed Jaya‐TSA‐based GAN to predict patients affected by COVID‐19. The proposed framework executes the prediction process in six phases: image acquisition, preprocessing, lung lobe segmentation, lesion segmentation, feature extraction, and COVID‐19 prediction. Initially, the input CT image is acquired from the database and passed to the preprocessing module, where noise is removed from the image. The preprocessed result is passed to the lung lobe segmentation module, in which the lung lobes are segmented by Bayesian fuzzy clustering (BFC). After lung lobe segmentation, the lesions are segmented using a hybrid Renyi entropy measure.
The features of the segmented regions are then extracted using the local ternary pattern (LTP) and local gradient pattern (LGP). Finally, COVID‐19 prediction is performed by a GAN tuned by the Jaya‐TSA. The major contribution of this research is as follows: Jaya‐TSA based GAN: an optimal prediction strategy is modeled using the Jaya‐TSA based GAN to predict COVID‐19 patients. The GAN forecasts COVID‐19 patients from the CT modality; based on the loss function of the GAN, patients affected by COVID‐19 are accurately predicted from the extracted LTP and LGP features. Moreover, the GAN is trained by the Jaya‐TSA, which is developed by integrating the Jaya algorithm and the TSA to reveal an optimal solution and thereby increase prediction performance. The rest of the paper is organized as follows: Section 2 surveys existing prediction techniques, Section 3 elaborates the Jaya‐TSA based GAN for predicting COVID‐19 patients, Section 4 presents the results and discussion of the Jaya‐TSA based GAN approach, and Section 5 concludes the research.

MOTIVATION

In this section, various conventional COVID‐19 prediction schemes are surveyed along with their advantages and drawbacks that motivate the researchers to design Jaya‐TSA based GAN for predicting the COVID‐19 patients.

Literature survey

Various existing prediction methods are portrayed in this section. Zhang et al. introduced an AI system to diagnose novel coronavirus pneumonia (NCP). It assisted physicians and radiologists in performing rapid diagnosis when the health system was overloaded, and it effectively identified the clinical markers associated with the lesion regions of NCP. The system was capable of offering an accurate prognosis for clinical management and obtained high pixel accuracy, but it failed to offer a refined clinical prognostic model for a better training process. Pathak et al. designed a deep transfer learning model for classifying patients infected with COVID‐19. Here, a top‐2 loss function based on cost-sensitive features was used to handle imbalanced and noisy data. The method attained higher training and validation accuracy, but it did not employ a genetic algorithm for tuning the hyperparameters. Wang et al. introduced a multi‐task prior‐attention framework to examine COVID‐19 using CT images. The method correctly located the lesion regions and increased the classification performance, but it failed to identify COVID‐19 at the starting stage. Wang et al. developed a deep learning algorithm for screening COVID‐19. The training data were relatively limited, and the performance of the method improved as the training volume increased; however, it did not utilize features such as epidemiological and genetic factors for enhanced diagnosis. Liu et al. introduced a lesion‐attention deep neural network (LA‐DNN) to perform binary classification for COVID‐19 diagnosis. It effectively learned high-quality radiology features from limited sample information and was capable of achieving the clinical standard in diagnostic testing, but it obtained low prediction accuracy. Farid et al. introduced a hybrid feature selection mechanism to identify COVID‐19 infections using a CT image dataset. A fitness function was used to increase convergence in selecting the best features, which were chosen by the hybrid feature selection mechanism. It increased the classification performance, but failed to improve the prediction time. Song et al. modeled a deep learning mechanism for detecting COVID‐19 patients and finding lesions automatically from CT images. It increased the performance of both classification and pneumonia detection and acted as a rapid identification tool for patients, but it achieved a high false-positive rate. Mobiny et al. introduced Detail‐Oriented Capsule Networks (DECAPS) for diagnosing COVID‐19 automatically from CT scans. It used an inverted dynamic routing model to increase the stability of the model, and the data scarcity issue was solved using a generative adversarial network. The method helps radiologists identify COVID‐19 at the starting stage, but it failed to analyze performance on a larger dataset.

Challenges

Some of the issues faced by the existing prediction techniques are expressed as follows: The AI system in Reference 1 detected COVID‐19 using CT images, but it failed to analyze identification performance on a large database. In Reference 9, a deep transfer learning mechanism was designed for detecting COVID‐19 from CT images, but this scheme did not consider the optimal selection of hyperparameters to achieve better prediction. A multi-task prior-attention mechanism was developed in Reference 20 to find COVID‐19 from CT images, but this method failed to use lobe segmentation for finding the lobe location. The deep learning method in Reference 6 was used for classifying COVID‐19 from CT images, but hierarchical features of the CT images were not incorporated to increase the detection rate. Moreover, if the swabbed regions do not contain an accumulation of coronavirus, the RT‐PCR test is not effective at correctly identifying a COVID‐19 patient.

PROPOSED JAYA‐TUNICATE SWARM ALGORITHM BASED GENERATIVE ADVERSARIAL NETWORK FOR COVID‐19 PREDICTION

CT has become an important tool for predicting COVID‐19. In this research, an effective Jaya‐TSA based GAN is developed to identify COVID‐19 patients. The proposed Jaya‐TSA‐based GAN performs COVID‐19 prediction through the following phases: image acquisition, preprocessing, lung lobe segmentation, lesion segmentation, feature extraction, and COVID‐19 prediction. At first, the input CT image is received from the database and fed to the preprocessing phase, in which noise is removed from the CT image; the resulting image is passed to the lung lobe segmentation module, where the lung lobes are partitioned using BFC. The segmented lung lobes are then passed to the lesion segmentation module, in which the lesions are partitioned by a hybrid Renyi entropy measure; here, the hybrid model is designed using deep joint segmentation and the active contour model (ACM). From the segmented lesions, the features of the image are extracted using LTP and LGP. Finally, COVID‐19 prediction is accomplished using a GAN tuned by the proposed Jaya‐TSA, which is modeled by incorporating the Jaya algorithm into the TSA. Figure 1 illustrates a schematic view of the Jaya‐TSA based GAN for COVID‐19 prediction.
FIGURE 1

Schematic view of Jaya‐TSA based GAN


Input image acquisition

CT imaging has been shown to have high sensitivity for COVID‐19 prediction in cases with respiratory symptoms. CT showed strong results in COVID‐19 prediction, as it allows the extraction of various features, and it has been found that the lung volume in a CT scan can be considered to predict short-term outcomes in COVID‐19 patients. Hence, this research performs COVID‐19 prediction using quantitative CT scans. The dataset is assumed to contain a given count of CT images, with an image placed at each index of the database. Each image, of fixed dimensions, is used for COVID‐19 prediction by passing it to the preprocessing phase.

Preprocessing of input image

Preprocessing is a challenging task in an image processing system. It is used to reduce the noise present in CT images for better image enhancement, and it increases the interpretability and accuracy of the result. Accurate prediction is more readily achieved when the image is preprocessed with respect to image quality and size, which makes the image more suitable for further processing in a computer-aided diagnostic system. The input image is fed to the preprocessing module, in which it is preprocessed by removing noise; the resulting preprocessed image, of fixed size, is passed on to lung lobe segmentation.
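The paper does not name the specific noise-removal filter it applies; as an illustrative sketch under that assumption, a simple median filter (a common choice for suppressing impulse noise in CT slices) could look like this:

```python
def median_filter(img, ksize=3):
    """Replace each interior pixel with the median of its ksize x ksize
    neighbourhood; borders are copied through unchanged.

    This is an illustrative stand-in, not the paper's actual filter."""
    h, w = len(img), len(img[0])
    r = ksize // 2
    out = [row[:] for row in img]          # start from a copy of the input
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = sorted(img[i + di][j + dj]
                            for di in range(-r, r + 1)
                            for dj in range(-r, r + 1))
            out[i][j] = window[len(window) // 2]
    return out
```

A single bright outlier (impulse noise) surrounded by uniform tissue intensity is replaced by the neighbourhood median, while smooth regions pass through unchanged.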

Lung lobe segmentation using BFC

Lung lobe segmentation is key to finding the boundaries of the lobes and preventing pleural injury during monitoring and treatment. By accurately segmenting the lobes, the disease can be predicted faster; hence automatic lung lobe segmentation is highly desirable for predicting COVID‐19, as manual segmentation is not feasible given the large number of slices in a CT scan. Accordingly, lung lobe segmentation is realized through BFC on the preprocessed image. The BFC model casts the clustering process as maximum-a-posteriori (MAP) estimation: fuzzy clustering is implemented in a probabilistic way, where fuzziness and probability are incorporated together to obtain BFC, and Bayesian inference is used to estimate the number of clusters. In BFC, the fuzzifier constraint may take values less than one or even negative values. The BFC scheme is composed of a data likelihood distribution termed the fuzzy data likelihood (FDL), a distribution over the cluster memberships termed the fuzzy cluster prior (FCP), and a Gaussian distribution on the cluster prototypes. The membership variables are considered to have a prior distribution, represented by the FCP, which consists of three factors: two of them counterbalance the normalization constant of the FDL, and the third is a Dirichlet likelihood factor over a vector lying on the standard simplex. The BFC scheme places a Gaussian prior on each of the prototype factors. The quantities involved include the number of data points, a user-set parameter (whose value is set to 3), the number of clusters, the dimensionality of the data, the memberships of the data points, an identity matrix of matching dimension, and a normalization constant.
The arguments of the density factors are assembled into a data matrix, a membership matrix whose entry at a given row and column is the membership of that data point in that cluster, and a prototype matrix. The joint likelihood of the data is formed from the FDL, the FCP, and the prototype prior. The result obtained after lung lobe segmentation of the preprocessed image using BFC is fed to the lesion segmentation phase for further processing.
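The exact BFC equations are not recoverable from the extracted text. As a related baseline, plain (non-Bayesian) fuzzy c-means illustrates the alternating membership/prototype updates that BFC builds on; the cluster count, fuzzifier m, and fixed seed below are illustrative choices, not values from the paper:

```python
import random

def fuzzy_cmeans_1d(data, k, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D intensities: membership matrix u and
    cluster prototypes v are updated alternately.  This is the classical
    non-Bayesian ancestor of BFC, shown only as a sketch."""
    random.seed(1)                       # deterministic demo initialisation
    v = random.sample(data, k)           # initial cluster prototypes
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - vj) + 1e-12 for vj in v]   # distances to prototypes
            # membership of x in cluster j (rows sum to 1)
            u.append([1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0))
                                for l in range(k))
                      for j in range(k)])
        # prototype update: fuzzily weighted mean of the data
        v = [sum(u[i][j] ** m * data[i] for i in range(len(data)))
             / sum(u[i][j] ** m for i in range(len(data)))
             for j in range(k)]
    return u, v
```

On two well-separated intensity groups the prototypes converge near the group centres, with each membership row summing to one.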

Lesion segmentation using hybrid‐based Renyi entropy measure

Lung lesion segmentation builds on the quality of the lung segmentation. It is the process of segmenting lesion tissue from a modality such as CT. Lung lesion segmentation is a complex task because of the high heterogeneity of lung lesions; moreover, accurate segmentation is difficult, as features such as texture and shape are susceptible to boundary transformations. The segmented lung lobes are fed to the lesion segmentation module, in which the lesions are partitioned by a hybrid Renyi entropy measure, where the hybrid model is derived by fusing deep joint segmentation and the active contour model. Deep joint segmentation: here, the optimal segments are determined by region similarity. It contains three modules, namely joining, region fusion, and segmentation point generation. The segmented lung lobes are partitioned into grids, and the pixels are then fused together based on a mean and a threshold factor. The optimal segments are identified from the distance between the segmentation points and the deep points based on the mean square error (MSE). Active contour model (ACM): the ACM is used to partition the segmented image into sub-regions with smooth, closed boundaries, and the accurate contours of the lung lesions are segmented using it. The ACM segments the lesions under the influence of internal, image, and external constraint forces: the internal energy imposes a piecewise-smoothness constraint, the image forces attract the contour toward features such as edges, lines, and subjective contours, and the external constraint forces drive the contour toward the desired local minimum. The total energy function is the sum of the internal energy, the image forces, and the external constraint forces.
Renyi entropy: the Renyi entropy of a given order is computed for both the deep joint segmentation output and the ACM output, and the two entropies are combined to obtain the final lesion segmentation output, which is fed to the feature extraction phase for collecting the essential features from the segmented image.
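The Renyi entropy formula itself is garbled in the source; for a discrete distribution p it is H_alpha(p) = (1/(1-alpha)) * log(sum_i p_i^alpha) for alpha > 0, alpha != 1, and it reduces to the Shannon entropy as alpha approaches 1. A minimal sketch:

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (alpha > 0, alpha != 1) for a
    discrete probability distribution p (entries sum to 1)."""
    assert alpha > 0 and alpha != 1
    assert abs(sum(p) - 1.0) < 1e-9
    return (1.0 / (1.0 - alpha)) * math.log(sum(pi ** alpha for pi in p))
```

For a uniform distribution over K outcomes the Renyi entropy equals log K for every order alpha, which gives a quick sanity check.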

Feature extraction

Feature extraction is the procedure of reducing the dimensionality of the image by selecting the optimal features from the partitioned image. It reduces the training time and increases the accuracy, inference speed, and training speed. The features extracted from the lesion segmentation output are the LTP and the LGP. Local ternary pattern (LTP): the LTP is an extension of the local binary pattern (LBP) to 3-valued codes. Using the gray-level value of the center pixel and a user-specified threshold (set to 5 here), each neighboring pixel of the segmented image is coded according to the difference between its gray level and that of the center pixel: neighbors above the center by more than the threshold, within the threshold, and below the center by more than the threshold map to the three code values. Local gradient pattern (LGP): the LGP generates invariant patterns based on local intensity changes. The LTP and LGP features are concatenated into the feature vector applied to COVID‐19 prediction.
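The LTP encoding described above can be sketched as follows; the threshold t = 5 matches the paper, while the clockwise neighbour ordering is an assumption for illustration:

```python
def ltp_code(patch, t=5):
    """Local ternary pattern of the centre pixel of a 3x3 patch.

    Each neighbour g is coded +1 if g >= centre + t,
    -1 if g <= centre - t, and 0 otherwise (within the threshold band).
    Neighbour order (assumed): clockwise from the top-left corner."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return [1 if g >= c + t else (-1 if g <= c - t else 0)
            for g in neighbours]
```

In practice the ternary code is usually split into an "upper" and a "lower" binary pattern before histogramming, but the 3-valued code above is the core of the descriptor.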

COVID‐19 prediction using proposed Jaya‐TSA based GAN

After acquiring the features from the segmented CT image, COVID‐19 prediction is performed by the proposed Jaya‐TSA based GAN. The prediction process is carried out by passing the feature vector as input to the generative model, GAN, whose tuning is achieved by the Jaya‐TSA.

Structure of GAN

The GAN is a powerful generative network model used here for predicting COVID‐19 from the extracted features. The generative model captures the data distribution, whereas the discriminative model estimates the probability that a sample comes from the training data. A game is established between the discriminator and the generator, with the generator taking a sample as input. Figure 2 represents the architecture of the GAN.
FIGURE 2

Structure of GAN

The GAN is a latent-variable generative model used to produce realistic samples from the extracted features through an adversarial learning process. The GAN contains two components, a generator and a discriminator, that play a two-player min-max game. The generator maps latent vectors from a known prior to the sample space, while the discriminator differentiates the generated (fake) samples from the real samples. Here, the feature vector is passed to the generator, which produces the generated result. To facilitate the generator, a data mismatch term is incorporated into the adversarial training loss; this encourages the network to generate accurate results. The final generator loss is therefore the adversarial loss plus the mismatch term, weighted by a hyperparameter that controls the contribution of each loss term. The generator and discriminator are iteratively trained using the proposed Jaya‐TSA.
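The loss expressions above are missing from the extracted text. As a minimal numeric sketch (assuming the standard binary cross-entropy adversarial losses; the lambda-weighted mismatch term is an assumption, not the paper's exact formula):

```python
import math

def bce(prediction, label):
    """Binary cross-entropy for a single scalar prediction in (0, 1)."""
    eps = 1e-12                       # guard against log(0)
    return -(label * math.log(prediction + eps)
             + (1 - label) * math.log(1 - prediction + eps))

def discriminator_loss(d_real, d_fake):
    """Discriminator objective: push real samples toward 1, fakes toward 0."""
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake, mismatch=0.0, lam=1.0):
    """Generator objective: fool the discriminator (d_fake -> 1), plus an
    optional data-mismatch term weighted by the hyperparameter lam."""
    return bce(d_fake, 1.0) + lam * mismatch
```

A perfect discriminator (real near 1, fake near 0) has near-zero loss, while a maximally uncertain discriminator output of 0.5 gives the generator a loss of log 2.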

Proposed Jaya‐TSA for training GAN

The tuning of the GAN is done by the Jaya‐TSA, which is derived by incorporating the TSA into Jaya optimization. The TSA is a bio‐inspired algorithm that imitates the jet propulsion and swarm behavior of tunicates during foraging and navigation. Tunicates are cylindrical in shape, open at one end and closed at the other. Each tunicate is only a few millimeters in size, and the gelatinous tunic present on every tunicate helps the individuals group together. A tunicate moves through the ocean with a fluid-jet-like propulsion, which enables it to move vertically in the water. Jaya optimization needs only common control parameters and moves toward the optimal solution while avoiding the worst. Integrating the swarm behavior and propulsion of the TSA with Jaya optimization yields an optimum solution and enhances the solution quality. The proposed Jaya‐TSA is dominant because of its limited computation time, faster convergence, and low implementation complexity. The algorithmic steps of the Jaya‐TSA are elaborated as follows: (i) Initialization: consider a population of candidate solutions, the number of design variables, and the termination criteria; the population size specifies the count of candidate solutions. (ii) Compute the fitness measure: the fitness is computed to acquire the optimum solution; it is defined as the mean squared error between the target output and the network output over the total count of samples. (iii) Find the best and worst solutions: identify the candidates that obtain the best and worst values of the objective function, which is a minimization function; the candidate with the lowest objective value is the best solution and the candidate with the highest objective value is the worst solution.
(iv) Update the solutions: each design variable of each candidate is updated at every iteration. The Jaya update moves a candidate toward the best solution and away from the worst, while the standard TSA expression updates a candidate with respect to the best search agent; substituting the TSA terms into the Jaya update yields the final update equation, Equation (39). The quantities involved include a gravity-force vector, the water-flow advection in the deep ocean, several random numbers drawn from the unit interval, and the social force among search agents, whose initial and subordinate speeds are set to 1 and 4, respectively. (v) Re-evaluate the fitness: if the newly obtained solution is better, replace the existing solution with the new one; otherwise, keep the previous solution as the best. (vi) Termination: the above steps are repeated until the optimum solution is attained or the termination criteria are satisfied. Algorithm 1 illustrates the pseudo code of the Jaya‐TSA based GAN: initialize the population; compute the fitness; identify the best and worst solutions; update each solution by Equation (39); if the new solution is better, accept it and replace the old one, otherwise keep the previous solution; repeat until termination and return the optimum solution.
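Equations (31)-(39) are not recoverable from the extracted text. The sketch below shows only the two pieces that are standard and stated in the steps above: the MSE fitness from step (ii) and the classical Jaya update from step (iv); the hybridization with the TSA gravity/social-force terms (the paper's Equation (39)) is deliberately omitted rather than guessed:

```python
import random

def fitness(targets, outputs):
    """Step (ii): mean squared error between target and predicted outputs."""
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / len(targets)

def jaya_update(x, best, worst):
    """Classical Jaya update (Rao, 2016): move each design variable toward
    the best candidate and away from the worst, with random weights r1, r2
    drawn from [0, 1)."""
    r1, r2 = random.random(), random.random()
    return [xi + r1 * (bi - abs(xi)) - r2 * (wi - abs(xi))
            for xi, bi, wi in zip(x, best, worst)]
```

In the hybrid scheme, the best-solution term of this update would be replaced by the TSA's best-search-agent expression before Equation (39) is applied to every candidate.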

RESULTS AND DISCUSSION

This section explains the results and discussion of Jaya‐TSA based GAN for COVID‐19 prediction with CT images.

Experimental setup

The experimentation of the developed framework is done in MATLAB using the UCSD-AI4H/COVID-CT dataset and the deep COVID dataset. Dataset description: the UCSD-AI4H/COVID-CT dataset consists of 349 CT images containing clinical findings of COVID‐19, acquired from 216 patients. The images of this dataset were collected from bioRxiv, medRxiv, Lancet, NEJM, JAMA, and other sources, and CT images containing COVID‐19 abnormalities were selected. The dataset is useful for developing prediction and diagnosis models for COVID‐19.

Evaluation metrics

The effectiveness of the proposed model is analyzed with respect to specificity, sensitivity, and accuracy, defined in terms of the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Accuracy: the ability to identify healthy and patient cases correctly, Accuracy = (TP + TN) / (TP + TN + FP + FN). Sensitivity: the measure used to identify the positive COVID‐19 cases correctly, Sensitivity = TP / (TP + FN). Specificity: the measure used to identify the negative COVID‐19 cases correctly, Specificity = TN / (TN + FP).
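The three metrics follow directly from their definitions; a minimal sketch, using 1 for COVID-19-positive and 0 for negative labels:

```python
def confusion_counts(y_true, y_pred):
    """Count TP, TN, FP, FN for binary labels (1 = COVID-19 positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)
```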

Experimental results

This section depicts the experimental images of the developed approach for COVID‐19 prediction from CT images. Figure 3 depicts the experimental result for the first CT image, named image‐1. Figure 3A represents the input image‐1, Figure 3B the preprocessed image‐1, Figure 3C the segmented lung lobe image‐1, Figure 3D the segmented lesion image‐1, Figure 3E the LGP feature of input image‐1, and Figure 3F the LTP feature of input image‐1.
FIGURE 3

Experimental result acquired by considering the CT image‐1, (A) input image‐1, (B) preprocessed image‐1, (C) segmented lung lobe image‐1, (D) segmented lesion image‐1, (E) LGP feature of input image‐1, (F) LTP feature of input image‐1

Figure 4 depicts the experimental result for the second CT image, named image‐2. Figure 4A represents the input image‐2, Figure 4B the preprocessed image‐2, Figure 4C the segmented lung lobe image‐2, Figure 4D the segmented lesion image‐2, Figure 4E the LGP feature of input image‐2, and Figure 4F the LTP feature of input image‐2.
FIGURE 4

Experimental result acquired by considering the CT image‐2, (A) input image‐2, (B) preprocessed image‐2, (C) segmented lung lobe image‐2, (D) segmented lesion image‐2, (E) LGP feature of input image‐2, (F) LTP feature of input image‐2

Figure 5 depicts the experimental result for the third CT image, named image‐3. Figure 5A represents the input image‐3, Figure 5B the preprocessed image‐3, Figure 5C the segmented lung lobe image‐3, Figure 5D the segmented lesion image‐3, Figure 5E the LGP feature of input image‐3, and Figure 5F the LTP feature of input image‐3.
FIGURE 5

Experimental result acquired by considering the CT image‐3, (A) input image‐3, (B) preprocessed image‐3, (C) segmented lung lobe image‐3, (D) segmented lesion image‐3, (E) LGP feature of input image‐3, (F) LTP feature of input image‐3


Performance analysis

Figure 6 presents the performance analysis of the proposed approach. Figure 6A represents the accuracy analysis. At 60% of training samples, the proposed Jaya‐TSA based GAN computed accuracies of 0.5688, 0.5917, 0.6101, and 0.6468 for the different epoch settings, respectively.
FIGURE 6

Performance analysis with epoch, (A) accuracy, (B) sensitivity, (C) specificity

Figure 6B portrays the sensitivity analysis. For 60% of training data, the proposed Jaya‐TSA based GAN with varying epochs measured sensitivity values of 0.6076, 0.6582, 0.6835, and 0.7089. Figure 6C depicts the specificity analysis. For 60% of training data, the proposed Jaya‐TSA based GAN with various epochs computed specificity values of 0.5468, 0.5540, 0.5683, and 0.6115.

Comparative methods

The efficacy of the proposed method is compared against existing methods: deep transfer learning, prior‐attention residual learning (PARL), deep learning, lesion‐attention deep neural network (LA‐DNN), WOA‐based GAN, E‐DiCoNet, multi‐COVID‐Net, WOANet, GSO‐based GAN, and GAN.

Comparative analysis

This section elaborates the comparative analysis of the proposed model with respect to both the training‐data percentage and the K‐fold value.

Analysis using Dataset 1

Analysis based on training samples. Figure 7 depicts the comparative analysis of the proposed model by varying the training data. Figure 7A represents the accuracy analysis. At 60% of training data, the accuracy computed by deep transfer learning, PARL, deep learning, LA‐DNN, WOA‐based GAN, E‐DiCoNet, multi‐COVID‐Net, WOANet, GSO‐based GAN, GAN, and the proposed Jaya‐TSA based GAN is 0.6101, 0.4587, 0.5917, 0.6651, 0.6604, 0.6875, 0.6807, 0.6740, 0.6948, 0.6808, and 0.7018, respectively. Figure 7B portrays the sensitivity analysis. At 60% of training data, the sensitivity computed by the same methods is 0.6203, 0.4684, 0.6456, 0.6582, 0.6403, 0.6666, 0.6600, 0.6535, 0.6737, 0.6601, and 0.6962, respectively. Figure 7C represents the specificity analysis. At 70% of training data, the specificity computed by the same methods is 0.7308, 0.50, 0.7212, 0.7596, 0.7579, 0.7891, 0.7813, 0.7735, 0.7975, 0.7814, and 0.8269, respectively. Figure 7D portrays the precision analysis. At 70% of training data, the precision computed by the same methods is 0.7126, 0.5515, 0.7126, 0.7498, 0.7579, 0.7891, 0.7813, 0.7735, 0.7975, 0.7814, and 0.8055, respectively. Figure 7E shows the F1‐score analysis. At 80% of training data, the F1‐score computed by the same methods is 0.7149, 0.6314, 0.7464, 0.7991, 0.7772, 0.8092, 0.8011, 0.7932, 0.8177, 0.8012, and 0.8340, respectively.
FIGURE 7

Analysis with training data, (A) accuracy, (B) sensitivity, (C) specificity, (D) precision, (E) F1‐score
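Besides accuracy, sensitivity, and specificity, Figure 7 also reports precision and F1‐score. A minimal sketch of how those two metrics follow from the same confusion-matrix counts (illustrative values, not the paper's):

```python
# Hedged sketch: precision and F1-score from binary prediction counts.
# Counts below are illustrative only.

def precision_f1(tp: int, fp: int, fn: int) -> tuple:
    """Return (precision, F1-score) for a binary classifier."""
    precision = tp / (tp + fp)        # fraction of positive calls that are correct
    recall = tp / (tp + fn)           # same quantity as sensitivity
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, f1

p, f1 = precision_f1(tp=68, fp=12, fn=15)
```

The F1‐score is the harmonic mean of precision and recall, so it penalizes a model that is strong on one and weak on the other.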

FIGURE 8

Analysis by K‐fold, (A) accuracy, (B) sensitivity, (C) specificity, (D) precision, (E) F1‐score

Analysis based on K‐fold. Figure 8 represents the comparative analysis of the developed model by varying the K‐fold value. Figure 8A represents the accuracy analysis. For K = 8, the accuracy observed by deep transfer learning, PARL, deep learning, LA‐DNN, WOA‐based GAN, E‐DiCoNet, multi‐COVID‐Net, WOANet, GSO‐based GAN, GAN, and the proposed Jaya‐TSA based GAN is 0.5828, 0.5337, 0.5521, 0.6074, 0.7204, 0.7500, 0.7426, 0.7352, 0.7580, 0.7427, and 0.7669, respectively. Figure 8B illustrates the sensitivity analysis. For K = 7, the sensitivity acquired by the same methods is 0.6076, 0.5570, 0.5424, 0.5758, 0.6213, 0.6468, 0.6404, 0.6341, 0.6537, 0.6405, and 0.6582, respectively. Figure 8C depicts the specificity analysis. At K = 8, the specificity computed by the same methods is 0.5673, 0.50, 0.5577, 0.6058, 0.7276, 0.7575, 0.7500, 0.7426, 0.7656, 0.7501, and 0.8173, respectively. Figure 8D portrays the precision analysis. At K = 8, the precision computed by the same methods is 0.5887, 0.5391, 0.5577, 0.6134, 0.7276, 0.7575, 0.7500, 0.7426, 0.7656, 0.7501, and 0.7745, respectively. Figure 8E shows the F1‐score analysis. At K = 9, the F1‐score computed by the same methods is 0.7683, 0.6276, 0.5873, 0.6581, 0.7502, 0.7810, 0.7733, 0.7656, 0.7893, 0.7734, and 0.8558, respectively.
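The K‐fold evaluation above partitions the data into K folds, training on K − 1 folds and testing on the held‐out fold. A minimal stdlib sketch of such an index split (no shuffling; not the paper's exact protocol):

```python
# Hedged sketch of K-fold index generation; names and sizes are illustrative.

def k_fold_indices(n_samples: int, k: int):
    """Yield (train_idx, test_idx) pairs for k roughly equal folds."""
    # Distribute the remainder over the first n_samples % k folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]           # held-out fold
        train = indices[:start] + indices[start + size:]  # remaining k-1 folds
        yield train, test
        start += size

folds = list(k_fold_indices(10, 5))
```

Every sample appears in exactly one test fold, so each reported metric is averaged over K disjoint held‐out sets.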
Analysis of ROC. Figure 9 presents the ROC‐curve analysis. When the FPR is 60%, the TPR of deep transfer learning, PARL, deep learning, LA‐DNN, WOA‐based GAN, E‐DiCoNet, multi‐COVID‐Net, WOANet, GSO‐based GAN, GAN, and the proposed Jaya‐TSA based GAN is 0.6329, 0.7089, 0.5063, 0.7089, 0.6385, 0.7154, 0.5139, 0.7176, 0.7252, 0.6999, and 0.7215, respectively. When the FPR is 80%, the TPR of the same methods is 0.75, 0.775, 0.725, 0.825, 0.7556, 0.7815, 0.7326, 0.8337, 0.8413, 0.84875, and 0.875, respectively.
FIGURE 9

Analysis of ROC curve
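An ROC curve such as the one in Figure 9 is traced by sweeping a decision threshold over the classifier's scores and recording the resulting (FPR, TPR) pairs. A hedged sketch with toy scores and labels (not data from the paper):

```python
# Minimal ROC sketch: sweep thresholds over prediction scores.
# Scores and labels below are illustrative only.

def roc_points(scores, labels):
    """Return a list of (fpr, tpr) points for a binary classifier."""
    thresholds = sorted(set(scores), reverse=True)   # strictest threshold first
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]                            # nothing predicted positive
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

curve = roc_points([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0])
```

A perfectly separating score ranking passes through the point (0, 1), as in this toy example; the closer a method's curve runs to that corner, the better.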

Analysis of segmentation accuracy. Figure 10 presents the analysis of segmentation accuracy for dataset 1. For 15 sample images, the segmentation accuracy of the proposed method is 69.817 for COVID cases and 78.680 for non‐COVID cases.
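The paper does not spell out how segmentation accuracy is scored; one common choice is pixel‐wise agreement between the predicted and reference masks, sketched below with illustrative flattened masks (an assumption, not the paper's definition):

```python
# Hedged sketch: pixel-wise segmentation accuracy over flattened binary masks.
# The masks below are toy data, not images from the paper.

def pixel_accuracy(pred_mask, true_mask) -> float:
    """Percentage of pixels where the predicted mask matches the reference."""
    total = len(pred_mask)
    correct = sum(1 for p, t in zip(pred_mask, true_mask) if p == t)
    return 100.0 * correct / total

acc = pixel_accuracy(pred_mask=[1, 1, 0, 0, 1], true_mask=[1, 0, 0, 0, 1])
```

Overlap-based measures such as Dice or IoU are alternative choices that weight the lesion region more heavily than the background.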
FIGURE 10

Analysis of segmentation accuracy


Analysis using Dataset 2

Analysis based on training samples. Figure 11 depicts the comparative analysis of the proposed model by varying the training data. Figure 11A represents the accuracy analysis. At 60% of training data, the accuracy computed by deep transfer learning, PARL, deep learning, LA‐DNN, WOA‐based GAN, E‐DiCoNet, multi‐COVID‐Net, WOANet, GSO‐based GAN, GAN, and the proposed Jaya‐TSA based GAN is 0.6040, 0.5426, 0.5858, 0.6585, 0.6538, 0.6806, 0.6739, 0.6672, 0.6879, 0.6740, and 0.6948, respectively. Figure 11B portrays the sensitivity analysis. At 60% of training data, the sensitivity computed by the same methods is 0.6141, 0.5464, 0.6391, 0.6516, 0.6339, 0.6600, 0.6534, 0.6469, 0.6670, 0.6535, and 0.6892, respectively. Figure 11C represents the specificity analysis. At 70% of training data, the specificity computed by the same methods is 0.7235, 0.5487, 0.7139, 0.7520, 0.7503, 0.7812, 0.7735, 0.7658, 0.7895, 0.7735, and 0.8187, respectively. Figure 11D portrays the precision analysis. At 70% of training data, the precision computed by the same methods is 0.7055, 0.5587, 0.7055, 0.7423, 0.7503, 0.7812, 0.7735, 0.7658, 0.7895, 0.7735, and 0.7975, respectively. Figure 11E shows the F1‐score analysis. At 80% of training data, the F1‐score computed by the same methods is 0.7078, 0.6251, 0.7389, 0.7911, 0.7694, 0.8011, 0.7931, 0.7853, 0.8096, 0.7932, and 0.8257, respectively.
FIGURE 11

Analysis with training data, (A) accuracy, (B) sensitivity, (C) specificity, (D) precision, (E) F1‐score

Analysis based on K‐fold. Figure 12 represents the comparative analysis of the developed model by varying the K‐fold value. Figure 12A represents the accuracy analysis. For K = 8, the accuracy observed by deep transfer learning, PARL, deep learning, LA‐DNN, WOA‐based GAN, E‐DiCoNet, multi‐COVID‐Net, WOANet, GSO‐based GAN, GAN, and the proposed Jaya‐TSA based GAN is 0.5770, 0.5490, 0.5950, 0.6013, 0.7132, 0.7425, 0.7352, 0.7279, 0.7504, 0.7352, and 0.7592, respectively. Figure 12B illustrates the sensitivity analysis. For K = 7, the sensitivity acquired by the same methods is 0.6015, 0.5800, 0.5389, 0.5700, 0.6151, 0.6403, 0.6340, 0.6277, 0.6471, 0.6341, and 0.6516, respectively. Figure 12C depicts the specificity analysis. At K = 8, the specificity computed by the same methods is 0.5635, 0.5950, 0.5521, 0.5997, 0.7203, 0.7500, 0.7425, 0.7352, 0.7579, 0.7426, and 0.8091, respectively. Figure 12D portrays the precision analysis. At K = 8, the precision computed by the same methods is 0.5828, 0.5954, 0.5521, 0.6073, 0.7203, 0.7500, 0.7425, 0.7352, 0.7579, 0.7426, and 0.7668, respectively. Figure 12E shows the F1‐score analysis. At K = 9, the F1‐score computed by the same methods is 0.7607, 0.6213, 0.5845, 0.6515, 0.7427, 0.7732, 0.7656, 0.7580, 0.7814, 0.7656, and 0.8473, respectively.
FIGURE 12

Analysis by K‐fold, (A) accuracy, (B) sensitivity, (C) specificity, (D) precision, (E) F1‐score

Analysis of ROC. Figure 13 presents the ROC‐curve analysis. When the FPR is 40%, the TPR of deep transfer learning, PARL, deep learning, LA‐DNN, WOA‐based GAN, E‐DiCoNet, multi‐COVID‐Net, WOANet, GSO‐based GAN, GAN, and the proposed Jaya‐TSA based GAN is 0.4437, 0.4518, 0.4276, 0.4679, 0.4491, 0.4580, 0.4349, 0.4763, 0.4835, 0.4617, and 0.4760, respectively. When the FPR is 80%, the TPR of the same methods is 0.72, 0.744, 0.696, 0.792, 0.725, 0.750, 0.703, 0.800, 0.808, 0.815, and 0.84, respectively.
FIGURE 13

Analysis of ROC curve

Analysis of segmentation accuracy. Figure 14 presents the analysis of segmentation accuracy using dataset 2. For 25 sample images, the segmentation accuracy of the proposed method is 85.4187 and 73.7682 for COVID and non‐COVID cases, respectively.
FIGURE 14

Analysis of segmentation accuracy


CONCLUSION

In this research, an effective prediction method based on a Jaya‐TSA‐based GAN is modeled to identify patients affected by COVID‐19. The proposed approach involves the following phases: image acquisition, preprocessing, lung lobe segmentation, lesion segmentation, feature extraction, and COVID‐19 prediction. Initially, the input image is fed to the preprocessing module, and the resulting image is passed to the lung lobe segmentation module, in which the lung lobes are partitioned by Bayesian fuzzy clustering (BFC). After lung lobe segmentation, the lung lesions are segmented using a hybrid Renyi‐entropy‐based measure. In the feature‐acquisition phase, features such as LGP and LTP are extracted from the segmented result. The prediction process is carried out by a GAN trained with the proposed Jaya‐TSA, which is the incorporation of the Jaya optimization algorithm and TSA. The Jaya‐TSA‐based GAN obtained better performance on the metrics of accuracy, sensitivity, and specificity, with values of 0.8727, 0.85, and 0.8857, respectively, when varying the training data. A future direction of this research is the use of deep learning classifiers for further performance enhancement.
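As a hedged illustration of the optimizer underlying the hybrid, the classic Jaya rule moves every candidate toward the current best solution and away from the worst; the TSA coupling and the GAN-based fitness evaluation used in the paper are not reproduced here, and all names and values below are illustrative:

```python
# Minimal sketch of one iteration of the classic Jaya update rule
# (Rao's formulation), with greedy acceptance. Toy fitness, toy population.
import random

def jaya_update(population, fitness):
    """Move each candidate toward the best solution and away from the worst,
    keeping the move only if it improves the fitness measure."""
    best = min(population, key=fitness)    # assuming fitness is minimized
    worst = max(population, key=fitness)
    updated = []
    for x in population:
        r1, r2 = random.random(), random.random()
        candidate = [xi + r1 * (bi - abs(xi)) - r2 * (wi - abs(xi))
                     for xi, bi, wi in zip(x, best, worst)]
        updated.append(candidate if fitness(candidate) < fitness(x) else x)
    return updated

def fitness_fn(v):
    # Toy fitness: squared distance from the origin (minimized)
    return sum(c * c for c in v)

pop = [[3.0, -2.0], [1.0, 1.0], [-4.0, 0.5]]
new_pop = jaya_update(pop, fitness_fn)
```

The greedy acceptance step guarantees that no candidate's fitness ever worsens between iterations, which is why Jaya is parameter-free apart from the population size and iteration count.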

CONFLICT OF INTEREST

The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
REFERENCES (24 in total)

1.  Enhanced local texture feature sets for face recognition under difficult lighting conditions.

Authors:  Xiaoyang Tan; Bill Triggs
Journal:  IEEE Trans Image Process       Date:  2010-02-17       Impact factor: 10.856

2.  Knowledge-based Collaborative Deep Learning for Benign-Malignant Lung Nodule Classification on Chest CT.

Authors:  Yutong Xie; Yong Xia; Jianpeng Zhang; Yang Song; Dagan Feng; Michael Fulham; Weidong Cai
Journal:  IEEE Trans Med Imaging       Date:  2018-10-17       Impact factor: 10.048

3.  Local transform features and hybridization for accurate face and human detection.

Authors:  Bongjin Jun; Inho Choi; Daijin Kim
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2013-06       Impact factor: 6.226

4.  Deep Transfer Learning Based Classification Model for COVID-19 Disease.

Authors:  Y Pathak; P K Shukla; A Tiwari; S Stalin; S Singh; P K Shukla
Journal:  Ing Rech Biomed       Date:  2020-05-20

5.  Pulmonary Artery-Vein Classification in CT Images Using Deep Learning.

Authors:  Pietro Nardelli; Daniel Jimenez-Carretero; David Bermejo-Pelaez; George R Washko; Farbod N Rahaghi; Maria J Ledesma-Carbayo; Raul San Jose Estepar
Journal:  IEEE Trans Med Imaging       Date:  2018-05-04       Impact factor: 10.048

6.  FissureNet: A Deep Learning Approach For Pulmonary Fissure Detection in CT Images.

Authors:  Sarah E Gerard; Taylor J Patton; Gary E Christensen; John E Bayouth; Joseph M Reinhardt
Journal:  IEEE Trans Med Imaging       Date:  2018-08-10       Impact factor: 10.048

7.  Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study.

Authors:  Heshui Shi; Xiaoyu Han; Nanchuan Jiang; Yukun Cao; Osamah Alwalid; Jin Gu; Yanqing Fan; Chuansheng Zheng
Journal:  Lancet Infect Dis       Date:  2020-02-24       Impact factor: 25.071

8.  Automatic Screening of COVID-19 Using an Optimized Generative Adversarial Network.

Authors:  Tripti Goel; R Murugan; Seyedali Mirjalili; Deba Kumar Chakrabartty
Journal:  Cognit Comput       Date:  2021-01-25       Impact factor: 4.890

9.  Multi-COVID-Net: Multi-objective optimized network for COVID-19 diagnosis from chest X-ray images.

Authors:  Tripti Goel; R Murugan; Seyedali Mirjalili; Deba Kumar Chakrabartty
Journal:  Appl Soft Comput       Date:  2021-12-09       Impact factor: 6.725

