Literature DB >> 34812247

Cognitive computing-based COVID-19 detection on Internet of things-enabled edge computing environment.

E Laxmi Lydia1, C S S Anupama2, A Beno3, Mohamed Elhoseny4,5, Mohammad Dahman Alshehri6, Mahmoud M Selim7.   

Abstract

In the current pandemic, smart technologies such as cognitive computing, artificial intelligence, pattern recognition, chatbots, wearables, and blockchain can effectively support the collection, analysis, and processing of medical data for decision making. In particular, cognitive computing aids medical professionals in the disease diagnosis process by processing massive quantities of data rapidly and generating customized smart recommendations. At the same time, the world is facing the COVID-19 pandemic, and early detection is essential to reduce the mortality rate. Deep learning (DL) models are useful in assisting radiologists to investigate large quantities of chest X-ray images. However, they require a large amount of training data, which needs to be centralized for processing. Therefore, the federated learning (FL) concept can be used to generate a shared model without the use of local data for DL-based COVID-19 detection. In this view, this paper presents a federated deep learning-based COVID-19 (FDL-COVID) detection model on an IoT-enabled edge computing environment. Primarily, the IoT devices capture the patient data, and then the DL model is designed using the SqueezeNet model. The IoT devices upload the encrypted variables to the cloud server, which then performs FL on major variables using the SqueezeNet model to produce a global cloud model. Moreover, the glowworm swarm optimization algorithm is utilized to optimally tune the hyperparameters involved in the SqueezeNet architecture. A wide range of experiments was conducted on a benchmark CXR dataset, and the outcomes were assessed with respect to different measures. The experimental outcomes demonstrated the enhanced performance of the FDL-COVID technique over the other methods.
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021.


Keywords:  COVID-19; Chest X-ray images; Cognitive computing; Deep learning; Edge computing; Federated learning; Internet of things; Pattern recognition

Year:  2021        PMID: 34812247      PMCID: PMC8600340          DOI: 10.1007/s00500-021-06514-6

Source DB:  PubMed          Journal:  Soft comput        ISSN: 1432-7643            Impact factor:   3.732


Introduction

Recently, cognitive computing has rapidly transformed the healthcare industry, assisting physicians in treating diseases better and improving patient services. Cognitive computing examines huge quantities of data promptly to respond to particular queries and offer intelligent recommendations. On the other hand, the rapid growth of social networking and Internet of things (IoT) applications has resulted in a dramatic increase in the data created at network edges (Wang et al. 2019). It is anticipated that the rate of data generation will surpass the capability of the present Internet in the near future (Chiang and Zhang 2016). Because of data privacy concerns and network bandwidth limits, it is impractical and often needless to send the entire data to a remote cloud (Kelly 2016). Local data processing and storage with global management are made possible by the developing technology of mobile edge computing (MEC) (Kelly 2016), in which edge nodes such as home gateways, sensors, small cells, and micro-servers are outfitted with computation and storage capacity. Multiple edge nodes collaborate with the remote cloud to perform large-scale distributed tasks involving both remote coordination and execution and local processing. To analyze huge volumes of data and obtain effective features for the prediction, detection, and classification of upcoming events, ML methods are frequently employed. This form of distributed ML across a federation of edge nodes is called federated learning (FL) (Mao et al. 2017). Figure 1 shows the framework of the IoT-enabled MEC system.
Fig. 1

Structure of IoT-enabled MEC systems

The COVID-19 pandemic has caused continual loss to the health and daily life of people globally. Thus, studies on diagnosing and detecting COVID-19 patients are highly significant (Shankar et al. 2021; Dash et al. 2021). The main clinical manifestations of COVID-19 pneumonia include systemic pain, fever, chills, and dry cough; some patients have abdominal symptoms. Hence, it is essential to test more people without delay (Satpathy et al. 2021; Khadidos et al. 2020). ML and computer vision technologies play a significant part in this process. DL has made a large contribution to image classification in the healthcare sector and has become an efficient tool for physicians to analyze and judge a patient's situation. To obtain a robust and accurate deep model, the key requirement is massive and widely varied training data, and gathering such training data is one of the main problems. To a certain extent, insufficient data has hindered the implementation of DL methods for detecting COVID-19. FL is one of the available approaches for addressing this problem. FL is an architecture for learning across multiple organizations without sharing an individual's data, making it possible to solve the fundamental challenges of data silos and data privacy. FL applications in medical big data are a promising line of study. The fundamental idea of FL is to use the datasets spread across multiple devices to collectively build a distributed model without sharing local raw data, which effectively secures personal data.

This paper presents a federated deep learning-based COVID-19 (FDL-COVID) detection model on an IoT-enabled edge computing environment. Initially, the FDL-COVID technique enables the IoT devices to capture the patient data, and then the DL model is designed based on the SqueezeNet architecture.
The IoT devices upload the encrypted variables to the cloud server, which then performs FL on major variables using the SqueezeNet model to produce a global cloud model. Furthermore, a glowworm swarm optimization (GSO) algorithm-based hyperparameter optimizer is applied for optimal selection of the hyperparameters involved in the SqueezeNet model. A wide range of experimental analyses is performed on the benchmark CXR dataset, and the outcomes are evaluated with respect to diverse measures.

Related works

Liu et al. (2020) proposed a scheme that utilizes FL for COVID-19 data training and conducted experiments to verify its efficiency. They also compared the performance of four common models (COVID-Net, MobileNet-v2, ResNet18, and ResNeXt) with and without the FL architecture. Park et al. (2021) proposed a method, called FedPSO, that uses PSO rather than FedAvg to upgrade the global model by gathering the weights of the learned local models; it increases robustness on unstable network platforms by transferring score values instead of large weight sets. Dou et al. (2021) determined the feasibility of an FL technique to detect COVID-19-related CT abnormalities, with external validation on patients from a multinational study. Abdul Salam et al. (2021) investigated the efficiency of FL versus conventional learning by developing two ML models (federated and conventional) with TensorFlow and Keras. During the training phase, they plotted and recorded the predictive loss and accuracy for every round to determine which factors affect model performance, such as the optimizer, activation function, data size, number of rounds, and learning rate. They found that the softmax activation function and the SGD optimizer provided the best predictive loss and accuracy; changing the number of rounds and the learning rate had only a slight influence, while increasing the data size had no noticeable effect on predictive loss and accuracy. Feki et al. (2021) presented a collaborative FL architecture permitting multiple medical organizations to screen for COVID-19 from chest X-ray images using DL without sharing patient data. They examined several important specificities and properties of FL settings, including the naturally arising non-IID and unbalanced data distributions. Qayyum et al. (2021) utilized the emergent model of CFL for automated diagnosis of COVID-19 and evaluated the presented architecture under distinct experimental settings on two standard datasets. Zhang et al. (2021) presented a new dynamic fusion-enabled FL method for medical image analysis to identify COVID-19. First, they designed a framework for a dynamic fusion-based FL system for analyzing medical diagnostic images. Additionally, they assembled a medical diagnostic image dataset for detecting COVID-19 that can be used by ML methods for image analysis. Kumar et al. (2021) proposed an architecture that gathers a small quantity of data from distinct sources and trains a global DL model using blockchain-based FL, where the blockchain validates the data and FL trains the model globally while maintaining the privacy of each organization. First, they proposed a data normalization method that handles the heterogeneity arising from data collected at distinct medical institutions with distinct types of CT scanners. Next, they utilized CapsNet-based segmentation and classification to detect COVID-19 patients. Lastly, they designed a technique that can collectively train a global model using blockchain with FL while maintaining privacy. Vaid et al. (2021) used FL and ML methods that avoid centrally aggregating raw medical data across multiple organizations to predict the death of hospitalized COVID-19 patients within 7 days. LR with L1 regularization (LASSO) and MLP models were trained using local data at all the sites.

Background and problem definition

FL is a learning approach proposed (Konečný et al. 2017) for datasets distributed across devices. It involves training a model on the datasets shared by several devices while avoiding leakage of data, which is beneficial because it enhances privacy and decreases transmission costs. With FL, ANN models can be trained without data breaches or exposure of private data. Furthermore, transmitting the entire data from many devices to a central server increases storage costs and network traffic; FL considerably decreases the transmission cost by exchanging only the weights obtained from the trained models. Figure 2 summarizes the FL procedure.
Fig. 2

Process involved in FL

The FL procedure comprises the following steps:

1. The server transmits the learning model to all the clients.
2. Each client trains the received model on its local user data.
3. Every client transmits its trained model back to the server.
4. The server aggregates the collected models into an upgraded global model.
5. The server transmits the upgraded model to all the clients, and steps 1 to 5 are repeated.

In this study, the COVID-19 CXR image owners are denoted as {F_1, ..., F_N}. Under this constraint, each of them would need to train its individual model M_i on its corresponding data D_i. A traditional technique would be to place all the data together, D = D_1 ∪ ... ∪ D_N, and use it for training to obtain a model M_SUM. In the FL procedure, the data owners collectively train a model M_FED, and no data owner reveals its personal data D_i to the others. Moreover, the accuracy of M_FED, denoted as V_FED, must be close to the performance V_SUM of M_SUM, i.e., |V_FED − V_SUM| < δ, where δ denotes a nonnegative real number; when this holds, the FL algorithm is said to have a δ-accuracy loss.
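The client-server loop described above can be sketched as a minimal federated-averaging simulation. This is a simplified, unencrypted stand-in for the procedure in the text; the linear model, client data, and all constants are hypothetical, chosen only to make the round structure concrete:

```python
import numpy as np

def local_train(weights, data, lr=0.1):
    """Hypothetical local update: one gradient step of least squares y = X @ w."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_w, client_data):
    """One FL round: broadcast, local training, and size-weighted aggregation."""
    locals_ = [local_train(global_w.copy(), d) for d in client_data]  # steps 1-3
    sizes = np.array([len(d[1]) for d in client_data], dtype=float)
    # Step 4: FedAvg-style weighted average of the client models
    return np.average(locals_, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80):            # two clients with private, noiseless data
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):          # step 5: repeat the rounds
    w = federated_round(w, clients)
print(np.round(w, 3))
```

The aggregated model recovers the underlying parameters without either client's raw data ever leaving its own `local_train` call, which is the point of the scheme.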

Materials and methods

Framework of federated learning

Figure 3 illustrates the overall framework of the proposed model. FL is a distributed learning technique. The server maintains the entire procedure and allocates the model to several user terminals (UTs). The server fixes a score S and removes UTs on the basis of the proportion used for updating the central model of the server. The client-enhanced model parameters are then uploaded to the server to update the server model parameters, after which the updated model is allocated to the UTs to improve the UT models. This method ensures the privacy and accuracy of the UTs, employs the UT computing power and the large amount of client data for learning, and maintains a strong central model. Clients use their actual private data for training and provide the trained local models to the FL servers. In general, the FL training procedure comprises the following three training phases. Here, the local model denotes the model trained on every contributing device, and the global model denotes the model obtained after aggregation at the FL server.
Fig. 3

General framework of the proposed model

Step 1: Task initiation. The server defines the training task, that is, the corresponding data requirements and target application. Simultaneously, the server specifies the global model and determines the training parameters, for example, the learning rate. The server then assigns the initialized global model W_G^0 and the training task to the participating users, completing the task distribution.

Step 2: Local model training and update. Training is executed based on the global model W_G^t, where t denotes the current iteration index, and every participating client utilizes local data and resources to update the local model parameters W. The aim of each iteration is to find the optimal parameters W that minimize the loss function L(W).

Step 3: Global model aggregation and update. The server aggregates the local models of the participating clients and forwards the upgraded global model parameters W_G^{t+1} to the clients who keep the data.

In this work, SqueezeNet is utilized for learning both the cloud model and the client-side models. Considering the features of FL, this study makes substantial modifications to the local SqueezeNet network: the key parameters before the last hidden layer are shared, while the parameters between the last hidden layer and the output layer are not. The details are described in the succeeding subsections. The cloud server utilizes public data and the parameters uploaded by the clients to establish a global cloud model M_G. The objective function during training can be given by

    min_W (1/n) Σ_{i=1}^{n} ℓ(f(x_i; W), y_i),

where ℓ represents the loss function of the trained model, for example, the cross-entropy loss; (x_i, y_i) denotes a sample and its equivalent label; n signifies the sample size of the public data; and W indicates the parameter matrix that should be learned, containing the weights and biases of the hidden layers. After the cloud model is determined, its parameters are allocated to every user.
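As a concrete illustration of the cross-entropy objective used by the cloud model, a small numpy sketch of the softmax cross-entropy loss over a batch follows; the logits and labels below are hypothetical, standing in for SqueezeNet outputs over the three classes:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    """Mean cross-entropy loss over a batch, as in the cloud objective."""
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

# Hypothetical logits for 4 samples over 3 classes (normal, pneumonia, COVID-19)
logits = np.array([[2.0, 0.1, -1.0],
                   [0.2, 1.5, 0.3],
                   [-0.5, 0.0, 2.2],
                   [1.0, 1.0, 1.0]])
labels = np.array([0, 1, 2, 0])
print(round(cross_entropy(logits, labels), 4))
```

Minimizing this quantity over the parameter matrix W is the training objective stated above; a completely uninformative prediction (uniform logits) contributes log 3 per sample.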

User‐side model

The user creates a local SqueezeNet model mirroring the cloud model. The training procedure remains generally similar, except that the sample data are comparatively smaller and belong to private data (Wang et al. 2020). For a user k, the local SqueezeNet model is denoted as M_k, and its objective function is given by

    min_{W_k} (1/n_k) Σ_{i=1}^{n_k} ℓ(f(x_i; W_k), y_i),

where n_k is the size of user k's private data. Since the significant variables of the local SqueezeNet, W_k, are uploaded to the cloud in encrypted form, the cloud trains on the parameter set {W_1, ..., W_K} to update the global cloud model and its parameters W_G, and later allocates the upgraded parameters to each user. Based on actual requirements, the local parameters can be upgraded frequently, for example, every night. The aforementioned procedure is a dynamic process of iterated optimization that continually enhances the detection capability of the model.

Network architecture

In this study, the SqueezeNet architecture is employed for the detection of COVID-19 using CXR images. CNN is one of the most common DL techniques, mainly utilized for image classification problems. It generally contains five types of layers: input, convolution, pooling, fully connected, and output layers. The practical advantage of a CNN is that it contains fewer parameters, which considerably decreases the time taken for learning and the amount of data required to train the model. Additionally, a CNN can be trained end to end for the selection and extraction of image features and, finally, can be utilized for predicting or classifying the image (Muhammad et al. 2021). Since the number of parameters in VGGNet and AlexNet kept increasing, the SqueezeNet network was proposed to use a low number of parameters while maintaining accuracy (Xu et al. 2020). The fire module is the core building block of SqueezeNet, and its framework is displayed in Fig. 4. The module is separated into squeeze and expand parts. The squeeze layer has only 1×1 Conv kernels, while the expand layer has both 1×1 and 3×3 Conv kernels. The number of squeeze 1×1 kernels is s_1×1, and the numbers of expand 1×1 and 3×3 kernels are e_1×1 and e_3×3; the module should fulfill s_1×1 < e_1×1 + e_3×3. Min Lin et al. utilized an MLP rather than the conventional linear Conv kernel to improve the expressive power of networks. If the number of input and output channels is large, the Conv kernel parameters become large; including a 1×1 Conv before each module decreases the number of input channels, so the Conv kernel parameters and the operational complexity decrease. Finally, a 1×1 Conv is included to increase the number of channels and improve feature extraction (Iandola et al. 2016). Replacing a 3×3 Conv with a 1×1 Conv reduces the number of parameters of that kernel to one-ninth.
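A minimal numpy sketch of the fire module may clarify the squeeze/expand structure. The channel counts below are illustrative, chosen to satisfy s_1×1 < e_1×1 + e_3×3; a production implementation would use a DL framework rather than this naive convolution:

```python
import numpy as np

def conv2d(x, w, pad=0):
    """Naive 2-D convolution: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    if pad:
        x = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    _, H, W = x.shape
    out = np.zeros((c_out, H - k + 1, W - k + 1))
    for o in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[o])
    return out

def fire(x, w_s, w_e1, w_e3):
    """SqueezeNet fire module: squeeze (1x1), then parallel expand (1x1 and 3x3)."""
    relu = lambda z: np.maximum(z, 0)
    s = relu(conv2d(x, w_s))                  # squeeze: s_1x1 filters
    e1 = relu(conv2d(s, w_e1))                # expand 1x1: e_1x1 filters
    e3 = relu(conv2d(s, w_e3, pad=1))         # expand 3x3: e_3x3 filters, padded
    return np.concatenate([e1, e3], axis=0)   # concatenate along channels

rng = np.random.default_rng(1)
x = rng.normal(size=(16, 8, 8))               # 16 input channels, 8x8 feature map
s1, e1n, e3n = 4, 8, 8                        # satisfies s_1x1 < e_1x1 + e_3x3
y = fire(x,
         rng.normal(size=(s1, 16, 1, 1)),
         rng.normal(size=(e1n, s1, 1, 1)),
         rng.normal(size=(e3n, s1, 3, 3)))
print(y.shape)   # e_1x1 + e_3x3 output channels, spatial size preserved
```

Note how the squeeze layer cuts the channel count from 16 to 4 before the (more expensive) 3×3 kernels run, which is exactly the parameter-reduction argument made above.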
Fig. 4

SqueezeNet module


Hyperparameter optimization

To optimally adjust the hyperparameters involved in the SqueezeNet model, the GSO algorithm is employed, thereby improving the classification performance. In the GSO method, a group of glowworms is initially distributed arbitrarily in the solution space. Each glowworm (agent) carries a luminescent quantity known as luciferin, with an identical initial value. The intensity of the emitted light is connected to the luciferin level, which is in turn coupled to the objective value at the glowworm's position; each glowworm has a dynamic decision domain bounded by a spherical sensor range r_s. A glowworm i recognizes another glowworm j as a neighbor, using a probabilistic mechanism, when j lies within the current local decision range of i. Glowworms carrying more luciferin attract other glowworms to move toward them. Initially, every glowworm has the same amount of luciferin, ℓ_0. Depending on the relative luciferin values, a glowworm chooses a neighbor with a probability and moves toward it within the decision range r_d^i, where the location of glowworm i is denoted by x_i. Each iteration comprises a luciferin update stage followed by a movement stage relying on a transition rule. First, the luciferin value is updated according to the sensed objective profile; simultaneously, a fraction of the luciferin value is subtracted to simulate the decay of luciferin with time. The luciferin update is determined by

    ℓ_i(t+1) = (1 − ρ) ℓ_i(t) + γ J(x_i(t+1)),

where ℓ_i(t) signifies the luciferin level linked to glowworm i at time t, ρ denotes the luciferin decay constant, γ signifies the luciferin enhancement constant, and J(x_i(t+1)) indicates the value of the objective function at agent i's position at time t+1. Based on this process, the glowworms are attracted to adjacent glowworms that glow brighter.

As a result, during the movement stage, each glowworm uses a probabilistic procedure to move toward a neighbor with higher luciferin intensity. For each glowworm i, the probability of moving toward a neighboring glowworm j is given by

    p_ij(t) = (ℓ_j(t) − ℓ_i(t)) / Σ_{k ∈ N_i(t)} (ℓ_k(t) − ℓ_i(t)),

where j ∈ N_i(t), N_i(t) = {j : d_ij(t) < r_d^i(t); ℓ_i(t) < ℓ_j(t)} represents the set of neighboring glowworms at time t, d_ij(t) signifies the Euclidean distance between glowworms i and j at time t, and r_d^i(t) indicates the decision range associated with glowworm i at time t. The parameter r_d^i is constrained by the radial sensor range (0 < r_d^i ≤ r_s). The discrete-time model of the glowworm movement is then determined as

    x_i(t+1) = x_i(t) + s · (x_j(t) − x_i(t)) / ||x_j(t) − x_i(t)||,

where s denotes the step size and ||·|| represents the Euclidean norm operator. Besides, x_i(t) indicates the position of glowworm i at time t in the m-dimensional real space R^m. The neighborhood range update phase is used for detecting multiple peaks in a multimodal function landscape. Let r_0 be the initial neighborhood range of every glowworm (i.e., r_d^i(0) = r_0). To dynamically update the decision range of every glowworm, the following rule is used:

    r_d^i(t+1) = min{ r_s, max{ 0, r_d^i(t) + β (n_t − |N_i(t)|) } },

where β indicates a constant and n_t describes a parameter used to control the number of neighbors.
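The GSO update rules above can be sketched as a generic implementation on a toy objective; this is not the paper's exact hyperparameter-tuning setup, and all constants (ρ, γ, s, β, n_t, r_s) below are illustrative:

```python
import numpy as np

def gso(J, lo, hi, n=30, iters=150, rho=0.4, gamma=0.6, step=0.03,
        beta=0.08, n_t=5, r_s=3.0, l0=5.0, seed=0):
    """Glowworm swarm optimization (after Krishnanand and Ghose): maximizes J."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n, len(lo)))   # glowworm positions
    L = np.full(n, l0)                           # luciferin levels
    r = np.full(n, r_s)                          # local decision ranges
    for _ in range(iters):
        # Luciferin update: decay plus a reward proportional to the objective
        L = (1.0 - rho) * L + gamma * np.array([J(x) for x in X])
        Xn = X.copy()
        for i in range(n):
            d = np.linalg.norm(X - X[i], axis=1)
            nbrs = np.where((d < r[i]) & (L > L[i]))[0]  # brighter neighbours
            if len(nbrs) > 0:
                p = (L[nbrs] - L[i]) / (L[nbrs] - L[i]).sum()
                j = rng.choice(nbrs, p=p)                # probabilistic choice
                dist = np.linalg.norm(X[j] - X[i])
                if dist > 0:                             # unit step toward j
                    Xn[i] = X[i] + step * (X[j] - X[i]) / dist
            # Dynamic decision-range update, bounded by the sensor range r_s
            r[i] = min(r_s, max(0.0, r[i] + beta * (n_t - len(nbrs))))
        X = Xn
    return X[np.argmax(L)]

# Hypothetical demo: maximize J(x) = -||x||^2 (single peak at the origin)
best = gso(lambda x: -np.sum(x ** 2),
           lo=np.array([-2.0, -2.0]), hi=np.array([2.0, 2.0]))
print(np.round(best, 2))
```

In the FDL-COVID setting, each position x_i would encode a candidate SqueezeNet hyperparameter vector and J would score validation performance; the toy quadratic here only demonstrates the update dynamics.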

Performance validation

The proposed FDL-COVID technique is validated on the freely accessible COVIDx dataset. The parameter settings of the proposed model are given as follows: batch size 500, maximum epochs 15, learning rate 0.05, dropout rate 0.2, and momentum 0.9. The dataset has a total of 15,282 images, of which 13,703 images are used to train the model and the remaining 1579 images are utilized to test it. The dataset comprises images of three classes, namely normal, pneumonia, and COVID-19. First, a brief analysis of the sensitivity of various models using FL is provided in Table 1 and Fig. 5. From the table values, it is clear that the MN-v2 model showed the lowest sensitivity, with 0.912 and 0.868 on the applied training and testing sets, respectively.
Table 1

Result analysis of various models’ sensitivity using FL

Methods      Training data   Testing data
COVID-Net    0.924           0.892
MN-v2        0.912           0.868
RN-18        0.962           0.913
Res-NXT      0.947           0.904
FDL-COVID    0.976           0.965
Fig. 5

Comparison study of various models’ sensitivity using federated learning

Then, the COVID-Net and Res-NXT techniques depicted slightly increased performance with somewhat enhanced sensitivity values on the applied training and testing sets, respectively. Moreover, the RN-18 technique gained moderate performance with sensitivities of 0.962 and 0.913 on the applied training and testing sets. However, the FDL-COVID technique gained superior performance with sensitivities of 0.976 and 0.965. Next, a detailed analysis of the perplexity of the different methods using FL is given in Table 2. From the table values, it can be stated that the MN-v2 method showed the worst results, with perplexities of 0.949, 0.872, and 0.503 on the normal, pneumonia, and COVID-19 classes, respectively. The COVID-Net and RN-18 methods showcased somewhat improved perplexity values, and the Res-NXT approach attained moderate performance with perplexities of 0.962, 0.927, and 0.736 on the normal, pneumonia, and COVID-19 classes, respectively. However, the FDL-COVID algorithm accomplished the highest performance, with perplexities of 0.987, 0.949, and 0.898 on the normal, pneumonia, and COVID-19 classes.
Table 2

Result analysis of various models’ perplexity using federated learning

Methods      Normal   Pneumonia   COVID-19
COVID-Net    0.965    0.882       0.510
MN-v2        0.949    0.872       0.503
RN-18        0.982    0.939       0.663
Res-NXT      0.962    0.927       0.736
FDL-COVID    0.987    0.949       0.898
Next, a brief analysis of the loss convergence speed of the distinct techniques using FL is given in Table 3 and Fig. 6. From the table values, it is clear that the MN-v2 technique exhibited the lowest result, with loss convergence speeds of 0.941 and 0.890 on the applied training and testing sets, respectively. The COVID-Net technique showcased somewhat higher performance with slightly improved values, and the Res-NXT method obtained moderate performance with loss convergence speeds of 0.977 and 0.913 on the training and testing sets, while RN-18 attained 0.981 and 0.911. However, the FDL-COVID methodology accomplished the maximum efficiency, with loss convergence speeds of 0.989 and 0.956.
Table 3

Result analysis of various models’ loss convergence speed using FL

Methods      Training data   Testing data
COVID-Net    0.945           0.901
MN-v2        0.941           0.890
RN-18        0.981           0.911
Res-NXT      0.977           0.913
FDL-COVID    0.989           0.956
Fig. 6

Comparison study of various models’ loss convergence speed using federated learning

Figure 7 showcases the confusion matrices generated by the existing techniques. The COVID-Net technique classified a set of 531 images into pneumonia, 854 images into normal, and 23 images into COVID-19. Eventually, the MN-v2 method classified a set of 555 images into pneumonia, 777 images into normal, and 39 images into COVID-19. Meanwhile, the RN-18 method classified a set of 555 images into pneumonia, 845 images into normal, and 41 images into COVID-19. Lastly, the Res-NXT algorithm classified a set of 564 images into pneumonia, 805 images into normal, and 58 images into COVID-19.
Fig. 7

Confusion matrices of recently developed methods

The confusion matrix produced by the FDL-COVID technique on the classification of the benchmark images is demonstrated in Fig. 8. From the figure, it is obvious that the FDL-COVID technique resulted in the classification of 575 images into pneumonia, 866 images into normal, and 66 images into COVID-19.
Fig. 8

Confusion matrix of FDL-COVID technique

A detailed classification results analysis of the FDL-COVID technique with existing techniques is given in Table 4 and Fig. 9. From the obtained values, it is demonstrated that the MN-v2 FL technique exhibited the lowest performance, with an average sensitivity of 0.734, specificity of 0.925, and accuracy of 0.912. The COVID-Net FL technique reported a slightly enhanced outcome, with an average sensitivity of 0.696, specificity of 0.929, and accuracy of 0.928. Next, the Res-NXT FL technique accomplished a moderate outcome, with an average sensitivity of 0.813, specificity of 0.946, and accuracy of 0.936. Meanwhile, the RN-18 FL technique produced a competitive outcome, with an average sensitivity of 0.766, specificity of 0.946, and accuracy of 0.942. However, the proposed FDL-COVID technique achieved the maximum performance, with an average sensitivity of 0.869, specificity of 0.974, and accuracy of 0.970.
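The per-class measures reported here follow from a one-vs-rest reading of the confusion matrix; a short sketch of that computation follows. The diagonal of the matrix below matches the correct classifications reported for FDL-COVID in Fig. 8 (575, 866, 66), while the off-diagonal counts are hypothetical, since the full matrix is not reproduced in the text:

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class sensitivity, specificity, and one-vs-rest accuracy.

    cm[i, j] = number of samples of true class i predicted as class j.
    """
    total = cm.sum()
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp          # missed members of each class
    fp = cm.sum(axis=0) - tp          # wrongly assigned to each class
    tn = total - tp - fn - fp
    sens = tp / (tp + fn)             # recall for each class
    spec = tn / (tn + fp)
    acc = (tp + tn) / total           # one-vs-rest accuracy per class
    return sens, spec, acc

# Hypothetical confusion matrix for (pneumonia, normal, COVID-19)
cm = np.array([[575,  15,   4],
               [ 12, 866,   6],
               [ 20,  14,  66]])
sens, spec, acc = per_class_metrics(cm)
print(np.round(sens, 3), np.round(spec, 3), np.round(acc, 3))
```

Macro averages such as those in Table 4 would then be `sens.mean()`, `spec.mean()`, and `acc.mean()` over the three classes.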
Table 4

Result analysis of various models in terms of accuracy, sensitivity, and specificity

Models          Class       Sensitivity   Specificity   Accuracy
COVID-Net FL    Pneumonia   0.894         0.921         0.911
                Normal      0.965         0.866         0.922
                COVID-19    0.230         1.000         0.951
                Average     0.696         0.929         0.928
MN-v2 FL        Pneumonia   0.934         0.850         0.882
                Normal      0.878         0.937         0.904
                COVID-19    0.390         0.989         0.951
                Average     0.734         0.925         0.912
RN-18 FL        Pneumonia   0.934         0.915         0.922
                Normal      0.955         0.922         0.941
                COVID-19    0.410         1.000         0.963
                Average     0.766         0.946         0.942
Res-NXT FL      Pneumonia   0.950         0.894         0.915
                Normal      0.910         0.954         0.929
                COVID-19    0.580         0.989         0.963
                Average     0.813         0.946         0.936
FDL-COVID FL    Pneumonia   0.968         0.961         0.964
                Normal      0.979         0.967         0.973
                COVID-19    0.660         0.993         0.972
                Average     0.869         0.974         0.970
Fig. 9

Result analysis of FDL-COVID model with different measures

A brief classification outcomes analysis of the FDL-COVID technique against recent methods is given in Table 5 and Fig. 10. From the attained values, it can be observed that the Fed. Learning-VGG16 method depicted the lowest performance, with a sensitivity of 0.9503, specificity of 0.9212, and accuracy of 0.9357. Following this, the Cen.-VGG16 technique reported somewhat improved results, with a sensitivity of 0.9520, specificity of 0.9230, and accuracy of 0.9375. Also, the Fed. Learning-ResNet50 approach accomplished reasonable outcomes, with a sensitivity of 0.9603, specificity of 0.9478, and accuracy of 0.9540. At the same time, the Cen.-ResNet50 methodology produced a competitive result, with a sensitivity of 0.9600, specificity of 0.9460, and accuracy of 0.9530. Finally, the presented FDL-COVID algorithm attained the maximum accuracy of 0.9700 and specificity of 0.9740, with a sensitivity of 0.8690.
Table 5

Result analysis of recent methods with the proposed model in terms of accuracy, sensitivity, and specificity

Methods                  Accuracy   Sensitivity   Specificity
Fed. Learning-VGG16      0.9357     0.9503        0.9212
Cen.-VGG16               0.9375     0.9520        0.9230
Fed. Learning-ResNet50   0.9540     0.9603        0.9478
Cen.-ResNet50            0.9530     0.9600        0.9460
FDL-COVID                0.9700     0.8690        0.9740
Fig. 10

Comparative analysis of FDL-COVID model with different measures

From the above tables and figures, it is demonstrated that the proposed FDL-COVID technique accomplished superior diagnostic performance over the other recent techniques. The improved diagnostic performance of the proposed FDL-COVID technique is due to the inclusion of the hyperparameter optimizer.

Conclusion

This paper has presented a new FDL-COVID technique to detect and classify COVID-19 in an IoT-enabled MEC environment. The proposed FDL-COVID technique involves an IoT-based data acquisition process, in which the CXR images of the patients are collected. In addition, the SqueezeNet method is employed for the detection and classification of COVID-19 using the CXR images. The IoT devices upload the encrypted variables to the cloud server, which then performs FL on major variables using the SqueezeNet model to produce a global cloud model. Finally, the GSO algorithm-based hyperparameter optimizer is applied for optimal selection of the hyperparameters involved in the SqueezeNet model, which considerably enhances the COVID-19 detection outcomes. An extensive set of simulations was carried out on the benchmark CXR dataset, and the proposed model outperformed the existing techniques with a maximum accuracy of 0.9700. As part of future work, data offloading and resource management approaches can be investigated for the IoT-enabled MEC platform.
References (9 in total)

1.  Dynamic-Fusion-Based Federated Learning for COVID-19 Detection.

Authors:  Weishan Zhang; Tao Zhou; Qinghua Lu; Xiao Wang; Chunsheng Zhu; Haoyun Sun; Zhipeng Wang; Sin Kit Lo; Fei-Yue Wang
Journal:  IEEE Internet Things J       Date:  2021-02-04       Impact factor: 10.238

2.  Automated COVID-19 diagnosis and classification using convolutional neural network with fusion based feature extraction model.

Authors:  K Shankar; Sachi Nandan Mohanty; Kusum Yadav; T Gopalakrishnan; Ahmed M Elmisery
Journal:  Cogn Neurodyn       Date:  2021-09-10       Impact factor: 3.473

3.  COVID-19 detection using federated machine learning.

Authors:  Mustafa Abdul Salam; Sanaa Taha; Mohamed Ramadan
Journal:  PLoS One       Date:  2021-06-08       Impact factor: 3.240

4.  FedPSO: Federated Learning Using Particle Swarm Optimization to Reduce Communication Costs.

Authors:  Sunghwan Park; Yeryoung Suh; Jaewoo Lee
Journal:  Sensors (Basel)       Date:  2021-01-16       Impact factor: 3.576

5.  Federated Learning of Electronic Health Records to Improve Mortality Prediction in Hospitalized Patients With COVID-19: Machine Learning Approach.

Authors:  Akhil Vaid; Suraj K Jaladanki; Jie Xu; Shelly Teng; Arvind Kumar; Samuel Lee; Sulaiman Somani; Ishan Paranjpe; Jessica K De Freitas; Tingyi Wanyan; Kipp W Johnson; Mesude Bicak; Eyal Klang; Young Joon Kwon; Anthony Costa; Shan Zhao; Riccardo Miotto; Alexander W Charney; Erwin Böttinger; Zahi A Fayad; Girish N Nadkarni; Fei Wang; Benjamin S Glicksberg
Journal:  JMIR Med Inform       Date:  2021-01-27

6.  Federated learning for COVID-19 screening from Chest X-ray images.

Authors:  Ines Feki; Sourour Ammar; Yousri Kessentini; Khan Muhammad
Journal:  Appl Soft Comput       Date:  2021-03-20       Impact factor: 6.725

7.  Analysis of COVID-19 Infections on a CT Image Using DeepSense Model.

Authors:  Adil Khadidos; Alaa O Khadidos; Srihari Kannan; Yuvaraj Natarajan; Sachi Nandan Mohanty; Georgios Tsaramirsis
Journal:  Front Public Health       Date:  2020-11-20

8.  Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study.

Authors:  Qi Dou; Tiffany Y So; Meirui Jiang; Quande Liu; Varut Vardhanabhuti; Georgios Kaissis; Zeju Li; Weixin Si; Heather H C Lee; Kevin Yu; Zuxin Feng; Li Dong; Egon Burian; Friederike Jungmann; Rickmer Braren; Marcus Makowski; Bernhard Kainz; Daniel Rueckert; Ben Glocker; Simon C H Yu; Pheng Ann Heng
Journal:  NPJ Digit Med       Date:  2021-03-29

9.  A Deep Learning Method to Forecast COVID-19 Outbreak.

Authors:  Satyabrata Dash; Sujata Chakravarty; Sachi Nandan Mohanty; Chinmaya Ranjan Pattanaik; Sarika Jain
Journal:  New Gener Comput       Date:  2021-07-18       Impact factor: 1.048

