
A sustainable advanced artificial intelligence-based framework for analysis of COVID-19 spread.

Misbah Ahmad, Imran Ahmed, Gwanggil Jeon.

Abstract

The idea of sustainability is to provide a protected operating environment that meets present demands without risking the capacity of coming generations to satisfy their own. With the advent of artificial intelligence, big data, and the Internet of Things, there has been a tremendous paradigm shift in how environmental data are managed and handled for sustainable applications in smart cities and societies. The ongoing COVID-19 (Coronavirus Disease) pandemic continues to have a mortifying impact on the health of the world's population. A continuous rise in the number of positive cases has put great stress on governing organizations worldwide, which are finding it challenging to handle the situation. Artificial intelligence methods can be applied quite efficiently to monitor the disease, predict the pandemic's growth, and outline policies and strategies to control its transmission. Combining healthcare with big data and machine learning methods can improve quality of life by providing better care services and creating cost-effective systems. Researchers have been using these techniques to fight the COVID-19 pandemic. This paper emphasizes the analysis of different factors and symptoms and presents a sustainable framework to predict and detect COVID-19. First, we collected a data set containing information on different COVID-19 symptoms. We then explored various machine learning algorithms, including Logistic Regression, Naive Bayes, Decision Tree, Random Forest Classifier, Extreme Gradient Boost, K-Nearest Neighbour, and Support Vector Machine, to predict and detect COVID-19 lab results from this symptom information. The model might help to predict and detect the long-term spread of a pandemic and to implement advanced proactive measures.
The findings show that Logistic Regression and Support Vector Machine outperformed the other machine learning algorithms in terms of accuracy, achieving 97.66% and 98%, respectively.
© The Author(s), under exclusive licence to Springer Nature B.V. 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


Keywords:  Artificial intelligence; Big data; COVID-19; Internet of Things; Machine learning; Sustainable healthcare

Year:  2022        PMID: 35993085      PMCID: PMC9379242          DOI: 10.1007/s10668-022-02584-0

Source DB:  PubMed          Journal:  Environ Dev Sustain        ISSN: 1387-585X            Impact factor:   4.080


Introduction

Advancements in artificial intelligence, the Internet of Things (IoT), and big data have transformed the management and handling of data in sustainable applications. However, a crucial difficulty in artificial intelligence and big data analytics is the collection of data and its proper utilization through advanced intelligent methods. Currently, COVID-19, a deadly disease that originated in Wuhan, China, in late December 2019, has infected millions of people around the world and become life threatening for thousands.1 The outbreak's initial spread caused thousands of deaths, as pharmaceutical organizations could not handle the many severely sick or infected patients. In March 2020, the WHO (World Health Organization) declared it a pandemic as the number of cases increased significantly across the globe; this declaration also highlighted growing alarm over the dangerous transmission and severity of the virus. Administrative organizations in various countries are enforcing prohibitions, transport limitations, and social distancing, and improving health awareness (Ahmed et al., 2021) and (Ahmed et al., 2021). Still, the virus persists in spreading very quickly; some people diagnosed with the COVID-19 virus have mild pulmonary failure, whereas others develop severe pneumonia. Older individuals with critical health issues, such as asthma, coronary artery disorder, chronic lung disease, and liver or kidney diseases, are expected to suffer severe infections. As of February 2022, 386,548,962 confirmed cases of COVID-19, including 5,705,754 deaths, had been reported across the globe because of this pandemic.2 The latest figures for confirmed COVID-19 cases and deaths are depicted in Fig. 1 (adopted from 2).
Fig. 1

Confirmed COVID-19 positive cases and recorded deaths per million people

In a rapidly growing pandemic, inappropriate predictions and analysis of disease severity and patient numbers may lead to ineffective distribution of medical resources. In addition, a lack of pharmaceutical resources and mismanagement of resource allocation, particularly in developing countries, can produce more intense cases and lower recovery rates. Therefore, researchers have been developing several outbreak prediction systems for COVID-19 (Ahmed et al., 2022) to support informed decisions and enforce relevant management efforts to cope with this situation. Among these methods, simple epidemiological and statistical designs for worldwide pandemic prediction have gained recognition from researchers (Ardabili et al., 2020). However, these methods have shown low accuracy for long-term prediction because of high uncertainty, a lack of primary data, and the emergence of different COVID-19 variants. Although numerous efforts have been made in the literature to cope with this issue, the robustness and generalization capabilities of current designs require improvement. Advanced machine learning has shown encouraging results in healthcare (Ahmed et al., 2021) and (Ahmed et al., 2022) through its decision-making and data-analysis capabilities. It also provides efficient monitoring, analysis, and prediction systems that allow fast and effective analysis of the COVID-19 pandemic and decrease the burden on healthcare organizations. Prediction systems that integrate different features to determine the uncertainty of infection have been introduced; these might assist pharmaceutical staff globally in tracking subjects, particularly where healthcare resources are inadequate.
These systems utilize different features such as X-ray images (Ahmed et al., 2020), Computed Tomography (CT) scans (Gozes et al., 2020), clinical symptoms (Tostmann et al., 2020), laboratory tests (Punn et al., 2020), and combinations of these features (Mei et al., 2020). As discussed, COVID-19 has affected millions of people, and people in developing countries are especially susceptible to its future outcomes. It is therefore necessary to develop a system that helps analyze, detect, and predict the pandemic. One way to manage the current devastation is to diagnose the disease using advanced machine learning methods. This paper analyzes a textual clinical data set using machine learning algorithms such as Logistic Regression, Naive Bayes, Decision Tree, Random Forest Classifier, Extreme Gradient Boost, K-Nearest Neighbour, and Support Vector Machine. The algorithms are trained on different clinical attributes. The machine learning-based system predicts COVID-19 test results (positive or negative) and patients' final status (active, recovered, expired) with high accuracy using the following features: patient id, age in years, gender, flu, fever, sore throat, cough, breathing issue, headache, cardiovascular & hypertension, chronic lung disease, foreign travel history, and test specimen information. The central contributions of the work are as follows:
- An intelligent and sustainable artificial intelligence-based system is developed for the detection of COVID-19 lab results and the patient's final status.
- Various machine learning algorithms are explored and implemented on a textual data set to predict and classify the target attributes of COVID-19.
- Finally, the results of the different algorithms are compared in terms of accuracy.
The paper is organized as follows: Sect. 2 provides a summary of related work. Sect. 3 presents the system developed for the prediction of COVID-19 and elaborates the machine learning methods used. The details of the data set and the experimental outcomes are explained in Sect. 4. The conclusion and future directions of the work are given in Sect. 5.

Related work

Advanced artificial intelligence (Ahmed and Jeon, 2021; Ahmed et al., 2021, 2020) and (Ahmed et al., 2021a), along with big data analytics (Ahmed et al., 2021b) and IoT (Ahmad et al., 2021, 2020; Ahmed et al., 2021) and (Ahmed et al., 2021), helps in understanding variables, behaviors, and data trends through a broad range of algorithms and techniques that can substantially exceed human performance. Initial investigations studied techniques such as statistical methods, data mining, and artificial intelligence-based approaches (machine learning), supported segmentation methods (parallel and classified processing, statistical rationalizing), and the distribution of big data applications for various types of control studies, such as prediction and risk (Khanday et al., 2020). This section presents a review of various artificial intelligence (machine learning) algorithms studied by researchers for the analysis and detection of disease. In Ardabili et al., (2020), the authors provided a comparative analysis of soft computing and machine learning paradigms for estimating the outbreak. Punn et al., (2020) employed machine and deep learning to identify growth and predict the future scale of the pandemic. Ahmed et al., (2021b) designed a framework applying big data analytics and IoT for the investigation and prediction of COVID-19; the authors performed various types of analysis using different pandemic symptoms. In Khanday et al., (2020), researchers classified clinical (textual) data into four separate categories using conventional and ensemble machine learning methods. The authors in Pashazadeh and Navimipour (2018) presented a broad survey of big data tools used in various healthcare practices with machine learning algorithms. The authors in Prakash et al., (2020) applied machine learning to the forecasting and evaluation of COVID-19. Behnam and Jahanmahin (2021) presented a data analytics system for predicting the spread of the COVID-19 virus.
The authors compiled a data set that mainly contains the number of recovered, confirmed, and death cases reported every day and analyzed it with machine learning algorithms. Khakharia et al., (2021) produced a prediction method for the COVID-19 outbreak, developed for the ten most densely populated regions; applying different machine learning algorithms, the models predict the number of newly reported cases expected over five consecutive days. Yan et al., (2020) applied machine learning to build a prognostic prediction method for the mortality risk of affected people, using data on 29 subjects collected from Tongji Hospital in Wuhan, China. Jiang et al., (2020) contributed a machine learning method that predicts COVID-19-infected patients; the system produced 80% accuracy on a sample of 53 patients used for training. Rao and Vazquez (2020) produced a method based on artificial intelligence for identifying subjects with COVID-19 using a mobile phone. Finally, the authors in Yan et al., (2020) presented a modeling method to assess high-risk patients at an early stage using machine learning with three clinical features. Other researchers have likewise worked on the prediction of COVID-19 (Peng et al., 2020); that study mainly focused on a conceptual method for different applications using data mining methods. Li et al., (2021) designed a regression model for predicting the pandemic's spread. Pinter et al., (2020) contributed a hybrid model for outbreak prediction in Hungary; the model makes no assumptions about the transmission of the infection and instead performs time-series analysis of the infected and fatality cases. Ahmad et al., (2020) offered a comprehensive study of machine learning techniques used for investigating the pandemic; the authors analyzed the difficulties and presented suggestions to experts that might improve methods used to predict positive cases.
Rustam et al., (2020) described the potential of machine learning to forecast the number of expected COVID-19 patients; in particular, four traditional prediction models were applied. Roy et al., (2020) performed research identifying the pandemic's impacts worldwide using machine learning algorithms; the authors suggested an additive regression design that specialists normally evaluate with field knowledge of the time series. Ahmed and Jeon (2021) studied a machine learning method for genome sequence analysis of the COVID-19 infection. From the above discussion it can be concluded that various machine learning methods (Ahmed et al., 2017, 2019; Ahmed and Carter, 2012; Ullah et al., 2019; Ahmed et al., 2019) and (Ahmad et al., 2019) have helped researchers in analyzing, detecting, and predicting the COVID-19 pandemic. However, many research efforts are still needed to enhance performance. Therefore, inspired by previous work, we applied machine learning algorithms to the prediction of the COVID-19 virus using a textual data set.

Methodology

The presented machine learning-based system for predicting COVID-19 from the textual data set is discussed in this section. Figure 2 shows the technical features and specifications of the system applied to predict the lab results of the COVID-19 virus. The data set was collected from different medical organizations and labeled with the help of medical experts. The raw clinical data are processed through the data processing layer.
Fig. 2

Proposed intelligent machine learning-based COVID-19 detection system. The collected raw clinical data set is pre-processed for missing values; the data attributes are then converted into final attributes in binary form, and the data set is split into training and testing samples. The training data are used to train the different algorithms; the trained model is applied to the testing data to detect the COVID-19 virus, and the final patient status is determined from the predicted lab results. Finally, the performance of the algorithms is evaluated using different parameters

After pre-processing, the data are split into training and testing sets and forwarded to the prediction stage, where different machine learning algorithms are trained and tested on the textual data set containing different attributes for predicting the lab results of the COVID-19 virus. We used two target attributes, "Lab results" and "Patient final status", as shown in Table 1, to develop a machine learning-based system that predicts the lab result of the virus as either positive or negative. Furthermore, another model is trained to predict the final patient status as expired, active, or recovered after the virus is diagnosed, as shown in Fig. 2.
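The split-and-train flow described above can be sketched as follows. Since the clinical data set is not public, random binary symptom flags stand in for the Table 1 attributes; this is an illustrative assumption, not the authors' code.

```python
# Sketch of the split step from Fig. 2: hold out 20% of a
# (synthetic) symptom table for testing. The real data set
# is not public, so random values stand in for the attributes.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.integers(0, 2, size=(n, 5))   # binary symptom flags (illustrative)
y = rng.integers(0, 2, size=n)        # LabResults: 1 = positive, 0 = negative

# Stratified split keeps the positive/negative ratio in both sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
print(len(X_train), len(X_test))
```

Any of the classifiers discussed below can then be fitted on `X_train`/`y_train` and scored on the held-out samples.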
Table 1

Detailed description of different attributes of the clinical data set used for experimentation

S. no. | Attribute | Attribute code | Attribute description
1 | Patient id | Patient number | In numbers
2 | Age | Age | Age in years
3 | Gender | Gender | Male = 0, Female = 1
4 | Is patient symptomatic? | Isptsymptamatic | Yes = 1, No = 0
5 | Flu | Flu | Yes = 1, No = 0
6 | Fever | Fever | Yes = 1, No = 0
7 | Sore throat | SoreThroat | Yes = 1, No = 0
8 | Cough | Cough | Yes = 1, No = 0
9 | Breathing issue | Breathingissue | Yes = 1, No = 0
10 | Headache | Headache | Yes = 1, No = 0
11 | Cardiovascular & hypertension | Cardiovascular and hypertension | Yes = 1, No = 0
12 | Chronic lung disease | chroniclung | Yes = 1, No = 0
13 | Foreign travel history | ForeignTravelHistory | Yes = 1, No = 0
14 | Test specimen information | Specimen information | Nasopharyngeal swab = 0, Oropharyngeal swab = 1
15 | Lab results | LabResults | Positive = 1, Negative = 0
16 | Patient final status | PatientFinalStatus | Expired = 0, Active = 1, Recovered = 2

Data collection

The data set used in this paper was collected from different medical organizations and contains records of 25,000 patients with coronavirus symptoms (Ahmed et al., 2021b). The data consist of different attributes, namely patient id, age in years, gender, flu, fever, sore throat, cough, breathing issue, headache, cardiovascular & hypertension, chronic lung disease, foreign travel history, and specimen information. In addition, two attributes are defined as target attributes, namely lab results and final patient status. The details of the collected data set are shown in Table 1.

Data pre-processing

The data obtained from the data acquisition step are in raw form, containing some inconsistent data and incomplete parameters. The data are therefore processed in the pre-processing phase. Data cleansing is performed: missing values are replaced with the mean or mode, and outliers are removed. The useful attributes are then filtered and converted into binary form; for example, the flu symptom values yes and no are converted into 1 and 0, as depicted in Table 1.
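A minimal sketch of this pre-processing step is shown below; the tiny frame and its values are illustrative, not the paper's actual data, and the column names follow Table 1.

```python
# Sketch of the pre-processing step: fill missing values with the
# mean (numeric) or mode (categorical), then map yes/no symptoms
# to 1/0 as in Table 1. The small frame is purely illustrative.
import pandas as pd

df = pd.DataFrame({
    "Age":    [34, None, 52, 41],
    "Gender": ["Male", "Female", None, "Male"],
    "Fever":  ["yes", "no", "yes", None],
})

# Replace missing values: mean for numeric, mode for categorical
df["Age"] = df["Age"].fillna(df["Age"].mean())
for col in ["Gender", "Fever"]:
    df[col] = df[col].fillna(df[col].mode()[0])

# Convert categorical attributes into the binary codes of Table 1
df["Gender"] = df["Gender"].map({"Male": 0, "Female": 1})
df["Fever"]  = df["Fever"].map({"yes": 1, "no": 0})
print(df)
```

After this step the frame contains no missing values and only numeric codes, ready for the classifiers below.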

Machine learning-based classification

Different machine learning algorithms are applied for prediction and classification. The machine learning-based system first classifies the patient's lab result as positive or negative on the basis of symptoms and then detects the patient's final status, as shown in Fig. 2. Various machine learning algorithms have been utilized by researchers for classification purposes; in this work, Logistic Regression, Naive Bayes, Decision Tree, Random Forest Classifier, Extreme Gradient Boost, K-Nearest Neighbour, and Support Vector Machine are applied. The details of each algorithm are provided as follows.

Logistic Regression

It is a supervised algorithm used to predict the probability of a target variable. The dependent (target) variable is binary, which means there are two possible classes; it is coded as either 1 (yes/success) or 0 (no/failure). In our work, the features shown in Table 1 are supplied as input. The algorithm determines the probability of class membership and defines a linear relationship between the dependent and independent variables:

y = β_0 + β_1 x + e

In the above equation, y is the predicted output (dependent variable), β_0 and β_1 are the weights or coefficient values, also known as the model parameters, x is the input value, and e represents an error term. The input values are combined linearly using the coefficients: β_0 is the intercept term and β_1 is the coefficient for the single input value x. In this work, we have two classes for the lab results, positive and negative, and three classes for the final patient status (active, expired, recovered), represented as c ∈ {0, 1} and c ∈ {0, 1, 2}, respectively.
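A hedged sketch of this step with scikit-learn is shown below; the binary symptom features and the rule generating the labels are synthetic assumptions, since the paper's data set is not available.

```python
# Sketch of the Logistic Regression step on synthetic binary
# symptom features; the label depends on the first two "symptoms"
# plus noise (an illustrative assumption, not the real data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 6)).astype(float)
logits = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 1.5
y = (logits + rng.normal(0, 0.5, 500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba(X[:1])   # class-membership probabilities
print(clf.score(X, y), proba)
```

`predict_proba` returns the estimated class-membership probability described above, one column per class.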

Naive Bayes

It is an efficient and simple algorithm based on Bayes' theorem, which strongly assumes that each attribute of a sample is independent of the other attributes. In other words, each feature of a sample contributes independently to the probability of that sample's classification, and the class with the highest probability is assigned. The algorithm makes the computation process easy and fast while producing accurate results on large amounts of data. Let c indicate the set of classes; in our case, we have two classes for lab results, positive and negative, and three classes for the final patient status (active, expired, recovered), represented as c ∈ {0, 1} and c ∈ {0, 1, 2}, respectively. Furthermore, N is the number of attributes. The posterior probability is given as:

P(c|x) = P(x|c) P(c) / P(x)

where P(c|x) is the posterior probability of the target class c given the input values or attributes x, P(c) is the prior class probability, P(x|c) is the class likelihood, and P(x) is the prior probability of the predictor.
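Since the symptoms of Table 1 are binary flags, a Bernoulli Naive Bayes variant is a natural fit; the sketch below uses synthetic data and a toy labeling rule, not the authors' implementation.

```python
# Sketch of the Naive Bayes step. BernoulliNB matches the binary
# symptom encoding of Table 1; data and labels here are synthetic
# (toy rule: positive if either of two flags is set).
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(400, 6))
y = (X[:, 0] | X[:, 1])

nb = BernoulliNB().fit(X, y)
post = nb.predict_proba(X[:3])   # posterior P(c|x), one row per sample
print(post)
```

Each row of `post` sums to 1, reflecting the normalized posterior of the equation above.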

Decision Tree

The decision tree is used for both classification and regression. Its popularity comes from its simplicity, efficiency, and the fact that it is easy to implement, interpret, and explain. It resembles a flowchart comprising a root node, branches, and leaf nodes. Every internal node represents a condition or test, the branches represent the outcomes, and the leaf nodes indicate the class labels. A classification rule is determined by the route from the root to a leaf node. The decision tree is built through several steps. Splitting is the process of dividing the data set into subsets on a particular variable. Pruning refers to reducing the tree's size by converting branch nodes into leaf nodes, which helps avoid over-fitting. First, the best attribute for splitting the records is chosen through an attribute selection measure (entropy); that attribute becomes the decision node for breaking the data set into smaller subsets. The entropy is computed when the data are split on feature X, and this process is repeated recursively for each child until one of the following conditions is reached: all tuples belong to the same class, or no attribute remains. Information gain is estimated as the decrease in entropy after the data set is split on an attribute:

Gain(T, X) = Entropy(T) − Entropy(T, X)

In the above equation, T represents the target variable, X is the feature to be split on, and Entropy(T, X) is the entropy of T after splitting on X.
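The entropy-based splitting described above can be sketched as follows; the XOR-style toy target and synthetic features are assumptions for illustration only.

```python
# Sketch of the Decision Tree step using the entropy criterion
# (information gain) described above; synthetic binary features,
# with a toy deterministic target (XOR of two symptom flags).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(300, 5))
y = X[:, 0] ^ X[:, 1]

tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(tree.get_depth(), tree.score(X, y))
```

Because the toy target is a deterministic function of the features, the fully grown tree reproduces the training labels exactly; on real, noisy clinical data, pruning (e.g. `max_depth`) would be used to avoid over-fitting.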

Random Forest classifier

Random Forest is a popular supervised machine learning algorithm that can also be applied to classification problems. Its popularity comes from its computational efficiency, even on large data sets with high dimensionality. As its name suggests, it is an ensemble of decision trees; it works like a decision tree with one key difference: the method builds a forest of decision trees with attributes chosen at random. At training time, multiple decision trees are constructed and considered before an output is produced. The technique is based on the idea that more trees are more likely to reach the right decision. It employs voting to decide the class in classification problems, whereas the mean of all decision trees' outputs is taken for regression. The feature importance values are normalized per tree as follows:

normfi_i = fi_i / Σ_j fi_j

In the above equation, normfi_i is the normalized importance of feature i and fi_i is the raw importance of feature i in a tree; the overall importance of feature i is then estimated by averaging these normalized values over all trees in the model.
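In scikit-learn the averaged, normalized importances of the equation above are exposed as `feature_importances_`; the sketch below uses synthetic data and a toy target as an illustrative assumption.

```python
# Sketch of the Random Forest step; feature_importances_ holds the
# normalized importances from the equation above, summing to 1.
# Synthetic binary features with a toy target (AND of two flags).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(300, 5))
y = (X[:, 0] & X[:, 2])

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.feature_importances_)
```

On this toy target, most of the importance mass falls on the two informative flags, mirroring how the forest would surface the most predictive symptoms.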

Extreme Gradient Boost

Extreme Gradient Boosting is a widely used machine learning algorithm for supervised learning applications. It is built on a gradient boosting framework and based on function approximation, optimizing specific loss functions with several regularization techniques. One of its main features is scalability: it achieves fast learning through distributed, parallel computing and uses memory effectively. For a data set with n samples and m features, using K trees, the output is predicted as:

ŷ_i = Σ_{k=1}^{K} f_k(x_i)

The objective function at iteration t is:

L^(t) = Σ_{i=1}^{n} l(y_i, ŷ_i^(t−1) + f_t(x_i)) + Ω(f_t)

In the above equation, y_i is the real value (label) known from the training data set, ŷ_i^(t−1) is the prediction for the ith sample at the (t−1)th iteration, and the new tree f_t is chosen to minimize the objective function, with Ω a regularization term.
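The additive-tree scheme above can be sketched with scikit-learn's gradient boosting classifier, substituted here as a stand-in for the XGBoost library so the example needs no third-party install; the data and labels are synthetic assumptions.

```python
# Sketch of the boosting step. The paper uses Extreme Gradient
# Boost (XGBoost); sklearn's GradientBoostingClassifier stands in
# here. Each of the n_estimators trees plays the role of one f_k
# in the additive model above. Synthetic data only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(5)
X = rng.integers(0, 2, size=(300, 5)).astype(float)
y = (X[:, 0] + X[:, 1] >= 1).astype(int)

gb = GradientBoostingClassifier(n_estimators=50, learning_rate=0.1,
                                random_state=0).fit(X, y)
print(gb.score(X, y))
```

`learning_rate` scales each new tree's contribution, controlling how aggressively successive iterations reduce the loss term of the objective.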

K-Nearest Neighbour

It is a simple algorithm that stores all cases and classifies new ones using similarity measures (for example, distance functions). It has been used in statistical estimation and pattern recognition. It is a nonparametric method and can be applied to both regression and classification. The algorithm is based on feature similarity: a new data point is classified according to its similarity to the points in the training set. An object is assigned to the class most prevalent among its k nearest neighbors, where the value of k determines the number of neighbors considered. Different distance functions can be used, such as the Euclidean distance √(Σ_{i=1}^{n} (x_i − y_i)²), the Manhattan distance Σ_{i=1}^{n} |x_i − y_i|, and the Minkowski distance (Σ_{i=1}^{n} |x_i − y_i|^q)^(1/q).
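The neighbor vote can be sketched as follows; the `metric` parameter selects among the distance functions listed above, and the two-dimensional Gaussian data are an illustrative assumption.

```python
# Sketch of the K-Nearest Neighbour step; metric="euclidean"
# selects the Euclidean distance from the functions noted above.
# Synthetic 2-D data with a toy linear class boundary.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(X, y)
print(knn.score(X, y))
```

Passing `metric="manhattan"` or `metric="minkowski"` (with parameter `p`) switches to the other distance functions mentioned above.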

Support Vector Machine

It is a traditional and effective supervised machine learning technique employed for regression and classification problems. It defines a hyperplane that separates the data into two categories in feature space; the more accurately data points fall on the correct side of the hyperplane, the greater the classification accuracy. The data points nearest to the hyperplane are called support vectors; if these points are removed, the position of the hyperplane changes, so they are considered essential components of the data set. The gap between the two sides is the margin, and the aim is to choose the hyperplane with the largest margin between the training points of each class so that new data points are classified accurately. The objective function is given as follows:

min_w λ‖w‖² + (1/n) Σ_{i=1}^{n} max(0, 1 − y_i (w · x_i − b))

In the above equation there are two terms: one for regularization and one for loss. The loss term penalizes misclassifications and measures the error, while the regularization term helps avoid over-fitting. The regularization coefficient is λ.
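A sketch of this step with scikit-learn's `SVC` follows; the parameter `C` plays the role of the regularization/loss trade-off in the objective above, and the data are a synthetic assumption.

```python
# Sketch of the Support Vector Machine step; C trades off the
# margin (regularization) against the hinge-loss term above.
# Synthetic 2-D data with a toy linear class boundary.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

svm = SVC(kernel="linear", C=1.0).fit(X, y)
print(len(svm.support_), svm.score(X, y))   # support vectors define the margin
```

`svm.support_` lists the indices of the support vectors, the near-margin points the section above describes as essential to the hyperplane's position.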

Experimental results

To evaluate the performance of the algorithms discussed above, we use metrics derived from the confusion-matrix counts TP, TN, FP, and FN (true positives, true negatives, false positives, and false negatives). Accuracy is the ratio of correct predictions to the total number of input samples:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision is estimated as:

Precision = TP / (TP + FP)

Recall is given as:

Recall = TP / (TP + FN)

The F1-Score ranges over [0, 1] and is the harmonic mean of Recall and Precision. It essentially captures both how close the classifier is to perfect accuracy (how many samples are accurately labeled) and how robust it is; the higher the F1-Score, the better the performance. Mathematically, it is estimated as:

F1 = 2 × (Precision × Recall) / (Precision + Recall)

The True Positive Rate (TPR) is defined as TPR = TP / (TP + FN), and the True Negative Rate (TNR) as TNR = TN / (TN + FP). The False Positive Rate (FPR) indicates the proportion of negative samples mistakenly classified as the true class: FPR = FP / (FP + TN). FPR and TPR take values in [0, 1]. The Receiver Operating Characteristic (ROC) curve, plotted as TPR versus FPR, is shown in Fig. 3. It can be observed from the figure that all algorithms performed well, nearly equal to 90%, but SVM outperformed them all.
Fig. 3

ROC curve plotted using TPR versus FPR

True Positive (TP) denotes correctly predicted positive classes; False Positive (FP), incorrectly predicted positive classes; True Negative (TN), correctly predicted negative classes; and False Negative (FN), incorrectly predicted negative classes. As discussed, Precision specifies how many true-class predictions actually belong to the true class, while Recall computes how many of all true samples in the data set were predicted as the true class. The F1-Score assesses both Precision and Recall in a single number. We have therefore also shown the Precision, Recall, and F1-Score results in Fig. 4; it can be seen that the Logistic Regression and SVM values are higher than those of the other algorithms, which shows the robustness of both. The Recall, Precision, and F1-Score of Logistic Regression and SVM are 98%, 97%, and 98%, respectively. The values for Extreme Gradient Boost are small compared to the other algorithms.
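The metric definitions above can be checked on a tiny hand-made example; the label vectors below are illustrative, not the paper's predictions.

```python
# Sketch of the evaluation step: accuracy, precision, recall, and
# F1 computed from the confusion-matrix counts defined above.
# The toy label vectors are illustrative only.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
acc  = accuracy_score(y_true, y_pred)    # (TP + TN) / total
prec = precision_score(y_true, y_pred)   # TP / (TP + FP)
rec  = recall_score(y_true, y_pred)      # TP / (TP + FN)
f1   = f1_score(y_true, y_pred)          # harmonic mean of prec and rec
print(tp, fp, tn, fn, acc, prec, rec, f1)
```

Here one positive is missed (FN) and one negative is flagged (FP), so all four metrics coincide; on real class-imbalanced clinical data they diverge, which is why the paper reports all of them.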
Fig. 4

Precision, Recall, and F1-Score of different machine learning algorithms

The accuracy comparison of the machine learning algorithms used in this work is provided in Fig. 5. It can be seen that the accuracy of Logistic Regression, Naive Bayes, Decision Tree, Random Forest, and K-Nearest Neighbour is more than 96%. The accuracy of Extreme Gradient Boost is nearly 95%, lower than the other algorithms, while SVM outperforms them all with the highest accuracy.
Fig. 5

Accuracy comparison of different machine learning algorithms

The comparison results of the algorithms are shown in Table 2; it can be seen that the performance of Logistic Regression and SVM is better than that of the other machine learning algorithms.
Table 2

Comparison results of different machine learning algorithms

S. no. | Algorithm | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%)
1 | Logistic Regression | 97.66 | 98 | 97 | 98
2 | Naive Bayes | 97 | 96 | 95 | 96
3 | Decision Tree | 97 | 96 | 95 | 96
4 | Extreme Gradient Boost | 94.91 | 93 | 94 | 93
5 | K-Nearest Neighbour | 97 | 96 | 94 | 96
6 | Random Forest | 97.50 | 95 | 95 | 95
7 | Support Vector Machine | 98 | 98 | 97 | 98

Conclusion and future work

This paper highlights the importance of machine learning algorithms for analyzing different symptoms and factors of COVID-19 in order to predict and detect infected patients' lab results and their final status. A clinical data set was used to analyze different symptoms of the COVID-19 disease. We explored machine learning algorithms such as Logistic Regression, Naive Bayes, Decision Tree, Random Forest Classifier, Extreme Gradient Boost, K-Nearest Neighbour, and Support Vector Machine to predict and detect COVID-19 on the basis of different symptoms. The system mainly applies supervised machine learning algorithms to analyze and predict the disease, which might help to predict the long-term transmission of an outbreak and to implement advanced proactive measures. The findings reveal that Logistic Regression and Support Vector Machine outperformed the others, achieving 97.66% and 98% accuracy, respectively. We intend to extend this work to other machine learning algorithms in the future. The effectiveness of the algorithms can also be enhanced by increasing the amount of data.
References (12 in total)

1.  Artificial intelligence-enabled rapid diagnosis of patients with COVID-19.

Authors:  Xueyan Mei; Hao-Chih Lee; Kai-Yue Diao; Mingqian Huang; Bin Lin; Chenyu Liu; Zongyu Xie; Yixuan Ma; Philip M Robson; Michael Chung; Adam Bernheim; Venkatesh Mani; Claudia Calcagno; Kunwei Li; Shaolin Li; Hong Shan; Jian Lv; Tongtong Zhao; Junli Xia; Qihua Long; Sharon Steinberger; Adam Jacobi; Timothy Deyer; Marta Luksza; Fang Liu; Brent P Little; Zahi A Fayad; Yang Yang
Journal:  Nat Med       Date:  2020-05-19       Impact factor: 53.440

2.  Big data handling mechanisms in the healthcare applications: A comprehensive and systematic literature review.

Authors:  Asma Pashazadeh; Nima Jafari Navimipour
Journal:  J Biomed Inform       Date:  2018-04-12       Impact factor: 6.317

3.  An IoT-Based Deep Learning Framework for Early Assessment of Covid-19.

Authors:  Imran Ahmed; Awais Ahmad; Gwanggil Jeon
Journal:  IEEE Internet Things J       Date:  2020-10-27       Impact factor: 10.238

4.  The Number of Confirmed Cases of Covid-19 by using Machine Learning: Methods and Challenges.

Authors:  Amir Ahmad; Sunita Garhwal; Santosh Kumar Ray; Gagan Kumar; Sharaf Jameel Malebary; Omar Mohammed Barukab
Journal:  Arch Comput Methods Eng       Date:  2020-08-04       Impact factor: 7.302

5.  Machine learning based approaches for detecting COVID-19 using clinical text data.

Authors:  Akib Mohi Ud Din Khanday; Syed Tanzeel Rabani; Qamar Rayees Khan; Nusrat Rouf; Masarat Mohi Ud Din
Journal:  Int J Inf Technol       Date:  2020-06-30

6.  Identification of COVID-19 can be quicker through artificial intelligence framework using a mobile phone-based survey when cities and towns are under quarantine.

Authors:  Arni S R Srinivasa Rao; Jose A Vazquez
Journal:  Infect Control Hosp Epidemiol       Date:  2020-03-03       Impact factor: 3.254

7.  A deep learning-based social distance monitoring framework for COVID-19.

Authors:  Imran Ahmed; Misbah Ahmad; Joel J P C Rodrigues; Gwanggil Jeon; Sadia Din
Journal:  Sustain Cities Soc       Date:  2020-11-01       Impact factor: 7.587

8.  A data analytics approach for COVID-19 spread and end prediction (with a case study in Iran).

Authors:  Arman Behnam; Roohollah Jahanmahin
Journal:  Model Earth Syst Environ       Date:  2021-01-30

9.  COVID-19 epidemic outside China: 34 founders and exponential growth.

Authors:  Yi Li; Meng Liang; Xianhong Yin; Xiaoyu Liu; Meng Hao; Zixin Hu; Yi Wang; Li Jin
Journal:  J Investig Med       Date:  2020-10-06       Impact factor: 2.895

10.  Enabling Artificial Intelligence for Genome Sequence Analysis of COVID-19 and Alike Viruses.

Authors:  Imran Ahmed; Gwanggil Jeon
Journal:  Interdiscip Sci       Date:  2021-08-06       Impact factor: 3.492

