
ANTi-Vax: a novel Twitter dataset for COVID-19 vaccine misinformation detection.

K Hayawi, S Shahriar, M A Serhani, I Taleb, S S Mathew.

Abstract

OBJECTIVES: The COVID-19 (SARS-CoV-2) pandemic has infected hundreds of millions of people and caused millions of deaths around the globe. Fortunately, the introduction of COVID-19 vaccines provided a glimmer of hope and a pathway to recovery. However, owing to misinformation spread on social media and other platforms, vaccine hesitancy has risen, which can negatively impact vaccine uptake in the population. The goal of this research is to introduce a novel machine learning-based COVID-19 vaccine misinformation detection framework.
STUDY DESIGN: We collected and annotated COVID-19 vaccine tweets and trained machine learning algorithms to classify vaccine misinformation.
METHODS: More than 15,000 tweets were annotated as misinformation or general vaccine tweets using reliable sources and validated by medical experts. The classification models explored were XGBoost, LSTM, and BERT transformer model.
RESULTS: The best classification performance was obtained using BERT, resulting in 0.98 F1-score on the test set. The precision and recall scores were 0.97 and 0.98, respectively.
CONCLUSION: Machine learning-based models are effective in detecting misinformation regarding COVID-19 vaccines on social media platforms.
Copyright © 2021 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

Keywords:  COVID-19; Deep learning; Misinformation detection; Natural language processing; Text classification; Vaccines

Year:  2021        PMID: 35016072      PMCID: PMC8648668          DOI: 10.1016/j.puhe.2021.11.022

Source DB:  PubMed          Journal:  Public Health        ISSN: 0033-3506            Impact factor:   2.427


Introduction

As of July 26, 2021, more than 194 million infections and more than 4 million deaths have been attributed to SARS-CoV-2, the virus behind the COVID-19 pandemic. Since the outbreak emerged in Wuhan, Hubei province, China, and spread worldwide, lockdown measures and social distancing were introduced in most parts of the globe, with significant impacts on various sectors including the economy, education, and the mental health of the population. The emergence of several safe and effective vaccines provided a potential solution by increasing population immunity, rising as an effective method to control the outbreak. Authorization and distribution of most vaccines began in December 2020. Despite the introduction of vaccines, increasing hesitancy toward vaccine uptake can be observed among significant parts of the population in various countries. This hesitancy can be explained in part by misinformation regarding vaccines spread in person. However, with wide social media access and usage, the spread of vaccine misinformation can increase significantly, potentially leading to a further decline in vaccine uptake.
Misinformation can be spread on social media by human users as well as social bots. Social bots are programmed to automatically spread false information in disguise. Therefore, it is essential for algorithms to automatically detect misinformation content regardless of whether the source is a human or a social bot. More specifically, the focus of this research is on Twitter and detecting misinformation in vaccine-related tweets. To the best of our knowledge, there are no existing datasets for detecting vaccine misinformation tweets, and this is the first proposed approach for detecting COVID-19 vaccine misinformation. Machine learning-based algorithms have been widely and effectively utilized for various COVID-19-related applications including screening, contact tracing, and forecasting.
The CoAID dataset introduced by Cui and Lee contains misinformation related to COVID-19. The authors utilized several machine learning models to classify fake news, with the best performance of 0.58 F1-score obtained using a hierarchical attention network–based model. A COVID-19 vaccine misinformation tweets dataset was introduced by Memon and Carley. This dataset characterizes both users who actively post misinformation and those who call out misinformation or spread true information. It was concluded that informed users tend to use more narratives in their tweets than misinformed ones. The ReCOVery dataset proposed by Zhou et al. contains more than 2000 news articles and their credibility. Furthermore, it also includes more than 140,000 tweets that reveal how these news articles are spread on Twitter. An F1-score of 0.83 was obtained for predicting reliable news and 0.67 for predicting unreliable news using a neural network model. A billion-scale COVID-19 Twitter dataset covering 268 countries and more than 100 languages was collected by Abdul-Mageed et al. Two predictive models were proposed: one for classifying whether a tweet is related to the pandemic (COVID relevance) and one for detecting whether a tweet is COVID-19 misinformation. The misinformation detection models were trained using the aforementioned CoAID and ReCOVery datasets, and combining them resulted in the best F1-score of 0.92 using a bidirectional encoder representations from transformers (BERT)-based model. Abdelminaam et al. combined four existing datasets, including CoAID, and used several machine learning algorithms to classify COVID-19 misinformation. The best F1-score of 0.985 was obtained using a two-layer long short-term memory (LSTM) network. The ArCOV19-Rumors dataset was presented by Haouari et al. to detect COVID-19 misinformation in Arabic tweets. Two Arabic BERT-based models were used for classification, with the highest F1-score of 0.74.
A bilingual Arabic and English dataset for detecting COVID-19 misleading tweets was presented in the study by Elhadad et al. Several machine learning models were used to annotate the unlabeled tweets. However, the authors did not quantitatively evaluate the predictive models. Finally, a Chinese microblogging dataset for detecting COVID-19 fake news was presented by Yang et al. Various deep learning models were explored, and the best F1-score of 0.94 was obtained using the TextCNN model. More recently, several research works have focused on analyzing tweets related to COVID-19 vaccines. Muric et al. presented a dataset containing tweets that indicate a strong anti-vaccine stance. Descriptive analysis of the tweets as well as their geographical distribution across the United States (US) was presented. Similarly, Sharma et al. utilized tweets to investigate hidden coordinated efforts promoting misinformation about vaccines and to obtain insights into conspiracy communities. A dataset called Covaxxy, containing one week of vaccine tweets, was introduced to perform a statistical analysis of COVID-19 vaccine tweets. Moreover, the authors also introduced a dashboard for visualizing the relationship between vaccine adoption and US geolocated posts. Malagoli et al. focused on vaccine sentiment on Twitter by analyzing vaccine-related tweets collected between December 2020 and January 2021. The analysis included the usage of emojis as well as the psycholinguistic properties of these tweets. Finally, Hu et al. examined the public sentiment of COVID-19 vaccines in the US by investigating the spatiotemporal patterns of public perception and emotion at the national and state levels. No predictive models were introduced by the existing works in the context of COVID-19 vaccines; therefore, to the best of our knowledge, the proposed work is the first to perform vaccine misinformation detection.
Table 1 summarizes the existing works in COVID-19 misinformation detection and COVID-19 vaccine–related tweet datasets.
Table 1

Existing works in COVID-19 misinformation and COVID-19 vaccine tweets.

Source | Application | Dataset | Prediction results
12 | Misinformation dataset, analysis, and classification | Social media and website misinformation regarding COVID-19 | F1-score: 0.58 using hierarchical attention network–based model
13 | Misinformation dataset and analysis | Annotated COVID-19 misinformation tweets | N/A
14 | Reliable and unreliable news dataset, analysis, and prediction | News articles and their credibility level as well as tweets related to their spread | F1-scores: 0.83 and 0.67 for reliable and unreliable news detection, respectively, using neural networks
15 | Large COVID-19 tweets dataset, analysis, and classification | Tweets related to COVID-19 in more than 100 languages from 268 countries | F1-score: 0.98 for COVID-relevant tweets using the transformer-based masked language model; F1-score: 0.92 for detecting misinformation tweets using BERT-based model
16 | COVID-19 misinformation detection | Combination of various existing tweets datasets related to COVID-19, disasters, news, and gossip | F1-score: 0.985 using LSTM
17 | COVID-19 misinformation detection in Arabic | Arabic tweets related to COVID-19 | F1-score: 0.74 using MARBERT
18 | COVID-19 misinformation detection in English and Arabic | English and Arabic tweets related to COVID-19 | Not presented
19 | COVID-19 fake news detection in Chinese | Chinese microblog posts from Weibo | F1-score: 0.94 using TextCNN
20 | COVID-19 anti-vaccine tweets dataset and analysis | Tweets exhibiting anti-vaccine stance collected using keywords | N/A
21 | COVID-19 anti-vaccine tweets analysis | COVID-19 vaccine tweets collected using keywords | N/A
22 | COVID-19 vaccine tweets analysis | COVID-19 vaccine tweets collected using keywords | N/A
23 | COVID-19 vaccine tweets sentiment analysis | COVID-19 vaccine tweets collected using keywords | N/A
24 | COVID-19 vaccine tweets sentiment analysis | COVID-19 vaccine tweets collected using keywords | N/A

Methods

This section describes the methodology of the proposed application. The implementation details are presented next in chronological order.

Dataset collection

Twitter is one of the most popular social media platforms, with 353 million active users and more than 500 million tweets posted every day. The Twitter API allows the extraction of public tweets, including the tweet text, user information, retweets, and mentions, in JSON format. A Python library called Twarc was utilized to access the Twitter API. To obtain relevant tweets about COVID-19 vaccines, we followed the approach of some existing works in the literature and collected tweets using keywords. The following keywords (case insensitive) were used: ‘vaccine,’ ‘pfizer,’ ‘moderna,’ ‘astrazeneca,’ ‘sputnik,’ and ‘sinopharm.’ Additionally, only tweets in the English language were considered; replies, retweets, and quote tweets were excluded. Overall, vaccine-related tweets from December 1, 2020, until July 31, 2021, were collected, totaling 15,465,687 tweets. Fig. 1 illustrates the total number of tweets per month from December 2020 until July 2021. As vaccines started gaining approval for administration in December 2020, we notice a high volume of tweets, with people sharing their initial sentiments regarding the vaccines. In the next couple of months, there is a natural decline as the topic becomes less novel. However, the volume of tweets rises again from March 2021 and reaches a peak in April 2021. During this time, the rate of vaccination was increasing, particularly in the UK and the US, where a large percentage of Twitter users are based, leading many to express their feelings after receiving their vaccines.
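The keyword filter described above can be sketched as follows. This is a hypothetical helper for illustration only; the study collected tweets through the Twitter API using the Twarc library, not this code.

```python
import re

# Case-insensitive keyword filter for vaccine-related tweets
# (illustrative stand-in for the study's Twarc-based collection).
VACCINE_KEYWORDS = ["vaccine", "pfizer", "moderna", "astrazeneca", "sputnik", "sinopharm"]
KEYWORD_RE = re.compile("|".join(VACCINE_KEYWORDS), re.IGNORECASE)

def is_vaccine_tweet(text: str) -> bool:
    """Return True if the tweet text mentions any vaccine keyword."""
    return KEYWORD_RE.search(text) is not None

tweets = [
    "Just got my first Pfizer dose!",
    "Lovely weather in London today.",
    "Moderna booster appointments open next week.",
]
relevant = [t for t in tweets if is_vaccine_tweet(t)]
```

In the actual pipeline, language filtering and the exclusion of replies, retweets, and quote tweets would be applied on top of this keyword match.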
Fig. 1

Number of vaccine tweets by month.


Data annotation

In supervised learning, a labeled dataset is required before model training. Because no existing labeled dataset is available for vaccine misinformation, manual annotation of tweets was performed. Unlike the single verification approach used by many existing works, we added a validation step by medical experts. To label the misinformation, some common myths regarding the COVID-19 vaccines were obtained from reliable sources including Public Health, Healthline, the Centers for Disease Control and Prevention (CDC), and the University of Missouri Health Care. This approach is similar to several existing works in misinformation detection, including the studies by Cui and Lee and Elhadad et al. Some of the common myths and misinformation include ‘The vaccine can alter DNA,’ ‘The vaccine can cause infertility,’ ‘The vaccine contains dangerous toxins,’ and ‘The vaccine contains a tracking device.’ In this process, tweets containing this common misinformation were manually read and labeled/flagged. This ensured the context of the tweets was considered and that sarcastic and humorous tweets were not labeled as misinformation. Tweets other than these common myths were considered not misinformation; these included general opinions regarding the vaccine, official news, and appointment details of vaccination centers. Finally, once the dataset was accurately annotated using verified sources, we invited medical experts in public health to validate the annotation process. This helped ensure the manual annotation was accurate and the dataset was of high quality. Consequently, a total of 15,073 tweets were labeled, 5751 of which were misinformation and 9322 were general vaccine-related tweets. Word clouds are a simple but effective tool for text visualization. They are created by collecting the words in a corpus and presenting them in different sizes.
The larger and bolder a word appears, the more frequent and relevant its presence in the corpus. Fig. 2 and Fig. 3 illustrate the word clouds for misinformation and general tweets, respectively. The vaccine misinformation tweets include several conspiracy terms such as ‘gene therapy,’ ‘untested vaccine,’ and ‘depopulation.’ Meanwhile, the general vaccine tweets include terms related to people sharing their vaccine experience, including ‘first dose’ and ‘grateful.’
Fig. 2

Word cloud visualization for vaccine misinformation tweets.

Fig. 3

Word cloud visualization for general vaccine-related tweets.


Data preprocessing

Preprocessing the contents of the tweets is important for efficient model training. First, external links, punctuation, and text in brackets were removed, and all text was converted to lower case. Common words such as ‘the,’ ‘and,’ ‘in,’ and ‘for’ are referred to as stop words; removing these low-information words reduces the complexity of training. The NLTK library in Python was utilized for this step. Stemming is a common preprocessing step that reduces derivationally related forms of a word to a common base form. For example, both ‘walking’ and ‘walked’ are converted to the stem ‘walk.’ The Snowball stemmer from the NLTK library was used.
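The preprocessing steps above can be sketched in plain Python. The study used NLTK's stop-word list and Snowball stemmer; the tiny stop-word set and crude suffix stripper below are illustrative stand-ins, not the actual implementation.

```python
import re
import string

# Minimal, dependency-free sketch of the preprocessing pipeline.
STOP_WORDS = {"the", "and", "in", "for", "a", "is", "of", "to"}  # tiny illustrative set

def crude_stem(word: str) -> str:
    # Very rough suffix stripping; a real Snowball stemmer is far more careful.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(tweet: str) -> list[str]:
    text = re.sub(r"https?://\S+", "", tweet)   # remove external links
    text = re.sub(r"\[.*?\]", "", text)         # remove text in brackets
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return [crude_stem(w) for w in text.split() if w not in STOP_WORDS]

tokens = preprocess("Walking to the clinic for the vaccine! https://example.com")
```

For example, the sentence above reduces to the stems of its content words, with the link and stop words stripped out.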

Models architecture and implementation

Machine learning enables computer systems to learn from experience using data, without requiring explicit programming. Feature extraction is required to identify relevant features in the dataset before training the models. However, this process is labor-intensive, and predictive performance depends to a large extent on the quality of feature engineering. Deep learning models, on the other hand, can automatically learn and optimize useful input features. Nevertheless, their computational complexity is higher, and consequently, a much longer training time is required. To provide a broader comparative evaluation, three models from different categories of machine learning were explored: XGBoost from traditional machine learning, LSTM from deep learning, and BERT from transformer models. A description of the models and their implementation is presented next. XGBoost is considered one of the most competitive and frequently used traditional machine learning models. It is an ensemble learning model that uses multiple decision trees, which reduces overfitting while keeping model complexity in check. Term frequency-inverse document frequency (tf-idf) was used to identify the most relevant features. Tf-idf scores each word in the corpus by the frequency of the word in a specific document weighted by the inverse of the percentage of documents the word appears in, as described by Ramos et al. The XGBoost library in Python was used for this implementation. LSTM is a popular deep learning architecture for text and sequential data. These networks are composed of cyclic connections as well as specialized memory cells for storing the temporal state of the network. GloVe, a popular unsupervised approach for obtaining vector representations of words, was used with the LSTM network.
The word embeddings obtained using GloVe represent the semantic similarity between words in a corpus by mapping words into an n-dimensional space. After the embedding layer, a bidirectional LSTM layer with 45 units was used, followed by a GlobalMaxPool1D layer. Next, two dense layers of 128 and 32 units, respectively, with ReLU activation were used. A dropout layer with a rate of 0.5 was used after each of the previous three layers. Finally, the classification layer used a sigmoid activation, and the model was optimized using the Adam optimizer on binary cross-entropy loss. The implementation was done in Python using Keras. The last approach utilized the transformer-based BERT model. BERT's unconventional training approach of looking at a text sequence from both directions provides a comprehensive sense of language context. BERT is pretrained on a large corpus of English text from Wikipedia and BookCorpus. In this work, the bert-large-uncased version was used, consisting of 24 layers (1024 hidden dimensions), 16 attention heads, and a total of 340M parameters. The Transformers library in Python was used to implement this approach. Overfitting is a major obstacle in training machine learning algorithms: when a model performs outstandingly well during the training phase, by exploiting unnecessary input features, but fails to make generalized predictions on the test set, it is ‘overfitting’ to the training dataset. To avoid overfitting in the two deep learning models, the dropout technique was used. Also, training and validation accuracy curves were monitored to ensure no overfitting occurred during training. The research framework for COVID-19 vaccine misinformation classification is summarized in Fig. 4. The COVID-19 vaccine-related tweets were first collected and then annotated as misinformation or regular tweets using reliable sources.
After necessary preprocessing and feature extraction, machine learning and deep learning models were trained to classify vaccine misinformation. Finally, the performance of the models was evaluated on the test set.
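The tf-idf weighting used for the XGBoost feature extraction can be sketched from scratch as follows. This is a simplified illustration under common assumptions (raw term frequency, unsmoothed logarithmic idf); the study used a library implementation, and real variants differ in smoothing and normalization details.

```python
import math
from collections import Counter

# From-scratch tf-idf sketch: weight = (term freq in doc) * log(N / doc freq).
def tfidf(corpus: list[list[str]]) -> list[dict[str, float]]:
    n_docs = len(corpus)
    df = Counter()                      # document frequency of each term
    for doc in corpus:
        df.update(set(doc))
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [["vaccine", "safe"], ["vaccine", "hoax"], ["weather", "nice"]]
w = tfidf(docs)
```

Note how a term appearing in many documents (‘vaccine’) receives a lower weight than a term unique to one document (‘safe’), which is exactly what makes tf-idf useful for picking out discriminative features.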
Fig. 4

Research framework. tf-idf, term frequency-inverse document frequency.

Classification algorithms can be evaluated using several metrics including accuracy, precision, recall, and F1-score. In terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), they are defined in the following equations.

Accuracy = (TP + TN) / (TP + TN + FP + FN)  (1)
Precision = TP / (TP + FP)  (2)
Recall = TP / (TP + FN)  (3)
F1-score = 2 × Precision × Recall / (Precision + Recall)  (4)
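These metrics can be computed directly from confusion-matrix counts, as the short sketch below shows (the tp, tn, fp, fn values are illustrative, not results from the study).

```python
# Evaluation metrics computed from confusion-matrix counts.
def evaluate(tp: int, tn: int, fp: int, fn: int) -> dict[str, float]:
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

scores = evaluate(tp=95, tn=92, fp=3, fn=5)
```

Precision penalizes false positives, recall penalizes false negatives, and the F1-score is their harmonic mean, which is why it is the headline metric for imbalanced misinformation data.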

Results

The results from the XGBoost model as well as the two deep learning models are presented next. All models were first trained and validated on 75% of the dataset and then evaluated on the remaining 25% of the dataset.

Performance comparison

The training time for XGBoost, as expected, was much shorter than for the two deep learning models. The training accuracy obtained was 96.9%, and the accuracy on the test set was 95.6%. The precision, recall, and F1-score on the test set were 0.96, 0.95, and 0.95, respectively. Fig. 5 presents the confusion matrix on the test set using XGBoost. The majority of the errors (84%) resulted from misinformation being classified as non-misinformation, whereas very few non-misinformation tweets were wrongly classified.
Fig. 5

Confusion matrix on the test set using XGBoost.

The LSTM model was trained for six iterations, with 20% of the training set used for validation. Fig. 6 displays the training and validation accuracy curves. Because the two curves are very close to each other, there is no indication of overfitting.
Fig. 6

Training and validation accuracies using LSTM.

The maximum training accuracy using LSTM was 99%, and the accuracy on the test set was 96%. The precision, recall, and F1-score on the test set were 0.97, 0.96, and 0.96, respectively. Overall, this was a slight improvement over XGBoost. The confusion matrix on the test set using this approach is presented in Fig. 7. Compared with XGBoost, there was a decrease in misinformation being misclassified (68% of errors). However, more non-misinformation tweets were classified as misinformation.
Fig. 7

Confusion matrix on the test set using LSTM.

Finally, we used the pretrained BERT transformer model for classification. It was trained for three iterations, with a 20% validation set taken from the training set. The training and validation accuracy curves are plotted in Fig. 8. No overfitting is apparent in this approach either.
Fig. 8

Training and validation accuracies using BERT.

The maximum training accuracy using BERT was 99%, and the accuracy on the test set was 98%. The precision, recall, and F1-score on the test set were 0.97, 0.98, and 0.98, respectively. The performance of BERT was superior to that of the previous two models. Fig. 9 displays the confusion matrix on the test set using BERT. Compared with the previous two models, BERT has the lowest error rate (43%) in misclassifying misinformation tweets, but a higher error rate in misclassifying non-misinformation tweets.
Fig. 9

Confusion matrix on the test set using BERT.


Discussion

In the previous section, the effectiveness of all the models in vaccine misinformation detection was presented. Consistent with the literature, the deep learning models outperformed XGBoost given the relatively large training set. BERT is recommended for this application because it was able to detect most of the misinformation. Table 2 presents a performance comparison between existing works in COVID-19 misinformation classification and the proposed work. The results reported in this study are consistent with those reported in the previous literature. Unlike the existing works, which focused on general COVID-19 misinformation, this work focused specifically on classifying vaccine-related misinformation. By making the dataset publicly available, we encourage the research community to experiment with other models and approaches.
Table 2

Performance comparison with related COVID-19 misinformation detection works.

Source | Classification | F1-score
12 | COVID-19 misinformation | 0.58
14 | COVID-19 news reliability detection | 0.83 and 0.67
15 | COVID-19 misinformation | 0.92
16 | COVID-19 misinformation | 0.985
17 | Arabic COVID-19 misinformation | 0.74
19 | Chinese COVID-19 misinformation | 0.94
ANTi-Vax (Ours) | COVID-19 vaccine misinformation | 0.98
The proposed application has several implications, including the following: 1) the dataset and models presented in this work can be used by social media sites to limit the spread of misinformation; 2) it would facilitate the detection of social bots spreading vaccine misinformation; 3) the dataset can be thoroughly analyzed to identify patterns of misinformation and their spread over time; and 4) this study will raise awareness regarding vaccine misinformation on social media and trigger further research in this area.
A limitation of this study is that statistical analysis was not presented. As the focus was on detecting misinformation, an in-depth analysis of the vaccine misinformation tweets was not performed. As future work, it would be interesting to experiment with combining general COVID-19 misinformation and vaccine misinformation. Moreover, further performance improvements may be possible using tweet-level features such as the number of capital letters, links, and emojis. Similarly, account-level features such as follower count, tweet count, and retweet count can potentially provide useful information to the models. Furthermore, sentiment analysis of English-language COVID-19 vaccine tweets can be performed using the large collected dataset, which would reveal the public perception of vaccines and how it evolved over the months. The focus of this study was on English tweets, but researchers are encouraged to extend it to multilingual tweets related to COVID-19 vaccines. The use of hashtags can provide insights into the general behavior of social media users and could be utilized in future research. Finally, it is also worth investigating vaccine-related misinformation on social media platforms other than Twitter, as well as on blog posts.

Author statements

Acknowledgements

The authors would also like to thank the medical experts for volunteering their time and effort in validating the annotated dataset.

Ethical approval

None sought.

Funding

This work was supported under the research grant RIF R20132.

Competing interests

None declared.

Availability of data and materials

Both the COVID-19 vaccine tweets dataset and the annotated misinformation dataset are available publicly for researchers. Complying with Twitter's Terms of service, the dataset has been anonymized and contains only the tweet IDs. The dataset can be accessed using the following link: https://github.com/SakibShahriar95/ANTiVax.
References (12 in total)

1.  Long short-term memory.

Authors:  S Hochreiter; J Schmidhuber
Journal:  Neural Comput       Date:  1997-11-15       Impact factor: 2.026

Review 2.  Applications of machine learning and artificial intelligence for Covid-19 (SARS-CoV-2) pandemic: A review.

Authors:  Samuel Lalmuanawma; Jamal Hussain; Lalrinfela Chhakchhuak
Journal:  Chaos Solitons Fractals       Date:  2020-06-25       Impact factor: 5.944

3.  Understanding COVID-19 misinformation and vaccine hesitancy in context: Findings from a qualitative study involving citizens in Bradford, UK.

Authors:  Bridget Lockyer; Shahid Islam; Aamnah Rahman; Josie Dickerson; Kate Pickett; Trevor Sheldon; John Wright; Rosemary McEachan; Laura Sheard
Journal:  Health Expect       Date:  2021-05-04       Impact factor: 3.318

4.  The COVID-19 pandemic and mental health impacts.

Authors:  Kim Usher; Joanne Durkin; Navjot Bhullar
Journal:  Int J Ment Health Nurs       Date:  2020-04-10       Impact factor: 3.503

5.  Safety and Efficacy of the BNT162b2 mRNA Covid-19 Vaccine.

Authors:  Fernando P Polack; Stephen J Thomas; Nicholas Kitchin; Judith Absalon; Alejandra Gurtman; Stephen Lockhart; John L Perez; Gonzalo Pérez Marc; Edson D Moreira; Cristiano Zerbini; Ruth Bailey; Kena A Swanson; Satrajit Roychoudhury; Kenneth Koury; Ping Li; Warren V Kalina; David Cooper; Robert W Frenck; Laura L Hammitt; Özlem Türeci; Haylene Nell; Axel Schaefer; Serhat Ünal; Dina B Tresnan; Susan Mather; Philip R Dormitzer; Uğur Şahin; Kathrin U Jansen; William C Gruber
Journal:  N Engl J Med       Date:  2020-12-10       Impact factor: 91.245

Review 6.  COVID-19 Vaccine Hesitancy Worldwide: A Concise Systematic Review of Vaccine Acceptance Rates.

Authors:  Malik Sallam
Journal:  Vaccines (Basel)       Date:  2021-02-16

7.  CHECKED: Chinese COVID-19 fake news dataset.

Authors:  Chen Yang; Xinyi Zhou; Reza Zafarani
Journal:  Soc Netw Anal Min       Date:  2021-06-22

8.  An interactive web-based dashboard to track COVID-19 in real time.

Authors:  Ensheng Dong; Hongru Du; Lauren Gardner
Journal:  Lancet Infect Dis       Date:  2020-02-19       Impact factor: 25.071

9.  Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: a spatiotemporal perspective.

Authors:  Tao Hu; Siqin Wang; Wei Luo; Mengxi Zhang; Xiao Huang; Yingwei Yan; Regina Liu; Kelly Ly; Viraj Kacker; Bing She; Zhenlong Li
Journal:  J Med Internet Res       Date:  2021-07-26       Impact factor: 5.428
