Estimating the efficacy of symptom-based screening for COVID-19.

Alison Callahan, Ethan Steinberg, Jason A Fries, Saurabh Gombar, Birju Patel, Conor K Corbin, Nigam H Shah.

Abstract

There is substantial interest in using presenting symptoms to prioritize testing for COVID-19 and establish symptom-based surveillance. However, little is currently known about the specificity of COVID-19 symptoms. To assess the feasibility of symptom-based screening for COVID-19, we used data from tests for common respiratory viruses and SARS-CoV-2 in our health system to measure the ability to correctly classify virus test results based on presenting symptoms. Based on these results, symptom-based screening may not be an effective strategy to identify individuals who should be tested for SARS-CoV-2 infection or to obtain a leading indicator of new COVID-19 cases.
© The Author(s) 2020.

Keywords:  Epidemiology; Viral infection

Year:  2020        PMID: 32695885      PMCID: PMC7359358          DOI: 10.1038/s41746-020-0300-0

Source DB:  PubMed          Journal:  NPJ Digit Med        ISSN: 2398-6352


Introduction

There is substantial interest in developing symptom-based screening to prioritize who should be tested for SARS-CoV-2 infection and to establish symptom-based surveillance to provide an early indicator of new COVID-19 cases[1-3]. However, the degree to which presenting symptoms are reliable indicators of SARS-CoV-2 infection is unknown[4]. Therefore, it is crucial to determine whether symptom-based screening to prioritize testing is feasible. To assess the feasibility of using symptom-based screening to assign a probability of SARS-CoV-2 infection, we first quantified the ability to correctly predict results of tests for common respiratory viruses observed to frequently co-infect patients positive for SARS-CoV-2 at Stanford Health Care[5], using symptoms mentioned in clinical notes at the time of the test order. After establishing a baseline for the performance of machine learning models to correctly classify common respiratory virus infections[6], we then trained a similar model for SARS-CoV-2 test results[7].

Results and discussion

Performance of models to predict respiratory virus test results

For the respiratory viruses examined, the area under the receiver operating characteristic curve (AUROC) on the test set ranged from 0.60 to 0.77 (Table 1). Two non-SARS-CoV-2 viruses (influenza virus A and RSV) were moderately predictable given presenting symptoms. For example, mentions of coughing, wheezing and rhinorrhea were features with high importance for the RSV model. However, SARS-CoV-2 and the remaining common respiratory viruses (adenovirus, rhinovirus, metapneumovirus and parainfluenza virus) were not highly predictable, with average AUROCs below 0.70.
Table 1. AUROCs of logistic regression models for each respiratory virus, with 95% confidence intervals.

Virus                  AUROC (95% CI)
Adenovirus             0.68 (0.60–0.76)
Influenza virus A      0.73 (0.68–0.77)
Metapneumovirus        0.64 (0.57–0.71)
Parainfluenza virus    0.60 (0.53–0.68)
RSV                    0.77 (0.73–0.80)
Rhinovirus             0.62 (0.58–0.66)
SARS-CoV-2             0.64 (0.49–0.79)

These results suggest that, for both SARS-CoV-2 and other commonly diagnosed respiratory viral infections, the presenting symptoms at the time of the test order may not provide sufficient information to correctly classify whether a given patient will test positive for that virus. Prior studies of presenting symptoms and case definitions for influenza found that information on presenting symptoms alone is not sufficient to accurately diagnose influenza, or to distinguish it from other influenza-like illnesses[8-12], and our results support this finding. Though our influenza virus A model had one of the higher AUROCs we observed, it is not sufficient for use in a clinical setting.

Limitations

There are several limitations to this work. Firstly, we did not include the duration of reported symptoms as features in our models, because duration is often not reliably described in clinical notes and is therefore difficult to ascertain[13]. The sensitivity and specificity of SARS-CoV-2 PCR tests are highest in the first few days after symptoms present[14]. It is possible that the data used in our analysis included patients tested later in the course of SARS-CoV-2 infection, which would diminish classifier performance. Secondly, the number of SARS-CoV-2 positive cases in our data was small, which is fortunate for our population’s health but creates a sample size limitation that can adversely impact machine learning model performance. Lastly, the prevalence of positive cases among those tested for SARS-CoV-2 infection depends on the health system’s protocols for deciding whether to test a patient; this may affect the applicability of our findings as testing protocols evolve. We note that clinicians’ knowledge of the signs and symptoms of COVID-19 is rapidly evolving, such as recent findings that a substantial fraction of patients experience gastrointestinal symptoms[15] and dermatologic symptoms[16], and the prevalence of these presentations is still being characterized. The CDC has recently updated its list of COVID-19-related symptoms to include loss of taste or smell, headache, and chills with fever[17]. Documentation of such emerging symptoms will increase in the clinical notes of tested patients. Therefore, as part of Stanford Health Care’s response to COVID-19, we continue to collect data on patients tested for SARS-CoV-2 and to profile their presenting symptoms[7,18]. Doing so will allow us to assess, on an ongoing basis, the effect of improved symptom characterization and additional data on the performance of models to identify SARS-CoV-2 infections.

Summary

In summary, our current findings indicate that symptom-based screening may not be an effective strategy to quantify an individual’s likelihood of having COVID-19. The non-specific nature of the symptoms, and the fact that co-infections with other respiratory viruses are common[5], might limit the utility of symptom-based screening strategies to prioritize testing and the use of symptom surveys as a leading indicator of new COVID-19 cases in a region[1,19].

Methods

Patient cohort selection

Patients included in our study were those tested for adenovirus, influenza virus A, metapneumovirus, parainfluenza virus, respiratory syncytial virus (RSV), rhinovirus, and SARS-CoV-2 (Table 2). Historical data on patients tested for other respiratory viruses were collected between September 2010 and October 2019, and included virus tests ordered as part of respiratory virus panels as well as tests ordered on their own. Data on patients tested for SARS-CoV-2 were collected up to March 30, 2020, and also included results of tests ordered as part of respiratory pathogen panels[5]. Only the contents of the emergency department or urgent care note associated with the order of a patient’s virus test were used as input for training the models, in order to emulate the information that would be available in a real-life usage setting.
Table 2. Total number of patients and number of positive and negative test results used to develop models for each virus test.

Virus                  Patients   Positive tests   Negative tests
Adenovirus             10,911     218              10,693
Influenza virus A      8642       980              7662
Metapneumovirus        9504       360              9144
Parainfluenza virus    9507       351              9156
RSV                    13,525     1059             12,466
Rhinovirus             8305       1387             6918
SARS-CoV-2             895        64               831

Feature engineering

Each patient’s test-associated note was processed using a rule-based NLP pipeline[20] to extract mentions of medical concepts. Extracted concepts were classified to filter out negated terms, to identify relative timing of the concept (e.g. a current condition or past condition), and to identify note sections, in order to filter out concepts that were not noted at initial observation but added later in the course of documentation. All concepts were derived from the 2018AA SNOMED CT US vocabulary with the semantic group DISORDER[21], maintained as part of the National Library of Medicine Unified Medical Language System. Extracted concepts were encoded as binary variables to indicate their presence or absence in a given patient’s clinical note, after filtering to keep only the non-historical, positive mentions about the patient and restricting to signs and symptoms based on SNOMED CT US semantic types. The outcome was whether the virus test returned positive or negative for infection with the tested pathogen.
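The encoding step above can be sketched in a few lines. The concept-extraction pipeline itself (rule-based NLP over SNOMED CT) is not reproduced here; `extracted` below is a hypothetical stand-in for its output, one list of (concept, is_negated, is_historical) tuples per note.

```python
def encode_notes(extracted, vocabulary):
    """Encode each note as a binary vector over a fixed concept vocabulary,
    keeping only current, positively mentioned concepts."""
    index = {concept: i for i, concept in enumerate(sorted(vocabulary))}
    matrix = []
    for concepts in extracted:
        row = [0] * len(index)
        for concept, negated, historical in concepts:
            # Filter to non-negated, non-historical mentions, as described above.
            if not negated and not historical and concept in index:
                row[index[concept]] = 1  # presence/absence, not counts
        matrix.append(row)
    return matrix

# Hypothetical extractor output for two notes:
notes = [
    [("cough", False, False), ("fever", True, False)],    # fever is negated
    [("wheezing", False, False), ("cough", False, True)], # cough is historical
]
X = encode_notes(notes, {"cough", "fever", "wheezing"})
# Columns sorted alphabetically (cough, fever, wheezing):
# X == [[1, 0, 0], [0, 0, 1]]
```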

Model training and evaluation

For respiratory viruses other than SARS-CoV-2, we trained logistic regression models on a randomly selected 80% sample of patients’ note-derived data and tested their performance on the remaining 20%. We calculated 95% confidence intervals using bootstrapping on the testing set. We evaluated performance using the AUROC, which is a measurement of a model’s ability to distinguish positive and negative test results. For SARS-CoV-2, because we were not able to use a fixed test set due to the limited sample size of tested patients, we performed tenfold cross validation and estimated uncertainty with a t-distribution confidence interval. Hyperparameters were tuned using cross validation on the training set. This study was approved by the Stanford University Institutional Review Board, and this approval included a waiver of informed consent due to the retrospective nature of the study.
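The two evaluation schemes described above can be sketched with scikit-learn on synthetic data (the clinical features are not public, so `X` and `y` below are random stand-ins, not the study's data):

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 20))             # binary symptom indicators
y = (X[:, 0] + rng.random(1000) > 0.9).astype(int)  # synthetic test results

# Common viruses: fixed 80/20 split, bootstrap CI on the held-out test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(set(y_te[idx])) == 2:                    # AUROC needs both classes
        boot.append(roc_auc_score(y_te[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

# SARS-CoV-2: tenfold cross-validation with a t-distribution CI, used in place
# of a fixed test set because the tested cohort was small.
cv_aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          cv=10, scoring="roc_auc")
t_lo, t_hi = stats.t.interval(0.95, df=len(cv_aucs) - 1,
                              loc=cv_aucs.mean(), scale=stats.sem(cv_aucs))

print(f"Test-set AUROC {roc_auc_score(y_te, scores):.2f} "
      f"(95% CI {lo:.2f}-{hi:.2f})")
print(f"10-fold AUROC {cv_aucs.mean():.2f} (95% CI {t_lo:.2f}-{t_hi:.2f})")
```

The bootstrap resamples the test set with replacement and takes the 2.5th and 97.5th percentiles of the resulting AUROCs; the t-interval instead treats the ten fold-level AUROCs as samples of the model's performance.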
References (15 in total)

1.  Validity of a clinical model to predict influenza in patients presenting with symptoms of lower respiratory tract infection in primary care.

Authors:  Saskia F van Vugt; Berna Dl Broekhuizen; Nicolaas Pa Zuithoff; Gerrit A van Essen; Mark H Ebell; Samuel Coenen; Margareta Ieven; Christine Lammens; Herman Goossens; Chris C Butler; Kerenza Hood; Paul Little; Theo Jm Verheij
Journal:  Fam Pract       Date:  2015-06-04       Impact factor: 2.267

2.  Does this patient have influenza? (Review)

Authors:  Stephanie A Call; Mark A Vollenweider; Carlton A Hornung; David L Simel; W Paul McKinney
Journal:  JAMA       Date:  2005-02-23       Impact factor: 56.272

3.  Clinical prediction rules combining signs, symptoms and epidemiological context to distinguish influenza from influenza-like illnesses in primary care: a cross sectional study.

Authors:  Barbara Michiels; Isabelle Thomas; Paul Van Royen; Samuel Coenen
Journal:  BMC Fam Pract       Date:  2011-02-09       Impact factor: 2.497

4.  Parameterizing time in electronic health record studies.

Authors:  George Hripcsak; David J Albers; Adler Perotte
Journal:  J Am Med Inform Assoc       Date:  2015-02-26       Impact factor: 4.497

5.  Varicella-like exanthem as a specific COVID-19-associated skin manifestation: Multicenter case series of 22 patients.

Authors:  Angelo Valerio Marzano; Giovanni Genovese; Gabriella Fabbrocini; Paolo Pigatto; Giuseppe Monfrecola; Bianca Maria Piraccini; Stefano Veraldi; Pietro Rubegni; Marco Cusini; Valentina Caputo; Franco Rongioletti; Emilio Berti; Piergiacomo Calzavara-Pinton
Journal:  J Am Acad Dermatol       Date:  2020-04-16       Impact factor: 11.527

6.  High Prevalence of Concurrent Gastrointestinal Manifestations in Patients With Severe Acute Respiratory Syndrome Coronavirus 2: Early Experience From California.

Authors:  George Cholankeril; Alexander Podboy; Vasiliki Irene Aivaliotis; Branden Tarlow; Edward A Pham; Sean P Spencer; Donghee Kim; Ann Hsing; Aijaz Ahmed
Journal:  Gastroenterology       Date:  2020-04-10       Impact factor: 22.682

7.  Medical device surveillance with electronic health records.

Authors:  Alison Callahan; Jason A Fries; Christopher Ré; James I Huddleston; Nicholas J Giori; Scott Delp; Nigam H Shah
Journal:  NPJ Digit Med       Date:  2019-09-25

8.  Evidence of SARS-CoV-2 Infection in Returning Travelers from Wuhan, China.

Authors:  Sebastian Hoehl; Holger Rabenau; Annemarie Berger; Marhild Kortenbusch; Jindrich Cinatl; Denisa Bojkova; Pia Behrens; Boris Böddinghaus; Udo Götsch; Frank Naujoks; Peter Neumann; Joscha Schork; Petra Tiarks-Jungk; Antoni Walczok; Markus Eickmann; Maria J G T Vehreschild; Gerrit Kann; Timo Wolf; René Gottschalk; Sandra Ciesek
Journal:  N Engl J Med       Date:  2020-02-18       Impact factor: 91.245

9.  Estimated effectiveness of symptom and risk screening to prevent the spread of COVID-19.

Authors:  Katelyn Gostic; Ana Cr Gomez; Riley O Mummah; Adam J Kucharski; James O Lloyd-Smith
Journal:  Elife       Date:  2020-02-24       Impact factor: 8.140

10.  Clinical Predictors of Influenza in Young Children: The Limitations of "Influenza-Like Illness".

Authors:  Nicholas T Conway; Zoe V Wake; Peter C Richmond; David W Smith; Anthony D Keil; Simon Williams; Heath Kelly; Dale Carcione; Paul V Effler; Christopher C Blyth
Journal:  J Pediatric Infect Dis Soc       Date:  2012-09-03       Impact factor: 3.164
