Literature DB >> 36223335

Missing science: A scoping study of COVID-19 epidemiological data in the United States.

Rajiv Bhatia1, Isabella Sledge2, Stefan Baral3.   

Abstract

Systematic approaches to epidemiologic data collection are critical for informing pandemic responses: they provide information for the targeting and timing of mitigations, for judging the efficacy and efficiency of alternative response strategies, and for conducting real-world impact assessments. Here, we report on a scoping study to assess the completeness of epidemiological data available for COVID-19 pandemic management in the United States, enumerating authoritative US government estimates of parameters of infectious transmission, infection severity, and disease burden and characterizing the extent and scope of US public health affiliated epidemiological investigations published through November 2021. While we found authoritative estimates for most expected transmission and disease severity parameters, some were lacking, and others had significant uncertainties. Moreover, most transmission parameters were not validated domestically or re-assessed over the course of the pandemic. Publicly available disease surveillance measures did grow appreciably in scope and resolution over time; however, their resolution with regard to specific populations and exposure settings remained limited. We identified 283 published epidemiological reports authored by investigators affiliated with U.S. governmental public health entities. Most were descriptive studies. Published analytic studies did not appear to fully respond to knowledge gaps or to provide systematic evidence to support, evaluate, or tailor community mitigation strategies. The existence of epidemiological data gaps 18 months after the declaration of the COVID-19 pandemic underscores the need for more timely standardization of data collection practices and for anticipatory research priorities and protocols for emerging infectious disease epidemics.


Year:  2022        PMID: 36223335      PMCID: PMC9555641          DOI: 10.1371/journal.pone.0248793

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Efficient and effective pandemic control measures demand complete epidemiological data, including timely and precise parameters of infectious transmission, disease severity, and disease burden. Insufficient or poor-quality data on transmission mechanisms, setting- and activity-specific risks, and intervention benefits can reduce the effectiveness, efficiency, and equity of a public health response [1]. Prior to COVID-19, the US Centers for Disease Control and Prevention’s (US CDC) pandemic strategy outlined specific data requirements for managing epidemics caused by novel respiratory viruses [2, 3]. The influenza H1N1 pandemic provided an opportunity to apply and evaluate these essential data elements [4-7], and many calls for their timely production accompanied the onset of the COVID-19 pandemic [8-10] (Table 1).
Table 1

Epidemiological data required for managing emerging respiratory virus epidemics.

Measure | Definition | Information value | Typical source
Transmissibility
Basic reproductive number | The expected number of secondary cases directly generated by one case | Potential speed of epidemic growth | Calculated from contact rate, secondary infection risk, and infectious period, or from the growth rate of the early disease incidence curve
Growth rate | Change per unit time (acceleration or deceleration) of the incidence rate | Current trajectory of epidemic growth | Population disease monitoring
Susceptible population | Proportion of the population lacking natural or acquired immunity to the infection or to disease | Targeting control measures | Studies of immunity, such as seroprevalence of antibodies
Incubation period | Interval between infection and the development of symptoms | Timeframe for prevention of secondary infection | Transmission studies
Duration of infectiousness | Viral load and duration in symptomatic and asymptomatic people | Timeframe for prevention of secondary infection | Transmission studies
Serial interval | Interval between development of symptoms in a case and in an infected contact | Necessary for estimating R0 | Transmission studies
Pre-symptomatic transmission | Proportion of infections spread by persons who appear well but are infected and later develop symptoms | Timeframe for prevention of secondary infection | Transmission studies
Secondary infection risk (SIR; alternatively, secondary attack rate) | Proportion of exposed people who become ill in a setting (household, school, workplace) | Targeting control measures | Transmission studies
Infection Severity
Symptomatic fraction | Proportion of infected people who become symptomatically ill | Estimating disease burdens and adopting a proportional response | Household or contact-tracing transmission studies
Case hospitalization and fatality ratios | Ratio of identified cases to hospitalized and fatal cases | Estimating disease burdens and adopting a proportional response | Population surveillance; cohort studies; large transmission studies
Infection hospitalization and fatality ratios | Ratio of estimated infections to hospitalized and fatal infections | Estimating disease burdens and adopting a proportional response | Reported hospital data; death records
Severity risk factors | Demographic, clinical, occupational, social, and environmental risk factors affecting vulnerability to severe disease outcomes | Targeting prevention measures | Case-control studies; syndromic surveillance
Disease Burden
Incidence rate | Number of new cases of illness, hospitalization, or death in a population per unit time | Whether, and where, the disease is accelerating or slowing | Syndromic surveillance; serial prevalence studies; disease hospitalization rates; disease mortality rates
Point or period prevalence | Proportion of the population that is a current case at a point or period in time | Current level of active infection and transmission | Symptom- and test-based surveys
Community attack rate / cumulative incidence | Number of new cases of disease during a specified time interval | Population disease burden and remaining population susceptibility | Cohort studies; statistical estimation
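The secondary infection risk in Table 1 is, in principle, a simple proportion of exposed contacts who become ill, stratified by setting. A minimal sketch of that calculation, using an invented contact-tracing line list (the setting labels and tuple layout are illustrative assumptions, not any real surveillance format), might look like:

```python
from collections import defaultdict

def secondary_attack_rate(contacts):
    """Proportion of exposed contacts who became ill, grouped by setting.

    `contacts` is a list of (setting, became_ill) pairs from a hypothetical
    contact-tracing line list, with became_ill coded 0/1."""
    exposed = defaultdict(int)
    infected = defaultdict(int)
    for setting, became_ill in contacts:
        exposed[setting] += 1
        infected[setting] += became_ill
    return {s: infected[s] / exposed[s] for s in exposed}

# Toy line list: 4 of 10 household contacts infected, 1 of 20 workplace contacts.
line_list = [("household", 1)] * 4 + [("household", 0)] * 6 \
          + [("workplace", 1)] * 1 + [("workplace", 0)] * 19
print(secondary_attack_rate(line_list))
# → {'household': 0.4, 'workplace': 0.05}
```

In practice such estimates require transmission studies that enumerate all exposed contacts, not just detected cases, which is why the table lists transmission studies as the typical source.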
Key parameters of epidemic transmission include the incubation period (the time from infection to symptom onset), the clinical fraction, the generation interval (the time between a person becoming infected and subsequently infecting someone else), the infectious period (when and for how long an infected person can spread the illness), and the secondary attack rate (the risk of infection due to an infectious contact). Collectively, these parameters help predict the extent and intensity of epidemic transmission and help determine the feasibility and value of strategies for isolation and contact tracing [11, 12] as well as those for targeting groups or settings [13].

Measures of infection severity, such as infection-hospitalization and infection-fatality ratios, indicate the social impact of epidemic transmission and help calibrate the scope and scale of control measures. Early estimates of infection severity come from surveillance systems, while more reliable ones require prospective cohort studies [7]. Robust active and passive surveillance systems provide real-time monitoring of disease burden, including the incidence and prevalence of illness, the number of people hospitalized, and infection-related deaths. Optimally, such surveillance data are disaggregated by population subgroup, setting, severity, and patient characteristics to inform timely, targeted community mitigations as well as to allocate healthcare resources.

A scoping study aims to examine the extent, range, and nature of evidence or research activity in a particular field and is useful for identifying data gaps [14]. Here, we report on a scoping study of COVID-19 epidemiologic data and epidemiologic research relevant to pandemic management in the United States. Our aims were to assess the completeness of these data during the first two years of the COVID-19 pandemic along with the responsiveness of governmental public health research.
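The growth rate and generation interval jointly determine the reproductive number, which is one way Table 1's R0 is derived from the early incidence curve. As a hedged illustration, the Wallinga–Lipsitch moment approximation for a gamma-distributed generation interval can be sketched as follows; the numeric inputs are invented for illustration, not authoritative estimates:

```python
import math

def r0_from_growth_rate(r, gen_mean, gen_sd):
    """Wallinga-Lipsitch moment estimate of R0 from the exponential epidemic
    growth rate r (per day) and a gamma-distributed generation interval
    with the given mean and standard deviation (days)."""
    shape = (gen_mean / gen_sd) ** 2          # gamma shape parameter
    scale_term = r * gen_sd ** 2 / gen_mean   # r times the gamma scale
    return (1.0 + scale_term) ** shape

# Illustrative inputs: a 4-day doubling time and a generation interval
# of mean 6 days, SD 3 days.
r = math.log(2) / 4.0
print(round(r0_from_growth_rate(r, 6.0, 3.0), 2))
# → 2.52
```

A zero growth rate recovers R0 = 1 (a flat epidemic curve), a useful sanity check on the formula.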
We first document authoritative estimates of key transmission and disease severity parameters in the U.S. and characterize publicly available disease surveillance data. We then identify and characterize epidemiologic investigations informing these parameters conducted and reported through November 2021 by U.S. governmental public health entities. Based on our review, we identify gaps in knowledge and missed research opportunities that may have weakened the U.S. pandemic response.

Methods

We first examined the US CDC’s public website for U.S. government estimates of COVID-19 transmission and infection severity parameters as well as surveillance indicators of infection and disease burden. We used the Internet Archive Wayback Machine (https://archive.org/web/) to assess the evolution of these parameters at different time points during the pandemic. We characterized the scope of COVID-19 surveillance indicators of infection and disease burden, with regard to measures and their person, time, and place aggregation, on US CDC webpages at two timepoints: November 2020 and November 2021.

We next conducted a scoping review of observational epidemiology studies with outcomes related to COVID-19 infection transmission, severity, or disease burden that were conducted in U.S. settings, reported by authors with U.S. governmental public health affiliations, and published before November 30, 2021. The protocol for this scoping review adapted the Preferred Reporting Items for Systematic Reviews and Meta-analysis Protocols (PRISMA-P) (S1 Protocol). We did not register the protocol prospectively. The protocol provides details of our study eligibility criteria, search sources, search strategy, and data collection and management procedures. Briefly, we utilized PubMed to identify potentially eligible studies with our target outcomes. We screened each PubMed search result in duplicate, including the title, abstract, and all author affiliations, to identify potentially eligible studies, then read the full text of candidate studies to assess eligibility. We did not include studies reporting outcomes related to clinical management or pharmaceutical and vaccine interventions, nor did we include laboratory studies of viral biology, phylogenetic studies, or synthetic modeling exercises.
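The affiliation-based eligibility step described above can be pictured as a simple filter over screened records. The sketch below is purely illustrative: the record fields and the keyword pattern are assumptions for demonstration, not the authors' actual screening procedure (which was performed in duplicate by human reviewers):

```python
import re

# Hypothetical keyword pattern for US governmental public health affiliations.
GOV_PATTERN = re.compile(
    r"CDC|Centers for Disease Control|Department of Health|Public Health",
    re.IGNORECASE,
)

def screen(records):
    """Keep records where at least one author affiliation matches the pattern.

    Each record is a dict with 'title' and 'affiliations' keys (invented
    structure for this sketch)."""
    return [r for r in records
            if any(GOV_PATTERN.search(a) for a in r["affiliations"])]

candidates = [
    {"title": "Household transmission study",
     "affiliations": ["Georgia Department of Health"]},
    {"title": "Viral phylogenetics",
     "affiliations": ["University X"]},
]
print([r["title"] for r in screen(candidates)])
# → ['Household transmission study']
```

An automated filter like this could at most pre-sort candidates; full-text review would still be needed to confirm eligibility, as in the protocol.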
For included studies, we abstracted published information on authors’ governmental affiliations, study methods, data source, data period, study setting, study population, and analytic outcomes. We classified studies as descriptive or analytic based on their methods. We subcategorized descriptive studies as either case series or cluster investigations or as incidence studies, and analytic studies as cross-sectional, case-control, ecologic, prospective cohort, or retrospective cohort designs. We categorized each study’s primary data source as: passive or active surveillance program, administrative program records, medical or vital statistics records, serosurvey, questionnaire survey, or original field data. We noted the end of each study’s data collection period and classified its specific setting or sub-population. For descriptive incidence studies and all analytic studies, we assessed and noted whether the study reported any of the following outcomes: reproductive number or growth rate; secondary attack rate; incubation period; serial interval or generation time; symptomatic fraction; infection or case hospitalization ratio; infection, case, or hospital fatality ratio; incidence of infection, seroprevalence, case status, emergency department care, hospitalization, or death; predictors of infection incidence; and predictors of disease severity.

Results

We found authoritative estimates for infection transmission and severity parameters published on the US CDC’s Pandemic Planning Scenarios webpage, first published in May 2020 and subsequently updated three times (Table 2) [15-18]. Point estimates for most parameters were consistent over the period, although for some parameters, such as the clinical fraction, the confidence range remained wide. Most transmission parameter estimates were based on epidemiological studies conducted outside the U.S. using data collected in the first months of the pandemic. We did not find an authoritative estimate of the secondary attack rate for any setting. We also did not find applications of established US CDC pandemic risk assessment tools, such as the Pandemic Severity Assessment Framework (PSAF).
Table 2

US CDC estimates for selected COVID-19 infection transmission and severity parameters.

Measure | May 2020 | July 2020 | Sept 2020 | Mar 2021
Transmission Measures
Reproductive number | 2.0 (2.0–3.0) | 2.5 (2.0–4.0) | 2.5 (2.0–4.0) | 2.5 (2.0–4.0)
Susceptibility | NA | NA | NA | NA
Mean incubation period | 6 days | 6 days | 6 days | 6 days
Mean serial interval | 6 days | 6 days | 6 days | 6 days
Percentage of transmission occurring prior to symptom onset | 40% | 50% (35–70) | 50% (30–70) | 50% (30–70)
Percent of infections that are asymptomatic | 35% (20–50) | 40% (10–70) | 40% (10–70) | 30% (15–70)
Relative infectiousness of asymptomatic individuals | 100% (50–100) | 75% (20–100) | 75% (25–100) | 75% (25–100)
Secondary Attack Rate | Not Estimated | Not Estimated | Not Estimated | Not Estimated
Infection Severity Measures
Symptomatic Case Hospitalization Ratio | 0–49 y: 0.017; 50–64 y: 0.045; 65+ y: 0.074 | – | – | –
Symptomatic Case Fatality Ratio | 0–49 y: 0.0005; 50–64 y: 0.002; 65+ y: 0.013 | – | – | –
Infection fatality ratio (%) | – | 0.065 | 0–19 y: 0.003; 20–49 y: 0.02; 50–69 y: 0.5; 70+ y: 5.4 | 0–17 y: 0.002; 18–49 y: 0.05; 50–64 y: 0.6; 65+ y: 0.09
Hospital fatality ratio (%) | – | 18–49 y: 2; 50–64 y: 9.8; ≥65 y: 28 | 18–49 y: 2.4; 50–64 y: 10; ≥65 y: 26.6 | 0–17 y: 0.7; 18–49 y: 2.1; 50–64 y: 7.9; 65+ y: 18.8
The US CDC estimated case fatality and case hospitalization ratios in May 2020 but subsequently characterized disease severity measures as infection fatality and hospital fatality ratios disaggregated by age (Table 2). The infection fatality estimates relied on European data, while hospital fatality estimates relied on data from the US CDC COVID-NET active surveillance program. The US CDC began weekly reporting of national-level COVID-19 surveillance measures in April 2020 on a page titled ‘COVID View’ [19]. In 2020, surveillance measures reported on the COVID View page included emergency department visits for COVID-like illness (CLI), COVID-19 test-positive hospital admissions, and deaths from pneumonia, influenza, and COVID-19 [20] (Table 3). Data on hospital admissions came from a US CDC-led active hospital-based surveillance program implemented in 13 sub-state regions [21]. Mortality data came from the US National Vital Statistics System [22]. From August 2020, data on COVID-19 case and death incidence, disaggregated by age, sex, and race, were also reported separately on the COVID Data Tracker page [23].
Table 3

Scope of US CDC publicly reported COVID-19 disease surveillance data.

Surveillance Measure | Nov 2020 (National / State / County) | Nov 2021 (National / State / County)
Surveillance Case incidenceXXXXXX
SexXXXX
Age groupXXXX
Race/ ethnicityXXXX
Nursing home resident / staffXX
Health care personnelXX
Fraction of positive COVID-19 laboratory testsXXXXX
Share of emergency department visits for Influenza-like-illnessX
Share of emergency department visits for Covid-like-illnessX
Share of emergency department visits for confirmed COVID-19XX
COVID-19 test positive new hospital admissions rateXXXX
Age groupXXX
Mortality due to COVID-19XXXXX
SexXXXX
Age groupXXXX
Race/ ethnicityXXXX
Nursing home resident / staffXX
Health care personnelXX
Infection-induced antibody seroprevalenceXXX
SexX
Age groupX
Combined infection and vaccination-induced antibody seroprevalenceXX
SexX
Age groupX
Race/ ethnicityX
Estimated cumulative Incidence of infections, hospitalizations, and deaths
Infections by ageX
Symptomatic Infections by ageX
Hospitalizations by ageX
Deaths by ageX
As illustrated in Table 3, the scope and granularity of US CDC publicly reported surveillance data increased over time, and all data became accessible via the COVID Data Tracker page [24, 25]. National seroprevalence estimates from commercial laboratory sampling appeared after August 2020, and uniform national hospital admissions data appeared in December 2020. In 2021, the COVID Data Tracker page added data on vaccination effectiveness and included case and fatality rates for nursing home and health care personnel. On additional webpages, the US CDC published estimates of cumulative burdens of infection, symptomatic infection, hospitalization, and death [26] and estimates of pandemic-period excess deaths [27].

Our report identification strategy returned 4823 unique publications, of which 283 met our inclusion criteria after screening and review (Fig 1). We provide a table of the data abstracted from the included studies in supporting information (S1 Table).
Fig 1

PRISMA 2020 flow diagram for identification, screening, and inclusion of studies.

The largest share of reports (61%) was published in the US CDC publication MMWR (Table 4). Most reports (74%) had authors affiliated with a combination of US CDC and State or local public health entities although a substantial number (25%) had authors affiliated only with the US CDC. The majority (57%) of reports were published in 2020. Seventy percent of studies utilized data collected before October 2020 with the remainder utilizing data collected between October 2020 and November 2021.
Table 4

Epidemiologic studies on COVID-19 transmission, infection severity, and disease burden with US governmental public health author affiliations published through November 2021.

Category | Analytic Reports (2020 / 2021 / Total) | Descriptive Reports (2020 / 2021 / Total) | Overall Total
All Studies 475610311565180283
Public Health Affiliation
Federal13193224164072
Federal and State or Local2022427535110152
State or Local14152916143059
Publication
Clinical Infectious Diseases821042616
Emerging Infectious Diseases11516881632
MMWR2717448545130174
Other Journals11223318102861
Data Period
Jan–Mar 20207182823038
April–Sept 2020362864821597161
Oct 2020–July 2021424285424775
August–Nov 202133669
Study Design
Case-control2133
Case series, cluster, or outbreak338441125128
Cross-sectional2021412243
Ecologic1015251126
Incidence031205151
Prospective6288
Retrospective617231124
Principal Source of Study Data
Active surveillance program2351431722
Administrative data records5505
Original field data1662252257799
Medical records34732512
Passive surveillance program152035403272107
Serosurvey713201121
Questionnaire survey3364410
Vital statistics records1232247
Study Settings
Assisted living facility0222
Childcare facility0222
College or university1121679
Community314374633598172
Congregate living facilities (multiple)01122
Correctional or detention facility33741114
Gym or fitness facility11223
Healthcare facility41561712
Homeless Facility112335
Military facility11112
Other Workplace24642612
Primary or secondary school3316710
Skilled nursing facility4261631925
Social gathering or event0851313
Study Subpopulations
Armed Forces111123
Children and Adolescents19107121929
College students1121568
General population293766593796162
Healthcare Workers or First Responders4488816
Homeless Individuals or Facility Staff1124157
LCTF residents and staff4152032328
Other Occupations23562813
Pregnant women11223
Prison inmates and staff33741114
Study Outcomes
Secondary Attack Ratio6399
Serial Interval / Generation Time222
Growth Rate1122
Period Incidence Measures
Any Incidence Measure9162535276287
Infection Incidence222
Case Incidence2227164345
ED Visit Incidence112134
Hospitalization Incidence2245911
Mortality Incidence11741112
Seroprevalence812202222
Excess Deaths1231236
Predictors of Infection
Any Infection Predictor32346666
Age or Sex15142929
Race, Ethnicity, or Income10112121
Co-morbidity5388
Behavioral491313
Occupational7101717
Environmental222
Residential891717
Prior Infection111
Geospatial791616
Disease Severity Measures
Symptomatic Fraction4155
Case Fatality Ratio111
Infection Hospitalization Ratio111
Infection Fatality Ratio111
Hospital Fatality Ratio111
Predictors of Disease Severity
Any Severity Predictor10132323
Age or Sex471111
Race, Ethnicity, or Income461010
Co-morbidity881616
Behavioral111
SARS-CoV-2 Variant222
Geospatial1122
We categorized 180 of the 283 reports as descriptive studies [28-207]. One hundred and twenty-eight of the descriptive studies enumerated a series or cluster of confirmed COVID-19 cases in the general population or within a particular setting or sub-population, typically characterizing attack rates and the frequency of characteristics, exposures, and clinical outcomes among individual cases. Long-term care facilities, prisons, and social gatherings were the most common settings for descriptive reports. Congregate facility residents and staff and children and adolescents were the most frequently reported sub-populations. We characterized 51 of the 180 descriptive studies as incidence studies; these provided period-, population-, and geographically-specific estimates of case, disease, seroprevalence, ED visit, or mortality incidence.

We categorized 103 of the 283 reports as analytic [208-310]. Analytic studies most often applied cross-sectional (41/103), ecologic (25/103), or retrospective (23/103) designs. The most common data sources for analytic studies were passive surveillance systems (35/103), field-collected data (22/103), and seroprevalence surveys (20/103). Most analytic studies examined general population subjects (66/103) in community settings (74/103). Less commonly, analytic studies focused on children and adolescents (10/103), healthcare workers or first responders (8/103), workers in other occupations (5/103), and residents and staff of long-term care facilities (5/103). Nine analytic studies estimated the secondary attack rate [208, 223, 237, 235, 244, 248, 288, 297, 307]. Only one of these estimated the SAR for non-household contacts [307], and eight of the nine utilized data from the first six months of the pandemic. Two studies estimated the serial interval [273, 291]; both utilized state-level data from the first six months of the pandemic.
Two studies estimated the reproductive number utilizing early pandemic period data from one U.S. state [225, 291]. Twenty-five analytic studies provided estimates of period, population, and place incidence measures; twenty of these provided period estimates of antibody seroprevalence. Three analytic studies estimated excess mortality at various timepoints [231, 269, 287]. One of these found that, proportionally, excess mortality in 2020 was highest in minority populations and in persons aged 25–44 [269]. Five studies estimated the symptomatic fraction [210, 215, 243, 256, 306]; these utilized data from various settings (a military ship, schools, skilled nursing facilities, households, and a prison) and reported heterogeneous results. Only four analytic studies estimated other infection severity measures: one measured the 30-day probability of fatality for cases in one skilled nursing facility [243], one assessed the infection fatality ratio by ethnicity in New York State [218], and two reported an infection hospitalization ratio [261] or a hospital fatality ratio [296]. Many analytic studies (66/103) evaluated one or more predictors of having a confirmed COVID-19 infection. Specific predictors varied by study and included age or gender [210, 211, 215–217, 219, 222, 224, 226, 235, 237, 239, 242, 244, 253, 254, 257–259, 261, 266, 268, 274, 283–285, 292, 308, 309], race or ethnicity [211, 215, 217, 222, 224, 237, 239, 240, 241, 242, 253, 254, 257, 259, 266, 268, 283, 284, 285, 292, 309], co-morbidities [216, 224, 245, 257, 274, 283, 284, 297], behaviors such as masking or social distancing [210, 226, 234, 239, 257, 260, 262, 264, 265, 277, 294, 295], occupation, industry, or workplace factors [216, 224, 233, 239, 246, 257, 259, 270, 272, 274, 279, 282, 284, 285, 289, 299, 309], and housing status, including homelessness [271, 302] or living in shared spaces such as dormitories or detention facilities [236, 300].
Sixteen analytic studies reported on variation of infection incidence by geospatial characteristics, including the proportion of particular demographic groups in a community [221, 286, 305, 301, 310], work location [239], neighborhood deprivation/vulnerability levels or income [229, 230, 250, 283, 285], zip code education levels [292], and the correlation of school-related infection with community incidence rates [264]. Several studies of infection predictors examined risk factors within specific settings and sub-populations, including face coverings and distancing among occupants of a military ship [210]; ethnic composition and exposure risk factors among employees of industrial facilities [222, 241, 265, 270, 282], including one study that compared incidence before and after mitigation strategies such as masking and barriers in a meat processing facility [265]; shelter residence status among people experiencing homelessness [271, 302]; screening strategies and staffing levels in skilled nursing facilities [227, 228, 243, 247, 272]; housing type and athletic participation among college students [234, 300]; and community exposures and symptomatic contacts in children [242, 261, 264, 267, 277, 280, 283, 294, 295, 306]. Several studies used seroprevalence surveys or administrative data to examine infection risk factors among healthcare workers and first responders [224, 233, 239, 257, 259, 274, 289, 299]. One analytic study evaluated serial testing of healthcare workers in a skilled nursing facility [272].

Twenty-three analytic studies examined predictors of severe disease outcomes, most commonly age, sex, race/ethnicity, and co-morbidities. These studies consistently found older age and co-morbidities to be strong predictors of the need for hospitalization, ICU care, and mechanical ventilation and of death. Older age also predicted prolonged symptoms among non-hospitalized cases [214].
Two studies looked at the impact of variants on disease severity and hospitalizations [275, 298]. One study compared the risk for in-hospital complications for patients with COVID-19 relative to those with influenza [232].

Discussion

This scoping review aimed to assess the completeness of authoritative estimates of key epidemiologic data in the United States during the first two years of COVID-19 and the responsiveness of published U.S. governmental public health agency epidemiological research to pandemic knowledge needs. Overall, we found publicly available authoritative estimates for most expected transmission and disease severity parameters; however, some were lacking, and others had significant uncertainties. While official US CDC estimates of these parameters appeared consistent across time periods, we observed limited assessment of these parameters in US populations as well as a lack of re-assessment over the course of the pandemic.

Nationally standardized measures of infection and disease incidence published by the US CDC had limited resolution through most of 2020; by the end of 2021, however, the US CDC was disaggregating most surveillance indicators by county geography and by sex, age, and race/ethnicity. Resolution with regard to sub-populations (e.g., occupational groups, those with prior infection) and specific exposure settings (e.g., workplaces, congregate facilities) remained lacking.

Investigators affiliated with US government public health entities published a large volume of epidemiological reports. The majority, however, were descriptive studies such as cluster investigations or reports of period- and population-specific case incidence. Descriptive studies, while useful for hypothesis generation, do not provide sufficiently reliable or generalizable information for designing or evaluating mitigation strategies. Analytic studies published in this period were numerous but, collectively, did not address key knowledge gaps. We discuss these knowledge gaps and their significance further below.

Gaps in infectious transmission and disease severity data

Transmission parameters, including the incubation period, serial interval, clinical fraction, and the timing and duration of infectiousness, are valuable for predicting the pace and magnitude of epidemic growth and for establishing the feasibility and efficacy of isolation, quarantine, and contact tracing practices. Estimates of the symptomatic (clinical) fraction for COVID-19 had a wide confidence range, likely reflecting significant heterogeneity among study methods, settings, and populations [311-314]. Ongoing estimation of the clinical fraction utilizing long-term cohort studies might have informed understanding of evolving population immunity as well as the pathogenicity of COVID-19 variants.

We found no U.S. authoritative estimates of the secondary attack rate (SAR) for households or other community settings. International meta-analytic reviews did provide summary estimates of the SAR for household settings as well as for subpopulations within households [315], and these reviews included several US-based studies. Additional US studies estimating the SAR for household, community, business, transportation, and educational settings might have informed public understanding of comparative setting-specific transmission risks and might have focused attention on additional policy interventions, such as the provision of medical isolation housing.

Authoritative estimates of population susceptibility to COVID-19 (100% at the pandemic’s onset) did not change over the first 18 months of the pandemic. While the US CDC estimated that a cumulative 146 million COVID-19 infections had occurred in the US as of September 2021 [26], the duration and clinical significance of infection-derived immunity remained poorly characterized [316]. In addition, national publicly available surveillance data were not disaggregated by prior infection status.

Many studies assessed vulnerability factors for severe disease and hospitalization. However, authoritative estimates of the infection fatality ratio (IFR), based on data from European countries, may not have been generalizable to the US given country-level differences in infection ascertainment, co-morbidities, social vulnerability, and medical care. IFR estimates were also not re-evaluated over the course of the pandemic despite the rapid evolution of clinical therapeutics. Establishing large-scale community cohort studies in multiple regions might have supported ongoing assessment of the IFR as well as other infection severity parameters.
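The ascertainment concern can be made concrete: the infection fatality ratio is deaths divided by total infections, so any assumption about what fraction of infections are ascertained as cases shifts the estimate proportionally. A toy sketch (all counts invented, for illustration only):

```python
def infection_fatality_ratio(deaths, confirmed_cases, ascertainment):
    """IFR (%) under an assumed case-ascertainment fraction.

    Estimated infections = confirmed_cases / ascertainment, so halving
    the assumed ascertainment halves the resulting IFR."""
    infections = confirmed_cases / ascertainment
    return 100.0 * deaths / infections

# Invented counts: 1,000 deaths among 100,000 confirmed cases, under
# three assumed ascertainment fractions.
for frac in (1.0, 0.5, 0.25):
    print(frac, round(infection_fatality_ratio(1_000, 100_000, frac), 2))
# → 1.0 1.0 / 0.5 0.5 / 0.25 0.25 (IFR in %)
```

Because ascertainment varied across countries and over time, an IFR transported from one setting to another without re-estimation carries this proportional uncertainty with it.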

Gaps in infection and disease burden surveillance data

Further disaggregation of surveillance measures might have optimized the targeting and timing of community mitigations. Through the National Notifiable Diseases Surveillance System (NNDSS), the CDC accumulated tens of millions of surveillance case reports. Case report forms included data fields for exposure information on residence, occupation, and travel [317]. However, except for skilled nursing facilities and healthcare personnel, national surveillance data were not reported by exposure setting, exposure history, or occupation. Complete and consistent collection and reporting of the data elements in standard surveillance case reports might have improved understanding of the relative burden of infection across settings and modes of contact.

Geographically specific data on COVID-19 hospital admissions also lagged. Through most of 2020, the CDC published estimates of age-stratified hospital admission incidence only from active surveillance in 13 sub-state regions. The U.S. Centers for Medicare & Medicaid Services (CMS) first issued requirements for COVID-19 hospital admission reporting in July 2020, and standardized regional data on hospital admissions first became available in December 2020. Notably, case counts of laboratory-confirmed infection remained the dominant indicator of pandemic dynamics despite broad recognition that case counts underestimate infection incidence variably across population, place, and time [5, 318]. The pre-COVID US National Pandemic Strategy envisioned a transition from counting individual confirmed cases to monitoring illness rates (i.e., hospital admissions and syndromic surveillance) to track epidemic trends [2]. During the H1N1 pandemic, the US CDC discontinued state reporting of individual lab-confirmed cases two months after the initiation of the epidemic and initiated state reporting of total weekly H1N1 hospitalizations and deaths [4, 319].

Alignment between COVID-19 science and policy questions

Governments implemented novel and controversial policies to mitigate the COVID-19 pandemic, such as stay-at-home orders, school and business closures, and mask mandates. While "precautionary," these policies had little a priori high-quality supporting evidence [320]. Implementing novel policies raises difficult questions of societal trade-offs and demands timely real-world evaluation (Table 5). However, much of the research reported by US public health entities had little direct bearing on specific pandemic mitigation policies and practices.
Table 5

Examples of research questions relevant to US COVID-19 policy debates.

What is the optimum duration of isolation and quarantine?
Are isolation and quarantine effective mitigation strategies, given asymptomatic and pre-symptomatic transmission?
What is the relative share of disease attributable to different community settings (households, workplaces, retail, transport, health care, schools)?
How effective are masks and face coverings for preventing transmission in different settings?
Are safety measures in essential workplaces and public transport adequate to prevent occupational transmission?
What are health costs and benefits of closing schools?
How well does recovery from infection protect against subsequent infection and severe disease?
Research previously conducted during the H1N1 pandemic in the US, as well as examples of COVID-19 research conducted by non-governmental actors and peer countries, suggests that US public health research efforts might have done more to evaluate community mitigation policies. During the 2009 influenza H1N1 pandemic, for example, multiple transmission studies conducted by US CDC investigators contributed to timely estimates of the secondary attack rate (SAR) and its determinants, including age, setting, and timing [321, 322]. Systematic reviews provide another lens on the scope of US COVID-19 research contributions. A systematic review of mask effectiveness published in late 2020 included only one small U.S. study in health workers [323]. A meta-analysis of 61 studies on COVID-19 workplace prevention measures included 15 US studies, which were limited to healthcare and skilled nursing settings [324]. None of the 11 studies included in a December 2020 meta-analysis on transmission of COVID-19 by children in schools were set in the US [325]. A November 2021 review identified several large U.S. studies on infection-derived protective immunity; none had U.S. government affiliations [326]. Other countries appeared to better leverage public surveillance to systematically assess vulnerable population subgroups. Norwegian public health authorities estimated comparative infection risks by occupation across different phases of the pandemic, identifying health care, food service, transportation, childcare, and teaching as higher-risk occupations [327]. UK scientists similarly used published national statistics to estimate age-standardized COVID-19 mortality incidence by occupation, finding significantly higher mortality among taxi drivers, low-skilled occupations, and personal care workers [328].
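As a hypothetical illustration of the transmission studies discussed above, a household secondary attack rate with a simple normal-approximation (Wald) confidence interval can be computed as follows; the counts are invented for demonstration and do not come from any cited study.

```python
import math

def sar_with_ci(secondary_infections: int, exposed_contacts: int, z: float = 1.96):
    """Secondary attack rate (SAR) = secondary infections / exposed contacts,
    with a Wald (normal-approximation) 95% confidence interval."""
    p = secondary_infections / exposed_contacts
    se = math.sqrt(p * (1 - p) / exposed_contacts)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical household cohort: 53 secondary infections among 100 exposed contacts
sar, lo, hi = sar_with_ci(53, 100)
print(f"SAR = {sar:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Stratifying such estimates by age, setting, and period, as the cited H1N1-era CDC studies did, is what turns a single proportion into evidence usable for tailoring mitigation policy.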

Limitations

This scoping review has several limitations. We recognize that clinical, academic, and other private institutions in the U.S. also made substantial contributions to COVID-19 data and research. Nevertheless, we limited our review to governmental public health data and research for the following reasons: (1) state and federal public health agencies are the entities responsible for infectious disease surveillance, including collecting, compiling, cleaning, standardizing, and interpreting data; (2) only public agencies receive legislatively mandated communicable disease reports and have the authority to conduct disease investigations; and (3) authoritative public health data and guidance underpin public policy decisions. Our search strategy may have missed relevant published reports, and we did not consider pre-prints or unpublished agency analyses. Furthermore, we did not examine or judge data or study quality, including precision and bias.

Explanations for knowledge gaps

Understanding the causes of these observed knowledge gaps could support planning for future pandemics. In the case of COVID-19, the reasons likely include both institutional capacities (e.g., time, resources, data, methodological feasibility) and organizational priorities and leadership choices. Limited availability of timely, comprehensive, and standardized data may have been a contributing factor [329, 330]. In the US, local and state agencies have primary statutory responsibility for collecting and organizing infectious disease surveillance data. State statutes typically require health care providers or laboratories to report incident cases; public health investigators subsequently conduct case interviews to elaborate on the context of exposure. However, local and state public health agencies implement disease control activities using heterogeneous practices and with varying capacities [331]. The US CDC reported significant variation in the timeliness and quality of federal reporting by state authorities [332]. State-by-state online public reporting of COVID-19 data was also deficient [333]. Further, federal health oversight agencies were slow to require standardized disease data from hospitals and skilled nursing facilities. Failures to identify and communicate with contacts during disease control investigations also limited the utility of data that might have come from contact tracing. Surveys of health departments during the pandemic reported that case investigators conducted timely COVID-19 case interviews for only a fraction of incident cases, identifying and reaching fewer than one contact per case [334, 335]. While many epidemiologic questions about COVID-19 required only well-established research methods, the rapid and simultaneous implementation of numerous non-pharmaceutical interventions created methodological challenges for researchers.
In the case of school closures, for example, an international systematic review concluded that there was a lack of evidence of a significant protective effect, finding that the reviewed studies frequently suffered from unaddressed confounding and collinearity with other non-pharmaceutical interventions [336]. Institutional priorities and choices about the research agenda may also have played a role. It remains unclear whether research conducted by US public health investigators was systematically coordinated or directed. The CDC first publicly offered a set of COVID-19 research priorities in March 2021, one year after the start of the pandemic [337]. Greater transparency of data may have facilitated more timely corrective responses. The CDC has publicly acknowledged selectively releasing the data it has collected [338]. Some states also resisted calls for full public reporting of COVID-19 surveillance data [339].

Conclusions

In conclusion, over the first eighteen months of the COVID-19 pandemic, public health authorities in the U.S. appear to have lacked the complete and timely epidemiological data needed for optimal disease control responses. These gaps occurred despite pre-established pandemic data collection priorities and significant public resource commitments to the COVID-19 pandemic response. We observed, for example, delayed implementation of standardized national COVID-19 surveillance measures and limited validation and re-assessment of essential transmission and infection severity parameters. US public health scientists authored many original publications on COVID-19; however, most were descriptive, and few provided high-quality evidence to inform salient policy and management decisions. Moving forward, U.S. public health agencies should examine the reasons for these gaps and plan a timely, strategic, and prioritized national epidemiological data collection and research agenda for future rapidly emerging infectious disease epidemics. Improved data-driven responses may require national standards for disease control data collection and management, ready-to-use research protocols, and a greater commitment to public transparency.

Epidemiological data for COVID-19 pandemic management in the United States: A protocol for a scoping review.

(PDF)

Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist.

(PDF)

Abstracted data from reports meeting scoping review inclusion criteria.

(PDF)
PONE-D-21-07710
The Missing Science: Epidemiological data gaps for COVID-19 policy in the United States
PLOS ONE Dear Dr. Bhatia, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The reviewer recommends that you make substantial revisions to the manuscript. Please attend to all the concerns raised and return in the revised manuscript as advised in this letter. Please submit your revised manuscript by Dec 11 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Martin Chtolongo Simuunza, PhD Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1.Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf. 2. 
Please upload with your submission a completed copy of the PRISMA checklist (available from: https://prisma-statement.org/PRISMAStatement/Checklist) as Supplementary Information. 3. . Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: N/A ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? 
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: This is a much needed and important article. The public health data collection efforts in the United States have been incredibly fragmented. This, though, leads me to my main critiques of the paper. Overall The authors state in the conclusion that: “CDC scientists have the access to data, the expertise, and the resources to provide the data necessary for an optimal epidemic response.” This is actually not entirely true. Public health data collection in the United States is heavily fragmented. In fact, many states did not report or provide the CDC with their COVID-19 surveillance data. This was further exacerbated by the lack of data sharing between health systems and hospitals, as well as, between hospitals and departments of public health. I think that adding a more nuanced, in-depth discussion of why the CDC might not have updated domestic data (or why we still have COVID-19 data problems over a year later) could make this article stronger. Despite the July 2020 HHS requirement, data collection was in fact a bit of a mess and extremely cumbersome. Refer to (pg. 13 provides a few examples): https://oig.hhs.gov/oei/reports/OEI-09-21- 00140.pdf Additionally see: Galaitsi, S.E., Cegan, J.C., Volk, K., Joyner, M., Trump, B.D. and Linkov, I., 2021. The challenges of data usage for the United States’ COVID-19 response. 
International Journal of Information Management, 59, p.102352. Perhaps, the CDC should place more thought into how data is collected and who is in charge of establishing the back end for data collection. Alternatively, how do you deal with situations where state and federal requirements do not align but instead cause increased administrative burdens? If fact, it might explain a very well received point the authors make: “The content of their [the CDC’s] investigations raises questions about whether and how an explicit national research agenda guided CDC epidemiological endeavors…” as well as, “Regional differences in prevalence could explain part of the variation in the pace of infectious spread.” Politics and federalism clearly play/played a role in data collection and reporting (Florida and Nebraska anyone?), as has been discussed in multiple published articles this last year. For example, see: Rocco, P., Rich, J.A., Klasa, K., Dubin, K.A. and Béland, D., 2021. Who Counts Where? COVID-19 Surveillance in Federal Countries. Journal of Health Politics, Policy and Law. Additionally, the authors could suggest (and cite existing literature) about how to best go about data collection for the specific parameters that they identify. For example, are there essential occupations that we know have had more exposure risk or that peerreviewed articles have shown to have higher transmission rates? Is there perhaps a region, state, or city that was a stellar example of data collection and data-informed policy decisions? What did they do correct? I think that more thought should be placed on who is collecting the data (or should be in charge of collecting the data). The authors infer that this is the CDC, but what challenges might the CDC face in collecting data? And if it is inevitably going to face difficulties then how can we make data collection most effective (and ensure health equity)? What does the evidence say that worked this last year and what didn’t work? 
Right now, this reads like a giant list of wants in a perfect world. But, these data might not all be possible (or feasible or cost effective) to collect. Discussion (general) I think that the authors could cite existing studies in peer-reviewed literature that do exist whenever they mention that “x” doesn’t exist or “y” doesn’t exist. I realize that the authors limited their search to only studies with CDC affiliated authors, but I think that it is important to emphasize that local (city, county) and state efforts did exist that tried to figure this data out because the CDC was very slow to provide good data throughout 2020. There were studies conducted on infectious transmission, disease severity, burden of infection/disease in smaller settings since COVID-19 began (just not with CDC affiliated investigators). It might be important to cite a few of these throughout the discussion and note that the CDC might have determined that the few US studies that do exist were too small for them to consider. There were some universities that partnered with state or local departments of public health that did some good work. Infectious Transmission Section • Could you cite a few studies that do exist that tried to quantify infectious transmission? It would be useful to reference what the ‘best available” study that exists is (for example, ‘no identified transmission studies have quantitatively examined….then what are the qualitative studies that exist?) • Are the restaurant restrictions not because of the believe that the virus is airborne? • Would state-to-state variability in policy matter at the community level as much as county-to-county variation? • Many employers do not report COVID-19 infections. • Contact tracing will require community trust – even in places with robust contact tracing efforts, follow-up was often difficult or challenging (especially in lowincome or immigrant communities) • Some household transmission studies were conducted. 
For example, Lewis, N.M., Chu, V.T., Ye, D., Conners, E.E., Gharpure, R., Laws, R.L., Reses, H.E., Freeman, B.D., Fajans, M., Rabold, E.M. and Dawson, P., 2020. Household transmission of SARS-CoV-2 in the United States. Clinical infectious diseases: an official publication of the Infectious Diseases Society of America. Disease Severity Section • In the second sentence, did you mean “county to country” differences or “county to county” differences or “country to country” differences? Minor Edits Last, there are some minor edits that need to be made: • There are a few grammatical errors throughout that shouldn’t be too difficult to fix after another read through • Citations need to be fixed and standardized • Make sure to add updated citations and engage with more recent literature ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. 
If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. Submitted filename: PONE-D-21-07710_Reviewer_Comments.pdf Click here for additional data file. 8 Jul 2022 Authors’ Response to Reviewer 1 C1: The authors state in the conclusion that: “CDC scientists have the access to data, the expertise, and the resources to provide the data necessary for an optimal epidemic response.”This is actually not entirely true. Public health data collection in the United States is heavily fragmented. In fact, many states did not report or provide the CDC with their COVID-19 surveillance data. This was further exacerbated by the lack of data sharing between health systems and hospitals, as well as, between hospitals and departments of public health. I think that adding a more nuanced, in-depth discussion of why the CDC might not have updated domestic data (or why we still have COVID-19 data problems over a year later) could make this article stronger. Despite the July 2020 HHS requirement, data collection was in fact a bit of a mess and extremely cumbersome. Refer to (pg. 13 provides a few examples): https://oig.hhs.gov/oei/reports/OEI-09-21- 00140.pdf Additionally see: Galaitsi, S.E., Cegan, J.C., Volk, K., Joyner, M., Trump, B.D. and Linkov, I., 2021. The challenges of data usage for the United States’ COVID-19 response. International Journal of Information Management, 59, p.102352. Perhaps, the CDC should place more thought into how data is collected and who is in charge of establishing the back end for data collection. Alternatively, how do you deal with situations where state and federal requirements do not align but instead cause increased administrative burdens? 
If fact, it might explain a very well received point the authors make: “The content of their [the CDC’s] investigations raises questions about whether and how an explicit national research agenda guided CDC epidemiological endeavors…” as well as, “Regional differences in prevalence could explain part of the variation in the pace of infectious spread.” Politics and federalism clearly play/played a role in data collection and reporting (Florida and Nebraska anyone?), as has been discussed in multiple published articles this last year. For example, see: Rocco, P., Rich, J.A., Klasa, K., Dubin, K.A. and Béland, D., 2021. Who Counts Where? COVID-19 Surveillance in Federal Countries. Journal of Health Politics, Policy and Law. Additionally, the authors could suggest (and cite existing literature) about how to best go about data collection for the specific parameters that they identify. For example, are there essential occupations that we know have had more exposure risk or that peer reviewed articles have shown to have higher transmission rates? Is there perhaps a region, state, or city that was a stellar example of data collection and data-informed policy decisions? What did they do correct? I think that more thought should be placed on who is collecting the data (or should be in charge of collecting the data). The authors infer that this is the CDC, but what challenges might the CDC face in collecting data? And if it is inevitably going to face difficulties then how can we make data collection most effective (and ensure health equity)? What does the evidence say that worked this last year and what didn’t work? Right now, this reads like a giant list of wants in a perfect world. But, these data might not all be possible (or feasible or cost effective) to collect. 
R1: We have added a new sub-section in the discussion to discuss possible explanations for the gaps we observed in public health agency surveillance data and original investigations dependent on that data which includes discussion of the lack of standardized surveillance data collection and reporting protocols. We note, however, that many epidemiological investigations utilized data sources that not derived from public disease surveillance systems. And many of the research gaps we identify might have been addressed with the application of well-established method (e.g. community cohort studies). C2: I think that the authors could cite existing studies in peer-reviewed literature that do exist whenever they mention that “x” doesn’t exist or “y” doesn’t exist. I realize that the authors limited their search to only studies with CDC affiliated authors, but I think that it is important to emphasize that local (city, county) and state efforts did exist that tried to figure this data out because the CDC was very slow to provide good data throughout 2020. There were studies conducted on infectious transmission, disease severity, burden of infection/disease in smaller settings since COVID-19 began (just not with CDC affiliated investigators). It might be important to cite a few of these throughout the discussion and note that the CDC might have determined that the few US studies that do exist were too small for them to consider. There were some universities that partnered with state or local departments of public health that did some good work. R2: We concur. We expanded the scope of our review to include all published reports affiliated either with the US CDC or with any US state or local public health agencies. Most included studies were conducted and authored by collaborations of federal, state, and local governmental public health actors. We also added a new sub-section in the discussion that describes the contributions of U.S. 
public health affiliated publications to issue-specific COVID-19 epidemiological reviews. C3: Could you cite a few studies that do exist that tried to quantify infectious transmission? It would be useful to reference what the ‘best available” study that exists is (for example, ‘no identified transmission studies have quantitatively examined….then what are the qualitative studies that exist?) R3. We included several U.S. public health affiliated studies that quantified the household secondary attack rate in the revision. We referenced a review of studies of the household secondary attack rate that demonstrates the international scope of research on the topic. C4: Are the restaurant restrictions not because of the believe that the virus is airborne? R4: We removed the mention on restaurant restrictions specifically from the revised manuscript. We note that restrictions on in-person restaurant dining occurred before there existed either public health agency acknowledgement of airborne transmission. C5: Would state-to-state variability in policy matter at the community level as much as county-to-county variation? R5: We agree that studying the variation local policy would be optimal. C6: Many employers do not report COVID-19 infections. R6: We agree. However, health care facilities and laboratory providers are subject to statutory requirements report cases under U.S. law. Employers do not have this statutory responsibility nor the ability, in most cases, the ability to assess COVID-19 infection status among their employees. However, case surveillance reports submitted by healthcare and utilized by public health agencies do include fields for occupation, industry, and work location. We acknowledge that morbidity report forms submitted to State and Federal public health agencies varied in completeness. 
C7: Contact tracing will require community trust – even in places with robust contact tracing efforts, follow-up was often difficult or challenging (especially in low income or immigrant communities) R7: We agree. In the revision, we have noted the challenges public health agencies faced in gaining participation from cases and contacts during disease investigations. C8: Some household transmission studies were conducted. For example, Lewis, N.M., Chu, V.T., Ye, D., Conners, E.E., Gharpure, R., Laws, R.L., Reses, H.E., Freeman, B.D., Fajans, M., Rabold, E.M. and Dawson, P., 2020. Household transmission of SARS-CoV-2 in the United States. Clinical infectious diseases: an official publication of the Infectious Diseases Society of America. R8: The above study by Lewis et al. was included in our initial review. We include several US additional household transmission studies in the updated review. C9: In the second sentence, did you mean “county to country” differences or “county to county” differences or “country to country” differences? R9: We have revised the sentence to clarify. C10: There are a few grammatical errors throughout that shouldn’t be too difficult to fix after another read through R10: Noted. C11: References are in non-standard formats. R11: All References are now standardized to NLM formats C12: Make sure to add updated citations and engage with more recent literature R12: The revision extends the review of authoritative estimates and published public health affiliated reports through Nov 30, 2021. Submitted filename: covid-scoping-review-reviewer1-response.docx Click here for additional data file. 12 Sep 2022 Missing Science: A Scoping Study Of COVID-19 Epidemiological Data in the United States PONE-D-21-07710R1 Dear Dr. Bhatia, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. 
Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Martin Chtolongo Simuunza, PhD Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. 
The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you so much for letting me review this paper. I can tell that a huge amount of effort and work went into your revisions. Table 3 and Table 4 are excellent! Table 5 could be a table or a figure (if a figure is easier to make with just the list of questions in a box or something like that).
The paper is excellent, informative, and a great contribution to public health research! I do not think that any more substantive edits need to be made. Fantastic job :) If there is time, there are only a few minor grammar edits to be made throughout. A quick read-through with fresh eyes should catch all of them. For example:

Page 6, paragraph 2: "...Our aims were to assess the completeness of this data during the first two years of the pandemic-19..." I believe you wanted to say "COVID-19 pandemic" or "pandemic caused by the SARS-CoV-2 virus".

Page 25, paragraph 3: "Furthermore, also did not examine or judge..." I believe you wanted to say "we also did not examine...".

Page 26, paragraph 3: "While many epidemiologic questions COVID-19 required..." I believe you wanted to say "While many epidemiologic questions about COVID-19...".

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Submitted filename: PONE-D-21-07710_Reviewer_Comments_2.pdf

29 Sep 2022

PONE-D-21-07710R1
Missing Science: A Scoping Study of COVID-19 Epidemiological Data in the United States

Dear Dr. Bhatia:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours.
Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Martin Chtolongo Simuunza
Academic Editor
PLOS ONE
319 in total

1.  Shelter Characteristics, Infection Prevention Practices, and Universal Testing for SARS-CoV-2 at Homeless Shelters in 7 US Urban Areas.

Authors:  Julie L Self; Martha P Montgomery; Karrie-Ann Toews; Elizabeth A Samuels; Elizabeth Imbert; Temet M McMichael; Grace E Marx; Cortland Lohff; Tom Andrews; Isaac Ghinai; Emily Mosites
Journal:  Am J Public Health       Date:  2021-03-18       Impact factor: 9.308

2.  Industry Sectors Highly Affected by Worksite Outbreaks of Coronavirus Disease, Los Angeles County, California, USA, March 19-September 30, 2020.

Authors:  Zuelma Contreras; Van Ngo; Marifi Pulido; Faith Washburn; Gayane Meschyan; Fruma Gluck; Karen Kuguru; Roshan Reporter; Condessa Curley; Rachel Civen; Dawn Terashita; Sharon Balter; Umme-Aiman Halai
Journal:  Emerg Infect Dis       Date:  2021-05-12       Impact factor: 6.883

3.  Risk Factors for Coronavirus Disease 2019 (COVID-19)-Associated Hospitalization: COVID-19-Associated Hospitalization Surveillance Network and Behavioral Risk Factor Surveillance System.

Authors:  Jean Y Ko; Melissa L Danielson; Machell Town; Gordana Derado; Kurt J Greenlund; Pam Daily Kirley; Nisha B Alden; Kimberly Yousey-Hindes; Evan J Anderson; Patricia A Ryan; Sue Kim; Ruth Lynfield; Salina M Torres; Grant R Barney; Nancy M Bennett; Melissa Sutton; H Keipp Talbot; Mary Hill; Aron J Hall; Alicia M Fry; Shikha Garg; Lindsay Kim
Journal:  Clin Infect Dis       Date:  2021-06-01       Impact factor: 9.079

4.  Antibody Responses after Classroom Exposure to Teacher with Coronavirus Disease, March 2020.

Authors:  Nicole E Brown; Jonathan Bryant-Genevier; Uptala Bandy; Carol A Browning; Abby L Berns; Mary Dott; Michael Gosciminski; Sandra N Lester; Ruth Link-Gelles; Daniela N Quilliam; James Sejvar; Natalie J Thornburg; Bernard J Wolff; John Watson
Journal:  Emerg Infect Dis       Date:  2020-06-29       Impact factor: 6.883

5.  Body Mass Index and Risk for COVID-19-Related Hospitalization, Intensive Care Unit Admission, Invasive Mechanical Ventilation, and Death - United States, March-December 2020.

Authors:  Lyudmyla Kompaniyets; Alyson B Goodman; Brook Belay; David S Freedman; Marissa S Sucosky; Samantha J Lange; Adi V Gundlapalli; Tegan K Boehmer; Heidi M Blanck
Journal:  MMWR Morb Mortal Wkly Rep       Date:  2021-03-12       Impact factor: 17.586

6.  Linked Clusters of SARS-CoV-2 Variant B.1.351 - Maryland, January-February 2021.

Authors:  Kenneth A Feder; Marcia Pearlowitz; Alexandra Goode; Monique Duwell; Thelonious W Williams; Ping An Chen-Carrington; Ami Patel; Catherine Dominguez; Eric N Keller; Liore Klein; Alessandra Rivera-Colon; Heba H Mostafa; C Paul Morris; Neil Patel; Anna M Schauer; Robert Myers; David Blythe; Katherine A Feldman
Journal:  MMWR Morb Mortal Wkly Rep       Date:  2021-04-30       Impact factor: 17.586

7.  Update: Characteristics of Symptomatic Women of Reproductive Age with Laboratory-Confirmed SARS-CoV-2 Infection by Pregnancy Status - United States, January 22-October 3, 2020.

Authors:  Laura D Zambrano; Sascha Ellington; Penelope Strid; Romeo R Galang; Titilope Oduyebo; Van T Tong; Kate R Woodworth; John F Nahabedian; Eduardo Azziz-Baumgartner; Suzanne M Gilboa; Dana Meaney-Delman
Journal:  MMWR Morb Mortal Wkly Rep       Date:  2020-11-06       Impact factor: 17.586

8.  An Outbreak of COVID-19 Associated with a Recreational Hockey Game - Florida, June 2020.

Authors:  David Atrubin; Michael Wiese; Becky Bohinc
Journal:  MMWR Morb Mortal Wkly Rep       Date:  2020-10-16       Impact factor: 17.586

9.  Mitigating a COVID-19 Outbreak Among Major League Baseball Players - United States, 2020.

Authors:  Meghan T Murray; Margaret A Riggs; David M Engelthaler; Caroline Johnson; Sharon Watkins; Allison Longenberger; David M Brett-Major; John Lowe; M Jana Broadhurst; Chandresh N Ladva; Julie M Villanueva; Adam MacNeil; Shoukat Qari; Hannah L Kirking; Michael Cherry; Ali S Khan
Journal:  MMWR Morb Mortal Wkly Rep       Date:  2020-10-23       Impact factor: 17.586

