
Effects of trust, risk perception, and health behavior on COVID-19 disease burden: Evidence from a multi-state US survey.

Benjamin J Ridenhour, Dilshani Sarathchandra, Erich Seamon, Helen Brown, Fok-Yan Leung, Maureen Johnson-Leon, Mohamed Megheib, Craig R Miller, Jennifer Johnson-Leung.

Abstract

Early public health strategies to prevent the spread of COVID-19 in the United States relied on non-pharmaceutical interventions (NPIs) as vaccines and therapeutic treatments were not yet available. Implementation of NPIs, primarily social distancing and mask wearing, varied widely between communities within the US due to variable government mandates, as well as differences in attitudes and opinions. To understand the interplay of trust, risk perception, behavioral intention, and disease burden, we developed a survey instrument to study attitudes concerning COVID-19 and pandemic behavioral change in three states: Idaho, Texas, and Vermont. We designed our survey (n = 1034) to detect whether these relationships were significantly different in rural populations. The best fitting structural equation models show that trust indirectly affects protective pandemic behaviors via health and economic risk perception. We explore two different variations of this social cognitive model: the first assumes behavioral intention affects future disease burden while the second assumes that observed disease burden affects behavioral intention. In our models we include several exogenous variables to control for demographic and geographic effects. Notably, political ideology is the only exogenous variable which significantly affects all aspects of the social cognitive model (trust, risk perception, and behavioral intention). While there is a direct negative effect associated with rurality on disease burden, likely due to the protective effect of low population density in the early pandemic waves, we found a marginally significant, positive, indirect effect of rurality on disease burden via decreased trust (p = 0.095). This trust deficit creates additional vulnerabilities to COVID-19 in rural communities which also have reduced healthcare capacity. 
Increasing trust by methods such as in-group messaging could potentially remove some of the disparities inferred by our models and increase NPI effectiveness.

Year: 2022    PMID: 35594254    PMCID: PMC9122183    DOI: 10.1371/journal.pone.0268302

Source DB: PubMed    Journal: PLoS One    ISSN: 1932-6203    Impact factor: 3.752


Introduction

In response to evidence of community spread of COVID-19 [1], the United States (US) began providing guidance and implementing various mitigation policies to reduce disease transmission in March 2020. These mitigation strategies relied on non-pharmaceutical interventions (NPIs) such as mask wearing and social distancing. State and local governments canceled events, issued stay-at-home orders, and mandated the closure of nonessential businesses. Though the type, timing, and duration of the orders varied greatly between jurisdictions [2, 3], all of these public health orders called for behavioral changes and restrictions on personal movement, gatherings, and business activity. To properly assess the potential effectiveness of NPIs, decision makers must take human behavior into account. Voluntary compliance with public health guidance and orders is affected by demographic factors, cognitive constructs, and social constructs [4]. Health behavior theory and risk behavior models [5-7] characterize the demographic factors related to risk perception and health protective behavior. The perception that viruses pose a serious threat, and that one is susceptible to this threat, is the most likely predictor of adoption of, and compliance with, NPIs. Other cognitive constructs, namely perceived severity, perceived susceptibility, and belief in the benefits of adopted behaviors, are all associated with reduced COVID-19 risk behaviors and increased health protective behavior [8, 9]. NPI adoption is further complicated by the fact that behaviors are often shaped by perceptions of what others are doing, by in-group approval, and by desires to protect one's community [10, 11]. Political identity in particular is one factor that can lead to out-group distrust [12]. Affective polarization [13] extends beyond issue-based disagreement to an identity-based comparison between in-groups and out-groups, which exacerbates dislike and distrust of those outside the in-group [13-15].
When public health institutions are considered part of the out-group, this can result in non-adoption of preventative behaviors [16]. Rural Americans face increased risk of severe illness and death from COVID-19 due to health disparities, health care shortages, and social inequities [17, 18]. On average, rural Americans are older, are more likely to live in poverty, have higher rates of chronic disease and disability, and are less likely to be insured than urban dwellers [19, 20]. Studies have consistently shown less compliance with NPIs in rural areas, particularly among rural Americans identifying as conservative; these associations were weaker among older rural individuals [17, 18]. The lack of healthcare resources due to hospital closures, limited numbers of health professionals, and low critical-care capacity in rural communities poses an additional risk in the face of a surge of patients with COVID-19 [21]. In this study, we use a survey instrument distributed in three socially and demographically diverse US states (Idaho, Texas, and Vermont) during October and November 2020 to examine the differences among rural and urban Americans in their attitudes towards, and uptake of, NPIs. To advance health behavior theory, we tested various causal relationships between trust in public health guidance, health and economic risk perception, and resistance to pandemic behavioral change using structural equation modeling. Secondarily, we also explore the relationship between disease burden and behavior with models of our survey data. From the best-supported models, we determine how rurality—along with other exogenous variables such as political ideology—factors into behavior during the early portion of the COVID-19 pandemic. We emphasize that our model is not an attempt to produce the best predictive model of COVID-19 burden, a task that has been addressed with other, better-suited methods.
Rather, we wish to determine a model of human behavior that could augment such models and increase their value to public health officials. Our work is important and novel because it incorporates human attitudes, perceptions, and behavioral intention into infectious disease models, which extends our ability to predict expected differences in disease outcomes across the United States.

Materials and methods

Survey development and data collection

Data for this research come from a sequential mixed-mode survey distributed to a disproportionate stratified sample of households in Idaho, Texas, and Vermont. The specific survey design, employing both an online and a paper survey option, as well as English and Spanish translations, was selected in order to reach communities that are typically harder to reach via online surveys (for example, rural and elderly populations, individuals who lack access to reliable internet connections, and non-English speakers) [22]. Following standard survey design principles, survey development included several steps: pre-testing, field testing, pilot testing, and validation [23, 24]. We first pre-tested the survey with a convenience sample of college students (n = 55) recruited from the University of Idaho via the online survey platform Qualtrics (Provo, UT, USA). Pre-testing enabled us to measure pertinent factors such as time for completion, satisfaction, and level of difficulty. Subsequently, we field tested the survey questionnaire by sharing it with 10 state and regional public health experts and one community-based organization serving Hispanic populations in Idaho (Community Council of Idaho). This organization helped us to verify the Spanish translation and determine its cultural resonance. Feedback from these experts was used to revise and refine the survey questions to help ensure their validity and reliability. Lastly, we pilot tested the survey using Qualtrics by distributing the survey to 50 respondents each from ID, TX, and VT (n = 150) between August and September 2020. For each state, we obtained equal proportions of rural and urban/suburban respondents. We conducted consistency analysis using the pilot data and examined other factors such as time for completion and any anomalous response patterns.
The finalized survey covers topics including: worry about COVID-19, social distancing, mask wearing, economic impacts, contact tracing, vaccination intention, trust, information sources, and demography. Our questions are theory driven, tapping into constructs from common health behavior theories such as Social Cognitive Theory [25] and the Health Belief Model [26]. We also rely on CDC’s Behavioral Risk Factor Surveillance System (BRFSS) and other published survey studies, e.g., Jamieson and Albarracín [27], to determine the consistency and validity of survey questions. Our disproportionate stratified sample purchased from Dynata (Shelton, CT, USA) consists of 2000 rural and 2000 urban or suburban addresses from each of Idaho, Texas, and Vermont (12000 in total). Dynata classifies addresses as rural if they fall outside of a metropolitan statistical area (MSA) as defined by the US Office of Management and Budget. We employed the services of Washington State University’s Social & Economic Sciences Research Center to distribute the survey. All household addresses within the sampling frame were sent an initial invitation letter—which included a $1 USD incentive—directing respondents to a URL where they were asked to enter their unique response ID and complete the survey online. Non-respondents were sent a reminder postcard one week later, and two weeks after that a final reminder letter was mailed. For those preferring the paper option, we provided a phone number and an email address through which a paper copy of the survey could be requested. Online survey data collection occurred during October and November 2020. Requested paper surveys were mailed in mid-November, and data collection was completed in December 2020. Our survey questionnaire (S1 Appendix) was approved by the University of Idaho Institutional Review Board (IRB #20–119).
This study was deemed exempt from full review by the IRB as it includes a voluntary survey data collection of adults over the age of 18. Informed consent was obtained from all survey participants. Consent was documented by online survey participants reading the consent form and voluntarily clicking a button to proceed to the full survey. The participants who took the paper survey read a consent form and voluntarily mailed back their completed surveys. This study does not include any retrospective medical records or archived samples.

Measurements

Our demographic variables comprise direct measures of five attributes. Political ideology is coded as an unordered factor with levels: liberal, moderate, conservative, libertarian, non-political, and other; moderate is designated as the reference level for statistical analyses. The remaining measures are recorded as Boolean variables measuring race (white = 1), gender (female = 1), age (over 64 years = 1), and geography (rural = 1). See S3.1 Table in S3 Appendix for a detailed breakdown of demographic characteristics. Except for geography, which is determined by our de-identified address-based survey sample, all demographic variables are self-reported. Rural/urban designations for each response are determined based on the United States Department of Agriculture’s (USDA) rural-urban commuting area codes (RUCA), which classify US census tracts based on population density, urbanization, and daily commuting distance [28]. While RUCAs utilize a metropolitan/micropolitan approach similar to that used in the Office of Management and Budget (OMB) classification of metropolitan statistical areas (MSAs), the use of census tracts in RUCA assignment provides a more detailed geographic structure for urban and rural delineation [29]. For our analyses, we geographically mapped all survey respondents (n = 1034) for all three states (ID, TX, and VT) and assigned RUCA codes based on the respondents’ de-identified addresses. We used ArcGIS software from Environmental Systems Research Institute, Inc. (ESRI; Redlands, CA, USA) to perform this spatial association. We then designated respondents whose RUCA primary code was 1, 2, 3, or 4 as urban, and all other RUCA codes as rural (see S3.2 Table in S3 Appendix for a full list of all RUCA code designations). This stricter classification, as opposed to the MSA classification used in the sampling frame, ensures that rural-classified responses reflect rural attitudes and experience [29].
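The urban/rural recoding described above reduces to a simple threshold on the RUCA primary code; a minimal sketch (the function name and input check are ours, not from the paper):

```python
def classify_ruca(primary_code: int) -> str:
    """Map a RUCA primary code (1-10) to the study's urban/rural designation.

    Codes 1-4 are designated urban; all other codes are rural.
    """
    if not 1 <= primary_code <= 10:
        raise ValueError(f"invalid RUCA primary code: {primary_code}")
    return "urban" if primary_code <= 4 else "rural"

# Example: a respondent in a census tract with RUCA code 2 is urban,
# while one in a tract with code 7 is rural.
print(classify_ruca(2))   # urban
print(classify_ruca(7))   # rural
```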
We use two different measures of disease burden. For models where behavioral intention is hypothesized to affect disease burden, we consider cumulative cases per 100 people from the beginning of the pandemic in January 2020 through 30 April 2021, at the county level, as reported by the New York Times [30]; county-level data represent the finest spatial scale available for use in the study region (e.g., city-level data are not available). Choosing a date after the survey period enables observation of delayed consequences of behavior on disease burden. The chosen sample date captures the main wave of the pandemic in the US prior to widespread availability of the vaccines. Exploratory analyses showed that the exact choice of date has little-to-no effect on model results, which is to be expected given the auto-correlative structure of spatiotemporally-distributed cumulative disease data. For models where disease burden is hypothesized to affect behavioral intention, we use the cumulative case count from January 2020 to the recorded response date of the observation. Using this measure is consistent with the idea that previous, personal experience with the pandemic is shaping behavior. For all models, each respondent is assigned disease data corresponding to the county of the sampled de-identified address. We explored using other measures of burden, in particular the number of COVID-19 deaths reported. Death data are highly correlated with case data, which produced model convergence issues (singularities) if both measures were used simultaneously. Use of death data alone produced only slight changes in our model results, so those results are not presented herein. Because much of our sample comes from small rural populations, death data are sparse due to increased stochasticity; this sparsity reduces statistical power, and we therefore opted to use cumulative case counts.
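The near-singularity that arises when cases and deaths enter a model together can be illustrated with a stdlib-only correlation check; the county counts below are invented for illustration and are not the study data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical county-level counts: deaths track cases almost proportionally,
# so using both as predictors yields a nearly singular model matrix.
cases = [120, 450, 980, 2300, 5100, 8700]
deaths = [2, 7, 15, 38, 80, 140]
print(pearson_r(cases, deaths) > 0.99)   # True: nearly collinear
```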

Statistical models

We use structural equation modeling (SEM) to test six different hypotheses regarding potentially causal pathways between trust, two types of risk perception, behavioral intention, and disease burden. SEM is a flexible modeling method that allows random observational error to be decoupled from model error and allows different causal pathways to be compared. Our SEM application utilizes the lavaan package [31], which integrates factor analysis to define the latent variables with systems of simultaneous linear regression equations. Latent variables are those for which there is no direct measurement; rather, these variables are inferred indirectly via indicators. All analyses were performed in R v4.1.0 (see S2 Appendix for R code). The four central latent variables in our attitudinal framework are inferred as follows. “Health risk perception” and “economic risk perception” are derived from survey questions which asked directly about respondents’ concern for their own and community health and economic security. For these two measures, higher values of the latent variable indicate higher perceived risk. “Trust” is derived from similar questions which probe the degree of trust in COVID-19 guidance from governmental public health, medical, and scientific authorities. For our trust measure, higher values indicate higher trust in selected sources. “Behavioral intention” is derived from 8 other latent variables corresponding to expected engagement in day-to-day activities and protective behavior. Specifically, these activities include: 1) gathering indoors with close friends and family, 2) gathering indoors with a large group, 3) dining indoors at a restaurant, 4) attending church indoors, 5) shopping in person, 6) attending personal appointments, 7) participating in large community activities, and 8) wearing a mask. Questions about participating in these 8 activities were presented at increasing COVID-19 exposure-risk levels.
A higher behavioral intention score indicates that the respondent expects to continue activities 1–7 and eschew masking as risk levels increase. Health risk perception, economic risk perception, and trust, together with the second-order variable behavioral intention, form our attitudinal framework. See S3.2 and S3.3 Fig in S3 Appendix for specifics of the latent variable submodels. For all of the latent variables in our attitudinal framework, as well as disease burden, we controlled for demographic variables via structural regressions (rural/urban, female/non-female, white/non-white, over/under 65 years old, and political ideology). Other control variables—such as education and income—were originally explored in structural regressions as well; however, due to a lack of significant impact, these variables have been dropped from the presented analyses. Prior studies have considered various causal relationships between trust and risk perception [32], finding support for influence in both directions depending on the context. In order to determine the best fitting causal framework for this study, we test three different relationships between them: 1) trust affecting risk perception (models 1A and 1B), 2) risk perception affecting trust (models 2A and 2B), and 3) independence of trust and risk perception (models 3A and 3B). We also test the direction of the causal relationship between disease burden and behavioral intention (e.g., by comparing model 1A to 1B). Thus, we test six different competing hypotheses using SEM in total. Fig 1 gives graphical representations of the different hypotheses being compared. 
For all of the factor analyses and structural regressions, the observational unit is the individual, with the exception of the 3 structural regression models (one each in models 1A, 2A, and 3A) where the dependent variable is cumulative cases per 100 people; in these three models, 117 unique values of case burden as of 30 April 2021 were available for the 117 counties sampled and were paired according to each individual’s county of residence.
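To make the model structure concrete, the following is an illustrative lavaan-style specification of model 1A, assembled here as a plain string; the indicator names and the simplified first-order measurement model for behavioral intention are placeholders of ours, not the paper's actual survey items (those are in S3 Appendix):

```python
# Illustrative lavaan-style syntax for model 1A: trust -> risk perception ->
# behavioral intention -> disease burden, with exogenous demographic controls.
# "=~" defines a latent variable by its indicators; "~" is a structural
# regression. Indicator names are placeholders; ideology would in practice
# enter as dummy-coded contrasts against the "moderate" reference level.
model_1a = """
trust       =~ trust_q1 + trust_q2 + trust_q3
health_risk =~ hrisk_q1 + hrisk_q2 + hrisk_q3
econ_risk   =~ erisk_q1 + erisk_q2 + erisk_q3
behavior    =~ act_1 + act_2 + act_3 + act_4 + act_5 + act_6 + act_7 + mask

trust        ~ rural + female + white + over64 + ideology
health_risk  ~ trust + rural + female + white + over64 + ideology
econ_risk    ~ trust + rural + female + white + over64 + ideology
behavior     ~ health_risk + econ_risk + rural + female + white + over64 + ideology
cases_per100 ~ behavior + rural + female + white + over64 + ideology
"""
# In R, a string like this would be passed to lavaan::sem(model_1a, data = ...).
print(model_1a.count("=~"), "measurement models;",
      model_1a.count(" ~ "), "structural regressions")
```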
Fig 1

Hypothesized conceptual models.

We tested several hypotheses about the interplay between trust, risk perception, behavioral intention, and COVID-19 disease burden. Each path diagram shows the hypothesized causal relationships between our measured variables. Latent variables are shown in ovals; exogenous variables are shown in rectangles. Structural equation modeling (SEM) was used to assess which model was best supported by our survey data.


Results

Overall, we received 1087 responses, the majority of them online. Of the 57 people who chose to receive a paper copy of the survey, 44 mailed completed surveys back to us. Our overall response rate is 9.98%, excluding the 1110 addresses that were not deliverable. After eliminating redundant and incomplete surveys, 1034 responses were usable for analysis. Raw data are available at Dryad (http://datadryad.org); S2 Appendix contains R scripts with our data processing and analysis routines. Of the survey respondents, 55% identify themselves as female, 44% as male, and 0.6% as neither male nor female. The mean age of the full sample is 55 years (range 16–96, SD = 16.42). A majority of our sample has college degrees or higher levels of education (66%), followed by those who have attended some college (16%). A majority indicate that their total household income exceeds $75,000 per year (52%), with only 7% reporting household incomes less than $25,000. In terms of ethnicity, 4% of our respondents are Hispanic or Latino; racially, the majority of respondents are white (85%). Approximately 29% of respondents each identify as liberal, moderate, and conservative, while the rest identify as Libertarian, non-political, or other. Most of our respondents indicate that they are currently married or in domestic partnerships (68%). In terms of religion, most respondents identify as evangelical Christian (17%), followed by Catholic (16%), Mainline Christian (15%), and Agnostic (14%). S3.1 Table in S3 Appendix has a full breakdown of our demographic variables. Overall, our survey sample is disproportionately white, has higher levels of education and income, and is older than the national and state distributions, a pattern that has been observed elsewhere in mail surveys [33]. We tested whether the demographic distribution of our responses is dependent on the state in which a respondent lives.
Overall, respondent distributions for age, gender, and income are similar across ID, TX, and VT. While our sample has a large fraction of rural responses due to the sampling method, the only state for which a majority of respondents are rural is Vermont; the distribution of urban/rural respondents is significantly different between the sampled states (χ2: 46.74, df: 2, p < 0.001). Statistically significant differences are also observed for political orientation (χ2: 113.04, df: 10, p < 0.001), ethnicity (χ2: 29.45, df: 2, p < 0.001), race (χ2: 71.64, df: 14, p < 0.001), educational attainment (χ2: 31.37, df: 8, p < 0.001), relationship status (χ2: 24.3, df: 8, p = 0.002), and religion (χ2: 201.83, df: 18, p < 0.001). For our SEM analysis, 829 of the 1034 respondents are usable (i.e., “complete,” with no missing values). All of the SEM hypotheses in which behavioral intention drives disease burden produce good fits (RMSEA values of 0.071, 0.072, and 0.072 for models 1A, 2A, and 3A, respectively). Comparison of models 1A, 2A, and 3A via Akaike’s Information Criterion (AIC) gives values of 98774, 98950, and 98932, respectively. Likelihood ratio tests indicate that the first model, where trust affects risk perception, is supported significantly better by our data (model 1A vs. 2A: χ2 = 178.41, df = 1, p < 0.001; model 1A vs. 3A: χ2 = 159.91, df = 1, p < 0.001). Thus, all fit measures indicate 1A to be the best supported model of the three. For models 1B, 2B, and 3B, where behavioral intention is hypothesized to be affected by prior pandemic experience, we also observe good fits (RMSEA values of 0.071, 0.072, and 0.072, respectively). AIC values for these models are 89908, 90122, and 90080, respectively, and likelihood ratio tests again favor the first trust-risk structure (model 1B vs. 2B: χ2 = 216.12, df = 1, p < 0.001; model 1B vs. 3B: χ2 = 173.51, df = 1, p < 0.001).
Thus, model 1B is the best supported model of the three by all fit measures. We focus on the results of models 1A and 1B for the remainder of this article. We note that there is no method to statistically compare model 1A with 1B because of differences in the underlying data and equation structures. However, because the model structures for the social and cognitive latent variables in these models are broadly similar, we report the p-values in parallel, with p_A denoting the p-value from model 1A and p_B the p-value from model 1B; the full results of each model are provided in S3.3-S3.8 Table in S3 Appendix. Both model 1A and model 1B posit that trust affects risk perception, which subsequently affects behavioral intention; for model 1A all paths are significant with the exception of economic risk perception affecting behavioral intention, while for model 1B all paths are significant (Fig 2). Increased trust leads to increased health risk perception (p_A, p_B < 0.001) and economic risk perception (p_A, p_B < 0.001). Higher health risk perception is associated with lower behavioral intention to engage in activities that have greater potential for exposure to disease (p_A, p_B = 0.009). For model 1A, this riskier behavioral intention leads to increased disease burden (p = 0.013). For model 1B, where we consider the impact of respondents’ pandemic experience on their expressed behavioral intentions, higher disease burden is associated with riskier behavioral intention (p = 0.019). In model 1B, we also find a marginally significant effect of increased economic risk perception on behavioral intention (p = 0.073).
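The nested-model comparisons above each reduce to a χ² likelihood-ratio statistic with one degree of freedom. As a stdlib-only sketch, such a statistic can be converted to a p-value using the identity that the χ²(1) tail probability is erfc(√(x/2)):

```python
import math

def chi2_sf_1df(x: float) -> float:
    """Survival function (upper tail) of a chi-squared variable with 1 df.

    If X = Z**2 with Z standard normal, then
    P(X > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2)).
    """
    return math.erfc(math.sqrt(x / 2))

# Reported likelihood-ratio statistics, each with df = 1:
for label, stat in [("1A vs 2A", 178.41), ("1A vs 3A", 159.91),
                    ("1B vs 2B", 216.12), ("1B vs 3B", 173.51)]:
    print(label, chi2_sf_1df(stat) < 0.001)   # True in every case

# The AIC comparison points the same way: among 1A, 2A, and 3A,
# the trust-affects-risk model has the smallest AIC.
aic = {"1A": 98774, "2A": 98950, "3A": 98932}
print(min(aic, key=aic.get))   # 1A
```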
Fig 2

Best supported social cognitive model with A and B types.

The results of our SEM show that the model where trust influences perceived health risk which in turn alters behavioral intention is the best of our causal hypotheses. Model A shows behavioral intentions affecting disease burden, and Model B shows the effect of disease burden on behavioral intention. In model A, rurality has an indirect effect on disease burden, with a negative effect on trust ultimately leading to increased disease burden. Only pathways that are supported with p < 0.1 are shown.

In model 1A, we can consider the indirect impacts of social cognitive factors on disease burden via behavioral intentions. For the indirect effect of trust on disease burden, mediated via perceived health risk and behavioral intention, we find that increased trust is significantly associated with decreased disease burden (p < 0.001); the intermediate pathway of increased health risk perception is also associated with higher disease burden (p < 0.001). Because rurality has a marginal effect on trust, we examine the indirect effect of rurality on disease burden mediated via the trust-health risk-behavioral intention pathway. We find that this indirect effect of rurality increases disease burden but is only marginally significant (p = 0.095). However, the net effect of rurality is still protective at the time of the survey because rural areas experienced fewer COVID-19 cases per capita through the spring of 2021, i.e., the direct effect of being rural overwhelmed the indirect effect. The results of the factor analysis for behavioral intention indicate which activities are tied to a higher intention to engage in activities that potentially increase exposure to COVID-19. Recall that behavioral intention is estimated using 8 day-to-day activities as indicator variables. The results of the SEM for both models show that all 8 activities are at least marginally significant for this measure.
(Only respondents’ answers regarding willingness to go shopping and attend appointments are marginally significant.) Listed from strongest to weakest association, the indicators of behavioral intention are eating in restaurants, attending indoor group gatherings, participating in large community activities, attending church, going to appointments, meeting indoors with close friends and family, mask wearing, and shopping. We find that demographic variables have significant effects on several of the latent variables. The only significant effect of geography is on disease burden, with rural communities having a significantly lower disease burden (p < 0.001). There are, however, marginally significant effects of rurality on trust (decreasing; p_A, p_B = 0.088) and on economic risk perception (decreasing; p_A = 0.096, p_B = 0.094). Women show significantly increased health risk perception (p_A, p_B < 0.001) and economic risk perception (p_A, p_B = 0.035). Women are also significantly more likely to continue daily activities in model 1A (p = 0.045); this effect is marginally significant in model 1B (p = 0.057). Individuals who are white have significantly higher trust (p_A, p_B < 0.001) and lower perceived health risk (p_A, p_B < 0.001). Elderly individuals perceive significantly higher health risk (p_A, p_B < 0.001). The most significant exogenous factor included from our survey data is political ideology. Compared to respondents self-identifying as moderates, self-identified liberals communicate more trust (p_A, p_B = 0.001) and self-identified conservatives communicate the least trust (p_A, p_B < 0.001). Those self-identifying as non-political or libertarian also express significantly less trust than self-identified moderates (p_A, p_B < 0.001 for both). In terms of risk to health from COVID-19, self-identified liberals are significantly more concerned (p_A, p_B = 0.013), while self-identified libertarians are less concerned (p_A = 0.053, p_B = 0.050), though this effect is marginal in model 1A.
In considering economic risks from the pandemic to themselves and their communities, self-identified conservatives are less concerned than self-identified moderates (p_A, p_B = 0.005), while self-identified libertarians are more concerned (p_A, p_B = 0.023). Finally, with respect to behavioral intention, identifying as conservative has the strongest positive association with intention to continue pre-pandemic activities and avoid masking (p_A = 0.013, p_B = 0.014); self-identified libertarians are also more likely to take on more risk of exposure to COVID-19, though only marginally so (p_A = 0.079, p_B = 0.084). In model 1A, self-identified liberals are predicted to show increased protective behavior (p = 0.037), but not in model 1B.
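In SEM, indirect effects like those reported for rurality are computed as products of the path coefficients along the mediating chain, with a delta-method (Sobel-type) standard error for a two-path chain. A minimal sketch; all coefficients below are invented for illustration and are not the paper's estimates:

```python
import math

def indirect_effect(paths):
    """Indirect effect of a mediation chain: the product of its path coefficients."""
    effect = 1.0
    for coef in paths:
        effect *= coef
    return effect

def sobel_se(a, sa, b, sb):
    """Delta-method standard error of a two-path indirect effect a*b."""
    return math.sqrt(b ** 2 * sa ** 2 + a ** 2 * sb ** 2)

# Hypothetical standardized coefficients for a two-step chain, e.g.
# rurality -> trust -> downstream outcome:
a, sa = -0.10, 0.06   # rurality -> trust (negative, as in our models)
b, sb = 0.45, 0.05    # trust -> downstream outcome
est = indirect_effect([a, b])
z = est / sobel_se(a, sa, b, sb)
print(round(est, 3), round(z, 2))   # a small negative effect, marginal z-score
```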

Discussion

Our results imply there are downstream, indirect consequences of demographic and ideological characteristics on behavior and potentially on disease burden. Specifically, we find the most support for social cognitive models where trust influences risk perception, which in turn affects behavioral intention (models 1A, 1B). Counterintuitively, model 1B predicts that higher observed disease burden during the beginning of the COVID-19 pandemic is associated with decreased prophylactic behaviors. Individuals from rural communities express reduced trust and reduced perceived risk, indicating that the barrier to public health engagement is stronger in these regions. Importantly, our research suggests that cultivating trust in authorities tasked with communicating public health information would be the optimal way to increase adoption of NPIs to slow the spread of future pandemics. In the case of COVID-19, trust in the message and the messenger has been undermined by several factors. Namely, there was a lack of uniform national, state, and local strategies; inadequate reach, accessibility, and consistency of public health information; and widespread misinformation and disinformation that was not adequately refuted [34, 35]. Studies suggest that misinformation not only erodes trust in public health authorities, but also decreases the motivation to seek and adopt correct information [36]. The influence of social media on information consumption exacerbates the impact of misinformation [27]. News partisanship further impacts trust in public health authorities’ message of risk and the reduction of risk through social distancing and other actions [37, 38]. COVID-19 pandemic response protocols ask individuals, families, schools, and communities to adopt life-altering precautions and behavioral changes. To adopt these practices, individuals must perceive the risk of COVID-19 to themselves, their families, and their communities.
Furthermore, they must trust public health authorities to accurately identify and communicate protective disease intervention protocols [39]. One consequence of authorities' requests for social distancing and mask wearing was increased uncertainty and skepticism [40, 41]. Individuals with more trust in public health authorities are less likely to attribute such requests to incompetence or malfeasance, and are more likely to comply [35]. The result that increased trust leads to increased pandemic protective behavior, as measured by decreased day-to-day activities and increased mask wearing, is borne out in both of our best supported models (Fig 2). Observed early support for NPIs in the US was notably absent among rural communities and essential workers [42]. Our analyses similarly show lower levels of institutional trust, lower levels of intention to comply with public health measures, and decreased risk perception in rural areas (Fig 2). Nonetheless, our analyses also show that disease burden was significantly lower among rural persons. This suggests that, at least in the earlier stages of the pandemic, rurality had a protective effect, most likely due to reduced population density and the later onset of epidemic waves in those areas. However, not all rural residents were at low risk of exposure to SARS-CoV-2. Rural residents working in the meat, poultry, food processing, and agricultural industries faced additional COVID-19 risks, as these industries involve working and/or traveling in enclosed spaces closer than the recommended 6-foot distance. These industries were deemed essential and were not closed, even in cases of high community transmission. As a result, outbreaks of COVID-19 disproportionately impacted workers and their families in such industries [43]. Decreased levels of trust in rural areas likely worsened the issues stemming from these outbreaks among essential workers.
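The offsetting direct and indirect effects of rurality on disease burden described above can be sketched numerically as a product-of-paths calculation. The coefficients below are hypothetical placeholders chosen only to illustrate the sign logic; they are not estimates from our fitted models.

```python
# Hypothetical decomposition of rurality's total effect on disease burden
# into a direct path and an indirect path through trust. All coefficient
# values are illustrative placeholders, not estimates from the fitted SEMs.

direct = -0.20             # lower population density -> lower early burden
rural_to_trust = -0.25     # rural residents report less institutional trust
trust_to_burden = -0.30    # more trust -> more NPI uptake -> lower burden

# The indirect effect is the product of the path coefficients; two
# negative paths yield a positive (burden-increasing) indirect effect.
indirect = rural_to_trust * trust_to_burden
total = direct + indirect

print(f"indirect = {indirect:+.3f}, total = {total:+.3f}")
```

The sign pattern mirrors our finding: a protective direct effect of rurality, partially offset by a burden-increasing indirect effect through the rural trust deficit.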
Our best supported models propose a role for behavioral intention in influencing future disease burden (model 1A) and, conversely, for previously observed disease burden in influencing behavioral intention (model 1B). In model 1A, we find that resistance to behavioral change during the COVID-19 pandemic, i.e., reluctance to adopt NPIs, is significantly predictive of higher disease burden in the Winter 2020 wave. This result fits with standard epidemiological theory, in which the rate and extent of NPI uptake drastically affect the epidemic trajectory. Classic examples of these effects are found in post hoc analyses of the 1918 Spanish Flu pandemic [44, 45]. Surprisingly, in model 1B we find that increased prior observed disease burden actually leads to reduced prophylactic behavior. While this result is counterintuitive, it is perhaps not without precedent. Recent research [46, 47] shows that perceived disease severity is influenced by various ideological and social factors. Therefore, one potential explanation for the predictions of model 1B is a disconnect between perceived and actual disease burden in a county. If individuals are being told by their in-group that disease burden is not severe, they may continue engaging in behaviors that increase their chances of contracting COVID-19, even in the face of high case counts. These effects may have been worsened by the fact that a majority of COVID-19 cases are mild and deaths are concentrated among the elderly [48, 49]. In the United States, adoption and approval of public health interventions for COVID-19 fall along political lines. Specifically, other research finds that people identifying as Democrats favor publicly mandated disease interventions and practice protective health recommendations more than people identifying as Republicans [37, 50–52]. Political ideology similarly influences every aspect (trust, risk perception, and behavioral intention) of our social cognitive model results.
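The standard epidemiological intuition invoked for model 1A can be illustrated with a toy SIR simulation. This is not the paper's model, and all parameter values are illustrative; the point is only that scaling transmission by an NPI-adherence factor shrinks the final epidemic size.

```python
# Toy SIR model (Euler integration) showing that reducing the transmission
# rate beta -- e.g., via NPI uptake -- shrinks the final epidemic size.
# All parameter values are illustrative, not fitted to COVID-19 data.

def final_size(beta, gamma=0.2, i0=1e-4, dt=0.1, steps=5000):
    s, i, r = 1.0 - i0, i0, 0.0  # fractions of the population
    for _ in range(steps):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r  # cumulative fraction ever infected

no_npi = final_size(beta=0.5)          # R0 = beta/gamma = 2.5
with_npi = final_size(beta=0.5 * 0.6)  # NPIs cut transmission 40% (R0 = 1.5)
print(with_npi < no_npi)  # True: NPI uptake reduces the epidemic's final size
```

In this toy setting a 40% reduction in transmission cuts the cumulative fraction infected from roughly 0.9 to roughly 0.6, which is the mechanism by which behavioral intention in model 1A can plausibly drive county-level burden.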
The finding that political ideology affects trust and compliance with NPIs (i.e., behavioral intention in our models) has been reported in several other studies [34, 37, 41, 53]. Furthermore, our findings are consistent with other recent work in which partisan differences were found to be more significant than other factors in determining social distancing behavior, and with results showing disparate health outcomes based on party identity [38, 54]. Thus, our work adds to the body of evidence for the consequences of political ideology on behavioral changes in response to the pandemic. Our model, however, offers a more nuanced view of where partisanship plays a role in affecting various aspects of cognition. In particular, the social construct of trust in public health guidance seems to be affected by all of the political categories we analyzed (i.e., liberal, moderate, conservative, libertarian, non-political). For the cognitive constructs, only libertarian identity is significant for both health and economic risk perception. In addition, health risk perception is also significantly affected by liberal identity, while economic risk perception is significantly affected by conservative identity. Lastly, self-identified liberals expressed willingness to reduce their day-to-day activities as the risk of SARS-CoV-2 infection increased, while conservative and libertarian identities were significantly associated with reluctance to reduce activity. Therefore, public health strategies appealing to certain cognitive constructs might be better focused toward particular partisan groups. For example, advertising the health risks of a disease may impact liberals and libertarians more effectively than other groups. Still, trust has the strongest effect on both types of risk perception; we therefore suggest that maintaining trusting relationships with all groups is the most vital action.
Our findings related to gender are also in line with other studies that report women as more concerned about the health consequences of COVID-19 [55–57]. These results are somewhat surprising given that men are more likely to develop severe COVID-19 resulting in hospitalization or death [58]. However, our finding that women engage in higher levels of activity that could expose them to SARS-CoV-2 differs from other studies [56]. This might be partially explained by the increased household responsibilities of women resulting in higher activity levels [59]. Sixty-four percent of the women in our survey indicated that they are married and therefore may feel increased pressure to perform some of the day-to-day activities about which we asked. Finally, women also perceived more economic risk to themselves and their community from COVID-19, which is consistent with women having generally higher risk perception [60].

Our study has several limitations. First, survey instruments are subject to response bias. Our respondents tend to be older, wealthier, and more educated than the population as a whole; this is typical of many survey-based studies [33]. We interpret our findings in light of this limitation. Second, we received fewer responses from Texas (144) than from Idaho and Vermont. However, we received a substantial fraction of rural responses from each state, resulting in a multifaceted picture of rural attitudes; the effect of the smaller number of Texas respondents may therefore have been minimal. Third, with respect to disease data, we are limited by the shortcomings of disease surveillance and reporting mechanisms. Because of limitations in testing for COVID-19, reported case counts underestimate the true number of cases. This should have little effect on the outcome of our study so long as under-reporting of cases is not heterogeneously biased. Fourth, we are limited by the fact that COVID-19 cases are reported at the county level within the US.
We may have been able to achieve greater resolution had we been able to associate case counts with census tracts, the level at which our geographic analysis was conducted. Relatedly, in determining whether a zip code is rural or urban, we use the RUCA classification system, which offers finer granularity in distinguishing urban from rural locations than the MSA classifications used by Dynata. Fifth, our survey instrument measures an individual's self-reported political leanings rather than political affiliation directly; previous work shows that individuals may be afraid to honestly identify their political beliefs for fear of repercussions. Sixth, it should be emphasized that our study represents a snapshot of attitudes in late 2020, and attitudes toward NPIs may have changed with the progression of the pandemic and the availability of effective vaccines. Finally, while we received 1,087 responses out of the 12,000 surveys we sent out, a larger sample size may have allowed us to attribute significant effects to factors other than those discussed here. That being said, the smallest significant effects in our models have magnitudes of roughly 0.05, which suggests that our analyses are powerful enough to detect small effects.
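For concreteness, the rural/urban dichotomization of RUCA codes mentioned above can be expressed as a small helper function. The cutoff used here (primary codes 1–3 urban, 4–10 rural) is one widely used convention and may differ from the exact scheme applied in our analysis.

```python
def is_rural(primary_ruca: int) -> bool:
    """Dichotomize a primary RUCA code into rural vs. urban.

    Treats metropolitan codes (1-3) as urban and codes 4-10 as rural.
    This is one common convention; the cutoff used in a given analysis
    (including this paper's) may differ.
    """
    if not 1 <= primary_ruca <= 10:
        raise ValueError("primary RUCA codes range from 1 to 10")
    return primary_ruca >= 4

print(is_rural(1))   # False: metropolitan area core
print(is_rural(10))  # True: rural area
```

Because RUCA codes are assigned at the census-tract and ZIP-code level, a scheme like this yields a finer rural/urban signal than metro-area (MSA) membership alone.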

Conclusion

Understanding how individuals process and respond to threats in their environment is critical to optimizing public health messaging and policy. Using structural equation modeling to identify latent variables for trust, risk perception, and behavioral intention, our survey results best support the hypothesis that building trust in government organizations can indirectly influence behavioral intention via risk perception. Higher risk perception leads to reduced behavioral intention (i.e., reduced intention to continue pre-pandemic activities), and model 1A predicts that reduced behavioral intention leads to reduced disease burden. We therefore propose that decision makers focus efforts on trust building to increase NPI effectiveness in future pandemics. Our work is novel in its attempt to reach and understand individuals living in rural areas. Rural populations indicate less trust and reduced risk perception compared to urban populations, making them vulnerable to higher disease burden and a possible focus area for public health. The lack of trust in rural communities, combined with the increased risk faced by essential workers, could have negative synergy; this issue is beyond the scope of this work but merits future study. In agreement with other COVID-19 studies, political ideology seems to be an overwhelming factor influencing the trust–risk–behavior cognitive pathway. Our results align with other research on the politicization and polarization of public views toward controversial topics. Future research utilizing increased spatial and temporal resolution of survey data, along with other measures of disease burden, such as years of life lost, could further elucidate the links between political affiliation and social cognition.

Supporting information

S1 File. Full English survey. (DOCX)

S2 File. R markdown file with detailed R code. (RMD)

S3 File. Compiled version of R markdown file. (PDF)
PONE-D-21-36310
Effects of trust, risk perception, and health behavior on COVID-19 disease burden: Evidence from a multi-state US survey
PLOS ONE Dear Dr. Ridenhour, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Feb 11 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, José Alberto Molina Academic Editor PLOS ONE Journal Requirements: 1. When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
Thank you for stating the following in the Acknowledgments Section of your manuscript: (We would like to thank the pandemic modeling group at the Institute for Modeling Collaboration and Innovation (IMCI) at the University of Idaho for help working on and  thinking about COVID-19 related issues. Similarly, we thank the University of  Texas–Austin COVID modeling consortium, led by Drs. Lauren Ancel Meyers and Spencer Fox, for useful conversation and feedback regarding this work. We also thank our undergraduate research assistants Isabella Bermingham, Chloe Dame, Maria Elizarraras, and Bishal Thapa who were supported by a College of Science Renfrew  Faculty Fellowship awarded to JJL. We thank Drs. Erkan Buzbas and Tim Johnson for  helpful conversations about statistical models. Finally, Dr. Holly Wichman has  provided invaluable leadership in finding and giving funding to perform this research;  specifically, this work was funded by NIH Grant number 3P20GM104420-06A1S1.) We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: (BJR, JJL received funds via NIH (National Institutes of Health; http://www.nih.gov) Grant number 3P20GM104420-06A1S1. JJL also received intramural funds at the University of Idaho (http://www.uidaho.edu) via the Renfrew Fellowship to pay for undergraduate research.  The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.) 
Please include your amended statements within your cover letter; we will change the online submission form on your behalf. 3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide. 4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found (S2 R markdown file with detailed R code). PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. "Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. 
Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter. Additional Editor Comments: The literature about the Covid effects is very extensive, e.g https://covid-19.iza.org/publications/ and, similarly, with other wp series and, of course, journals. It is particularly relevant to review the economic papers which use this kind of econometric methods to evaluate the covid-19 effects. Authors need to prove the novelty of this contribution, given that ethods are not particularly solid. Consequently, It is absolutely needed to highlight the novelty, in addition to perform the methods in a solid way. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Partly Reviewer #3: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? 
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Referee report for “Effects of Trust, Risk Perception, and Health Behaviour on Covid-19 Disease Burden: Evidence from a Multi-State US Survey Summary: This paper collects survey evidence from around 1000 individuals in Texas, Idaho and Vermont in November 2020, measuring demographic characteristics, social behaviours, risk perceptions, trust and political ideology. The paper assesses the relationship between these variables and case counts using SEM modelling. 
The paper finds that lower trust in rural locations offsets the natural advantages these places have for combating the spread of Covid-19. Assessment My main technical concern is with the implementation of the model. In many cases (e.g. Model 1A) the authors model behavioural intention as determining cases. The behavioural intention is then measured at the individual level, while case numbers are measured in the aggregate. But of course an individual’s behaviour is atomistic and has negligible effect on aggregate case numbers. So I don’t know how to interpret the estimates. I can’t recall ever seeing a model with an aggregate explained variable being driven by an individual-level explanatory variable. This approach should at least be discussed and supported. More subjective concerns are as follows: The main problem with this paper is the extremely small sample size. In each model the authors use 1000 observations to estimate 40 parameters. Obtaining significant and novel results in this way is only possible with rigid modelling. My technical concern above leads me to another criticism, that much of questions here would be better addressed using aggregate data variation over both location and time. For example the google data can be leveraged to obtain detailed aggregate measures of behaviour across locations and over time. Similarly social surveys could be leveraged to obtain aggregate measures of trust and political ideology over time. Of course, that’s a different study, but it feels that would answer the same questions much more convincingly. My final criticism is that the discussion is too focussed on the U.S. Of course, issues of the interaction of population density, behaviour and disease spread are of relevance across the globe. The paper should attempt to speak to this audience rather than to focus on the U.S. rather narrowly. 
Reviewer #2: This paper examines the relationship between individuals’ attitudes concerning COVID-19 and disease burden, and whether these relationships was significantly different in rural populations. To that end, the authors develop their own survey covering three states: Idaho, Texas, and Vermont. I think the authors have written an interesting paper on an important topic and while the literature is crowded, I do think that they make a contribution to it. However, I have some concerns and I feel like the current version comes up a bit short of robustness tests. 1) My first concern is about the measure of disease burden. The authors consider the number of cases from the beginning of the pandemic at county level as a proxy of disease burden. However, the prior epidemiology literature has used the number of deaths to better account for the spread of the COVID-19. Thus, I would suggest a robustness test where you make use of COVID-19 deaths. 1) The adoption of social distancing measures (including business closures and stay-at-home orders) may be influencing the trust-risk-behavior itself. As the authors highlight, the implementation of NPIs varied widely between communities within the US as they took place at distinct geographic levels (some at the county, others at the state) and for different periods of time. Thus, without appropriate controls it is hard to disentangle the effect of disease burden from the implementation of the NPIs. Can the authors control for the timing and intensity of the NPIs at county level? 3) I am not sure whether there is available data on the number of cases/deaths at city level. In this case, does it make sense to use disease data corresponding to the county instead of using city data when we know people addresses? If there is not available information I would suggest the authors to note that in the text. 4) I would suggest as another robustness test to amplify the set of demographic controls if it is possible. 
The authors describe the sample in terms of education and total household income; however, it is not clear to me whether these controls have been included in the model. 5) Are the estimates weighted? 6) I would suggest the authors to note the limitation of the low number of observations in the text. 7) Another important data limitation is that the survey period does not enable you to explore the existence of pre-trends during the months prior. Can the authors manage to assess that? Reviewer #3: This is a nice paper, but somewhat too long compared with the contents. I am also puzzled by Model 2 (Supplemental Appendix page 8) that assumes that economic and health risk perceptions are affecting trust. I don't see what mechanism would make this plausible, or how it could be tested with the current data. If using data collected in one survey, I am worried that even if running formally regressions in line with Model 2, we woulf effectively capture correlational patterns the causal effect of which would go the other way. I suggest either dropping Model 2, or arguing more convincingly why it deserves to be kept and can be tested. I think that streamlining the paper would improve its impact as potential readers would be more likely to read it through. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No Reviewer #3: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. 
Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 8 Feb 2022 ***NOTE THAT THE TEXT BELOW IS GIVEN IN OUR COVER LETTER IN AN EASIER TO READ FORMAT*** Dear editor(s), We would like to thank the reviewers and editors for their helpful comments on our manuscript. In italic below, we give our responses to the comments provided by the reviewers after our responses to the editorial requests. In particular, we have attempted to clarify/respond to comments regarding our methods. In most of the cases where comments focused on variations of our model, we ran those analyses—as part of the exploratory phase—prior to our initial submission. We specifically now mention these prior analyses but are not presenting them in the revision. We do not present them in the revision for two reasons: First, they do not add to the results we already present, but they do act as a very weak form of cross-validation. Second, addition of numerous variations of the model (beyond the six we already investigate) will significantly add to the length of the manuscript and supplementary materials (which was already criticized by one reviewer); we also believe it would reduce the readability of our research by presenting various “dead ends” to readers. 
However, if the editors feel like the addition of more model variations is needed, we are happy to do so. Thank you again for the consideration of our work for publication in PLOS ONE. Best, Ben Ridenhour Editorial Requests: 1. We believe all of our materials match the PLOS ONE styles that are specified. 2. We are unclear as to the discrepancy between the acknowledgments sections funding statement and the funding statement in our submission. It looks like we’ve referenced the NIH grant in both places (and the identical grant number), and it looks like we mention the Renfrew Fellowship (internal to University of Idaho) to Dr. Johnson-Leung that paid for undergraduate assistance on the work in both places as well. Can you give more explanation where the discrepancy is? 3. Our data should appear at https://doi.org/10.5061/dryad.0cfxpnw4c on Dryad once the paper is published. (We requested them be withheld until publication.) 4. Like (3), our minimal data set will appear on Dryad at the same DOI. Reviewer #1: Referee report for “Effects of Trust, Risk Perception, and Health Behaviour on Covid-19 Disease Burden: Evidence from a Multi-State US Survey Summary: This paper collects survey evidence from around 1000 individuals in Texas, Idaho and Vermont in November 2020, measuring demographic characteristics, social behaviours, risk perceptions, trust and political ideology. The paper assesses the relationship between these variables and case counts using SEM modelling. The paper finds that lower trust in rural locations offsets the natural advantages these places have for combating the spread of Covid-19. Assessment My main technical concern is with the implementation of the model. In many cases (e.g. Model 1A) the authors model behavioural intention as determining cases. The behavioural intention is then measured at the individual level, while case numbers are measured in the aggregate. 
But of course an individual’s behaviour is atomistic and has negligible effect on aggregate case numbers. So I don’t know how to interpret the estimates. I can’t recall ever seeing a model with an aggregate explained variable being driven by an individual-level explanatory variable. This approach should at least be discussed and supported. RESPONSE: We are somewhat unclear about the reviewer’s comment on this. Virtually all statistical models take individual measurements and estimate a characteristic of the population (e.g., the mean, variance). Linear regression models typically take individual measurements and relate them to the mean of some other variable; a recent example of this from a similar COVID-19 epidemiological study would be Im and Kim (2021) [https://doi.org/10.3390/ijerph182312595]. From a higher-level perspective, we developed behavioral intention using the Health Belief Model (HBM) where an individual’s likelihood of adopting a given behavior is determined based on the perceived severity of disease and perceived effectiveness of recommended health behavior. In HBM behavioral intention is conceptualized as a psychological construct which is typically measured at the individual level and used to predict the group mean. More subjective concerns are as follows: The main problem with this paper is the extremely small sample size. In each model the authors use 1000 observations to estimate 40 parameters. Obtaining significant and novel results in this way is only possible with rigid modelling. RESPONSE: We were hoping to have a larger sample size given the number of surveys we originally mailed out (12,000). That being said, our sample size is still large enough to provide ample power to our statistical tests. For example, we found significant effect sizes (parameters) of ~0.05 in some of our models (see Tables S3.3 - S3.8). 
It is true that with a greater sample size, we may have found even smaller effects to be significant (i.e., it is possible that we missed some effects). In response to this comment and that of reviewer #2, we have added the following to the limitations paragraph in our discussion: "Finally, while we received 1,087 responses out of the 12,000 surveys we sent out, having a larger sample size may have allowed us to attribute significant effects to other factors than those discussed here. That being said, the smallest significant effects in our models are of magnitude around 0.05, which suggests that our analyses are strong enough to detect small effects."

My technical concern above leads me to another criticism: many of the questions here would be better addressed using aggregate data variation over both location and time. For example, the Google data can be leveraged to obtain detailed aggregate measures of behaviour across locations and over time. Similarly, social surveys could be leveraged to obtain aggregate measures of trust and political ideology over time. Of course, that's a different study, but it feels that it would answer the same questions much more convincingly.

RESPONSE: We disagree that aggregate data would be a better measure of population behavior. In prior COVID modeling efforts, the authors have used Google data to help determine contact rates in various locales (particularly within Idaho). While these data are useful, they lack any detail on how individuals might change their behavior as risk levels change. Rather, they reflect the effects of various public health orders (e.g., stay-at-home or mask mandates), socio-economic drivers, and other effects. Our method uses characteristics of individuals (e.g., political affiliation, gender, willingness to change behavior) to support a health belief model which has some predictive value for disease burden.
These are data that one would not get from Google (e.g., whether a self-declared liberal would be willing to stop shopping). It is perhaps an important distinction that we are looking to support/test health belief models, not simply to find the best way to predict disease burden (for which one could use something like machine learning to do a better job). There is no way to understand risk perception, trust, and behavioral intention (and their relationships) using something akin to publicly available Google data. We agree with the reviewer that their suggestion would be a different study. We now specifically add this statement in our introduction (line 47): "We emphasize that our model is not an attempt to produce the best predictive model of COVID-19 burden, an effort which has been done using many other better suited methods for that task. Rather, we wish to determine a model of human behavior that could augment such models and increase their value to public health officials." We hope this clarifies the intention of our modeling efforts.

My final criticism is that the discussion is too focussed on the U.S. Of course, issues of the interaction of population density, behaviour and disease spread are of relevance across the globe. The paper should attempt to speak to this audience rather than focus on the U.S. rather narrowly.

RESPONSE: We hesitate to speculate too much about regions outside of the US. Our survey respondents were drawn from only three US states (Idaho, Texas, and Vermont), so even extrapolating to the entire US is perhaps speculative. However, we are more comfortable speaking about the US due to political, media, and cultural similarities within the US, compared to, say, European countries where these factors may be totally different. We hope that, for example, our finding on politically conservative versus liberal individuals would hold for other regions/countries as well. If the editors feel this speculation is warranted, we could add language to that effect.
Reviewer #2: This paper examines the relationship between individuals' attitudes concerning COVID-19 and disease burden, and whether these relationships were significantly different in rural populations. To that end, the authors develop their own survey covering three states: Idaho, Texas, and Vermont. I think the authors have written an interesting paper on an important topic, and while the literature is crowded, I do think that they make a contribution to it. However, I have some concerns, and I feel that the current version comes up a bit short on robustness tests.

1) My first concern is about the measure of disease burden. The authors consider the number of cases from the beginning of the pandemic at the county level as a proxy for disease burden. However, the prior epidemiology literature has used the number of deaths to better account for the spread of COVID-19. Thus, I would suggest a robustness test where you make use of COVID-19 deaths.

RESPONSE: We agree that deaths due to COVID-19 are likely to be a better indicator of the severity of the pandemic's impact in a given region. Indeed, we tried a number of different disease measures for our models. For example, we tried using different cut-off dates (line 141 of the manuscript) to see if they affected our results, which they did not. We also tried using death counts instead of case counts, as well as the combination of the two; in the latter case, the high degree of correlation between the two measures prohibited their simultaneous inclusion in the model. Because many of the counties in this study are relatively small, there is a high degree of stochasticity and many zero observations when working with death counts, which reduces statistical power. Thus, in the end, we opted to use case counts because of the reduced variance in the measure (and improved statistical inference).
We did not include all of these exploratory analyses in our manuscript or appendix, but we would be happy to include them if so requested. We have added a short paragraph starting on line 149 that explains our choice of case counts over death counts.

2) The adoption of social distancing measures (including business closures and stay-at-home orders) may itself be influencing the trust-risk-behavior relationship. As the authors highlight, the implementation of NPIs varied widely between communities within the US, as they took place at distinct geographic levels (some at the county level, others at the state level) and for different periods of time. Thus, without appropriate controls it is hard to disentangle the effect of disease burden from the implementation of the NPIs. Can the authors control for the timing and intensity of the NPIs at the county level?

RESPONSE: We agree with the reviewer's points that a) recommendations by public health officials affect adoption and b) adoption rates in turn affect disease burden. It is these very factors that we are trying to address with our models. In essence, our model uses trust in public health, risk perception, and willingness to change behavior to predict adoption. We then use a measure of disease burden that is sufficiently far in the future (30 April 2021) to see the (predicted) effect on COVID-19 rates resulting from our health behavior model. That being said, if we actually knew (which we do not) when and where public interventions were enacted, we could probably increase the power of our model to predict disease burden. However, as mentioned before (see response to reviewer #1), the goal of the study was not to build the best predictive model of disease burden but to find a well-supported model for the behavioral side of human responses during the pandemic.

3) I am not sure whether there are data available on the number of cases/deaths at the city level.
If so, does it make sense to use disease data corresponding to the county instead of city data when we know people's addresses? If such information is not available, I would suggest the authors note that in the text.

RESPONSE: There are no city-level data available for the area we studied. We have noted this in the manuscript (line 137; this was also mentioned in line 423 of the original manuscript, now line 436).

4) As another robustness test, I would suggest expanding the set of demographic controls if possible. The authors describe the sample in terms of education and total household income; however, it is not clear to me whether these controls have been included in the model.

RESPONSE: We did originally include some of the other variables in our data set in the model (e.g., education and income). However, because they had little effect on the model as exogenous variables, they were not included in the "streamlined" models presented. As the reviewer suggests, it is comforting to know that inclusion of such variables does not drastically alter the inferences made by our model. Again, we have not included these alternative models that were investigated along the way; if the editors feel we should include them in the appendix in some way, we would be pleased to accommodate the request. To explain our choice, we have added the following statement on line 188: "Other control variables---such as education and income---were originally explored in structural regressions as well; however, due to a lack of significant impact, these variables have been dropped from the presented analyses."

5) Are the estimates weighted?

RESPONSE: They are not. (Though case burden was included as a rate per 100.)

6) I would suggest the authors note the limitation of the low number of observations in the text.

RESPONSE: See our response to reviewer #1 regarding sample size above. We have added text to the discussion regarding this point.
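The burden measure mentioned in the response to point 5 (cumulative county-level case counts expressed as a rate per 100 residents) can be sketched in a few lines. The county names and numbers below are invented placeholders for illustration, not values from the study.

```python
# Minimal sketch of the disease-burden measure described above: cumulative
# county-level case counts normalized to a rate per 100 residents.
# The counties and counts here are illustrative placeholders only.

def cases_per_100(cases: int, population: int) -> float:
    """Cumulative cases expressed as a rate per 100 residents."""
    return 100.0 * cases / population

# hypothetical county data: (cumulative cases, population)
counties = {"County A": (2_900, 40_000), "County B": (80_000, 1_290_000)}

for name, (cases, pop) in counties.items():
    print(f"{name}: {cases_per_100(cases, pop):.2f} cases per 100")
```

Normalizing to a rate (rather than using raw counts) keeps small rural counties and large urban counties on the same scale in the structural regressions.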
7) Another important data limitation is that the survey period does not enable you to explore the existence of pre-trends during the prior months. Can the authors manage to assess that?

RESPONSE: We are unsure what the reviewer means by "pre-trends." (Pre-trends in what?) We do mention the limitation of the survey period in the limitations paragraph of our discussion (line 445). If trends in attitude are what the reviewer is suggesting, it seems unlikely that we would be able to assess such a trend.

Reviewer #3: This is a nice paper, but somewhat too long compared with its contents. I am also puzzled by Model 2 (Supplemental Appendix page 8), which assumes that economic and health risk perceptions affect trust. I don't see what mechanism would make this plausible, or how it could be tested with the current data. If using data collected in one survey, I am worried that even if formally running regressions in line with Model 2, we would effectively capture correlational patterns whose causal effect would go the other way. I suggest either dropping Model 2, or arguing more convincingly why it deserves to be kept and can be tested.

RESPONSE: We are happy the reviewer appreciates our work. Because we are using SEM for our modeling, we specify the nature of the covariances within our models (i.e., which are zero and which must be fit), and this produces the differences in pathways between the models. Thus, the worry about the causal effects "going the other way" is not warranted for the estimation procedure; that is not to say the arrows could not flow from trust to risk. In fact, that is exactly what models 1A and 1B are (Fig 1, Fig S3.1), and why we compare the fits of models 1 to models 2 (and models 3). The premise behind model 2 is that, conditional on what individuals perceive as risks to their health or economic well-being, they become more or less open to messaging from institutions such as the CDC.
For example, if one perceives one's health or economic risk to be large, one may not care about (trust) what any government agency says; conversely, if perceived risk is low, individuals may become much more open to messaging coming from the government. To provide more of an argument for the chosen models, we have reworded the paragraph that starts on line 192. We hope the wording now indicates that we are building on the work of others, such as the cited Siegrist (2019) paper, which reviews the literature exploring the direction of causality between trust and risk perception. Hopefully the arguments presented in Siegrist (and the works therein) convey the ongoing debate on the interplay between trust and risk perception.

I think that streamlining the paper would improve its impact, as potential readers would be more likely to read it through.

RESPONSE: We are happy to cut portions of the paper if the editors feel it is too long and could be reduced in any particular/suggested way. Without specific suggestions as to what to remove, we are hesitant to do so (particularly given that other reviewers are asking for more material).

15 Mar 2022
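The competing causal orderings discussed above (trust driving risk perception in models 1A/1B versus risk perception driving trust in model 2) are, in SEM terms, different mediation chains, and indirect effects like the rurality-via-trust effect reported in the paper are products of path coefficients along such a chain. As a minimal sketch using invented placeholder coefficients (not the fitted estimates), an indirect effect and its delta-method (Sobel) standard error can be computed as:

```python
import math
from functools import reduce

def indirect_effect(paths):
    """Indirect effect along a mediation chain: the product of its path coefficients."""
    return reduce(lambda x, y: x * y, paths)

def sobel_se(a, b, se_a, se_b):
    """Delta-method (Sobel) standard error for a two-path indirect effect a*b."""
    return math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical standardized paths (placeholders, not the paper's estimates):
a = -0.20   # rurality -> trust
b = 0.35    # trust -> behavioral intention
ab = indirect_effect([a, b])
z = ab / sobel_se(a, b, 0.08, 0.10)
print(f"indirect effect = {ab:.3f}, z = {z:.2f}")
```

In practice an SEM package estimates these paths and their covariances jointly; the sketch only illustrates why a weak rurality-trust link can still yield a marginally significant indirect effect when the downstream paths are strong.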
PONE-D-21-36310R1
Effects of trust, risk perception, and health behavior on COVID-19 disease burden: Evidence from a multi-state US survey
PLOS ONE

Dear Dr. Ridenhour,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Apr 29 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
José Alberto Molina
Academic Editor
PLOS ONE

Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.
If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided): Dear Author/s, I agree with Reviewer 1. Sincerely

Reviewers' comments:

Reviewer's Responses to Questions: Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)
Reviewer #2: (No Response)

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: (No Response)

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: (No Response)

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available.
If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: (No Response)

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: (No Response)

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The responses to my points are mostly fine. With regard to my first point, the response of the authors is a little unfortunate; the Im and Kim paper that they refer to clearly uses aggregate variables as both dependent and independent variables. More precisely, Im and Kim state "These 77 cities and counties were used as study units for the regression models." Accordingly, I should be more precise in my question about the paper at hand: Is the unit of observation the location or the individual? If it's the location, then you have very few data points. If it's the individual, then I'm still not fully convinced by having an aggregate variable as the dependent variable. On the other hand, the extended answer the authors give to my point is acceptable. But I would like to see a clear statement of the unit of observation.

Reviewer #2: (No Response)

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.
If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
8 Apr 2022

Please see the attached cover letter for our response to the reviewer. Submitted filename: Response Letter-2-PLOS One .docx

27 Apr 2022

Effects of trust, risk perception, and health behavior on COVID-19 disease burden: Evidence from a multi-state US survey

PONE-D-21-36310R2

Dear Dr. Ridenhour,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
José Alberto Molina
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions: Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #1: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English?
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

6. Review Comments to the Author

Reviewer #1: (No Response)

7. Do you want your identity to be public for this peer review?

Reviewer #1: No

12 May 2022

PONE-D-21-36310R2

Effects of trust, risk perception, and health behavior on COVID-19 disease burden: Evidence from a multi-state US survey

Dear Dr. Ridenhour:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Professor José Alberto Molina
Academic Editor
PLOS ONE

1.  Worldviews, trust, and risk perceptions shape public acceptance of COVID-19 public health measures.

Authors:  Michael Siegrist; Angela Bearth
Journal:  Proc Natl Acad Sci U S A       Date:  2021-06-15       Impact factor: 11.205

2.  Potentially Excess Deaths from the Five Leading Causes of Death in Metropolitan and Nonmetropolitan Counties - United States, 2010-2017.

Authors:  Macarena C Garcia; Lauren M Rossen; Brigham Bastian; Mark Faul; Nicole F Dowling; Cheryll C Thomas; Linda Schieb; Yuling Hong; Paula W Yoon; Michael F Iademarco
Journal:  MMWR Surveill Summ       Date:  2019-11-08

3.  Modeling compliance with COVID-19 prevention guidelines: the critical role of trust in science.

Authors:  Nejc Plohl; Bojan Musil
Journal:  Psychol Health Med       Date:  2020-06-01       Impact factor: 2.423

4.  Prevalence of Disability and Disability Types by Urban-Rural County Classification-U.S., 2016.

Authors:  Guixiang Zhao; Catherine A Okoro; Jason Hsia; William S Garvin; Machell Town
Journal:  Am J Prev Med       Date:  2019-12       Impact factor: 5.043

5.  Motivations, barriers, and communication recommendations for promoting face coverings during the COVID-19 pandemic: Survey findings from a diverse sample.

Authors:  Rhyan N Vereen; Allison J Lazard; Simone C Frank; Marlyn Pulido; Ana Paula C Richter; Isabella C A Higgins; Victoria S Shelus; Sara M Vandegrift; Marissa G Hall; Kurt M Ribisl
Journal:  PLoS One       Date:  2021-05-07       Impact factor: 3.240

6.  Understanding face mask use to prevent coronavirus and other illnesses: Development of a multidimensional face mask perceptions scale.

Authors:  Matt C Howard
Journal:  Br J Health Psychol       Date:  2020-06-26

7.  The Role of Risk Perceptions and Affective Consequences in COVID-19 Protective Behaviors.

Authors:  Katie E Alegria; Sara E Fleszar-Pavlović; Dalena D Ngo; Aislinn Beam; Deanna M Halliday; Bianca M Hinojosa; Jacqueline Hua; Angela E Johnson; Kaylyn McAnally; Lauren E McKinley; Allison A Temourian; Anna V Song
Journal:  Int J Behav Med       Date:  2021-04-08

8.  Partisanship, health behavior, and policy attitudes in the early stages of the COVID-19 pandemic.

Authors:  Shana Kushner Gadarian; Sara Wallace Goodman; Thomas B Pepinsky
Journal:  PLoS One       Date:  2021-04-07       Impact factor: 3.240

9.  Beyond political affiliation: an examination of the relationships between social factors and perceptions of and responses to COVID-19.

Authors:  Berkeley Franz; Lindsay Y Dhanani
Journal:  J Behav Med       Date:  2021-04-20

10.  Elusive consensus: Polarization in elite communication on the COVID-19 pandemic.

Authors:  Jon Green; Jared Edgerton; Daniel Naftel; Kelsey Shoub; Skyler J Cranmer
Journal:  Sci Adv       Date:  2020-07-10       Impact factor: 14.136


1.  Infection preventive behaviors and its association with perceived threat and perceived social factors during the COVID-19 pandemic in South Korea: 2020 community health survey.

Authors:  Woo In Hyun; Yoon Hee Son; Sun Ok Jung
Journal:  BMC Public Health       Date:  2022-07-19       Impact factor: 4.135

