Roberto Stefan Foa1, Mark Fabian2, Sam Gilbert1. 1. Department of Political Science and International Studies, Bennett Institute for Public Policy, University of Cambridge, Cambridge, United Kingdom. 2. Institute for Social Change, University of Tasmania, Hobart, Australia.
Abstract
We investigate how subjective well-being varied over the course of the global COVID-19 pandemic, with special attention to periods of lockdown. We use weekly data from YouGov's Great Britain Mood Tracker Poll, and daily reports from Google Trends, covering the entire period from six months before until eighteen months after the global spread of COVID-19. Descriptive trends and time-series models suggest that negative mood associated with the imposition of lockdowns returned to baseline within 1–3 weeks of lockdown implementation, whereas pandemic intensity, measured by the rate of fatalities from COVID-19 infection, was persistently associated with depressed affect. The results support the hypothesis that country-specific pandemic severity was the major contributor to increases in negative affect observed during the COVID-19 pandemic, and that lockdowns likely ameliorated rather than exacerbated this effect.
The dramatic and widespread impacts of the COVID-19 pandemic make it imperative that we understand the efficacy of policy responses to it. Among the most prominent such policies are ‘lockdowns’: mandated or voluntary stay-at-home and shelter-in-place orders that promote social distancing and reduce the spread of the virus. The evidence to date supports the view that lockdowns were good for physical health in that they reduced excess mortality associated with the virus [1, 2]. However, the psychological effects of lockdowns remain unclear.

One of the most prominent such effects is upon subjective well-being (SWB), typically defined as a combination of experienced mood and evaluated life satisfaction, as well as feelings of meaning and purpose [3, 4]. Lockdowns could be expected to impact SWB negatively by, for example, reducing social interaction [5], increasing the burden of child care [6], or exacerbating stress and boredom [7]. Conversely, lockdowns could also improve subjective well-being by, among other things, eliminating the necessity for long commutes to work [8], allowing more time to socialise with close family members [9], and alleviating anxieties caused by the spread of COVID-19.

The majority of empirical studies that examine SWB under lockdown compare the results of surveys conducted before the global coronavirus pandemic with results from surveys fielded after lockdowns had been introduced. They typically find a deterioration in SWB and/or mental health between the two surveys. Yet this leaves the cause of the observed decline ambiguous. It could be that the imposition of lockdown measures had a negative effect on SWB.
However, it could also be that the pandemic itself depressed well-being, for example as a result of fear or ill-health among vulnerable populations, and that lockdown policies had an attenuating role.

This ambiguity is frustrating because, from a policy perspective, we want to understand whether lockdowns in response to pandemic outbreaks worsen or improve citizens’ overall sense of well-being. Numerous commentators, including prominent politicians in the United States (US) and the United Kingdom (UK), have argued that lockdowns should be curtailed or ended owing to their negative impacts on SWB and mental health. Yet if the negative effects observed are principally driven by fear of the pandemic, bereavement, or the lasting symptoms caused by COVID-19 infection, then ending lockdowns prematurely may have the opposite of the intended outcome.

In this study, we shed light on this question using high-frequency observational data covering the entire duration of the pandemic, including multiple separate coronavirus waves, as well as government response measures to them. Specifically, we use two years of weekly survey data from YouGov’s Great Britain mood tracker, together with two years of daily search data from Google Trends for six countries, to provide insight into whether lockdowns in response to pandemic outbreaks can be expected to worsen or improve SWB. To date, ours is the first study to provide a comprehensive overview of the 2020–21 period, across multiple waves of infection and policy response. Our data include both cases where lockdown measures were introduced yet prevented a widespread epidemic via community transmission (such as New Zealand and Australia in the spring of 2020), and instances where lockdowns were eased according to original schedules despite the onset of a new coronavirus wave (such as the United States in the summer of 2020, and the United Kingdom in the summer of 2021).
While identification of the independent causal effect of pandemics and lockdowns upon SWB is necessarily frustrated by endogeneity, the existence of cases where pandemic waves and lockdowns occurred to some extent separately from one another offers an improved basis for prediction over single-country studies using individual pre- and post-surveys as evidence.

Observation of the descriptive trends in affect reveals pronounced structural breaks in a negative direction following pandemic outbreak, and in a positive direction following the imposition of lockdowns. These observations are mirrored by the results of statistical modelling, which show a consistent negative association between affect and pandemic severity, and a strong and steady recovery in affect after the first few days of lockdown. This association between lockdowns and affect is robust to controls for hedonic adaptation [10] and progress in containing the virus outbreak. Following the imposition of lockdown we also observe an especially strong recovery trend among the most vulnerable population (those aged over 65), which suggests that reduced ill-health and anxiety among such groups may be a plausible explanation. While SWB is not identical to mental health, the two concepts are closely related statistically [11], suggesting that we should predict a worsening of mental health during pandemic outbreaks, and improvements following the implementation of lockdowns in response. Together with other recent studies, showing for example a decline in suicide rates across a large sample of countries during the pandemic [12], our results provide a valuable additional input into contemporary and future policy debates over when to ease lockdown restrictions in pandemics for the sake of SWB.
Literature review
Studies published around the time lockdowns were initially implemented in Western countries raised concerns about the possibility of negative mental health effects related to SWB, including loneliness, depression, and suicide [13, 14]. There are several reasons why lockdowns could have such negative impacts. First, being quarantined reduces social interaction, an important correlate of SWB [5]. Secondly, the dramatic nature of lockdown policies could further exacerbate stresses and anxieties concerning the threat posed by the pandemic [15]. In addition, similar feelings could be fuelled by the challenges associated with balancing work and home life in lockdown conditions, especially in small and/or crowded households [6, 16]. With many schools closed, parents were burdened with the duty of home schooling, while several studies note an uptick in reports of domestic violence [17].

On the other hand, lockdowns are a measure taken in response to a pre-existing threat: namely, an ongoing or impending pandemic. As of the latest count, 0.8% of the entire elderly (75 and above) population of the United States died while diagnosed with COVID-19 during 2020–21, with a case fatality ratio of 17% [18]. Against this background, lockdowns could have a positive effect on well-being insofar as they provide a forceful policy response to the pandemic that enhances people’s sense of security [19]. If lockdowns bring down the prevalence of the virus faster than voluntary self-isolation, they might also reduce the time individuals need to self-isolate and the associated negative psychological effects. In addition, many lockdowns were introduced with substantial economic support programs. In the UK, for example, rental payments were delayed, debts were temporarily forgiven, welfare payments were increased, and the stringency of welfare conditions under the British Government’s universal credit scheme was relaxed.
Economic stress can lead to mental distress, and economic security is a well-established source of SWB [20]. As such, lockdowns and their associated economic support may boost SWB, especially among groups that are typically under economic and mental strain.

Some empirical studies of the effects of lockdown appear to support the view that lockdowns were bad for well-being [21]. Studies using the United Kingdom Household Longitudinal Study, which is based on a probability sample, found that the prevalence of clinically significant levels of mental distress, measured using the General Health Questionnaire (GHQ-12), had increased by around 8 percentage points one month into the UK lockdown compared to previous years [22, 23]. Similar results were found in comparable studies from New Zealand [19] and the United States [24, 25]. Early studies from China also found modest declines in SWB and worsening psychological distress [26–28]. However, other empirical studies have found mixed effects varying by aspect of SWB and mental health, and heterogeneous effects by demographic characteristics [29]. One analysis using data from the COVID Social Study in the UK, which commenced after lockdown was in place, found that anxiety and depression did not worsen during lockdown [30]. Another study using the same data set found that lockdowns exacerbated loneliness among the already lonely but reduced it among the least lonely [31]. Similar, though milder, effects on loneliness were observed in a separate study from the United States [32]. A French study found that SWB, operationalised using questions about whether respondents felt nervous, low, relaxed, sad, or happy, improved during lockdown, except among Parisians [33].

These studies are not, however, able to provide insights into the differential effects of the pandemic and subsequent lockdowns as separate albeit related events.
They typically rely on measures of SWB and/or mental health taken well before the onset of the pandemic, and then follow-up surveys administered after lockdowns were introduced. Those that rely on data collected after lockdowns began have no baseline against which to measure changes in SWB and mental health, and must instead rely on people’s own retrospective assessments. Meanwhile, longitudinal studies that do possess a baseline typically do not have observations in the period between the advent of the pandemic and the introduction of lockdowns, and hence estimate only their joint impact. Yet a deterioration observed in the second period could be caused by pandemic onset and then be further exacerbated, or potentially ameliorated, by lockdowns. It is difficult if not impossible to separate the causal effects of the pandemic from those of lockdowns analytically, but precisely because the latter are introduced as a response to the former it is important that we do not lump these two events together as a single phenomenon. If SWB tends to decline with pandemic onset and improve when lockdowns are introduced in response, then avoiding lockdowns or ending them prematurely because we are worried about their psychological effects may have the opposite of the intended outcome.

To our knowledge, there are only two papers on the impacts of lockdowns on SWB with data capable of at least temporally differentiating these two factors. The first is Zacher and Rudolph (2020) [34], who use a nationally representative, longitudinal sample of around 1,000 employed Germans surveyed on four separate occasions: December 2019, when the first COVID-19 cases were reported in China; March 2020, around the time of the first COVID-19 death in Germany; and then again in April and May 2020, during the initial months of the first national lockdown.
Respondents were asked about their life satisfaction on a scale from 1–7, and about a host of affective states drawn from the short-form Positive and Negative Affect Schedule [35]. The positive affects were: inspired, alert, excited, enthusiastic, and determined; the negative affects were: afraid, upset, nervous, scared, and distressed. Using growth curve analysis, the authors find systematic declines (i.e. worsening) in life satisfaction and positive affect associated with the imposition of lockdown, but also declines (i.e. improvements) in negative affect. A second paper capable of providing some insights into the differential effects of the pandemic and lockdowns is Daly and Robinson (2021) [36]. They use a representative sample of US adults surveyed fortnightly from March to June 2020. They find an increase in psychological distress in March through April, but then a recovery. While neither conclusive nor causal, these results would incline us to predict a deterioration in SWB at pandemic onset, and then a moderation or improvement in SWB following the imposition of lockdown.

Our analysis has notable strengths and weaknesses relative to these papers. We contribute insights from the experience of lockdown in the UK using weekly data from YouGov, and daily data from six English-speaking countries using Google Trends. We also have a larger sample, higher-frequency data, and a longer time series, covering the entire period from July 2019 to June 2021. These allow us to estimate time-series and multi-level models to complement Zacher and Rudolph’s growth-curve analysis. However, our sample is cross-sectional rather than longitudinal, which means we can only assess changes in SWB in the aggregate, rather than at the individual level. Our analysis also relies overwhelmingly on mood variables, rather than a more complete set of SWB questions.
Data and methods
Our analysis utilises weekly data from YouGov’s Great Britain Mood Tracker poll and daily reports from Google Trends. For the United Kingdom, our sample covers both the 6 months before the pandemic as well as its first 18 months, which includes two major lockdown periods (April to June 2020 and January to April 2021), one briefer ‘mini-lockdown’ (November 2020), and several coronavirus waves (the initial wave in the spring of 2020, the ‘alpha’ variant waves in October 2020 and January 2021, and the spread of the ‘delta’ variant in summer 2021). For the broader global sample of countries using daily data from Google Trends, our sample period includes instances where lockdowns occurred without uncontrolled community transmission leading to a full epidemic (New Zealand and Australia in 2020), and also coronavirus waves that occurred in a context of reduced or continued easing of lockdown rules (in the United States in the summer of 2020, and across several countries in 2021).
The Great Britain weekly mood tracker survey
YouGov is one of the world’s most reputable polling and market research companies, and the source of the largest cross-country COVID-19 global tracking survey currently in use among public health researchers [37]. From June 2019 to date, they have also surveyed the feelings and well-being of more than 200,000 respondents across England, Scotland and Wales, in weekly samples of between 1,890 and 2,071 individuals. Respondents are drawn from a larger panel of over 1 million British participants recruited by YouGov since 2000, representing more than 1.5% of the current UK population, from which individuals are selected for each cross-sectional poll so as to be representative by age, gender, social class and education. Sampling is continuously assessed for reliability and accuracy based on disaggregated census returns and predictive accuracy in national elections. As of December 2020, when the latest individual-level dataset was provided to us by YouGov, a total of 154,053 respondents had completed the Mood Tracker survey, with additional surveys continuing to be conducted on a weekly basis. Individuals were asked to complete a shortened variant of the Profile of Mood States (POMS) battery [38]. This asks whether participants had experienced any of a list of positive and negative mood states during the past week: happiness, sadness, apathy, energy, inspiration, stress, optimism, boredom, contentment, loneliness, and fear. In addition, a total of 13,954 respondents from within these surveys also completed a variant of the 11-point Cantril Scale, which reports life satisfaction on a 0 to 10 scale, with 0 being the worst possible level and 10 the best possible. Unfortunately, these life satisfaction responses are all from April 2020, but we can use them to predict life satisfaction for the whole period over which we have mood data (see below).
Google Trends
To validate the YouGov Great Britain weekly mood tracker results and facilitate cross-country comparisons, data was collected from Google Trends, which enables the relative popularity of Google searches to be analysed. Google Trends allows for a comparison of both search queries and ‘topics’ (clusters of related queries), and has previously been applied to research questions in the fields of public health [39, 40], economics [41–43], and political science [44], among others. Data for Google Trends topics was acquired for six English-speaking countries during the period from 30 June 2019 to 21 June 2020, corresponding to matching affective states in the YouGov weekly mood tracker: stress (‘psychological stress’), boredom, frustration, sadness, loneliness, feeling scared (‘fear’), apathy, happiness, contentment, energy, inspiration (‘artistic inspiration’), and optimism.
Descriptive statistics and trends
We begin our analysis with an overview of how the prevalence of specific mood states changed in the UK during different stages of the COVID-19 pandemic. Positive affect states–happiness, energy, inspiration, optimism, and contentment–show a very similar pattern (Fig 1). While there is clear stochasticity in the pre-pandemic trends, levels were relatively stable before the crisis. They then fell sharply during the virus outbreak in March before recovering following the stay-at-home order.
Mean scores by week; rolling averages. The start of the pandemic is indicated by the dashed lines, while lockdown periods are indicated by the shaded portions on charts. Source: YouGov.
Negative mood states, in contrast, show more heterogeneous trends. In the period of the pandemic outbreak from 5 March to 26 March 2020, feelings of fear, stress, sadness, and frustration all rose, presumably as individuals became attuned to the risks facing their health and livelihoods. However, there were also statistically significant falls in apathy (95% CI: -8.16% to -4.52%) and loneliness (95% CI: -5.92% to -2.05%). The first month of lockdown brought substantial falls (i.e. improvements) in fear and stress. Indeed, stress levels after one month of lockdown reached their lowest levels of the year. In addition, sadness also fell after reaching a peak during the first week of lockdown. These trends speak to the mortality risks associated with the virus, and to the lockdowns having a calming effect through their reduction of that risk. However, feelings of loneliness, apathy, frustration and boredom spiked higher, and while boredom, sadness, and loneliness fell back again in the second month, perhaps as people adapted to living at home, frustration continued upwards. This suggests that while we should expect lockdowns to coincide with an amelioration in moods that worsened during the pandemic outbreak, some mood states will deteriorate with lockdown. Taking all negative affect items together, negative affect rose sharply with the outbreak of the pandemic, and then continued to rise, albeit much more slowly, after the imposition of lockdown. Immediately prior to the pandemic (February 2020), the average negative mood state response among respondents (across all negative mood states) was 23.8%. This rose to 27.6% in March and 30.1% in April 2020.
The pre/post-pandemic increase is statistically significant at the p < 0.001 level.

At face value, these trends suggest an overall positive correlation between lockdown and mood, with positive affect recovering markedly during lockdown and increases in negative affect decelerating. However, due to the heterogeneity in trends across mood states, one can arrive at a more pessimistic conclusion depending on which mood states one regards as relatively important. To allow respondents to determine this weighting over mood states, we developed a summary index of ‘affective life satisfaction’ (ALS) that estimates the independent association of each mood state with reported life satisfaction. We did this by regressing the YouGov life satisfaction data from April 2020 on the individual mood states reported in the modified POMS question battery, and used this model to predict life satisfaction for all weeks for which mood data was available, thereby imputing life satisfaction in the manner of a wage equation [45]. This method weights each mood state by its contribution to reported life satisfaction, which is considered an effective global measure of SWB [3, 4]. All independent effects had the expected polarities, and coefficients for the imputation models are shown in Table A.1 in S1 Appendix. The largest effect magnitude for predicting life satisfaction was the mood state response for feeling ‘happy’, which accounted for 24% of the total variance in Cantril scale life satisfaction that could be explained by the mood state indicators. Feelings of loneliness accounted for a further 13% of explained variation, followed by sadness (13%), contentment (11%), stress (9%), optimism (8%), apathy (7%), fear (4%), frustration (4%), energy (4%), boredom (3%), and inspiration (2%).

The ALS index estimates the portion of life satisfaction that is due to respondents’ positive and negative affective states.
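As an illustration of this imputation step, the sketch below fits an OLS regression of Cantril-scale life satisfaction on binary mood indicators, then uses the fitted coefficients to predict an affective life satisfaction score for any mood profile. All data here is simulated: the mood list is abbreviated, and the coefficients and sample size are illustrative assumptions, not the YouGov estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 0/1 indicators for whether each mood state was reported
# in the past week (the real battery has twelve states).
moods = ["happy", "sad", "stressed", "lonely", "content", "optimistic"]
X = rng.integers(0, 2, size=(500, len(moods))).astype(float)

# Simulated 0-10 Cantril-scale responses for the (hypothetical) subsample
# that also answered the life satisfaction question.
beta_true = np.array([1.2, -0.9, -0.6, -0.8, 0.7, 0.5])
y = 5.0 + X @ beta_true + rng.normal(0.0, 1.0, size=500)

# OLS of life satisfaction on the mood indicators (with intercept).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted model then imputes life satisfaction for every week with
# mood data, in the manner of a wage equation.
def affective_life_satisfaction(mood_row):
    return coef[0] + mood_row @ coef[1:]

print(affective_life_satisfaction(np.array([1.0, 0, 0, 0, 1, 1])))
```

In the paper the weights come from the April 2020 subsample; here the ‘happy’ indicator is deliberately given the largest positive weight, mirroring the reported variance decomposition.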
SWB consists of both ‘experienced’ well-being, which is made up of affective states, and a cognitive component, typically referred to as ‘evaluative’ well-being [3, 4]. The ALS only directly captures the former. Nonetheless, it provides a reasonably close empirical approximation of overall life satisfaction: individual mood states could be used reliably to predict Cantril Scale life satisfaction at the individual respondent level (13,954 observations; R = 0.57), by sociodemographic group (48 observations, R = 0.88; see Fig A.2 in S1 Appendix), and almost perfectly in ALS-response clustered comparisons (63 observations, R = 0.99; see Fig A.1 in S1 Appendix). The cluster comparison involved organising the 13,954 individuals who answered both the profile of mood states battery and the Cantril scale (0–10) life satisfaction question into 63 groups using their scores on the affective life satisfaction measure rounded to one decimal place. Values ranged from 2.4 (the lowest cluster) to 8.6 (the highest cluster). The mean surveyed Cantril scale life satisfaction response for each group correlated almost perfectly (R = 0.99, R² = 0.97) with their mean affective life satisfaction scores.

Fig 2 shows the change in average affective life satisfaction in the UK from June 2019 to July 2021. This gives us a sense of the overall affect of British residents over the pandemic and the two major lockdown periods that occurred from March to July 2020 and from January to April 2021. A large and statistically significant drop in affect occurred before the implementation of lockdown measures, during the period from Thursday 5 March, when the first diagnosed COVID-19 death in the United Kingdom occurred, to Thursday 26 March, when lockdown measures began.
The low point for affect was recorded only three days after the announcement of the ‘stay-at-home’ order, on the exact day that police enforcement measures came into effect; thereafter affective life satisfaction rose steadily. A similar gradual decline in affect occurred concurrent with the spread of the Alpha variant from September 2020 to January 2021, with affect reaching another low on the eve of the announcement of the second major lockdown at the start of 2021, after which it recovered steadily, returning to the pre-pandemic baseline by April 2021.
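The clustered comparison used to validate the ALS index (grouping respondents by ALS rounded to one decimal place, then correlating cluster means with surveyed Cantril scores) can be sketched as follows, again with simulated rather than YouGov data; the noise level is an assumption chosen only to mimic the shape of the exercise.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated per-respondent data: predicted affective life satisfaction
# (ALS) and a noisy surveyed Cantril response (0-10).
n = 13954
als = rng.uniform(2.4, 8.6, n)
cantril = np.clip(als + rng.normal(0.0, 1.5, n), 0.0, 10.0)

# Cluster respondents by ALS rounded to one decimal place, then compare
# each cluster's mean ALS with its mean surveyed life satisfaction.
clusters = np.round(als, 1)
keys = np.unique(clusters)
mean_als = np.array([als[clusters == k].mean() for k in keys])
mean_cantril = np.array([cantril[clusters == k].mean() for k in keys])

# Averaging within clusters washes out individual-level noise, which is
# why the cluster-level R far exceeds the respondent-level R.
r = np.corrcoef(mean_als, mean_cantril)[0, 1]
print(f"{len(keys)} clusters, cluster-level R = {r:.2f}")
```

This illustrates why a respondent-level correlation of around 0.57 can coexist with a near-perfect cluster-level correlation.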
Fig 2
Raw series trend: Affective life satisfaction, June 2019 to July 2021.
Mean scores by week, with 90% confidence intervals. Source: Affective life satisfaction index calculated from YouGov Great Britain mood tracker survey; COVID-19 data from Johns Hopkins University (2021).
Cross-country comparisons with search data
The geographic scope of the YouGov weekly mood tracker is limited to a single country. To enable cross-country comparisons, in June 2020 we supplemented British survey data from YouGov with Google Trends data on search-based equivalents of the affect measures for a wider range of cases during the initial months of the pandemic [46]. In this article we are able to extend this methodology to cover the entire two-year period from July 2019 to June 2021, as well as to confirm how well the original methodology has predicted subsequent YouGov survey observations. To determine how effectively changes in affect as measured in the YouGov data were proxied by Google Trends topics, Pearson’s R correlations were calculated for each mood state measured and corresponding Google Trends topic during the 50-week period under observation. These results are shown in Table 1. With the exception of ‘loneliness’, Google Trends topics were found to be a reasonable proxy for negative moods, but a poor proxy for positive moods.
Table 1
Mapping of YouGov mood states to Google trends topics.
| YouGov Mood State | Corresponding Google Trends Topic | R value | Accepted as proxy? |
|---|---|---|---|
| Negative Affect | | | |
| Stressed | Psychological Stress | 0.46 | Yes |
| Bored | Boredom | 0.85 | Yes |
| Frustrated | Frustration | 0.65 | Yes |
| Sad | Sadness | 0.55 | Yes |
| Lonely | Loneliness | 0.01 | No |
| Scared | Fear | 0.49 | Yes |
| Apathetic | Apathy | 0.44 | Yes |
| Positive Affect | | | |
| Happy | Happiness | -0.05 | No |
| Content | Contentment | -0.41 | No |
| Energetic | Energy | 0.19 | No |
| Inspired | Artistic Inspiration | -0.08 | No |
| Optimistic | Optimism | -0.32 | No |
Notes: R-values calculated for the 50 shared weekly affective state observations in both the YouGov Mood Tracker survey and weekly Google search data.
In order to confirm the validity of the data, ‘Related Queries’ were also qualitatively reviewed to check for the extent of false positives, i.e. search queries which are lexically related but do not imply the corresponding mood state. Google Trends describes the concept of Related Queries as follows: ‘Users searching for your term also searched for these queries’. False positives partially explained the weakness of Google Trends topics as a proxy for positive mood states. For example, the topic ‘energy’ contained queries relating to gas and electricity suppliers, while the topic ‘happiness’ included queries relating to ‘happy birthday’, possibly reflecting a UK government public health campaign encouraging citizens to wash their hands for as long as it takes to sing ‘Happy Birthday’ twice. Of the negative moods, only the topic ‘apathy’ contained obvious false positives, though not sufficient to eliminate covariance between weekly apathy-related searches and surveyed apathy levels in the YouGov data. Related Queries for apathy included esoteric searches such as ‘indifferent crossword clue’, but also substantive queries largely related to mental self-help and diagnosis.

As Google Trends topics were a poor proxy for positive mood states, we developed our cross-country index using negative mood states only. To facilitate this analysis, we aggregate individual negative mood states in the YouGov data into a ‘negative affect index’ and the corresponding mood topics in the Google data into a ‘negative affect search index’. The negative affect index takes average mentions from the list of possible negative states–sadness, apathy, frustration, stress, boredom, loneliness, and fear–making it analogous to the negative affect component of the widely used Positive and Negative Affect Schedule (PANAS) [47].
The original index was calculated for a 50-week period from July 2019 to June 2020, and has subsequently been extended to June 2021, allowing us to compare ‘in-sample’ observations (those available at the time of index construction) with out-of-sample performance (observations made in the year since the index was designed). To construct the ‘negative affect search index’, we weight the Google Trends topics by their squared correlation coefficient (R²) with their matching YouGov survey mood state. At the time of the index construction in June 2020, the search-based negative affect index correlated highly (R = 0.92, R² = 0.84) with the sum of negative mood states reported in the weekly polling data series, a correlation that has held at a similar level during the ‘out-of-sample’ period in the following year (Fig 3).
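A sketch of this weighting scheme, using the R values reported in Table 1 for the accepted negative-mood topics but simulated weekly topic series; the shared latent signal and noise levels are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated weekly Google Trends series (relative search interest) for the
# negative-mood topics accepted in Table 1; a shared latent signal stands
# in for the common 'national mood' component.
weeks = 104
latent = np.cumsum(rng.normal(0.0, 1.0, weeks))
r_with_survey = {  # R values with the matching YouGov mood state (Table 1)
    "psychological stress": 0.46, "boredom": 0.85, "frustration": 0.65,
    "sadness": 0.55, "fear": 0.49, "apathy": 0.44,
}
topics = {name: latent + rng.normal(0.0, 1.5, weeks) for name in r_with_survey}

# Weight each topic by its squared correlation (R^2) with the matching
# survey mood state, then average into a single search index.
weights = {name: r ** 2 for name, r in r_with_survey.items()}
total = sum(weights.values())
index = sum(w * topics[name] for name, w in weights.items()) / total

# Standardise (mean 0, sd 1), as in Fig 3, for comparison with the
# survey-based negative affect index.
index_z = (index - index.mean()) / index.std()
```

The R²-weighting means that well-validated topics such as ‘boredom’ dominate the index, while weakly correlated topics contribute little.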
Fig 3
Comparison of survey and Google trend series, June 2019 to June 2020.
The shaded (left) portion of the chart shows the correlation between negative affect mentions in the YouGov survey versus the negative affect search index based on Google data, at the time of the index construction (June 2020). The unshaded (right) portion of the chart shows how the YouGov weekly survey data and the Google search index have continued to covary in sync with one another during the subsequent year. The Negative Affect Index is based on YouGov weekly polling data, for a representative sample of circa 2,000 respondents across England, Scotland and Wales (216,441 total). It comprises the sum of all negative affect states reported by respondents. The Negative Affect Search Index is based on Google Trends data for the United Kingdom, and includes corresponding matches for stress (‘psychological stress’), boredom, sadness, feeling scared (‘fear’) and apathy, weighted by their R2 correlation with their individual matching terms. A two-week smoothing function has been applied to the weekly data for both measures. Indexes standardised (mean 0, standard deviation 1) for comparison purposes.
Having constructed and validated a negative affect search index for the UK, we then compare UK trends with those in other parts of the world. These comparisons are shown in Fig 4, which displays trends in the negative affect search index in the UK together with a broader range of English-speaking countries: Ireland, Canada, the United States, Australia and New Zealand.
Fig 4
Negative affect, lockdowns and pandemic intensity: Cross-country comparisons, January 2020 to July 2021.
(a) United Kingdom (b) Ireland (c) United States (d) Canada (e) Australia (f) New Zealand. Cross-country comparisons on the negative affect Google Trends index. All countries set relative to their pre-pandemic baseline period (15 January to 15 February). Shaded portions indicate periods of lockdown.
Fig 4 shows that the trend observed in the British weekly survey data, of a sharp decline in affect before the lockdown as the COVID-19 pandemic accelerated, followed by a steady recovery after lockdown measures were put in place, is replicated across a wide variety of English-speaking countries. All cases experienced a spike in negative affect as the pandemic spread locally, and this appears synchronous with the country-specific timing of the outbreak. There is then an improvement in affect synchronous with the implementation of lockdowns, though this is more muted in second and subsequent lockdowns, especially in Canada and Ireland. The only countries to avoid a renewed increase in negative affect during the period from late 2020 to mid-2021 were Australia and New Zealand, the two countries whose lockdowns succeeded in maintaining a zero-COVID policy until their August 2021 wave (not shown). These countries also saw a much smaller spike in negative affect during their 2020 lockdowns than countries that experienced large-scale national epidemics. For example, in Australia negative affect rose to a peak of 2.5 times the pre-pandemic baseline during the first lockdown in April 2020, whereas in the United States the negative affect index reached 3.3 times, and in the United Kingdom 4.3 times, the pre-pandemic baseline.
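The baseline scaling used in Fig 4 is simple to state precisely: each country's series is expressed as a multiple of its own mean over the pre-pandemic window. A minimal sketch, with invented weekly values standing in for the UK index:

```python
# Each country's negative affect series is expressed relative to the mean of
# its own pre-pandemic baseline window (15 January to 15 February 2020).
# The series below is hypothetical, chosen only to illustrate the scaling.
def relative_to_baseline(series, baseline_slice):
    """Divide every observation by the mean of the baseline window."""
    base = sum(series[baseline_slice]) / len(series[baseline_slice])
    return [x / base for x in series]

uk = [1.0, 1.1, 0.9, 1.0, 2.8, 4.3, 3.1, 2.0]   # invented weekly index
uk_rel = relative_to_baseline(uk, slice(0, 4))  # weeks 0-3 = baseline window
peak = max(uk_rel)                              # peak as a multiple of baseline
```

Because every country is divided by its own baseline, cross-country comparisons of peak values (such as the 2.5x, 3.3x and 4.3x figures above) are comparisons of proportional change, not of absolute search volumes.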
Time-series cross-sectional models
In this section, we augment the clear descriptive trends outlined above with estimates of the association between lockdowns and mood from time-series statistical models using the Google Trends data. These allow us to control for some confounding factors. In particular, time-series analysis allows us to address three issues. First, we must establish that the timing of the negative affect spike (and its subsequent decline) across countries is associated with the country-specific timing of coronavirus outbreaks. Second, we must demonstrate that the recovery of mood during lockdown is not wholly explained by the subsidence of the pandemic, to which lockdown was only one potential contributor alongside behavioural change and rising population immunity (though early evidence suggests immunity is mild at best [48]). Third, we need to show that the return to baseline during lockdown was more than a simple hedonic adaptation effect [10], that is, 'mean reversion' to set-point levels of good mood, as this too would imply that mood recovery was possible in the absence of lockdown measures. Such adaptation is a well-established phenomenon in the SWB literature [49].

We therefore estimate time-series models that control for the severity of the pandemic over time among countries for which comparative negative affect estimates can be calculated, as well as for hedonic adaptation. Data on the severity of the COVID-19 pandemic are taken from the Johns Hopkins University COVID-19 Tracking Project [50]. As there is wide variation between countries and over time in the quality and effectiveness of COVID-19 testing, we use data on COVID-19 fatalities, which is less susceptible to measurement error. We follow a relatively new approach to estimating hedonic adaptation in the happiness economics literature, which is to include lag(s) of the dependent variable, in our case the negative affect search index [51].
Most studies of adaptation instead use lags of the independent variable(s) of interest, typically a shock such as divorce, to which the researchers are trying to estimate adaptation [52]. This is inappropriate in our case because the independent variables of interest, namely pandemic severity and lockdown, are ongoing and vary over the period in question, rather than being one-off events. We thus focus on whether the trends we observe can be explained simply by the general tendency of mood to adapt back to a baseline level over time [49]. The coefficient on the lagged negative affect search index can be interpreted within a difference-equation framework. If it is between 0 and 1, it implies that some portion of the past value of negative affect is carried over into the present period. The closer the coefficient is to 1, the longer this effect takes to decay, implying slower adaptation.

Models are estimated in the form:
NA_tc = β₀ + β₁NA_(t−1)c + β₂L_tc + β₃F_tc + β₄ln(LD_tc) + β₅E_tc + β₆ln(ED_tc) + γ_c + δ_m + μ_tc

where NA_tc refers to the negative affect index in time t and country c; NA_(t−1)c to the one-week lagged negative affect index; L_tc to whether country c is in a lockdown period at time t; F_tc to the log number of daily fatalities per million in time t and country c; LD_tc to the cumulative number of days (t − l) since the onset of hard lockdown restrictions in country c; E_tc to whether country c is in a post-lockdown period at time t; ED_tc to the cumulative number of days (t − e) since the easing of lockdown restrictions on small businesses and retail, by country; and μ_tc is the error term. All models are estimated using robust standard errors clustered by country, so as to account for serial autocorrelation, and include both country fixed effects γ_c (not shown) and period fixed effects δ_m (by month of observation) to account for seasonal variation in subjective wellbeing [53].

We present the results of our analysis in Table 2. Models 1–3 are estimated using a longer sample period beginning in July 2019, well prior to the pandemic, whereas models 4–6 are estimated using a sample covering only the first eighteen months of the pandemic. Using two sampling windows in this way illuminates how the association between lockdown and mood differs statistically depending on whether it is assessed relative to a pandemic-free world (models 1–3) or to a world with a pandemic but no lockdown (models 4–6). Columns 2 and 5 introduce the lagged term that we use to estimate adaptation effects; columns 3 and 6 then add an interaction term for days under lockdown multiplied by new fatalities. Note that positive coefficients imply a worsening of negative affect, while negative coefficients imply an ameliorating association.
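The mechanics of estimating a specification of this kind can be illustrated with ordinary least squares on simulated data. This sketch is not the authors' estimation code: the data are simulated, the coefficient values are arbitrary, and the country/month fixed effects and clustered standard errors of the actual models are omitted for brevity.

```python
# OLS on simulated data mirroring the regressors of the specification above:
# lagged NA, lockdown dummy L, log fatalities F, log days under lockdown,
# post-lockdown dummy E, and log days since easing. Entirely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    np.ones(n),                        # constant
    rng.normal(size=n),                # NA_{t-1,c}: lagged negative affect
    rng.integers(0, 2, n),             # L: lockdown dummy
    rng.exponential(1.0, n),           # F: log daily fatalities per million
    np.log1p(rng.integers(0, 60, n)),  # ln LD: log days since lockdown start
    rng.integers(0, 2, n),             # E: post-lockdown dummy
    np.log1p(rng.integers(0, 60, n)),  # ln ED: log days since easing
])
beta_true = np.array([1.0, 0.2, 0.3, 0.17, -0.28, -0.1, -0.16])  # arbitrary
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficient estimates
```

In the paper's models the same design is augmented with country and month fixed effects, and inference uses country-clustered robust standard errors rather than the classical OLS variance.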
Table 2
Negative affect under lockdown: Time-series models.
Dependent variable: Negative Affect

Sample frame:                     Since July 2019                Since February 2020
                                  (i.e. pandemic-free period)    (i.e. post-pandemic but pre-lockdown)
                                   (1)      (2)      (3)          (4)      (5)      (6)
Daily COVID-19 fatalities
  per million, log                0.136†   0.110†   0.117        0.194*   0.167*   0.166†
                                 (0.060)  (0.052)  (0.064)      (0.067)  (0.058)  (0.070)
Lockdown period (0/1)             0.854*   0.694*   0.681*       0.339    0.261    0.261
                                 (0.226)  (0.170)  (0.191)      (0.244)  (0.198)  (0.205)
Days since lockdown start, log   -0.268*  -0.244*  -0.245*      -0.308*  -0.278*  -0.277*
                                 (0.078)  (0.066)  (0.062)      (0.086)  (0.070)  (0.069)
Post-lockdown period (0/1)        0.181    0.109    0.100       -0.365   -0.350   -0.349
                                 (0.291)  (0.230)  (0.246)      (0.245)  (0.203)  (0.206)
Days since lockdown eased, log   -0.196   -0.148   -0.149       -0.204†  -0.157†  -0.157†
                                 (0.106)  (0.082)  (0.086)      (0.092)  (0.070)  (0.072)
Negative affect index,
  lagged one week                   -      0.232**  0.232**        -      0.214*   0.214*
                                          (0.049)  (0.050)               (0.053)  (0.053)
Log days under lockdown ×
  log new fatalities (p.m.)         -        -      0.000          -        -      0.000
                                                   (0.001)                        (0.001)
Constant                          1.084*   0.923*   0.931*       1.887**  1.636**  1.638**
                                 (0.364)  (0.279)  (0.265)      (0.330)  (0.234)  (0.232)
Observations                      4817     4768     4768         3341     3334     3334
Adjusted R²                       0.218    0.260    0.260        0.255    0.290    0.289

Notes: All models use robust standard errors, clustered by country, together with fixed effects for country and month of survey to control for seasonal effects. Models are shown using two different sample frames: a) all observations in the dataset (since July 2019), and b) all observations since the onset of the pandemic (February 2020). †p<0.1; *p<0.05; **p<0.01; ***p<0.001.
The model coefficients suggest the following inferences. First, the results support the view that country-specific pandemic severity was a major contributor to the elevated levels of negative affect observed during the period in question. The coefficient for log new fatalities per million is large and significant in all but model 3, such that a moderate increase in pandemic severity from 0 to 0.15 daily fatalities per million (as occurred in New Zealand or Australia) would raise estimated negative affect from the 50th to the 57th percentile of the distribution (0.18 standard deviations), whereas a much larger increase towards 10.5 daily deaths per million (as occurred in the United States) raises negative affect from the 50th all the way up to the 90th percentile (calculations derived from Model 4). These estimates are robust to the inclusion of the one-week lagged dependent variable for negative affect (columns 2–3 and 5–6), suggesting that levels of pandemic intensity can explain changes in affect even over relatively short periods.

Second, lockdown is statistically associated with a reduction in negative affect. Establishing this requires a comparison of the results from models 1–3 and models 4–6. In models 1–3, the dummy variable for being in lockdown has a large, positive, and significant correlation with negative affect. However, looking at the results more broadly, this appears to be because lockdowns are introduced amid pandemic outbreaks rather than because lockdowns themselves are associated with worsening mood.
The coefficient for being in lockdown is not statistically significant if the sample frame is restricted to the pandemic period, as in models 4–6. This suggests that the result in models 1–3 is driven by comparison to a pandemic-free world. Lockdowns, however, are instituted in response to pandemic outbreaks, so a pandemic-free world is arguably an unreasonable comparison case for the effect of lockdown. When the more reasonable comparison case of a pandemic outbreak is used, lockdowns do not, broadly speaking, appear to have deleterious impacts. Indeed, the coefficient on days since lockdown began is large, negative, and statistically significant, suggesting that lockdowns reduce negative affect, as observed in our graphical analysis above. Fig 5 plots this ameliorating association between lockdowns and negative affect over time. While negative affect is above zero at the onset of lockdowns, this association attenuates to zero after a period of between 4 (Model 6) and 19 days (Model 1), after which negative affect falls below baseline. It is unclear why negative affect is elevated initially. It may simply be because deaths are spiking around this time, or it could be an anticipation effect: respondents might expect more deaths if the pandemic is severe enough to warrant a lockdown, and may also expect the experience of lockdown to be very unpleasant. As the reality of lockdown emerges, both in terms of its effect on death rates and its liveability, negative affect declines.
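A stylised version of this crossover calculation can be read directly off Table 2: with a positive lockdown dummy coefficient β_L and a negative coefficient β_LD on log days under lockdown, the net lockdown association reaches zero at exp(−β_L/β_LD) days. Note this back-of-envelope version holds all other covariates fixed and ignores the fixed effects, so it only loosely approximates the 4 to 19 day range reported above, which is derived from the full models.

```python
# Stylised zero-crossing of the net lockdown association, holding all other
# covariates fixed: solve beta_L + beta_LD * ln(days) = 0 for days.
# A simplification of the full model-based estimate, for intuition only.
import math

def crossover_days(beta_lockdown, beta_log_days):
    """Days under lockdown at which the net association crosses zero."""
    return math.exp(-beta_lockdown / beta_log_days)

model1 = crossover_days(0.854, -0.268)  # Model 1 coefficients from Table 2
model6 = crossover_days(0.261, -0.277)  # Model 6 coefficients from Table 2
```

The exponential form also means the estimated improvement is fastest in the first days of lockdown and flattens thereafter, consistent with the shape of Fig 5.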
Fig 5
Joint effect: Lockdown initiation and duration.
Controlling for natural mean reversion effects and the severity of the COVID-19 pandemic, negative affect is found to decline significantly during lockdown periods. Locally-estimated (loess) line of fit between points, with 95% confidence interval bound displayed. Regression coefficients used to estimate the component-plus-residual derived from Model 3.
Taken together, these results suggest that, if anything, we should expect negative affect to improve following the implementation of lockdowns in response to pandemics. The decline in SWB observed in the studies we reviewed earlier would then be driven by the residual effects of the pandemic, which lockdowns ameliorated rather than exacerbated. We observe no statistically significant relationship between the transition to a post-lockdown period and negative affect. However, we note that days since lockdown eased is associated with improvements in negative affect, and is statistically significant at α = 0.1 in models 4–6. This suggests that we should predict a negative association between lockdowns and affect in the absence of pandemic threat.

Finally, we find evidence of dynamics in negative affect that suggest adaptation, but this adaptation does not wholly explain the recovery of affect during lockdown. The coefficient on the one-week lagged term is significant but modest in size at around 0.2, implying that roughly four-fifths of the effect of present circumstances decays within a week. Even with these variables in the model, days since lockdown remains significant and negative, while pandemic severity remains significant and positive. This suggests that distinct forces are in operation, and that the rebound in negative affect observed at the onset of lockdowns in our graphical analysis is not only the product of adaptation.
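The difference-equation reading of the lagged term can be made concrete: a coefficient ρ on last week's negative affect implies that a one-off, one-unit shock still contributes ρ^k after k weeks. A minimal illustration using the lagged coefficient from Table 2:

```python
# Geometric decay implied by a lagged-dependent-variable coefficient rho:
# a one-unit shock contributes rho**k to negative affect k weeks later.
def remaining_after(rho, weeks):
    """Share of an initial one-unit shock still present after `weeks` lags."""
    return rho ** weeks

rho = 0.232                           # lagged NA coefficient, models 2-3
one_week = remaining_after(rho, 1)    # about 23% of the shock carries over
two_weeks = remaining_after(rho, 2)   # only a few percent remains
```

A coefficient closer to 1 would flatten this decay and imply much slower adaptation; the estimated value near 0.2 is why adaptation alone cannot account for the multi-week recovery pattern during lockdowns.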
Testing for group-specific effects: Multilevel models
The time series models in the previous section provide an overall picture of how affect varied in the population at large over the course of the pandemic and subsequent lockdown periods. This information is important to policymakers who are principally interested in the broad effects of the pandemic and lockdowns. However, these models obscure the heterogeneous effects of these events on various sub-groups. In our earlier study [46], we used the YouGov weekly mood data to estimate multilevel models with random slopes and intercepts by week of observation for key demographic groups in Great Britain to glean insights into these heterogeneous effects. We examined sub-groups by age, gender, ethnicity, socioeconomic status, and other life circumstances. As the updated individual-level dataset provided by YouGov (up until December 2020) contains a smaller set of demographic variables, here we focus specifically on a comparison of those aged over 65 and those aged 18–24, as this provides an important robustness check on our main results.

A priori, we might expect people aged 18–24 to be relatively more affected by lockdowns than by the pandemic itself, in contrast to those over 65. Younger cohorts would have faced the stress of distance learning and the deprivation of face-to-face socialisation, notably in indoor social venues, that resulted from government-imposed policy restrictions, while being less concerned by the personal health consequences of COVID-19, which has much higher mortality among the elderly. During the whole of 2020, only 124 British residents under the age of 30 were estimated to have died from COVID-19 infection, compared to over 60,000 among those aged 75 and above [54]. In contrast, elderly citizens would have been relatively more concerned about mortality risk and thus more relieved by the imposition of lockdowns.
A comparison of these sub-groups thus provides an opportunity to explore whether lockdowns or the pandemic have a relatively stronger association with negative affect. If we see an association between pandemic outbreak and worsening mood for both groups but no such association for lockdown, for example, this would further support the hypothesis that lockdowns ameliorate the negative impacts of pandemic outbreak. Furthermore, if we see our main results mirrored in the specific experiences of those over 65, who are no longer in the workforce, this suggests that our main results are not driven by furlough and similar economic supports made available to individuals of working age.

Multilevel models are commonly used in longitudinal analyses where period-specific events or processes may alter the relationships between individual attributes and outcomes of interest [55, 56]. Our data is highly appropriate for this sort of analysis. With around 2,000 observations drawn from a nationally representative sample by age, gender, social grade and region in each of 50 observation weeks, we have sufficient variation within and between weeks to enable relatively complex model specification with combinations of fixed and random effects.

We estimate multilevel models according to the standard specification:
SWB_ij = A_ij β₁ + X_0ij B_0j + ε_ij

where SWB_ij represents the score of subject i on the subjective well-being measure in period j; A_ij is a matrix of first-level independent variables including a constant term, for which time-invariant coefficients are provided by the vector β₁; X_0ij denotes the random-effects design matrix, consisting of ones in the first column (corresponding to the estimation of random intercepts) and second-level variables in the other columns; B_0j is the set of random slope coefficients for each time period j; and ε_ij is the error term.

Fig 6 shows both the sociotropic (or period) effect as well as the random slopes estimated for the elderly and youth cohorts, with statistical significance highlighted in white. The sociotropic effect is derived from the random intercept term for each week of the data and mirrors our main results above: affect declines during the spread of each pandemic wave, but recovers following the implementation of lockdowns in 2020 and 2021.
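The intuition behind week-specific random slopes can be conveyed with a crude analogue: for each survey week, a demographic group's "slope" is approximated by the gap between that group's mean negative affect and the full-sample weekly mean (the period effect). This is only an illustration; a proper fit would use a mixed-effects estimator (e.g. statsmodels' MixedLM), with shrinkage of the week-level estimates, and the records below are invented.

```python
# Crude analogue of week-specific random slopes: group mean minus the
# full-sample weekly mean (the period effect). Records are hypothetical.
from collections import defaultdict

records = [  # (week, age_group, negative_affect)
    (1, "65+", 0.9), (1, "65+", 1.1), (1, "18-24", 1.0), (1, "18-24", 1.0),
    (2, "65+", 1.8), (2, "65+", 2.0), (2, "18-24", 1.2), (2, "18-24", 1.0),
]

by_week = defaultdict(list)
by_week_group = defaultdict(list)
for week, group, na in records:
    by_week[week].append(na)
    by_week_group[(week, group)].append(na)

def group_slope(week, group):
    """Group mean minus the weekly period effect (full-sample mean)."""
    period = sum(by_week[week]) / len(by_week[week])
    cell = by_week_group[(week, group)]
    return sum(cell) / len(cell) - period
```

In the invented week 2 (a "pandemic wave" week), the 65+ group sits above the period effect while the 18–24 group sits below it, which is the kind of over-and-above-the-period-effect pattern the random slopes in Fig 6 capture.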
Fig 6
Multilevel model random effects for key demographics, by survey week: Youth (18–24) versus elderly (65+) respondents.
Random effect slopes for socio-demographic variables, clustered by week of survey. Includes rolling average slope over the two prior and succeeding weeks. Note that individual demographic effects are over and above the period effect, and hence indicate that while youth wellbeing fluctuated in line with the national average, an especially sharp drop occurred among the elderly. 90% bootstrap estimated confidence intervals. Periods with statistically significant positive or negative effects highlighted.
Among elderly individuals (those aged 65 and above), the random slope coefficients by survey week suggest an especially strong negative association between pandemic severity and affect in each pandemic wave, over and above the sociotropic effect common to all demographic groups. Elderly respondents exhibited levels of affect 0.02–0.04 points above the societal baseline before the onset of the pandemic, consistent with a longstanding literature finding that life satisfaction rises as individuals reach the latter stages of life [3, 4]. However, as the novel coronavirus spread globally during the initial months of 2020, the surplus affect of the elderly turned to a significant (-0.04, p < 0.001) deficit. This recovered immediately from the start of the first lockdown as the pandemic was steadily contained, and by August the affect of elderly UK respondents had returned to the societal baseline. It then declined again to a significant (-0.025, p < 0.01) deficit as the spread of the more infectious Alpha variant produced a second wave of infections in September and October. Notably, in both instances the affect deficit of elderly respondents peaked as the virus was spreading, then began to recover almost immediately following the implementation of lockdown measures. In contrast, the trend in affect for young (18–24) survey respondents is essentially flat, despite this demographic being, intuitively at least, the most perniciously affected by lockdowns.
This lends further credence to our hypothesis that the depressed subjective wellbeing observed by studies comparing pre-pandemic and post-lockdown survey responses is driven by the deleterious effects of the pandemic rather than those of lockdowns. If lockdowns were exacerbating negative mood, we would see a more pronounced decline among young people. Our results suggest that we should predict declines in the general population’s affect with pandemic outbreaks and improvements following lockdowns introduced in response to those outbreaks.
Discussion and limitations
We have alluded throughout our analysis to the difficulty of making causal inferences, given that pandemic waves and lockdown measures occur synchronously and endogenously. Time series models allow only for the inference of (predictive) Granger causality rather than strict causal inference. However, the sample space of our study does offer some noteworthy analytical leverage for sharpening our intuitions about future pandemics. During the first wave of the global coronavirus pandemic, the increase in negative affect in New Zealand, a country that implemented lockdown without widespread community transmission, never rose more than 1.9 standard deviations above the pre-pandemic mean. In Great Britain, by contrast, where a nationwide epidemic led to a significant number of COVID-19 fatalities, negative affect spiked to 4.3 standard deviations above baseline (p < 0.002). Similar patterns can be observed in Australia and the US, which had experiences comparable to New Zealand and the UK respectively, with affect worsening over the course of pandemic outbreaks but not over the course of lockdowns. Intuitively, this pattern suggests that the pandemic had a pronounced negative effect on affect, to which lockdowns were perceived to be an effective response, particularly when some of their negative side effects were mitigated through economic support measures. The nature of our method of analysing Google search data limits us to English-speaking nations. Extending our methodology to a wider sample of countries would allow for the identification of additional cases where lockdown policies and pandemic severity diverged, as would within-country analysis taking advantage of subnational policy variation between U.S.
states or constituent nations of the United Kingdom.

While we are unable to account for all mechanisms directly in our analysis, such as the burden of home-schooling, our results support the hypothesis that death rates are a major driver of trends in negative affect during conditions of pandemic and lockdown. Changes in death rates and affect broadly mirror each other in all the countries we analysed. This suggests that lockdowns work on mood by reducing deaths, as substantiated by studies of excess mortality during COVID-19 [1, 2]. An additional nuance provided by our analysis is that the mood effects were most pronounced among the elderly, who were not affected by economic supports but were most at risk from COVID-19. During the first month of the pandemic, the proportion of young people (18–24) who reported feeling ‘scared’ during the past week to YouGov rose by 10 percentage points, whereas among older respondents (65+) the figure was 19 percentage points. Lockdowns reduced deaths in this older demographic and, by association, reduced feelings of fear among this demographic and their loved ones.

There is at least one important confounding factor that we are unable to control for: the extent of socialisation during lockdown, both within and across households, potentially in ways that defied lockdown orders. People may have adapted their socialisation to lockdown conditions over time, such as by using video-call technology. This is consistent with the observed decline in feelings of loneliness after an initial spike at the start of lockdown. While we are unable to isolate the significance of these effects with our data, they do not alter our conclusions, as they are inherent to lockdowns. However, individuals may also have returned to their normal patterns of socialisation with people from other households as lockdown went on. In that case, the rebound in affect would be a function of disobeying lockdown.
This issue could be addressed using mobility data as a measure of the extent of voluntary isolation, as well as COVID-19 tracking surveys that ask questions about lockdown compliance.

A further issue is that our data is not longitudinal, in that the same individuals are not repeat-sampled across polling weeks, so our results might be biased by sample variation. For example, people heavily affected by care burdens during lockdown may not have responded to the survey. This is unlikely, as YouGov surveys are sampled to be nationally representative across survey weeks by age, gender, region and social grade, and the associated percentages are consequently broadly stable for the entire duration of the survey. Nonetheless, longitudinal studies would complement our analysis by further controlling for any potential selection bias across survey periods. Longitudinal studies that include niche sub-groups of the population, such as those with pre-existing mental health conditions, would be especially complementary to our study in assisting policymakers when designing lockdown policies. Our results speak most clearly to the impact of lockdowns on the affect of the population as a whole, but people with certain characteristics may have had more acute experiences.
Conclusion
Our results suggest that we should expect pandemic outbreaks to be associated with a worsening of affect across the population, especially among those most at risk. The more fatal and widespread the pandemic, the more pronounced this worsening of affect will be. Furthermore, we should expect lockdowns introduced in response to such life-threatening pandemics to be associated with an improvement in affect, at least in the medium term. While we do observe an increase in negative affect at the very beginning of lockdown, countries revert to baseline within three weeks at most, and thereafter see a net decrease. An intuitive explanation is the mitigating effect lockdowns have upon the direct health impact of infection and upon broader anxieties among vulnerable groups. Our results suggest that these trends are not entirely a function of adaptation to pandemic and lockdown conditions, nor can they be explained by furlough and other economic supports, as the most pronounced trends are evident among those over 65 years of age, who are out of the workforce.
When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.Please include the following items when submitting your revised manuscript:A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols . 
We look forward to receiving your revised manuscript.Kind regards,Eugenio ProtoAcademic EditorPLOS ONEAdditional Editor Comments:I have received two excellent reports, both reviewers found the paper very interesting and valuable, but both feel that the paper has to go through a substantial revision.The main point they make is very similar: the main result cannot be interpreted as it is now, namely as the causal effect of lockdown on wellbeing, as a separated one from the general effect of the pandemic.Furthermore, they both strongly recommend to tone-down the claim of causality. In the light of this the authors might also want to reconsider the title to some extent.I therefore recommend a revision, where the authors should address each single point raised by the reviewers.Journal Requirements:When submitting your revision, we need you to address these additional requirements.1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found athttps://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf andhttps://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf2. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.In your revised cover letter, please address the following prompts:a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). 
Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

3. Please ensure that you refer to Figure 2 in your text as, if accepted, production will need this reference to link the reader to the figure.

4. We note a previous version of your study was published by the Bennett Institute for Public Policy: https://www.bennettinstitute.cam.ac.uk/media/uploads/files/Happiness_under_Lockdown.pdf. Please kindly clarify the following points:

a. Please clarify if the Bennett Institute for Public Policy article was peer reviewed.

b. Please also clarify if the previously published article has been copyrighted. For your reference, PLOS ONE publishes all content under a CC BY 4.0 license (https://creativecommons.org/licenses/by/4.0/), which means that all material on our website is freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. If the figures or text have already been published and copyrighted, authors must provide proper attribution, referencing the source clearly, and obtain permissions if the content is copyrighted.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions
Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Report on "Subjective Well-Being during COVID-19: Separating the Effect of Lockdowns from the Pandemic"

This paper claims to separate the effect of the pandemic from lockdowns on subjective wellbeing using YouGov data for Great Britain in combination with Google Trends. According to the authors, the severity of the pandemic correlates negatively with subjective well-being (Table 2: positive adjusted "correlation" with the dependent variable NA (Negative Affect)) and the days under lockdown correlate positively with subjective wellbeing (Table 2: negative adjusted "correlation" with NA). The paper also shows that the relationship between lockdown and subjective wellbeing in the UK is heterogeneous (Figure 8): "the elderly, the affluent, and women living alone had especially negative experiences", while "underemployed men saw a marked increase in their SWB during the lockdown" (p. 29).

The paper contains an empirical analysis of predictors of wellbeing using time series data, and it can speak of "Granger causality" at most, not causality in a counterfactual sense (as the authors are well aware). I think the paper provides very valuable information, but the authors should reshape their presentation and discussion of findings (the key exhibits should be in the main text and the remaining ones in the appendix). As I argue below, I do not think it makes sense to talk about disentangling the effect of lockdowns from the effect of the pandemic (point 1). I also provide feedback on issues that need to be expanded and carefully revised (remaining points).

1. Causality vs. predictability

I have to say that I am totally baffled by statements such as "We are able to separate the effects of the pandemic from those of the lockdowns by utilising weekly data […]". The lockdown does not occur in a vacuum, but as an endogenous response to an "exogenous shock", the pandemic.
Without clear assumptions stated either graphically (e.g. via a DAG) or in equations (e.g. via structural models), it is very difficult to assess how such an endeavour is even possible (https://journals.sagepub.com/doi/full/10.1177/2515245917745629). Given the way the authors present their findings, a natural question is: should we expect a positive effect in the case of a "lockdown" without a pandemic?

Identification of causal effects: To claim that one can separately identify the effect of the "lockdown" (D) on subjective wellbeing (Y) from the effect of the "pandemic" (Z) seems a very strong, incredible claim. Any identification of causal effects must rely on assumptions. One can identify the effect of the "pandemic" (Z) using a before-after strategy so long as seasonal effects are accounted for (e.g. comparing Y before March 2020 vs. post March 2020 against before March 2019 vs. post March 2019); this is a "difference-in-differences" (DID) strategy with two time dimensions (year and month). The key assumption is that there are no other confounding factors changing between periods (e.g., the "parallel trends assumption"): in other words, the average change in Y would have been the same between pre-March 2019 and post-March 2019 as between pre-March 2020 and post-March 2020 in the absence of the pandemic. Unfortunately this paper does not have YouGov data before June 2019 (this is an important limitation of this paper).

But things are more cumbersome if one wants to identify the effect of the lockdown (D). For one thing, D is clearly a response to Z, so I wonder how one can identify the effect of D separately from the effect of Z. While one can think of Z as an exogenous shock, and try to get rid of seasonal effects via DID, D is an endogenous response to the pandemic.
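The before-after strategy described above can be illustrated with a few lines of simulated data (all numbers are hypothetical; this is a toy version of the reviewer's proposal, not the paper's analysis). The 2019 pre/post difference absorbs the common seasonal component, so the double difference recovers the assumed pandemic effect only if the parallel-trends assumption holds:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical weekly well-being observations for two years.
# "post" flags weeks after a March cutoff; only the 2020 post
# period is exposed to the pandemic shock.
n = 400
df = pd.DataFrame({
    "year": rng.choice([2019, 2020], size=n),
    "post": rng.choice([0, 1], size=n),
})
true_effect = -0.5   # assumed pandemic effect (hypothetical)
seasonal_dip = -0.1  # seasonal post-March change common to both years
df["y"] = (
    7.0
    + seasonal_dip * df["post"]
    + true_effect * df["post"] * (df["year"] == 2020)
    + rng.normal(0.0, 0.2, size=n)
)

# DID: the 2020 pre/post change minus the 2019 pre/post change.
# The 2019 difference nets out the common seasonal component.
m = df.groupby(["year", "post"])["y"].mean().unstack("post")
did = (m.loc[2020, 1] - m.loc[2020, 0]) - (m.loc[2019, 1] - m.loc[2019, 0])
print(f"DID estimate: {did:.2f}")
```

Note that nothing in this construction identifies the lockdown effect separately: any lockdown response would load onto the same 2020 post-period indicator, which is exactly the reviewer's objection.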
In this regard, it seems to me that the best one can hope for in this setting is to use Z as an instrumental variable for D and assume that the exclusion restriction is satisfied, i.e., that Z affects Y only via D. Effectively, then, one can at most think of identifying the effect of D thanks to Z. In practice, however, one will need to relax the exclusion restriction (e.g., https://www.jstor.org/stable/41349174).

In any case, if one is interested in causal effects, and in disentangling causal effects, assumptions must be made, carefully stated, and discussed. This manuscript does not provide such a causal inference approach, and it belongs to the realm of prediction studies.

Predictability (correlational) exercise: Do lockdowns predict subjective wellbeing? One needs to be crystal clear about the fact that predictability is a completely different thing than causal inference (e.g. umbrellas don't cause rain). I think the authors of this manuscript should be very careful in the way they present their findings and their interpretation of the empirical analysis in light of existing work. In general, the fact that a correlational analysis shows a strong positive relationship does not tell much, if anything at all, about the sign of the causal relationship between two variables.
Moreover, things are quite complicated in the present setting, where there are important dynamics and different groups which generate a host of heterogeneities: the severity of the pandemic varies over time; its impact varies by time and (demographic) group; and the impact of the lockdown varies by time (or is a function of time) and (demographic) group, too.

The sentence "Alas, most empirical studies … of the pandemic" is confusing, since it suggests the study will focus on causal effects, while in fact it focuses on correlations: "Were lockdowns associated with an improvement or worsening in subjective wellbeing?" Whether this is "essential input into contemporary and future policy debates" is debatable.

While the paper is generally well written, there are a few paragraphs that generate a bit of confusion. On page 4 the authors write: "A shortcoming of these studies is an inability to distinguish empirically the effects of the pandemic from the effects of lockdown policies […] this confounds the effects of the two events. It is important that we identify the effects of lockdowns independently". However, on page 2 the authors stress that "our results are primarily descriptive rather than causal", so both the critique of existing work and the claim that "it is important that we identify the effects of lockdowns independently" are frankly problematic.

Many of the existing papers on the pandemic and mental wellbeing are well suited to identify the causal effect of the pandemic (under more or less explicit assumptions). However, the current paper claims to be descriptive and at the same time is concerned about papers which are able to identify well-defined causal effects (pandemic – including lockdown – effects). This requires a diligent revision and reformulation.

2. Previous research

Page 3 does not seem to provide a thorough review of existing work, omitting relevant references to published (and unpublished) studies trying to understand the effects of the COVID-19 pandemic and the lockdown on mental wellbeing. While some papers focus explicitly on lockdowns, many others acknowledge that they cannot disentangle the effects of the pandemic and the lockdown. The following studies seem relevant to the present paper:

Pandemic and mental health:
• Daly M, Sutin AR, Robinson E. Longitudinal changes in mental health and the COVID-19 pandemic: Evidence from the UK Household Longitudinal Study. Psychological Medicine.
• Daly M, Robinson E. Psychological distress and adaptation to the COVID-19 crisis in the United States. Journal of Psychiatric Research.
• Davillas A, Jones AM. The COVID-19 pandemic and its impact on inequality of opportunity in psychological distress in the UK. ISER Working Paper Series 2020-07.
• Ettman CK, Abdalla SM, Cohen GH, Sampson L, Vivier PM, Galea S. Prevalence of Depression Symptoms in US Adults Before and During the COVID-19 Pandemic. JAMA Netw Open.

Pandemic and mental health by ethnicity:
• Proto E, Quintana-Domeque C. COVID-19 and mental health deterioration by ethnicity and gender in the UK. PLoS ONE.

Pandemic and mental health by gender:
• Etheridge B, Spantig L. The gender gap in mental well-being during the Covid-19 outbreak: Evidence from the UK. Covid Economics 33.
• Oreffice S, Quintana-Domeque C. Gender inequality in COVID-19 times: Evidence from Prolific participants in the UK. Journal of Demographic Economics.

Lockdowns and mental health:
• Adams-Prassl A, Boneva T, Golin M, Rauh C. The Impact of the Coronavirus Lockdown on Mental Health: Evidence from the US. Human Capital and Economic Opportunity Working Group.
• Banks J, Xu X. The mental health effects of the first two months of lockdown and social distancing during the Covid-19 pandemic in the UK. Covid Economics 28.
• Niedzwiedz CL, Green MJ, Benzeval M, et al. Mental health and health behaviours before and during the initial phase of the COVID-19 lockdown: longitudinal analyses of the UK Household Longitudinal Study. Journal of Epidemiology & Community Health. 2020.

3. Interpretation of results

There are two key issues which are important in interpreting the results and which require additional work: compositional effects and anticipation (overshooting) effects.

The first relates to compositional effects. The data the authors use are cross-sectional, not longitudinal. This opens the possibility that the lockdown has a mechanical effect on the type of respondents. With the YouGov data it is possible to look at a range of demographic characteristics. The authors should plot the mean of different demographic characteristics of the respondents over time and look for patterns.

The second is about anticipation (or overshooting) effects. To what extent is the correlation captured not just reflecting an anticipation effect? In other words, people anticipate that a lockdown is coming, and the closer to the lockdown date, the lower reported mood is (the higher Negative Affect is). Indeed, very tough lockdowns were implemented in Italy (9 March 2020) and in Spain (13 March 2020). The earliest lockdown analysed by the authors is the one in the US (21 March 2020). Perhaps once people understood that the lockdown was not as tough as the ones in Spain and Italy, mood improved. This is a different explanation for the findings documented in this paper, and requires further clarification and discussion.

4. Measuring the severity of the pandemic: cases vs. deaths

The authors should use alternative measures of the severity of the pandemic. Cases depend on whether widespread testing was available (which varied across countries, particularly early in the pandemic). The authors should present their analysis using deaths (instead of cases) too.
Of course, there are also measurement issues with deaths, but it is another complementary way of looking at the severity of the pandemic.

5. Lack of data

It is unfortunate that the authors cannot replicate Figures 1 and 2 with data for January-June 2019 to account for seasonality effects (since the YouGov data are only available from June 2019). Nevertheless, and for completeness, it would be interesting to plot the data from June 2019 to June 2020.

6. Regression analysis

In addition to adding the analysis using deaths rather than cases, I think it is important to display the following set of results:
• Not controlling for lagged NA
• Controlling for country FE
• Controlling for month FE
• Interacting "days in lockdown" * severity of pandemic
• Interacting "days since easing of lockdown" * severity of pandemic

What is the interpretation of the coefficient on "days since easing the lockdown"? Is easing the lockdown a bad thing?

7. Other demographic characteristics

Do the authors have information on ethnicity and key worker status? I think it would be really interesting to see an analysis along these characteristics in Figure 8.

8. Data availability

The authors write: "The raw data cannot be shared publicly as they are the commercial property of YouGov. However, the time series of the data is published weekly here and is sufficient to verify our analysis: https://yougov.co.uk/topics/science/trackers/britains-moodmeasured-weekly. We will make all do files available to any researcher who wishes to replicate our analysis." However, the link https://yougov.co.uk/topics/science/trackers/britains-moodmeasured-weekly does not seem to be working.

Minor comments

• Page 4, paragraph 1: it is not Good Health Questionnaire, but General Health Questionnaire.
• Page 4, paragraph 2: heterogeneous effects by demographic CHARACTERISTICS.
• Page 10 (OECD 2013)
– space needed. (This applies throughout.)
• Figure 7: Does the lockdown start when the line changes from "black" to "white", or is this indicated by the vertical line?
• There are 8 countries and 50 weeks of data: 400 observations. Columns 5-6 display 376 observations, so are 47 weeks of data being used?
• Clustering standard errors with 8 countries is problematic. Wild bootstrap should be used to ensure statistical inference with desirable properties: https://journals.sagepub.com/doi/full/10.1177/1536867X19830877
• Should the equations contain error terms?

Reviewer #2: This paper addresses an important and challenging topic within the context of the Covid-19 pandemic and shows how people's mood changed in response to lockdowns during the pandemic. The paper's relevance for policy is indisputable and the amount of data work that has been completed is noteworthy. There are several major issues I'd like to draw attention to, and it is very important that some of these are addressed given that the policy conclusions drawn from this research can influence people's lives.

1-It is not possible to accept the main argument that the paper distinguishes lockdown effects from the pandemic. The analyses are very rigorous but they cannot do much about the mere fact that lockdowns were introduced (and will always be introduced) in response to a crisis, and therefore there are no effects of lockdowns without the effects of the pandemic. The results, therefore, are better interpreted as showing that the lockdowns attenuate some of the negative changes in mood that occurred during the pandemic, most plausibly by providing a sense of security and safety. This interpretation is supported by this basic intuition, as well as by the main results reported in the paper, which show a rise in mood that follows from a reduction in mood.
This interpretation is also supported by the corollary finding "that the seeming effectiveness of lockdowns in improving mood is conditional upon pandemic severity." The authors also acknowledge this idea somewhere in the discussion by stating "lockdowns improve SWB by ameliorating stress and fear associated with pandemic outbreaks. If there is no serious viral outbreak and thus little stress and fear to ameliorate, then lockdowns won't improve SWB." These arguments should be more central to the paper and dictate the title, abstract, and intro.

2-It is very important that the short timeframe for the estimates, and the patterns in the data that follow after this short timeframe (how the data look after the first month), are made clear to the reader in the abstract and throughout the manuscript. These are patterns observed early on during the lockdown and they mostly seem to disappear later on. Can Figures 3 and 4 show later time periods in the graph to indicate this?

3-The authors should consider referring to 'mood' as opposed to 'subjective well-being', as this more accurately represents their data. Mood is a critical component of SWB, and this link can be emphasized by citing literature and correlations with the life satisfaction measure; but given that both of their outcomes measure mood (with some links to life satisfaction in one of the datasets), the paper seems to study mood, and this should be reflected in the title, abstract, text, etc. Relatedly, the efforts to link mood data to life satisfaction in the YouGov analysis do not seem convincing. Using the lower quality life satisfaction data with so much imputation and complication does not seem justified. Why not just use a simple index for positive and negative moods, and a composite mood index?
The authors already use a negative affect index from the YouGov data in their follow-up analysis and show trends that support their main finding in Figure 4; why not use this index (in addition to positive mood) in the main analysis too?

4-The negative affect trend in Figure 2 seems to indicate the complete opposite of the main findings throughout the manuscript. There are striking increases in boredom, loneliness, and apathy after the lockdown. This is very important and directly contradicts the main arguments in the paper. On the other hand, the reductions in stress and scare are very relevant and support the main mechanism that the lockdown increases a sense of security. Making these more central could enrich the theoretical contributions of the paper, which are currently not very strong. A follow-up question is: does this discrepancy in mood items emerge in the Google data too (i.e., are results different for stress and scare vs. boredom etc.)? Whether or not it replicates in the Google data, these diverging results need to be discussed very explicitly; they are important to acknowledge and they even help explain the results.

5-The causal language needs to be removed from the manuscript. Although the authors admit they are estimating associations, there is heavy use of words such as 'effects' or 'impacts'. These words are better omitted.

6-The analyses do not completely control for the effects of the economic support from a causal inference point of view, and the argument that lockdowns always entail economic measures from a policy perspective is not convincing. It is possible that a stay-at-home order is not accompanied by economic measures, and the degree of economic support can also change within a lockdown. I would recommend explicitly reporting this as a limitation.
The authors can point to the findings for 65+ adults as evidence that economic measures do not necessarily play a role, although it is possible that economic support would contribute to the findings. Was there any way of controlling for this in the data, for example by including or making reference to the timeline of economic packages?

Minor comments

The discussion of Sweden in the intro is better placed in the limitations.

1-I recommend that the following work on the topic, and other recent studies that may have come out during the last months, are integrated into the introduction, and that the findings are discussed in light of this evidence:

Aknin, L., De Neve, J. E., Dunn, E., Fancourt, D., Goldberg, E., Helliwell, J., ... Amour, Y. B. (2021). A review and response to the early mental health and neurological consequences of the COVID-19 pandemic.

Giurge, L. M., Whillans, A. V., Yemiscigil, A. (2021). A multicountry perspective on gender differences in time use during COVID-19. Proceedings of the National Academy of Sciences, 118(12).

VanderWeele, T. J., Fulks, J., Plake, J. F., Lee, M. T. (2021). National well-being measures before and during the COVID-19 pandemic in online samples.
Journal of General Internal Medicine, 36(1), 248-250.

2-"Underemployed men saw a marked increase in their SWB during lockdown": this could also be because of norm effects. As more people become unemployed, unemployment could hurt less because there is less stigma (identity effects).

3-Please temper the positive-picture argument here: "While our results paint a positive picture of the impact of lockdowns on SWB".

4-Please provide more justification and explanation about why restricting the sample space is necessary and what it entails: "rising to -17% in models where the sample space is restricted to the period following lockdown onset (Models 3-4)." It is unclear what the sample in this sentence is.

5-The authors can be more accepting of the limitations of the cross-sectional nature of the data and present more information and discussion about the representativeness of both samples, and whether and how the sample composition may have changed over time. This does not invalidate the importance of the findings, but it is important to acknowledge and report in a study which has such strong population-level policy implications.

6-Are there any descriptive statistics that show how prevalent the specific population groups are in the data? How does controlling for lagged values of the outcomes tackle hedonic adaptation? Please explain the mechanics/rationale for this new approach.

7-The negative trend in the Figure 4 graph doesn't match the results for the Figure 2 'negative affect average'. Please acknowledge and/or reconcile this discrepancy.

8-The paper goes back and forth in its data-analysis match, starting with YouGov data, then cross-country data, then time-series analysis in cross-country data, and then finishing off with subgroup analysis in YouGov data again.
It could simplify the paper if the authors started with the cross-country data (it only shows negative mood anyway) and the time series in these data, and then finished with the YouGov analysis of negative and positive mood plus the subgroup analysis.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

15 Oct 2021

Please see attached response to reviewers

Submitted filename: Response to reviewers - PLoS One.docx

22 Nov 2021
PONE-D-21-09713R1
Subjective well-being during the 2020–21 global coronavirus pandemic: Evidence from high frequency time series data
PLOS ONE
Dear Dr. Fabian,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 06 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Eugenio Proto
Academic Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.
If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

The two reviewers have reacted very positively to the revision. R1 only suggests that the authors proofread the manuscript, while for R2 some additional minor changes are necessary before the paper can be published.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions
Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5.
Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for preparing a careful revision. The revised paper is clearer and more effective. I would suggest proofreading the article one more time for typos. For instance, on page 32, instead of "." you should probably use ":" after "to control for":

"There is at least one important confounding factor that we are unable to control for. the extent of socialisation during lockdown, both within and across households, potentially in ways that defied lockdown orders."

Reviewer #2: I congratulate the authors for addressing the comments and rewriting their paper with valuable new analysis. The paper now more accurately captures people's experiences during Covid-19 with new data, is stronger on potential theoretical and mechanistic explanations of the findings and on the literature review (in the introduction, not so much in the discussion), uses balanced language, and adequately addresses the limitations.

A few important points remain to address. I believe these points can be addressed in writing by providing more justifications for methods and discussion of the findings:

1-The initial spike in negative affect at the onset of the lockdown: how do the authors explain this? I am not entirely sure if the authors attribute this to the death rates or to the lockdowns.
It is possible to observe in the graphical evidence that Covid deaths also spike at exactly around this period (even in Australia and New Zealand, although to a lesser extent). It seems more plausible that this initial spike is a result of this jump in death rates as opposed to the introduction of lockdowns, but I can’t easily think of how this can be conclusively determined in the models. Given that the dummy for entering the lockdown period is not significant in models 4-6, it is plausible that the negative spike is not a lockdown effect. The robust relationship between death rates and affect could be evidence that the likely explanation for the initial spike is probably death rates, although it is imperfect evidence since it measures the general relationship between death rates and affect at all levels – not a spike. Given that there are already too many analyses in the paper, it would be difficult to ask for more analysis to explain this. At the least, the authors can address these in the discussion: how do the authors explain this spike? What do the analyses imply about the sources of this spike?

2- There is a very good discussion of the mechanisms in the literature review, but the authors don’t come back to this to discuss their findings. There is so much in the paper now about the potential mechanisms. Now with the mortality data, it is possible to see how death rates drop after the lockdowns in such a strong way, and in the figures the changes in death rates and affect seem to mirror one another in all countries, which suggests that the declines in negative affect may be related to lockdowns stopping deaths. There are probably other papers showing this – that lockdowns reduced Covid-related deaths? It may be important to cite these in the discussion.
Then there is the evidence of the elderly being the dominant group to carry the effects (how striking it is that the youth don’t show these changes), which suggests that lockdowns most probably reduced fear of death among those who fear it the most, OR the lockdowns decreased the rates of loved ones dying – which must be higher in older-age populations. Again, it could be too much to ask for these explanations via analyses, but these are some of the insights that can at least be discussed.

3- I think the strongest counterargument to the narrative of the paper is the hedonic adaptation effect: that Covid killed people, those who stayed alive but at risk became sad and fearful, and people adjusted. I don’t believe this mechanism, because the data are strong in showing how, in each instance, death rates dropped with lockdown and affect followed. But it is not possible to fully distinguish this. I do appreciate the authors using a new method to control for hedonic adaptation by controlling for the lagged DV. I understand that the coefficient shows the transmission of well-being from one period to the next and indicates a timeline; yet, is this really adequate to control for hedonic adaptation? I can’t be sure. This is a novel method and I couldn’t find any example of it in the papers cited. In the papers cited, the controls are for lagged values of the independent variable (not the DV). I can’t offer much insight into what the authors should do methodologically, but I think they should be careful with this method/argument and triple-check and explain how exactly this method indeed controls for hedonic adaptation. The authors can also talk about the usual length of adaptation to bereavement or other deaths in the literature; if adaptation occurs in shorter timeframes than what is observed with lockdowns, this can be used to support the findings.
These are some suggestions to ease the doubts.

4- Some of the new analyses raise new questions, especially the split between the pre-pandemic and post-pandemic periods in models 1-3 and 4-6 in the results. I present the questions that they raise below; it would be helpful to provide more answers to these questions in the manuscript.

• How is it that there was a period without the pandemic but with lockdowns, from July 2019 till Feb 2020? Didn’t the lockdowns start only after the pandemic?

• What is the purpose of this distinction? What value does it add to the paper? “Using two sampling windows in this way illuminates how the association between lockdown and mood differs statistically depending on whether it is assessed relative to a pandemic-free world (models 1-3) or to a world with pandemics but no lockdown (models 4-6).”

• It is unclear what the authors mean in this sentence and why we need this finding/approach; please explain in the manuscript: “The coefficient for being in lockdown is not statistically significant if the sample frame is restricted to the pandemic period, as in models 4-6. This suggests that the result in models 1-3 is driven by comparison to a pandemic-free world.” Please include more explanations and justifications for this analysis and these findings.

• How does the evidence for youth support the hypothesis, as indicated by the following sentence? If anything, it reads like the result for youth calls the hypothesis into question and requires more explanation of why the results don’t hold for youth: “In contrast, the trend in affect for young (18-24) survey respondents is basically flat despite this demographic being, intuitively at least, the most perniciously affected by lockdowns.
This lends further credence to our hypothesis that we should predict declines in the general population’s affect with pandemic outbreaks and improvements following lockdowns introduced in response to those outbreaks.” “This association between lockdowns and affect is robust to controls for hedonic adaptation [10] and progress in containing the virus outbreak.”

Minor comments:

Intro
• Page 2: replace the comma with a period after “introduced,”
• Page 3: It is unclear what the following sentence means by easing timetables: “where easing timetables were maintained, despite the onset of a new coronavirus wave (such as the United States in the summer of 2020, and the United Kingdom in summer 2021).”

Descriptive results:
• Figure 1 – is it possible to indicate the onset of the pandemic in the graph? I realize the graphs are already populated, but it was not possible to detect the main argument in the text in the graph: “They then fell sharply during the virus breakout in March before reverting higher following the stay-at-home order.” It even looks like there was a decline with the onset of the lockdowns.
• The weighting of the affect states is very clear now, and the value and benefits of this method are well explained.
• Could the authors put some numbers to the following comment? How was this conclusion reached? What are the sizes and statistical significance of these changes? “Taking all negative affect items together, negative affect rose sharply with the outbreak of the pandemic, and then continued to rise, albeit much more slowly, after the imposition of lockdown.”
• Is it possible to put any statistics to this claim – what is the size of this difference: “These countries also saw a much reduced spike in negative affect during their 2020 lockdowns in comparison to countries that experienced wide scale national epidemics.”
• In Fig 4, the affect changes in the second and third lockdowns seem smaller than the first.
In Canada and Ireland, it is hard to observe a decline in affect in the second and third lockdowns or later. This may be worth mentioning and explaining.

Results
• Table 2: within the table, can the authors indicate “Pandemic-free period” for models 1-3 and “Pandemic but no lockdown” for models 4-6? Otherwise, it is hard to follow the results.
• Table 2: please put the coefficient of interest in the top row, so it is the first coefficient readers can see and follow.
• Pages 24-25: In the first paragraph after Table 2, when describing the results, please direct readers to the right column or model. This could be achieved by putting column names in parentheses, e.g. “(Column 1)”, for the corresponding model/result. Otherwise, it is hard to follow the results.
• It would be better to detach these two sentences to avoid confusion – it reads as if the first sentence summarizes the second (which would not be correct), but it becomes clear later that the first sentence summarizes the full paragraph: “Second, lockdown is statistically associated with a reduction in negative affect. In models 1-3, the dummy variable for being in lockdown has a large, positive, and significant correlation with negative affect.”

Discussion
• The discussion and limitations section currently includes only limitations. Please provide an overall discussion of the findings and explain how the findings relate to or complement the existing literature.
• Page 32: There is a period after “control for”: “There is at least one important confounding factor that we are unable to control for. the extent of socialisation during lockdown, both within and across households, potentially in ways that defied lockdown orders.”

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review?
For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
10 Jan 2022

See attached.

Submitted filename: Response to Reviewers, 2nd RR.docx

24 Jan 2022

Subjective well-being during the 2020–21 global coronavirus pandemic: Evidence from high frequency time series data

PONE-D-21-09713R2

Dear Dr. Fabian,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Eugenio Proto
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

The referee is now satisfied, I am delighted to recommend the publication of this manuscript.

Best,

Reviewers' comments:

Reviewer's Responses to Questions
Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The authors addressed all comments and feedback. I have no further revisions to request. Best of luck.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

26 Jan 2022

PONE-D-21-09713R2

Subjective well-being during the 2020–21 global coronavirus pandemic: Evidence from high frequency time series data

Dear Dr. Fabian:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication.
For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Professor Eugenio Proto
Academic Editor
PLOS ONE