
Macroeconomic expectations, central bank communication, and background uncertainty: a COVID-19 laboratory experiment.

Luba Petersen1, Ryan Rholes2.   

Abstract

This paper explores the robustness of laboratory expectation formation and public signal credibility to external uncertainty shocks and online experimentation. We exploit the recent pandemic as a source of exogenous background uncertainty in a New Keynesian learning-to-forecast experiment (LtFE) in which participants receive projections of varying precision about future inflation. We compare results from identical LtFEs completed immediately before the onset of the pandemic, soon after (online), and well after (online and in-person). Baseline LtFEs with no communication are robust to both factors. However, both background uncertainty and online experimentation affect how subjects use public signals. The pandemic led to a decreased appetite for and tolerance of overly precise communication while increasing the efficacy of projections that also convey uncertainty. Subjects became more averse to central bank forecast errors after the onset of the pandemic if the central bank conveyed a precise outlook but not if it conveyed forecast uncertainty.
© 2022 Published by Elsevier B.V.


Keywords:  COVID-19; coordination; credibility; expectations; experimental macroeconomics; inflation communication; laboratory experiment; monetary policy; online experimentation; strategic; uncertainty

Year:  2022        PMID: 35783344      PMCID: PMC9233881          DOI: 10.1016/j.jedc.2022.104460

Source DB:  PubMed          Journal:  J Econ Dyn Control        ISSN: 0165-1889


Introduction

The COVID-19 pandemic has been a major social, health, and economic shock that changed how we work, study, and interact and, depending on the country, either shook or restored confidence in public institutions. The pandemic led to an unprecedented level of financial, economic, and political uncertainty (Baker, Bloom, Davis, Terry, 2020, Coibion, Gorodnichenko, Weber, 2020, Fetzer, Hensel, Hermle, Roth, 2021). Communication played a primary role in how institutions responded to COVID-19. The World Health Organization relied heavily on transparent communication to clarify the risks of contracting the disease and to educate the public on best practices for personal protection from infection. In many countries, health recommendations were very precise from the onset of the pandemic. For example, in Canada, the Public Health Agency made very strong and focused recommendations for the public to wear face masks to prevent the transmission of COVID-19. Likewise, many central banks took the strong step of updating existing communication strategies to accommodate the unique economic uncertainty brought about by the onset of the pandemic, whether to bolster public confidence, allay concerns about impending economic hardship, or better clarify their respective outlooks on the balance of risks. There was no consensus on how communication should respond to the pandemic. The Bank of Japan and the Bank of Canada communicated more uncertainty by temporarily moving away from point estimates to publishing range projections in their Spring 2020 monetary policy reports. Other central banks made efforts to communicate their outlooks with more confidence. The Bank of England, for instance, temporarily dropped the uncertainty bands around its inflation point estimates and instead published alternative scenarios. 
These quick changes in the construction of published projections highlight the integral role communication plays in modern monetary policy frameworks and raise new questions about how to both preserve credibility and manage expectations in times of uncertainty. Pandemic-related public health restrictions simultaneously ground in-person interactions to a halt, forcing education, socialization, and digitally-oriented work to move online. This abrupt adoption of online engagement has potentially important implications for how we learn and absorb information. Researchers have not been immune to the shift online. Hundreds of behavioural and experimental economics research groups around the world were forced to close their physical labs at the onset of the pandemic, and scientists had to adapt their in-lab protocols for online experimentation and data collection. The robustness of this large-scale effort to continue research with human subjects online is unclear, especially as COVID-19 has altered people's time spent on, and ease with, online learning and interaction. We exploit this once-in-a-lifetime aggregate uncertainty shock to ask: how robust are learning-to-forecast experiments (LtFEs) to both the background uncertainty and the online experimentation brought about by COVID-19? Specifically, do such external uncertainty shocks influence forecasting behavior or the perceived credibility of internally-credible public signals received within a laboratory economy? And does the experimental setting in which participants learn to engage with these signals matter? Answers to these questions are important for designing and drawing inferences from LtFEs. 
Macroeconomists have used LtFEs to study expectation formation and equilibrium selection (Adam, 2007, Bao, Hommes, Sonnemans, Tuinstra, 2012), how various monetary policy rules and targets affect expectation formation (Assenza, Heemeijer, Hommes, Massaro, 2013, Cornand, M'baye, 2018, Hommes, Massaro, Salle, 2019, Hommes, Massaro, Weber, 2019, Pfajfar, Žakelj, 2014, Pfajfar, Žakelj, 2016, Pfajfar, Žakelj, 2018), and the design of central bank communication (Ahrens, Lustenhouwer, Tettamanzi, 2018, Arifovic, Petersen, 2017, Cornand, M'baye, 2018, Kryvtsov, Petersen, 2013, Kryvtsov, Petersen, 2021, Mokhtarzadeh, Petersen, 2021). Because macroeconomists increasingly use these experimental frameworks as an apparatus for testing policy and communication, it is imperative to understand the role that background uncertainty plays in shaping laboratory expectation formation. The Bank of Canada, for example, actively uses New Keynesian (NK) LtFEs to inform policy design. See Amano et al. (2011) and Kostyshyna et al. (2021) for applications to the Bank's 2011 and 2021 mandate renewals, Kryvtsov and Petersen (2021) for evidence on expectation formation under different types of forward guidance, and Amano et al. (2014) for a survey on the application of such experiments to the design of monetary policy. LtFEs have also been used to study expectation formation in other market settings such as financial asset and housing markets (Bao, Hommes, 2019, Bao, Hommes, Makarewicz, 2017, Hennequin, Hommes, 2019, Kopányi, Rabanal, Rud, Tuinstra, 2019, Kopányi-Peuker, Weber, 2021). Heterogeneous, boundedly-rational expectations are increasingly being incorporated into theoretical models to generate more realistic macroeconomic dynamics. 
Evidence of social learning and forecast heuristic switching has been used to motivate expectation formation in liquidity traps (Arifovic, Grimaud, Salle, Vermandel, 2020, Arifovic, Schmitt-Grohé, Uribe, 2018) and to model the endogenous credibility of inflation-targeting central banks (Hommes and Lustenhouwer, 2019). De Grauwe and Macchiarelli (2015) develop a model of the banking sector and show that behavioral expectations can amplify the business cycle via self-fulfilling moments of pessimism and optimism. The authors ground their behavioral modeling in recent experimental evidence of cognitive limitations and adaptive learning. Experimental evidence is also used to develop behavioral models in international finance. Bertasiute et al. (2020) develop the first multi-country New Keynesian model of currency unions to understand the importance of economic integration for macroeconomic stability. They study the dynamics of this framework under a large set of different forecasting heuristics observed in the lab. The authors compare the predictive performance of the behavioral currency union model that embeds four-heuristic reinforcement learning with that of a rational expectations version. While rational and behavioral expectations produce similar policy implications, the quantitative predictions of behavioral models are considerably more accurate. In this paper, we study expectation formation and forecaster confidence in standard NK LtFEs. Participants are tasked with repeatedly forming incentivized one- and two-period-ahead inflation forecasts in a simple three-equation experimental macroeconomy. Their aggregated expectations and exogenous demand shocks drive inflation, output, and interest rate dynamics. Crucially, we conduct our experiments in three waves spanning nearly two years. This allows us to cleanly disentangle the effects of the initial shock of the pandemic, the evolution of the pandemic, and online experimentation on participants' forecast behavior in LtFEs. 
In each wave of data collection, we systematically varied the communication of central bank inflation projections in a between-subjects design. Our baseline, NoComm, involved no supplementary public signals. Subjects in our Point treatments received a precise five-period inflation path forecast. Subjects in our Point&Density treatments received the same inflation path forecast surrounded by a symmetric distribution capturing forecast uncertainty. Our first wave was conducted from October to November 2019 in an in-person laboratory setting with American and Canadian undergraduate participants. We designed our Fall 2019 Pre-COVID experiment to study the effects of higher-order moments in central bank communication on expectation formation. We observed that the precise point projections successfully managed expectations and increased forecaster confidence, while communicating uncertainty around point projections reduced the relative efficacy of the projections. Interested readers can find these results in Rholes and Petersen (2021). Shortly after the WHO declared COVID-19 a pandemic, we reran our experiments to observe the effects of the heightened aggregate background uncertainty on participants' expectation formation and usage of publicly communicated information. This second wave of data collection, during Early COVID, took place from April to June 2020 during periods of lockdown in B.C. and Texas. Experiments during this second wave were conducted online due to university campus closures. Though we implemented nearly identical experimental protocols before and during the pandemic, online participation may have made learning more challenging and led to reduced attention (Alpert et al., 2016). We control for this potential experimental setting effect by conducting a final wave of experimental sessions from October to December 2021 in both laboratory and online settings. The ability to conduct in-person experiments was made possible by the re-opening of campuses in Fall 2021. 
Our Late COVID (lab) and Late COVID (online) data collections were conducted nearly two years after the onset of COVID-19, allowing us to observe how forecasting behavior evolved as the background aggregate uncertainty associated with the pandemic dissipated. We find that our baseline NoComm New Keynesian LtFEs are highly robust to online experimentation and moderately robust to the pandemic. Neither the heightened background uncertainty associated with COVID-19 nor the experimental setting significantly influenced forecaster performance or disagreement. If there was any externally-generated cognitive overload associated with the pandemic and the associated lockdowns, it did not influence our participants' ability to forecast. Likewise, we do not observe any evidence of background uncertainty decreasing participants' confidence in their forecasting abilities. Rather, we see a small decrease in conveyed uncertainty during COVID-19, especially in the extreme ends of the distribution. COVID-19 did, however, influence participants' responsiveness to public signals. In particular, the onset of the pandemic reduced participants' willingness to use overly precise signals in our Point treatment. Deviations from the central bank's projected values increased by 40% for one-period-ahead forecasts and 50% for two-period-ahead forecasts. Moreover, forecasters penalized the central bank more for its forecast errors following the onset of COVID-19. Deviations from the central bank's forecast were three times greater for the same sized central bank error in our COVID samples than in our pre-COVID sample. COVID-19 had the opposite effect on subjects in our Point&Density treatments. The onset of the pandemic led to a significant increase in the credibility of the central bank's one-period-ahead inflation projections, with mean deviations from the projected values falling by an estimated 55%. 
This better anchoring of expectations led to accompanying decreases in one-period-ahead absolute forecast errors of about 22% and an estimated decrease in forecast disagreement of about 65%. Unlike in our Point treatments, subjects in Point&Density treatments did not react more strongly to central bank forecast errors relative to their Pre-COVID counterparts. The progression of the pandemic attenuated many of the effects of Early COVID. Absolute forecast errors in both information treatments reverted toward their pre-COVID means. Likewise, much of the change in central bank credibility was muted in Late COVID. This almost entirely eliminated the improvement in the coordination of expectations observed in our Early COVID Point&Density treatment. Online experimentation also had notable effects on our information treatments. In the Point treatment, online participants' forecasts were significantly more aligned with the central bank's projections for both forecast horizons. Credibility increased by more than 50% for both horizons, which in turn led to improvements in forecast accuracy. However, moving online worsened forecast performance in our Point&Density treatments as subjects were significantly less anchored on the central bank's projections. This un-anchoring led to a 58% increase in one-period-ahead forecast disagreement and an 18% increase in one-period-ahead forecast errors. The online environment appears to interact with the added complexity of communicating uncertainty. With greater opportunities for distraction in non-laboratory environments, participants seem to have more difficulty coordinating their forecasts when presented with more complex projections. The opposite is true when online participants are presented with very simple-to-use precise projections. Despite potential distractions, the precise point projections are an effective focal point that coordinates expectations. 
All three waves of our data collection took place simultaneously in Texas and British Columbia. These two regions had very different experiences during the onset of the pandemic: per capita, Texas had 16 times the new cases and four times the death rate of British Columbia. We exploit this regional variation to show how the severity of the pandemic drove our COVID-19 results. We show that the changes in central bank credibility attributed to the onset of the pandemic were much stronger in Texas than in British Columbia, consistent with differences in background uncertainty. Our results demonstrate the impressive robustness of baseline New Keynesian LtFEs to increases in background uncertainty, online experimentation, and subject pool. Our findings complement those of Cornand and Hubert (2019), who show that experimental participants' forecasts reasonably match inflation forecasting patterns observed in surveys of households, firms, and professional forecasters. They also observe a high degree of consistency in terms of forecasting heuristics across independently conducted experiments. At the same time, we find that the processing of more complex information is not necessarily robust. This paper also contributes to the literature studying whether and how the pandemic shifted people's preferences in various domains. Harrison et al. (2022) compare the same participants' atemporal risk preferences elicited pre-COVID (May and October 2019) and post-COVID (May to October 2020). Within a Rank-Dependent Utility framework, they find that pre-pandemic participants were overall risk neutral if not borderline risk-loving. By contrast, during the pandemic, these same participants exhibited overall risk aversion. The authors also elicit time preferences using identical experimental designs and independent samples of participants drawn from the same population in 2013 and 2020. The distribution of estimates of the exponential discounting parameter is impressively stable over time. 
The effects of COVID-19 on hyperbolic discounting are mixed, with more variability in hyperbolic discounting estimates during the pandemic. Cognitive abilities have also been affected by the social changes associated with the pandemic. De Pue et al. (2021) note that 8–10% of older adults self-reported declines in cognitive functions such as memory, concentration, multi-tasking, and recall, as well as increased forgetfulness. To the best of our knowledge, this is the first paper to explicitly compare expectation formation in learning-to-forecast experiments (LtFEs) across two time periods, and in particular, before and during COVID. Our findings suggest that large external events can shape how people respond to information in LtFEs. This paper is organized as follows. Section 2 outlines our experimental design and procedures. Section 3 presents our experimental findings and Section 4 concludes.

Experimental design

The experimental framework is based on Rholes and Petersen (2021). Participants in the experiment were incentivized to act as forecasters tasked with predicting the future path of inflation in an endogenously evolving economy. Each experimental session consisted of seven subjects who formed individual inflation expectations privately using common information, and comprised two independent sequences of 30 sequential decisions. In each period t, participants submitted forecasts of inflation in periods t+1 and t+2. Subjects also provided, each period, measures of their own expected forecast errors, which capture their subjective uncertainty.

Data-generating process

The experimental economy's data-generating process arises from a representative-agent NK framework log-linearized around the zero-inflation steady state. We eliminate the need for expectations about the output gap by assuming the law of iterated expectations holds. This yields a dynamic system driven by one- and two-period-ahead inflation expectations and aggregate disturbances. We begin with the linearized three-equation NK model:

   π_t = βE_tπ_{t+1} + κx_t                        (1)
   x_t = E_tx_{t+1} − σ(i_t − E_tπ_{t+1} − r^n_t)   (2)
   i_t = φ_π π_t + φ_x x_t                          (3)

Equation (1) is the New Keynesian Phillips curve and describes how inflation, π_t, evolves based on aggregate inflation expectations, E_tπ_{t+1}, and the output gap, x_t. Eq. (2) is derived from the intertemporal tradeoff condition of a representative household and describes how the output gap evolves based on expectations of future output gaps, E_tx_{t+1}, and deviations of the real interest rate, i_t − E_tπ_{t+1}, from the natural rate of interest, r^n_t. Finally, Eq. (3) is the central bank's reaction function and describes how nominal interest rates respond to deviations of inflation and the output gap from their targeted values of zero. We first isolate x_t in Eq. (1) so that x_t depends only on inflation and expectations of inflation. We then iterate this forward and apply the law of iterated expectations to obtain E_tx_{t+1} as a function of one- and two-period-ahead inflation expectations. We then substitute this into Eq. (2), substitute Eq. (3) to eliminate i_t, re-isolate for x_t, and substitute back into Eq. (1). This yields inflation as a function of E_tπ_{t+1}, E_tπ_{t+2}, and the natural-rate disturbance alone, producing a system of equations that can be closed using E_tπ_{t+1}, E_tπ_{t+2}, and r^n_t. The demand shock follows an AR(1) process, r^n_t = ρr^n_{t−1} + ε_t, where ε_t is i.i.d. and ρ is a persistence parameter. We calibrate the data-generating process to match moments of Canadian data following Kryvtsov and Petersen (2013), which pins down σ, β, κ, φ_π, φ_x, ρ, and the shock standard deviation in bps. With these parameters, we obtain reduced-form laws of motion for inflation, the output gap, and the interest rate, where aggregate expectations are sourced from our experimental participants. 
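The substitution steps described above can be sketched explicitly. The algebra below is reconstructed from the stated steps and standard NK notation rather than copied from the paper, so coefficient groupings may differ from the published reduced form:

```latex
% From Eq. (1): isolate the output gap,
x_t = \frac{\pi_t - \beta E_t\pi_{t+1}}{\kappa}.
% Iterating forward and applying the law of iterated expectations:
E_t x_{t+1} = \frac{E_t\pi_{t+1} - \beta E_t\pi_{t+2}}{\kappa}.
% Substituting both expressions and Eq. (3) into Eq. (2) and solving for \pi_t:
\pi_t = \frac{\bigl(1 + \beta(1 + \sigma\varphi_x) + \sigma\kappa\bigr)\,E_t\pi_{t+1}
        \;-\; \beta\,E_t\pi_{t+2} \;+\; \sigma\kappa\, r^n_t}
       {1 + \sigma\varphi_x + \sigma\kappa\varphi_\pi}.
```

Note that under this reconstruction the two-period-ahead expectation enters with a negative coefficient, consistent with the counter-balancing role of two-period-ahead expectations that the authors emphasize.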
We employ the median forecasts of one- and two-period-ahead inflation as our measures of aggregate expectations because the mean forecast is vulnerable to extreme outliers. We discuss in the Online Appendix examples of how mean forecasts would have generated potentially unrealistically unstable dynamics. This design decision minimizes the influence of any individual participant on inflation dynamics, which better aligns with theory and facilitates the implementation of theory-driven experiments in the laboratory with a small number of subjects. Note that in the reduced-form laws of motion, one-period-ahead expectations relate positively to current-period inflation while two-period-ahead expectations do not. This counter-balancing of expectations makes sense from the perspective of consumption smoothing: if an agent expects inflation two periods from now, then the agent will require more money to spend next period than otherwise so that she can avoid paying higher prices later, which puts downward pressure on spending today. Importantly, some of the assumptions required for log-linear approximation and our re-formulation of the three-equation model may not always hold in the experiment; expectational errors may not be small and unbiased. However, the assumption of rationality simplifies the data-generating process and the complexity of the forecasting problem for our subjects, and is standard in the experimental literature (examples include Assenza, Heemeijer, Hommes, Massaro, 2013, Hommes, Massaro, Weber, 2019, Pfajfar, Žakelj, 2014, Pfajfar, Žakelj, 2018, and Mauersberger, 2021). The reformulation also allows us to concentrate on variations in central bank communication about inflation rather than on the confluence of inflation and output. 
We show computationally in Rholes and Petersen (2021) that reformulating the DGP in this manner produces identical inflation dynamics under rational expectations and more stable dynamics under various forms of non-rational expectations. Consequently, from an experimental perspective, using this modified DGP makes it more challenging for us to observe the level effects associated with our treatment variations.
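To make the mechanics of such an economy concrete, the following sketch simulates a reduced form of this type, in which median one- and two-period-ahead inflation forecasts and an AR(1) natural-rate shock drive inflation. The parameter values are illustrative placeholders, not the paper's Kryvtsov–Petersen calibration, and a naive adaptive rule stands in for human subjects:

```python
import numpy as np

# Placeholder calibration -- illustrative only, NOT the paper's calibrated values.
sigma, beta, kappa = 1.0, 0.99, 0.13   # IES, discount factor, Phillips-curve slope
phi_pi, phi_x = 1.5, 0.5               # policy responses to inflation and output gap
rho, shock_sd = 0.57, 25.0             # shock persistence and s.d. (basis points)

def simulate(forecast_rule, T=30, n_subjects=7, seed=0):
    """Median subject forecasts of pi(t+1) and pi(t+2) drive inflation each period."""
    rng = np.random.default_rng(seed)
    denom = 1.0 + sigma * phi_x + sigma * kappa * phi_pi
    pi, rn = np.zeros(T), np.zeros(T)
    for t in range(T):
        rn[t] = (rho * rn[t - 1] if t > 0 else 0.0) + rng.normal(0.0, shock_sd)
        # Aggregate with the median, which is robust to extreme individual forecasts.
        f1 = np.median([forecast_rule(pi, t, 1) for _ in range(n_subjects)])
        f2 = np.median([forecast_rule(pi, t, 2) for _ in range(n_subjects)])
        pi[t] = ((1 + beta * (1 + sigma * phi_x) + sigma * kappa) * f1
                 - beta * f2 + sigma * kappa * rn[t]) / denom
        # The output gap and nominal rate then follow from Eqs. (1) and (3).
    return pi

# Naive adaptive rule: forecast last observed inflation at both horizons.
naive = lambda history, t, horizon: history[t - 1] if t > 0 else 0.0
path = simulate(naive)
```

Holding the pre-drawn shock sequence fixed (via the seed) across treatments mirrors the paper's design choice of reusing shock sequences so that treatment comparisons are not confounded by shock realizations.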

Payoffs

We incentivized expectations using a payoff function, Eq. (11), which exhibits exponential decay in the absolute forecast error. In each period t, subjects received payment for the one-period-ahead forecast formed in period t−1 and the two-period-ahead forecast formed in period t−2. Subjects also provided measures of the uncertainty surrounding their inflation forecasts. Thus, we collected subject-level density forecasts in each period for both forecast horizons. We assume each subject's uncertainty measure is symmetric around her point forecast, and restrict subjects to non-negative values for this uncertainty measure. We use the scoring rule given in Eq. (12), similar to Pfajfar and Žakelj (2016), to incentivize subjects to accurately convey their forecast uncertainty. Subjects earn nothing for their uncertainty measures if realized inflation falls outside their confidence bounds; conditional on capturing realized inflation within their bounds, payoffs decrease in the width of the bounds, so subjects were incentivized to report the smallest bounds they believed would contain realized inflation. Further, we randomly paid for either the forecasting task or the uncertainty task in each period to prevent hedging.
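The incentive structure described above can be sketched as follows. The functional forms and constants here are hypothetical stand-ins for Eqs. (11)–(12), chosen only to reproduce the stated properties (exponential decay in absolute error; zero uncertainty payoff outside the bounds, otherwise decreasing in bound width):

```python
import math

MAX_POINTS = 40.0  # hypothetical maximum payoff per task
DECAY = 0.01       # hypothetical decay rate per basis point

def forecast_payoff(forecast, realized):
    """Payoff decays exponentially in the absolute forecast error (cf. Eq. (11))."""
    return MAX_POINTS * math.exp(-DECAY * abs(forecast - realized))

def uncertainty_payoff(forecast, half_width, realized):
    """Zero if realized inflation falls outside the symmetric confidence bounds;
    otherwise decreasing in the width of the bounds (cf. Eq. (12))."""
    if half_width < 0:
        raise ValueError("uncertainty measure must be non-negative")
    if abs(realized - forecast) > half_width:
        return 0.0
    return MAX_POINTS / (1.0 + DECAY * half_width)
```

Under these stand-ins, a perfect forecast earns the maximum payoff, and a subject who captures realized inflation with a narrower band earns more than one who uses a wider band, matching the incentives described in the text.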

Treatments

This study explores how forecasters’ expectations respond to varying degrees of precision in central bank projections both before and during the pandemic. Our primary goal is to determine whether the external uncertainty shocks induced by the pandemic altered expectations formation and forecaster confidence in public signals in the lab. Table 1 summarizes our three-leg between-subject experimental design.
Table 1

Treatments and Experimental Design.

Panel A: Treatments
Timing Treatment | Information Treatment | Dates                        | Sessions
Pre-COVID        | NoComm                | October 15–22, 2019          | 6
Pre-COVID        | Point                 | October 17–November 1, 2019  | 6
Pre-COVID        | Point&Density         | October 17–24, 2019          | 6
Early COVID      | NoComm                | April 16–May 28, 2020        | 6
Early COVID      | Point                 | April 22–May 29, 2020        | 6
Early COVID      | Point&Density         | April 27–June 25, 2020       | 6
Late COVID       | NoComm                | November 15–December 11, 2021 | 6
Late COVID       | Point                 | October 22–29, 2021          | 6
Late COVID       | Point&Density         | October 20–29, 2021          | 6

Panel B: Experimental Setting

Procedures        | In-Person (Pre- and Late COVID) | Online (Early and Late COVID)
Recruiting        | SONA and ORSEE                  | SONA and ORSEE
Check-in          | In person with R.A.             | Online with R.A. over Zoom
Consent form      | Received once in lab            | Mailed in advance
Instructions      | Paper copy                      | Digital link, no downloads
Instructions read | Aloud in person                 | Aloud over Zoom
Payment           | Cash                            | E-transfer (Interac and Venmo)
Show-up fee       | $7                              | $7
Average payment   | $21                             | $22
Session length    | 1.5 h                           | 1.5 h
Participants interacted in an online platform featuring a single screen that updated as new information became available. Figure 1 presents an example screenshot from the experiment. In all treatments, the top left corner of the screen displayed a subject's identification number, the current decision period, the time remaining to make a decision, and the total number of points earned through the end of the previous period. The interface also featured three horizontal history plots. The topmost plot displayed past interest rates and both past and current shocks. The second panel displayed the subject's one-period-ahead inflation forecast (blue dots), the subject's uncertainty surrounding this one-period-ahead forecast (blue shading), and all realized values of inflation (red dots). The third history panel displayed the subject's two-period-ahead inflation forecast (orange dots), the subject's uncertainty surrounding this two-period-ahead forecast (orange shading), and all realized values of inflation (red dots).
Fig. 1

Screenshot of participants’ screen during the experiment.

The first leg of our experimental design involved studying the effects of central bank projections on expectation formation. Information treatment variation appeared in the second and third history plots. In NoComm, participants received no supplementary information about the central bank's outlook for inflation. In Point, the second and third history plots displayed the central bank's evolving five-period inflation path point forecast as green connected dots. In Point&Density, the second and third history plots contained the central bank's evolving point forecasts with the corresponding level of uncertainty (green shading), as shown in Fig. 1. The central bank's point projections assumed ex-ante rationality and, in the Point&Density treatment, were accompanied by a symmetric one-standard-deviation band centered around the point forecast. The second leg of our experimental design involved studying expectation formation before the pandemic, during the onset of COVID-19, and almost two years later when the shock of COVID-19 had dissipated. We use the earlier data discussed in Rholes and Petersen (2021) for the pre-COVID sample and newly collected data from April 16 to June 25, 2020, for the Early COVID sample and from October 21 to December 21, 2021, for the Late COVID sample. The third and final leg of our experimental design investigates the effects of the experimental environment on expectation formation. In the Late COVID sample, we collect data both in person (lab) and online to control for the effects of the laboratory environment on behavior and to compare with pre-COVID (lab) and Early COVID (online) data. We discuss differences between lab and online procedures in the following subsection.

Procedures

We begin by describing the procedures common to sessions conducted before and during the pandemic. We recruited participants through online subject databases at Simon Fraser University (SONA) and Texas A&M University (ORSEE) (Greiner, 2015). We used the first 7 registrants who arrived at each session; later arrivals received a standard $10 show-up fee and were invited to participate in a later session. Average payoffs were $21 pre-pandemic and $22 during the pandemic. We paid subjects immediately following each experimental session. We conducted six sessions of each of the three information treatments and four timing treatments for a total of 72 experimental sessions. Each session involved seven participants forecasting for two sequences of 30 rounds each. Each sequence employed a different variation of the shock sequence so that subjects did not repeat an identical game in the second block of decisions. We pre-drew shock sequences (one per session in a given treatment) so that we could hold these constant across treatments, drawing all sequences from a mean-zero normal distribution with the same standard deviation. We disseminated and read aloud instructions at the beginning of each experimental session; the instructions can be found in the Online Appendix. The instructions included detailed information about subjects' inflation forecasting task, the forecast uncertainty task, how we would incentivize forecasts and uncertainty, and how the experimental economy evolved in response to expectations and aggregate shocks. Participants knew they could use the computer's calculator or spreadsheets if desired. We encouraged subjects to ask clarifying questions at any time and allowed them to refer to the instructions at any point during the experiment. Following the instructions, subjects played four unpaid practice periods during which they could ask questions and then played through the two incentivized sequences. 
Subjects had 65 seconds per decision in the first 9 periods, and 50 seconds for the remaining 21 periods, of each 30-period sequence. Subjects submitted inflation forecasts and corresponding uncertainty measures in basis points using only integer values. Inflation forecasts could take any value while uncertainty measures had to be non-negative. Eliciting forecasts in basis points allowed subjects to forecast with a precision of 1/100th of 1%. The experiment progressed immediately to the next decision period after all participants submitted decisions or once time expired. Panel B of Table 1 presents the key experimental procedures in each wave of the experiment. The procedures in the lab and online sessions differed in two meaningful ways. First, sessions in the Early COVID (online) and Late COVID (online) treatments were conducted remotely, rather than in person in the lab, because of imposed health restrictions. This may have led to some loss of control, as we were unable to monitor whether a subject was distracted or communicating through some other means (e.g., cell phone, email) with other people. We did, however, insist that participants keep their cameras, speakers, and microphones on at all times to mimic the laboratory environment as closely as possible. In none of the sessions did we suspect participants of communicating with others. Second, participants in the lab sessions had paper instructions while those in the online sessions received the instructions through an online link where the file could not be downloaded. In both lab and online settings, the instructions were distributed after the experimenter made opening remarks and participants completed their consent forms. The form of payment also differed across the environment treatments: participants were paid in cash pre-pandemic and through electronic transfers (e-Transfer in Canada, Venmo or PayPal in the U.S.) during the pandemic. 
Show-up fees remained the same during the pandemic even though online participants potentially faced lower time, transportation, and effort costs than participants coming to the laboratory.

COVID-19 in Texas and British Columbia

All three waves of our data collection took place at the Experimental Economics Laboratory at Texas A&M University in College Station, Texas and the Simon Fraser University Experimental Economics Laboratory in Burnaby, British Columbia. At both institutions, participants were undergraduate students from a wide range of disciplines. Wave 2 data collection took place from April 16 to June 25, 2020. During this time, Texas averaged 5.61 new cases and 0.11 new deaths per 100,000 residents while British Columbia averaged 0.35 new cases and 0.027 new deaths per 100,000 residents (Dong et al., 2020). In other words, Texas had 16 times the daily cases and 4 times the deaths of British Columbia. Wave 3 data collection took place from October 12 to December 21, 2021. By then, the severity of the pandemic had declined somewhat in Texas and increased substantially in British Columbia. During this time, Texas' average new cases jumped to 10.55 per 100,000 and new deaths to 0.23 per 100,000, while British Columbia had 8.41 new cases and 0.10 new deaths per 100,000. The differences in the severity of the pandemic between Texas and British Columbia shrank in our third wave of data collection, with Texas having only 25% more daily cases and 2.3 times the deaths. Initial containment measures were comparable during Wave 2 of our data collection. Texas A&M University (TAMU) cancelled classes from March 16–20, 2020 and resumed classes online from March 23 to April 28, 2020. TAMU began conducting some face-to-face courses in Fall 2020, with strict social distancing and cleaning protocols and a mandatory masking policy, while offering a remote alternative for all courses. Simon Fraser University (SFU) suspended in-person classes from March 17, 2020 and moved online immediately. SFU students returned completely to in-person classes in Fall 2021. 
There was a mandatory masking policy, some social distancing and cleaning protocols, and no vaccination requirement. In general, there were no remote options for classes. Containment measures during Wave 3 of our data collection were also comparable across the two institutions, with both campuses operating in-person in Fall 2021. The key difference is that TAMU students had returned to in-person classes one year before SFU students and, thus, had more exposure to the background health risks associated with in-person learning. We also observe differences between regions in how institutional trust changed in response to the pandemic. In British Columbia (B.C.), trust in public institutions remained relatively stable during the pandemic. In a May 2021 Leger poll, only 21% of British Columbians agreed with the statement that their “trust in the provincial government eroded a lot” during the pandemic (Leger, 2021). At the start of the pandemic, trust levels were quite high in B.C.: aggregate trust increased from 37% to 40% between 2019 and 2020. While B.C. trust in politicians was very low (10%), trust in doctors (69%), scientists (62%), and educators (58%) was notably high. Trust in doctors and scientists rose to 85% by May 2021. At the national level, overall trust in the Bank of Canada increased from 42% in February 2019 to 48% in February and May 2020, and 50% in September 2020 (Bank of Canada, 2020). Though direct measures of institutional trust for Texas specifically are seemingly unavailable, we assert that trust in medical scientists, scientists, and public institutions likely decreased among TAMU students over the onset of the pandemic. We base this on two observations. First, the Pew Research Center (2022) shows that trust in scientists, medical scientists, and public institutions fell markedly among conservatives between April and November 2020. Second, the large majority of TAMU students are conservative.
Demographics also differed notably across the two institutions. Roughly 2% of the student population at TAMU consisted of international students (Niche, 2022), compared with 20% at SFU. Newcomers who have been in a country for fewer than 15 years tend to be more trusting. In Canada, for instance, the trust level of newcomers is 10 percentage points higher than that of those born in the country (Proof Strategies, 2022).

Hypotheses

It is difficult to say how the COVID-19 shock should have manifested in the COVID-era sessions. During this time, people in Canada and the United States faced relatively greater health and economic uncertainty, as well as increased social isolation. Approval of federal and provincial leaders improved during COVID-19 in Canada (Grenier, 2020), while there was no significant change in the United States among individuals who faced lockdown (Coibion et al., 2020). Still, the experimental economy we implement is independent of the real world, and none of the features of our experiment changed. The only channel through which the accuracy and credibility of the central bank projections might be affected is participants’ own usage of the projections. As our experiment was exploratory in nature, we are hesitant to form hypotheses about the effects of the COVID-19 shock on participants’ behavior. At the onset of the pandemic, cases and deaths were significantly greater in Texas than in British Columbia. As such, we expect that any effects from COVID-19 would be more pronounced in our Texas samples, and we expect relatively smaller differences between the two samples in our Late COVID wave. Earlier experimental work has shown that participants have greater difficulty learning and exhibit less attention in online environments (Alpert, Couch, Harmon, 2016; Shachat, Walker, Wei, 2020). Paper instructions provided alongside digital instructions have been found to improve comprehension and performance in pre-experiment quizzes and to reduce non-money-maximizing behavior (Freeman et al., 2018). For these reasons, we hypothesize that forecast accuracy, coordination, and confidence will be significantly lower in our online sessions.

Results

We present summary statistics of forecast performance by information condition, procedure, and timing in Table 2 . This table shows participants’ mean absolute forecast errors, deviations from the REE forecast, mean disagreement (measured as the interquartile range (IQR) of forecasts each period for each session), and mean uncertainty for one- and two-period ahead inflation forecasts.
Table 2

Forecast statistics.

                      (1)       (2)       (3)       (4)       (5)       (6)       (7)       (8)
                      Abs. Forecast Errors    Dev. from REE       IQR                 Uncertainty
                      Ei,tπt+1  Ei,tπt+2  Ei,tπt+1  Ei,tπt+2  Ei,tπt+1  Ei,tπt+2  Ei,tπt+1  Ei,tπt+2

Pre COVID (lab)
NoComm                35.90     42.67     34.31     32.52     28.52     33.42     26.69     32.89
                      (55.87)   (54.80)   (55.44)   (49.88)   (21.89)   (24.88)   (36.95)   (91.62)
Point                 30.84     34.86     13.77     13.02     22.04     21.66     17.33     21.10
                      (27.67)   (27.29)   (20.45)   (17.06)   (17.04)   (16.40)   (17.13)   (23.93)
Point&Density         33.75     37.90     18.22     16.57     28.79     28.25     30.35     34.83
                      (31.07)   (35.47)   (24.17)   (26.10)   (22.25)   (21.60)   (29.36)   (32.32)

Early COVID (online)
NoComm                33.28     40.29     27.58     29.63     29.81     32.72     19.65     23.61
                      (32.96)   (41.25)   (29.91)   (36.18)   (22.09)   (21.03)   (16.35)   (23.08)
Point                 32.58     37.55     19.46     19.56     26.17     27.37     16.38     17.35
                      (34.25)   (40.28)   (31.21)   (36.96)   (23.00)   (24.53)   (27.38)   (24.36)
Point&Density         32.68     37.41     17.91     18.24     27.09     26.83     23.18     24.57
                      (41.05)   (55.67)   (37.08)   (50.90)   (19.57)   (20.59)   (32.02)   (28.23)

Late COVID (lab)
NoComm                30.62     33.17     27.05     25.56     35.77     34.77     21.79     25.11
                      (33.72)   (34.51)   (31.64)   (32.32)   (35.06)   (28.23)   (26.15)   (38.18)
Point                 34.94     40.42     21.42     21.36     29.59     27.26     20.87     25.42
                      (44.31)   (56.73)   (46.49)   (57.24)   (28.47)   (21.90)   (52.92)   (83.23)
Point&Density         31.78     35.40     17.01     16.87     25.15     23.53     28.42     28.03
                      (30.90)   (31.73)   (27.80)   (26.49)   (23.62)   (16.98)   (453.6)   (324.9)

Late COVID (online)
NoComm                29.83     36.00     28.91     28.18     28.87     33.63     22.61     25.43
                      (31.26)   (29.12)   (31.81)   (26.95)   (18.33)   (21.23)   (21.08)   (26.41)
Point                 29.89     33.59     14.24     13.58     24.66     23.10     18.49     20.85
                      (26.52)   (28.04)   (20.49)   (20.39)   (18.06)   (16.66)   (16.31)   (25.35)
Point&Density         37.84     37.42     26.48     21.93     42.17     31.46     22.18     24.17
                      (42.67)   (39.15)   (40.69)   (35.98)   (42.17)   (27.66)   (26.82)   (29.37)

This table presents one- and two-period ahead inflation forecast statistics. Data from Repetitions 1 and 2 are pooled together. Columns (1) and (2) present the mean absolute forecast errors, Columns (3) and (4) present the mean deviations from the REE solution, Columns (5) and (6) present the mean interquartile range of forecasts, and Columns (7) and (8) present the mean perceived forecast errors. Standard deviations are displayed in parentheses.
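Disagreement in Table 2 is computed per session-period as the interquartile range of submitted forecasts. A minimal sketch of that computation (the function name is ours, not the authors'):

```python
import numpy as np

def disagreement_iqr(forecasts):
    """Forecast disagreement for one session-period, measured as the
    interquartile range (75th minus 25th percentile) of forecasts."""
    q75, q25 = np.percentile(forecasts, [75, 25])
    return q75 - q25

# Seven hypothetical one-period-ahead inflation forecasts, in bps
print(disagreement_iqr([20, 25, 30, 30, 35, 40, 60]))  # -> 10.0
```

Averaging this statistic over all periods of a session yields the session-level disagreement entries reported in the table.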

Our initial interest was in whether the laboratory insulates the LtFE framework against external shocks. This is a fundamental question, since LtFEs assume that macroeconomists can exert full experimental control in the lab, so that results are influenced by neither correlated nor idiosyncratic external factors. To address it, we ran a wave of experimental sessions immediately following the onset of the COVID-19 pandemic. However, pandemic conditions required that we run these sessions online rather than in the lab, which introduced a possible confound. To address this, we ran a third wave of experimental sessions, collecting data both online and in the lab simultaneously. Temporal proximity in this final wave should net out pandemic-induced effects, so that differences between results obtained in each experimental setting (online and lab) are due entirely to the setting. This extensive collection effort allows us to offer three main results. First, we demonstrate the effect of the initial shock of COVID. Next, we show how the effect of COVID changed as the virus and its resultant background effects evolved over time. Finally, we show the effect of conducting LtFEs in an online rather than an in-person setting. To disentangle the effects of COVID from the experimental setting, we estimate a series of random effects panel regressions, pooling data from all treatments. The general specification is given by:

y_{i,t} = α + β1·Covid + β2·LateCovid + β3·Online + u_i + ε_{i,t},    (13)

where y_{i,t} refers to our key dependent variables related to forecasting behavior.
We focus on four key variables. At the participant level, we study participants’ absolute forecast errors, deviations from rationality, and elicited uncertainty. At the session level, we study disagreement, measured as the inter-quartile range of forecasts. Covid is a dummy variable that takes the value of 1 for data collected in the Early COVID and Late COVID waves, and zero otherwise. LateCovid is a dummy variable for data collected in the Late COVID wave, while Online is a dummy variable indicating data collected in Early COVID (online) and Late COVID (online) sessions. For subject-level specifications, the random effect, u_i, controls for the deviation of participant i from the sample average. For session-level specifications, u_i instead refers to deviations of the session from the session-level average.1 The estimated constant α captures mean pre-COVID data as our baseline, so that β1–β3 estimate effects relative to our pre-COVID results. β1 estimates the additional effect COVID had on expectations. β2 estimates the effect that Late COVID had on expectation formation relative to Early COVID. Finally, β3 estimates the additional effect of participating in an online laboratory environment relative to an in-person setting. We present results from these regressions in Table 4 . Auxiliary panel regressions detailing the individual treatment comparisons appear in our Online Appendix.
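The random-intercept specification above can be sketched as follows, assuming a long-format DataFrame with one row per subject-period. All column names (abs_error, covid, online, subject) are illustrative rather than the authors' actual variable names, and the data are synthetic; the LateCovid dummy is omitted from this two-wave sketch.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_per = 60, 20

# Wave/setting dummies are constant within a subject's session.
covid = rng.integers(0, 2, n_subj).astype(float)
online = rng.integers(0, 2, n_subj).astype(float)
u = rng.normal(0, 2, n_subj)  # subject random effect u_i

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_per),
    "covid": np.repeat(covid, n_per),
    "online": np.repeat(online, n_per),
})
# True data-generating process: 35 + 7*Covid - 3*Online + u_i + noise
df["abs_error"] = (35 + 7 * df["covid"] - 3 * df["online"]
                   + np.repeat(u, n_per) + rng.normal(0, 5, len(df)))

# Random-intercept panel regression, analogous to Eq. (13)
fit = smf.mixedlm("abs_error ~ covid + online", df,
                  groups=df["subject"]).fit()
print(fit.params["covid"])  # recovers a value near the true effect of 7
```

As in the paper, the constant estimates the pre-shock baseline and the dummy coefficients estimate the incremental effects of the wave and the experimental setting.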
Table 4

COVID and Procedural Effects.

                 Forecast Error      Dev. from REE       IQR                 Uncertainty
                 Ei,tπt+1  Ei,tπt+2  Ei,tπt+1  Ei,tπt+2  Ei,tπt+1  Ei,tπt+2  Ei,tπt+1  Ei,tπt+2

Panel A: NoComm
Covid            -2.137    -5.718    -8.622*   -5.537    7.764     -0.075    -7.576*   -9.325*
                 (4.29)    (4.32)    (4.49)    (4.49)    (6.83)    (7.47)    (4.34)    (5.66)
LateCovid        -3.140    -3.804*   1.354     -1.415    -0.517    1.402     2.660     1.541
                 (2.08)    (2.30)    (2.24)    (2.59)    (3.23)    (4.00)    (2.29)    (2.76)
Online           -0.598    2.880     1.774     2.364     -6.726    -1.149    0.513     -0.055
                 (2.18)    (2.06)    (2.40)    (2.48)    (5.74)    (5.37)    (2.54)    (3.36)
α                35.901*** 42.690*** 34.321*** 32.513*** 28.528*** 33.447*** 26.703*** 32.894***
                 (3.26)    (3.17)    (3.34)    (3.00)    (2.83)    (4.24)    (3.20)    (4.15)
N                9443      9079      9777      9777      1414      1414      9828      9828
χ2               4.497     9.948     3.943     3.930     1.557     0.244     4.463     4.257

Panel B: Point
Covid            6.887*    9.604**   12.958*** 14.416*** 8.915     9.794*    1.550     0.960
                 (4.03)    (4.67)    (4.52)    (5.13)    (6.79)    (5.88)    (4.97)    (6.87)
LateCovid        -2.705    -3.964*   -5.221**  -5.999**  -1.342    -4.194    2.076     3.460
                 (1.96)    (2.20)    (2.38)    (2.54)    (4.03)    (4.23)    (2.18)    (2.59)
Online           -5.144    -6.918*   -7.263*   -7.873*   -4.968    -4.252    -2.454    -4.677
                 (3.58)    (4.14)    (3.94)    (4.54)    (5.19)    (3.90)    (4.48)    (6.26)
α                30.844*** 34.862*** 13.767*** 13.024*** 22.042*** 21.662*** 17.326*** 21.105***
                 (0.82)    (1.00)    (0.99)    (0.97)    (2.58)    (2.59)    (1.49)    (2.19)
N                9640      9307      9973      9973      1438      1438      9973      9973
χ2               3.460     5.360     10.04     10.61     2.216     2.874     1.579     3.425

Panel C: Point&Density
Covid            -7.329**  -2.782    -10.065** -3.601    -18.609** -9.508    -0.971    -6.436
                 (3.48)    (3.63)    (4.10)    (4.03)    (8.43)    (6.82)    (14.02)   (11.51)
LateCovid        5.214*    0.098     8.650**   3.725     15.034**  4.876     -0.971    -0.381
                 (3.00)    (3.01)    (3.51)    (3.35)    (7.15)    (5.55)    (3.28)    (3.62)
Online           6.283**   2.308     9.761***  5.270     16.702**  7.798     -6.225    -3.861
                 (2.90)    (2.82)    (3.53)    (3.36)    (7.50)    (5.20)    (13.58)   (10.87)
α                33.746*** 37.896*** 18.216*** 16.569*** 28.789*** 28.248*** 30.353*** 34.833***
                 (1.28)    (1.40)    (1.32)    (1.31)    (3.15)    (3.10)    (2.51)    (2.75)
N                9454      9118      9785      9785      1433      1433      9867      9867
χ2               5.162     1.971     7.869     3.343     5.250     2.825     6.838     10.17

This table presents results from a series of random effects panel regressions. The dependent variables are indicated at the top of each column. Covid, LateCovid, and Online are dummy variables that take the value of 1 if the session data was collected in either the Early or Late COVID waves, the Late COVID wave, and online, respectively. α denotes the estimated constant and is the mean estimate from the pre-COVID wave. Robust standard errors are reported in parentheses. *, **, and *** denote statistical significance at the 10%, 5%, and 1% levels, respectively.

We also make use of two sets of figures throughout our results section. The first set fixes the treatment and plots kernel density functions of one- and two-period-ahead absolute forecast errors, deviations from RE, forecast disagreement, and individual-level forecast uncertainty for each possible combination of time period and experimental setting. These are Figs. 2 , 3 , 4 , and 5 , respectively. Our second set of figures instead fixes the time period and experimental setting and provides kernel density plots for the same outcomes for each treatment. Following the same outcome order as above, these are Figs. 6 , 7 , 8 , and 9 .
Fig. 2

Absolute inflation forecast errors, Ei,tπt+1 and Ei,tπt+2.

Fig. 3

Absolute deviations from RE, Ei,tπt+1 and Ei,tπt+2.

Fig. 4

IQR of inflation forecasts, Ei,tπt+1 and Ei,tπt+2.

Fig. 5

Expected forecast errors, Ei,tπt+1 and Ei,tπt+2.

Fig. 6

Absolute inflation forecast errors, Ei,tπt+1 and Ei,tπt+2.

Fig. 7

Deviations from RE, Ei,tπt+1 and Ei,tπt+2.

Fig. 8

Inter-quartile ranges of inflation forecasts, Ei,tπt+1 and Ei,tπt+2.

Fig. 9

Expected forecast errors, Ei,tπt+1 and Ei,tπt+2.

Following Rholes and Petersen (2021), we classify all participants into one of five general classes of forecasting heuristics, listed in Table 3 , by identifying the heuristic that produces the lowest mean squared error. The distributions of inexperienced and experienced one-period ahead forecasting heuristics are presented in Figs. 10 and 11 , respectively.
Table 3

Forecasting Heuristics.

Model  Heuristic Name         Specification
M1     Ex-Ante Rational       Ei,tπt+1 = f(r^n_{t−1}, εt)
M2     Cognitive Discounting  Ei,tπt+1 = α·f(r^n_{t−1}, εt)
M3     Constant Gain          Ei,tπt+1 = Ei,t−1πt − γ(Ei,t−2πt−1 − πt−1)
M4     Inflation Target       Ei,tπt+1 = 0
M5     Trend Chasing          Ei,tπt+1 = πt−1 + τ(πt−1 − πt−2)

Models of expectations as functions of exogenous or historical data. The free parameters α, γ, and τ are grid-searched in increments of 0.1.
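The classification step can be sketched as a grid search over each rule's free parameter, assigning each participant the rule with the lowest mean squared error. The sketch below covers only the backward-looking rules M3–M5, since M1 and M2 require the model's structural forecast f(r^n_{t−1}, εt); the function and variable names are ours, not the authors'.

```python
import numpy as np

def classify_heuristic(fc, pi):
    """fc[t] = E_{i,t} pi_{t+1} (subject's forecast); pi[t] = realized
    inflation. Returns the Table 3 rule with the lowest MSE, searching
    free parameters in increments of 0.1."""
    T = len(pi)
    t = np.arange(2, T)                            # periods with enough lags
    grid = np.round(np.arange(0.0, 1.01, 0.1), 1)  # 0.1 increments
    mse = {}
    # M4: inflation target -- always forecast the target of 0
    mse["M4 inflation target"] = np.mean(fc[t] ** 2)
    # M3: constant gain -- E_{t-1}pi_t - g*(E_{t-2}pi_{t-1} - pi_{t-1})
    mse["M3 constant gain"] = min(
        np.mean((fc[t] - (fc[t - 1] - g * (fc[t - 2] - pi[t - 1]))) ** 2)
        for g in grid)
    # M5: trend chasing -- pi_{t-1} + tau*(pi_{t-1} - pi_{t-2})
    mse["M5 trend chasing"] = min(
        np.mean((fc[t] - (pi[t - 1] + tau * (pi[t - 1] - pi[t - 2]))) ** 2)
        for tau in grid)
    return min(mse, key=mse.get)
```

For example, a subject who always forecasts the inflation target of zero is classified as M4, while a subject who mechanically extrapolates recent inflation changes is classified as M5.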

Fig. 10

Inflation forecasting heuristics - Repetition 1.

Fig. 11

Inflation forecasting heuristics - Repetition 2.


NoComm treatment

Early COVID had a modest effect on forecaster performance in NoComm. Forecast errors did not change meaningfully between the first and second waves of our experiment: they decreased by an average of 3 bps following the onset of COVID, and the differences are not statistically significant. Participants’ one-period ahead forecast deviations from RE improved by approximately 9 bps. This is associated with a 12 percentage point increase in the number of participants classified as forming model-consistent expectations (ex-ante rational). We show this in Fig. 10. Forecaster confidence also improved during COVID: participants’ own expected errors decreased by between 7 and 9 bps, and both effects are significant at the 10% level after controlling for the online interface. This suggests that the well-documented increase in general economic uncertainty associated with COVID-19 did not transmit to individual forecast uncertainty in the lab (Baker et al., 2020). Finally, we observe that forecast disagreement, measured by the IQR, changed relatively little from the Pre-COVID (lab) wave to the Early COVID (online) wave. In Table 13 of Appendix B, we find that the differences in disagreement between the two waves are not statistically significant. When we control for the effects of participating online in Table 4, we can separate the effects of COVID from those of changing the experimental environment. In doing so, we find these two factors have opposing effects on forecaster disagreement: moving online reduced one-period ahead disagreement by roughly 7 bps, while the onset of COVID increased disagreement by 8 bps. However, neither effect is statistically different from zero at conventional levels of significance. This is consistent with estimates obtained by directly comparing our two Late COVID treatments, which we show in Table 10 in Appendix B.
The progression of COVID neither amplifies nor attenuates the effects of Early COVID on forecast behavior. In nearly all of our estimates, forecast behavior did not change in a statistically or quantitatively significant way. The one exception is two-period ahead inflation forecast errors, which improved by roughly 4 bps. We provide a direct comparison of Early and Late COVID (online) results in Table 14 in Appendix B. Our findings suggest that forecasting behavior in the baseline LtFE framework is largely robust both to online experimentation and to the COVID-19 shock.

Point projection treatment

We generally find that the information provision treatments are less robust to COVID than the baseline NoComm treatment. In the Point treatment, forecast errors increased by 2 and 3 bps for one- and two-period ahead forecasts, respectively, with the onset of COVID. After controlling for the online experimental environment in Panel B of Table 4, we find that COVID increased forecast errors by an estimated 7 and 10 bps, respectively. This increase in forecast errors is offset by a simultaneous improvement in forecast accuracy as participants interacted online in the Early COVID wave. We also observe similar increases in deviations from rationality. Mean deviations from RE increased by between 40 and 50% between Pre-COVID (lab) and Early COVID (online). Disentangling the effects of participating online from those of Early COVID reveals that COVID actually had much stronger effects: the estimated effect of COVID is an increase in deviations from RE of between 13 and 14 bps, roughly a doubling of the pre-COVID results. The effect of COVID on disagreement in our Point sessions is quite similar to that in NoComm. The exception is that COVID led to a marginally significant increase in two-period-ahead forecast disagreement of about 10 bps. This is unsurprising, given the increases observed in absolute forecast errors and absolute deviations from RE. The differences between the estimated effect of Early COVID in Table 4 and in our direct comparison of Pre-COVID and Early COVID in Table 8 arise because moving online also has significant effects on rationality and forecast disagreement that run opposite to those induced by Early COVID. Moving online leads to a 7 to 8 bps decrease in deviations from RE and a 4 to 5 bps decrease in forecast disagreement. We observe similar findings when we compare Late COVID (lab) and Late COVID (online) forecasting behavior in Table 10 in Appendix B. Point projections perform somewhat better in an online setting than in an in-person one.
Overall, these results suggest that the initial shock of COVID weakened the ability of saliently communicated point projections to coordinate expectations generally and on the rational benchmark specifically. Not surprisingly, this is accompanied by an increase in absolute forecast errors. However, the negative effects of COVID on forecast behavior dissipated somewhat in our Late COVID samples. Nonetheless, when we compare Pre-COVID (lab) to Late COVID (lab) and thus keep the experimental environment constant, we find our COVID results remain robust. Forecast errors, deviations from RE, and disagreement in Late Covid (lab) remain relatively high compared to Pre-COVID.

Point&Density projection treatment

Our findings in the Point&Density projection treatment stand in sharp contrast to those reported for the Point projection treatment. We observe in Panel C of Table 4 that one-period ahead inflation forecast errors, deviations from RE, and forecast disagreement all decreased substantially with the onset of COVID. As in the Point treatment, when we control for the effects of being online, we find that, relative to the pre-COVID wave, one-period ahead errors decreased by an estimated 7 bps (21%), deviations from RE decreased by about 10 bps (55%), and disagreement decreased by 19 bps (65%) due to the onset of COVID. That is, the noisier projection was much more effective at managing and coordinating expectations in the presence of COVID. As in the Point treatment, the effect of moving online served to attenuate the observed Point&Density Early COVID effects. Interacting online in Early COVID actually worsened forecast performance when participants were exposed to the less precise projections. Comparisons of laboratory and online forecasting performance in Table 10 during Late COVID support these results. One-period ahead forecast errors, deviations from RE, and disagreement were all made significantly worse by online interaction: forecast errors increased by an estimated 6 bps (19%), deviations from RE by 10 bps (54%), and IQR by 19 bps (64%). For reference, moving our experiment online increased IQR by four-fifths of a standard deviation. The sizeable increase in short-term forecast disagreement induced by moving online aligns with Altig et al. (2020), who show large and statistically significant increases in forecast disagreement in both the U.S. and U.K. following the onset of the pandemic. Landier and Thesmar (2020) show a similar increase in earnings forecast disagreement among professional forecasters, where disagreement is significantly higher over the short term. Armantier et al.
(2021) also show that the onset of the pandemic led to a significant increase in short-term inflation forecast disagreement among participants in the NY Fed’s Survey of Consumer Expectations. Thus, the effect that the large-scale uncertainty shock of COVID had on real-world forecast disagreement is mirrored in our Late COVID (online) results. Likewise, the improvements in forecast performance due to the onset of COVID dissipated in our Late COVID (online) sample. Forecast errors, deviations from RE, and disagreement in Late COVID (online) all increased relative to Early COVID (online). Comparing Pre-COVID (lab) to Late COVID (lab), we observe no statistically significant difference in errors or deviations from RE, confirming this attenuation of COVID effects over time. Disagreement still remained significantly lower in Late COVID (lab), by roughly 4 and 5 bps for one- and two-period ahead forecasts.

Central bank credibility and COVID

COVID appears to have had notable effects on the credibility participants assigned to central bank projections in our Point and Point&Density treatments. We next explore whether participants reacted differently to central bank errors following the onset of COVID. Our dependent variable is participant i’s absolute deviation from the central bank’s RE projection about period t+k inflation, where k ∈ {1, 2}, and is our key measure of central bank credibility. FE^CB_{k,t−k−1} denotes the absolute forecast error of the central bank about period t−1 inflation formed in period t−k−1. By period t, the participant will have observed how accurate the central bank’s most recent one- and two-period ahead forecasts were. We interact the central bank’s absolute forecast errors with dummy variables for Covid, LateCovid, and Online to control for variation in the time period and experimental environment. We show estimates from these regressions in Table 5 .
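The mechanics of this interacted regression can be sketched as follows. The paper estimates random effects panel regressions; this minimal version is pooled OLS on synthetic data, all names are illustrative, and the 0.25 interaction effect below is an assumption that loosely mimics the Point-treatment finding of a roughly tripled response under COVID.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
fe_cb = rng.exponential(20.0, n)             # CB absolute forecast error (bps)
covid = rng.integers(0, 2, n).astype(float)  # 1 = COVID-era observation

# True process: baseline sensitivity 0.12 to CB errors, rising by 0.25
# under COVID, plus noise.
dev = 12 + 0.12 * fe_cb + 0.25 * fe_cb * covid + rng.normal(0, 5, n)

# Design matrix with the interaction term: [1, FE, FE x Covid]
X = np.column_stack([np.ones(n), fe_cb, fe_cb * covid])
beta, *_ = np.linalg.lstsq(X, dev, rcond=None)
print(beta[2])  # interaction coefficient, close to the true 0.25
```

A positive interaction coefficient means participants deviate further from the RE projection per unit of central bank error during COVID, i.e. credibility became more fragile.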
Table 5

Central bank credibility.

Dep. var.:                   Point               Point&Density
Dev. from REE                Ei,tπt+1  Ei,tπt+2  Ei,tπt+1  Ei,tπt+2

FE^CB_{1,t−2}                0.125***            0.100***
                             (0.03)              (0.02)
FE^CB_{1,t−2} × Covid        0.261***            -0.033
                             (0.07)              (0.07)
FE^CB_{1,t−2} × LateCovid    -0.070              0.052
                             (0.05)              (0.06)
FE^CB_{1,t−2} × Online       -0.163***           0.062
                             (0.05)              (0.04)
FE^CB_{2,t−3}                          0.110***            0.163***
                                       (0.03)              (0.04)
FE^CB_{2,t−3} × Covid                  0.224***            -0.048
                                       (0.07)              (0.07)
FE^CB_{2,t−3} × LateCovid              -0.090*             0.041
                                       (0.05)              (0.05)
FE^CB_{2,t−3} × Online                 -0.098*             0.020
                                       (0.05)              (0.05)
α                            11.889*** 11.409*** 15.883*** 13.124***
                             (1.06)    (1.42)    (1.03)    (0.93)

N                            9308      8975      9088      8752
χ2                           150.9     99.03     76.64     83.80

This table presents results from a series of random effects panel regressions. The dependent variables are indicated at the top of each column. FE^CB_{1,t−2} and FE^CB_{2,t−3} refer to the central bank’s absolute forecast errors for one- and two-period ahead inflation, respectively. Online is a dummy variable that takes the value of 1 if the session data was collected online. α denotes the estimated constant. Robust standard errors are reported in parentheses. *, **, and *** denote statistical significance at the 10%, 5%, and 1% levels, respectively.

In both information treatments, an increase in the central bank’s absolute forecast error Pre-COVID led to a statistically significant decrease in credibility at both forecast horizons. For every 10 bps increase in the central bank’s absolute forecast error, mean deviations from RE increased by about 1 bps at both horizons in Point sessions, and by between 1 and 2 bps at both horizons in our Point&Density sessions. However, the initial COVID shock affected central bank credibility in our two information treatments in starkly different ways. Point participants responded three times as strongly to central bank forecast errors relative to their Pre-COVID counterparts. This increased fragility of credibility did not fully dissipate in our Late COVID samples. By contrast, COVID did not impact central bank credibility in our Point&Density treatment: we observe small and statistically insignificant differences in deviations from RE in response to recent central bank forecast errors.

Regional differences in response to COVID-19

Texas and British Columbia had very different experiences with the pandemic. At the onset of the pandemic, Texas had 16 times the daily cases and four times the deaths per capita of British Columbia. Consequently, we hypothesized that the effects of the pandemic would be more pronounced for participants from Texas than from British Columbia. We extend Eq. (13) to estimate the differential effects of the pandemic and of interacting online across institutions. Specifically, we interact a dummy variable, TAMU, which takes the value of 1 for data sourced from TAMU sessions, with each of our explanatory variables. We report estimates from Eq. (15) for each of our forecasting metrics in Table 6 . We also report estimates of Eq. (13) for TAMU and SFU participants separately in Tables 14 and 15 of the Online Appendix.
Table 6

COVID and procedural effects with regional controls.

                 Forecast Error      Dev. from REE       IQR                 Uncertainty
                 Ei,tπt+1  Ei,tπt+2  Ei,tπt+1  Ei,tπt+2  Ei,tπt+1  Ei,tπt+2  Ei,tπt+1  Ei,tπt+2

Panel A: NoComm
TAMU             1.047     1.034     -1.093    -0.639    -0.978    2.117     2.267     2.813
                 (6.53)    (6.33)    (6.68)    (5.99)    (5.66)    (8.46)    (6.39)    (8.30)
Covid            1.737     0.269     -7.034*   -1.268    8.749     6.277     -0.928    0.460
                 (4.17)    (4.73)    (4.23)    (4.50)    (6.85)    (9.12)    (5.63)    (8.73)
Covid × TAMU     -8.020    -12.474   -3.486    -8.983    -2.000    -12.736   -13.465   -19.902*
                 (8.53)    (8.46)    (8.82)    (8.70)    (13.08)   (13.93)   (8.61)    (11.14)
LateCovid        -4.864**  -6.964*** 1.372     -2.935    -3.156    -3.150    1.008     -1.494
                 (2.21)    (2.28)    (2.28)    (2.48)    (3.73)    (4.71)    (3.13)    (4.21)
LateCovid × TAMU 3.721     6.821     0.273     3.485     5.306     9.137     3.474     6.401
                 (4.07)    (4.28)    (4.17)    (4.72)    (5.13)    (5.90)    (4.52)    (5.30)
Online           0.347     4.412**   5.241**   5.570**   -2.338    1.934     -1.711    -2.526
                 (2.41)    (2.12)    (2.62)    (2.71)    (3.87)    (5.03)    (4.10)    (5.94)
Online × TAMU    -1.934    -3.152    -7.007    -6.485    -8.813    -6.202    4.442     4.903
                 (4.31)    (4.00)    (4.62)    (4.71)    (11.31)   (10.50)   (5.00)    (6.59)
α                35.378*** 42.173*** 34.867*** 32.833*** 29.017*** 32.389*** 25.569*** 31.488***
                 (3.09)    (3.73)    (2.95)    (2.99)    (4.90)    (7.05)    (3.31)    (5.81)
N                9443      9079      9777      9777      1414      1414      9828      9828
χ2               28.89     55.00     41.82     60.52     21.66     29.94     17.91     27.79

Panel B: Point
TAMU             -8.424*** -9.709*** -10.586*** -10.383*** -15.750*** -14.420*** -9.840*** -13.064***
                 (1.35)    (1.69)    (1.60)    (1.57)    (2.35)    (3.03)    (2.78)    (4.14)
Covid            -5.941*** -2.132    1.799     3.983     -6.281    0.460     -11.881*** -16.807***
                 (1.94)    (2.41)    (2.76)    (2.99)    (4.33)    (5.11)    (4.18)    (5.60)
Covid × TAMU     25.526*** 23.340**  22.202**  20.768**  30.392**  18.668*   26.916*** 35.569***
                 (7.86)    (9.17)    (8.90)    (10.14)   (12.51)   (10.66)   (9.67)    (13.43)
LateCovid        2.948**   1.568     -1.220    -2.388    4.978     0.781     1.068     3.276
                 (1.31)    (1.51)    (1.93)    (2.01)    (3.54)    (3.72)    (3.16)    (3.73)
LateCovid × TAMU -11.176*** -10.931*** -7.888* -7.124    -12.640*  -9.951    1.963     0.335
                 (3.65)    (4.19)    (4.65)    (4.98)    (7.62)    (8.20)    (4.37)    (5.18)
Online           -2.547**  -5.760*** -4.596**  -5.771*** -1.348    -5.271    5.789**   5.481
                 (1.25)    (1.59)    (1.81)    (2.11)    (3.00)    (3.54)    (2.74)    (3.87)
Online × TAMU    -5.209    -2.324    -5.295    -4.182    -7.241    2.036     -16.546*  -20.376*
                 (7.13)    (8.26)    (7.85)    (9.07)    (10.26)   (7.55)    (8.71)    (12.28)
α                35.056*** 39.716*** 19.060*** 18.215*** 29.917*** 28.872*** 22.246*** 27.637***
                 (1.21)    (1.54)    (1.40)    (1.39)    (1.77)    (2.57)    (2.46)    (3.52)
N                9640      9307      9973      9973      1438      1438      9973      9973
χ2               77.80     81.18     90.68     91.68     62.32     44.60     33.74     20.37

Panel C: Point&Density
TAMU             -3.037    -0.658    5.143**   4.403*    3.529     4.863     1.513     0.787
                 (2.55)    (2.81)    (2.59)    (2.58)    (6.22)    (6.05)    (5.01)    (5.51)
Covid            -10.656** -2.861    -9.660    -1.490    -15.025** 1.657     -13.328*  -15.302*
                 (5.13)    (5.06)    (6.40)    (5.95)    (6.52)    (5.14)    (7.13)    (7.81)
Covid × TAMU     6.388     -0.075    -1.067    -4.401    -7.361    -22.209*  23.646    16.866
                 (6.82)    (7.15)    (8.03)    (7.96)    (16.54)   (13.19)   (27.28)   (22.45)
LateCovid        10.243**  4.076     14.591*** 7.559     15.618*** 0.346     -0.633    -1.155
                 (4.42)    (4.21)    (5.55)    (5.04)    (5.61)    (3.12)    (5.75)    (6.41)
LateCovid × TAMU -9.673    -7.577    -11.463*  -7.335    -1.158    9.059     -0.218    1.936
                 (5.90)    (5.95)    (6.90)    (6.66)    (14.08)   (10.87)   (6.44)    (7.15)
Online           9.308**   3.654     13.378**  6.998     17.601*** 2.910     11.557**  10.328**
                 (4.58)    (4.42)    (5.74)    (5.42)    (5.82)    (3.39)    (4.58)    (5.08)
Online × TAMU    -5.819    -2.505    -7.014    -3.321    -1.606    9.655     -34.635   -27.649
                 (5.64)    (5.49)    (6.90)    (6.63)    (14.75)   (10.13)   (26.39)   (21.16)
α                35.264*** 38.224*** 15.645*** 14.367*** 27.024*** 25.817*** 29.596*** 34.439***
                 (1.79)    (1.85)    (1.89)    (1.76)    (2.70)    (3.46)    (3.68)    (3.88)
N                9454      9118      9785      9785      1432      1432      9867      9867
χ2               22.07     19.23     22.23     14.93     13.42     9.604     32.46     31.46

This table presents results from a series of random effects panel regressions. The dependent variables are indicated at the top of each column. Covid, LateCovid, and Online are dummy variables that take the value of 1 if the session data was collected in either the Early or Late COVID waves, the Late COVID wave, and online, respectively. α denotes the estimated constant and is the mean estimate from the pre-COVID wave. Robust standard errors are reported in parentheses. *, **, and *** denote statistical significance at the 10%, 5%, and 1% levels, respectively.

In NoComm, we find little difference across the two institutions in terms of basic forecasting ability and coordination before the pandemic. The estimated coefficient on the TAMU dummy variable is not statistically different from zero for any of our variables. COVID-19 appears to have reduced forecast errors, deviations from RE, disagreement, and uncertainty more in our TAMU subject pools than in our SFU ones, but the effects are noisy and not precisely estimated. We do observe a notably large decrease in uncertainty for two-period ahead forecasts among TAMU participants but not SFU participants (Table 6). The differences in behavior across subject pools are more pronounced when we consider our information treatments. The effects of the pandemic are much stronger and more precisely estimated for the TAMU participants in the Point treatment. TAMU participants decreased their use of the precise point projection significantly more than SFU participants after the onset of the pandemic. This led to significantly larger forecast errors and disagreement among TAMU subjects. By allowing for institution-specific effects, we find that the pandemic significantly boosted the confidence of SFU subjects and lowered that of TAMU participants. This observation is consistent with survey evidence that Canadian residents slightly increased their confidence in government institutions at the start of the pandemic, while the opposite is true in the United States.
We do not observe many differences between institutions in the Point&Density treatment. Pre-COVID, TAMU participants’ forecasts deviated by about 5 bps more from the REE than SFU participants’, suggesting a slightly lower level of credibility in the projection. TAMU disagreement was significantly lower after the onset of COVID-19, suggesting better coordination of expectations. However, we do not find that forecasts were much more in line with the REE. We also consider institutional effects on central bank credibility by extending Eq. (14) to estimate institutional effects as we did in Eq. (15). We report results from this estimation exercise for both information treatments and forecast horizons in Table 7.
Table 7

Central bank credibility with regional controls.

Dep. var.:                          Point                              Point&Density
Dev. from REE                       E_{i,t}π_{t+1}   E_{i,t}π_{t+2}    E_{i,t}π_{t+1}   E_{i,t}π_{t+2}

TAMU                                -6.604***        -6.092***          3.756            1.624
                                    (1.67)           (1.68)            (2.58)           (2.57)
Covid                               -6.363**         -5.599*           -12.317*         -4.805
                                    (2.82)           (3.34)            (6.41)           (5.70)
Covid × TAMU                        26.152***        27.915**           2.666            1.597
                                    (8.06)           (10.94)           (7.99)           (7.41)
FE^{CB}_{1,t-2}                      0.175***                           0.073***
                                    (0.04)                             (0.03)
FE^{CB}_{1,t-2} × TAMU              -0.114**                            0.062
                                    (0.05)                             (0.05)
FE^{CB}_{1,t-2} × Covid              0.307***                           0.042
                                    (0.09)                             (0.09)
FE^{CB}_{1,t-2} × Covid × TAMU      -0.155                             -0.094
                                    (0.13)                             (0.14)
FE^{CB}_{2,t-3}                                       0.159***                           0.133***
                                                     (0.04)                             (0.04)
FE^{CB}_{2,t-3} × TAMU                               -0.122***                           0.077
                                                     (0.04)                             (0.08)
FE^{CB}_{2,t-3} × Covid                               0.296***                           0.045
                                                     (0.10)                             (0.08)
FE^{CB}_{2,t-3} × Covid × TAMU                       -0.232*                            -0.169
                                                     (0.14)                             (0.14)
Constant (α)                        13.596***        12.865***         13.058***         9.741***
                                    (1.48)           (1.45)            (2.06)           (1.90)

Controls:
Online                              yes              yes               yes              yes
LateCovid                           yes              yes               yes              yes
N                                   9308             8975              9088             8752
χ²                                  235.2            202.3             123.9            126.8

This table presents results from a series of random effects panel regressions. The dependent variables are indicated at the top of each column. FE^{CB}_{1,t-2} and FE^{CB}_{2,t-3} refer to the central bank’s absolute forecast errors for one- and two-period-ahead inflation, respectively. TAMU is a dummy variable that takes the value of 1 if the session data was collected at Texas A&M University. We include controls and a complete set of interactions for the experimental setting and LateCovid waves. α denotes the estimated constant. Robust standard errors are reported in parentheses. *, **, and *** denote increasing levels of statistical significance.

We find that credibility in the Point projection was approximately 6 bps higher among TAMU participants in our pre-COVID wave. The onset of the pandemic led to a 6 bps increase in credibility for SFU subjects and a 20 bps decrease in credibility for TAMU participants. The differences across institutions are highly significant. Central bank forecast errors mattered more to SFU participants than to TAMU ones, especially during COVID. We find similar effects for two-period-ahead forecasts. We do not observe any notable differences between TAMU and SFU students in the Point&Density treatment. TAMU participants do not react in a sizeable or significantly different manner to the projections in any of our experimental waves or in response to central bank forecast errors.
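The institution-specific pandemic effects in Table 7 come from interacting a Covid wave dummy with a TAMU institution dummy. The sketch below is not the authors' estimator (they use random-effects panel regressions with robust standard errors); it is plain pooled OLS on synthetic cross-sectional data, with arbitrary hypothetical coefficient values, intended only to show how a Covid × TAMU interaction separates the pandemic effect by institution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Hypothetical regressors mirroring the table's structure:
# TAMU (institution dummy), Covid (wave dummy), and a stand-in
# for the central bank's lagged absolute forecast error.
tamu = rng.integers(0, 2, n)
covid = rng.integers(0, 2, n)
fe_cb = rng.exponential(10.0, n)

# Arbitrary "true" coefficients, chosen only for this illustration:
# constant, TAMU, Covid, Covid x TAMU, forecast-error slope.
beta = np.array([13.0, -6.6, -6.4, 26.0, 0.18])
X = np.column_stack([np.ones(n), tamu, covid, covid * tamu, fe_cb])
y = X @ beta + rng.normal(0.0, 5.0, n)

# Pooled OLS fit (not the paper's random-effects panel estimator).
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Marginal effect of the pandemic, by institution:
effect_sfu = b_hat[2]               # omitted group (TAMU = 0): Covid alone
effect_tamu = b_hat[2] + b_hat[3]   # TAMU: Covid + Covid x TAMU
```

Reading the fitted coefficients the same way as the table: the Covid coefficient gives the pandemic effect for the omitted group (SFU), while the effect for TAMU participants is the sum of the Covid and Covid × TAMU coefficients.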

Conclusion and discussion

The COVID-19 pandemic has significantly altered how we think, interact, and live. Society has become exposed to significantly greater health, economic, and political uncertainty. Many professional, educational, and social interactions have moved online due to physical distancing requirements. These changes have also forced many experimental economics laboratories to transition from in-person to online experiments. In this paper we explore the robustness of New Keynesian learning-to-forecast experiments (LtFEs) to the evolving background uncertainty associated with COVID-19 and to online experimentation. These questions are important as LtFEs become more widely used to understand expectation formation and to inform the design of policy and central bank communication (Kostyshyna et al., 2021). We find that simple New Keynesian LtFEs are largely robust to the onset of COVID-19 and to online experimentation. This is a reassuring finding, as many experimentalists have been forced to combine in-person and online data due to physical lab closures. At the same time, many researchers have come to appreciate the benefits of conducting experiments online (e.g., easier recruitment of a broader pool of subjects, the ability to run sessions during evenings and weekends, no space or subject restrictions imposed by lab capacity, the convenience of e-transfers over cash payments, and larger sample sizes) and would prefer to continue conducting sessions online when the pandemic ends. We also explored how participants’ inflation forecasts respond to projections of inflation communicated as a precise five-period-ahead path, either with or without a one-standard-deviation confidence interval around the projection, and whether expectation formation in these environments has changed due to the pandemic and online experimentation. We find that COVID-19 has led to a significant change in how participants in LtFEs respond to information provision.
Precise projections of future inflation are less effective at managing expectations after the onset of the pandemic. In particular, we find that, since the start of the pandemic, participants are more skeptical of such projections and their forecasts have become more unanchored in response to erroneous projections. By contrast, imprecise forecasts that convey uncertainty around a point projection have become more effective at managing expectations. Moreover, participants are willing to continue using these noisier projections even when the forecasts become more erroneous. Most of our observed effects of COVID on expectation formation have persisted even as the shock of the pandemic has worn off. We attribute this increase in comfort with imprecise outlooks and reluctance towards overly precise outlooks to the dramatic increase in background uncertainty our participants have been exposed to outside of the lab since 2020. Our experiments were conducted in both Texas and British Columbia. Texas experienced a much more severe onset of the pandemic, with 16 times the daily cases and four times the deaths per capita. Consistent with this vast difference in background uncertainty, we find that Texas participants exhibited significantly greater changes in their willingness to use the different projections than their British Columbia counterparts. Our paper provides new evidence that external exposure to high levels of uncertainty has significant immediate and persistent effects on expectation formation in experimental settings. Our findings provide some potentially valuable insights into how to communicate during an economic crisis. During times of heightened uncertainty, central banks may be well-served to convey forecast uncertainty rather than convey absolute certainty and be proven wrong by history.
This policy recommendation is somewhat bolstered by the fact that the pandemic did not amplify the transmission of forecast uncertainty in our experiments to participants’ own uncertainty. Information provision can be made more or less effective depending on the experimental environment. Point projections are better able to manage expectations when participants interact online, while point and density projections are more effective in laboratory settings. We suspect that online participants are more distracted than our in-lab participants. While we ask subjects not to browse the internet and to turn off their cell phones in both laboratory and online environments, we have significantly less ability to enforce attention in our online settings. With increased distractions online, simpler precise projections can serve as a more effective focal point and better manage expectations. Likewise, more distractions may make it more difficult for participants to focus on where within the projected range to forecast. It is possible that additional information, if sufficiently easy to understand, becomes an even stronger anchor for behavior in an online setting, whereas sufficiently complex information in the same setting instead creates confusion. This would align with evidence that subjects are less reflective and attentive when participating in experiments online (Arechar and Rand, 2021; Shachat et al., 2020) and that learning outcomes are worse online (Alpert et al., 2016; Bettinger et al., 2017; Cacault et al., 2021). Our experiment is presented in a highly contextualized manner. We present public signals to subjects as forecasts from a central bank and fully describe to them the economic system, including the underlying behavior of firms and households. Most learning-to-forecast experiments provide context to better align with macroeconomic modeling and to study central bank communication in the lab.
A notable exception is Duffy and Heinemann (2021), who reframe unemployment and inflation in their forecasting experiment as containers holding varying amounts of water. An open question is the effect of context on participants’ overall expectation formation and their willingness to employ publicly provided projections. Given the relatively lower levels of institutional trust in Texas, we would anticipate a lower overall willingness to forecast in a model-consistent manner or to adopt the central bank’s projections in private forecasts if this contextualization mattered. We find no evidence of this in either our NoComm or Point treatments. It is unclear in a post-pandemic world which of our experimental settings best informs real-world policy. The knee-jerk reaction is to always consider in-person, laboratory experiments as the yardstick against which we measure the efficacy of other approaches and from which we draw our most meaningful inference. However, we should not simply ignore the online study of central bank communication or, more generally, of communication and information provision as tools to guide economic behavior. It is reasonable to assume that an average household obtains most or all of its economic information (the information informing its real decisions) in a digital setting. We obtain much of our information by reading online articles, watching video clips, listening to audio clips, and having online discussions. It may be that little of the information about economic activity that influences the average person’s decisions comes from face-to-face interaction. This has likely become more true since the pandemic forced us to move so much of our lives online. From this perspective, it makes sense to draw inference from both settings. For example, results obtained via laboratory experimentation in our experiment perhaps provide benchmarks against which we can compare online results.
Differences in outcomes between experimental settings then themselves become a meaningful research topic – how can we improve the presentation of online information so that we close the gap in outcomes across experimental environments? Future research on these questions will provide valuable insight into the design of both experiments and public communication.