
Playing defense? Health care in the era of Covid.

Edward N. Okeke

Abstract

Health workers have to balance their own welfare vs. that of their patients, particularly when patients have a readily transmissible disease. These risks become more consequential during an outbreak, and especially so when the chance of severe illness or mortality is non-negligible. One way to reduce risk is by reducing contact with patients. Such changes could be along the intensive or extensive margins. Using data on primary care outpatient encounters during the early months of the Covid-19 pandemic, I document important changes in the intensity of provider-patient interactions. Significantly, I find that adherence to clinical guidelines, the probability that routine procedures such as physical examinations were completed, and even the quality of information given by health providers, all declined sharply. I present evidence that these effects likely reflect risk mitigation behavior by health providers.
Copyright © 2022 Elsevier B.V. All rights reserved.


Keywords:  Covid-19; Health workers; Quality; Risk compensation


Year:  2022        PMID: 35952443      PMCID: PMC9358334          DOI: 10.1016/j.jhealeco.2022.102665

Source DB:  PubMed          Journal:  J Health Econ        ISSN: 0167-6296            Impact factor:   3.804


Introduction

The health-related effects of the Covid-19 pandemic have been extensively documented (Bayani et al., 2021, Lopez-Leon et al., 2021, Giuntella et al., 2021, Andrasfay and Goldman, 2021, Khalil et al., 2020, Okeke et al., 2021, Poudel et al., 2021, Raina et al., 2021, Shapira et al., 2021, Pfefferbaum and North, 2020).1 A key issue, however, that has not attracted sufficient attention is the tradeoff faced by health care workers.2 The risk associated with caring for patients has both increased and become more salient during the pandemic, creating a difficult tradeoff for health providers who now have to balance their own welfare against that of their patients: on the one hand providing proper care to their patients; on the other protecting themselves. These risks are clear (Gómez-Ochoa et al., 2020, Shah et al., 2020, Nguyen et al., 2020, Ing et al., 2020). It is also clear that health care workers are aware of, and concerned about, this risk (Ayub et al., 2020, Sahashi et al., 2021, Malik et al., 2021).3 Might health care providers attempt to compensate for this risk in ways that are relevant for policy? For example, might they change how they interact with patients to reduce exposure? Such changes might occur on the extensive margin, e.g., by reducing in-person contacts with patients or, more subtly, on the intensive margin by changing the nature of the interaction itself. This paper presents evidence that the quality of health care interactions changed in important ways during the early months of the pandemic. The data used in this paper come from the first phase of the pandemic (April to November 2020) and consist of routine primary care encounters in health clinics in Nigeria. 
The context is particularly salient because many health care workers in developing country settings did not have ready access to proper personal protective equipment (McMahon et al., 2020).4 Many also lacked easy access to testing, meaning that health workers could not readily distinguish between infected and non-infected patients. Additionally, while more developed countries saw a rapid increase in telemedicine during the pandemic (Cantor et al., 2021, Patel et al., 2021), which helped to reduce risk to health care workers from in-person contacts, telemedicine continues to be vastly under-utilized in developing countries (Combi et al., 2016). The developing country context is important for another reason: these are settings that were already characterized by low levels of quality even prior to the pandemic (Das and Hammer, 2014, Powell-Jackson et al., 2020); layering on the shock of a pandemic might have even more deleterious consequences. I analyze the effect of the pandemic on the quality of primary care encounters using a fixed effects strategy that compares patients seen in the same health center before and after the start of the pandemic, controlling for seasonality and patient characteristics. I have data on more than 5000 such encounters that took place between January 2019 and November 2020. The data come from interviews conducted with patients and caregivers, on-site, shortly after the encounter. I observe various measures of the quality of the interaction, including whether the attending provider asked questions recommended by clinical guidelines, whether they performed routine/diagnostic procedures such as a physical examination, and whether they provided relevant health information to the patient at the end of the visit. In addition to the fixed effects approach, I also use an alternative double-difference strategy. 
As there is no contemporaneous unexposed group, I define exposed and unexposed encounters based on whether they occurred before or after the start of the pandemic. Specifically, encounters between January and March 2020 are defined as unexposed, and encounters between April and November 2020 are defined as exposed. I then calculate the change between exposed and unexposed encounters in 2020 (first difference) and take a second difference over the same change in 2019, the comparator period. Both models produce similar results. I find that the quality of routine health care interactions significantly deteriorated during the first phase of the pandemic. Based on the most conservative model estimates: health worker adherence to clinical history-taking guidelines decreased by about 15%, physical examinations decreased by about 33%, blood pressure checks decreased by 50%, and examinations with a stethoscope decreased by about 30%. The quality of provider–patient communication also deteriorated: patients were 32% less likely to be told what their diagnosis was and 31% less likely to receive any health education related to their diagnosis. These large differences remain even after controlling for patient demographic and clinical characteristics. What explains these effects? I present evidence that risk mitigation by healthcare providers provides a likely explanation. In the context of an outbreak with a potentially lethal virus that is spread by direct contact, where health care workers have limited options for protecting themselves – only 3% of health centers in the sample, for example, reported having N95 masks – an obvious strategy to reduce exposure risk is by reducing contact with potentially infected patients.5 However, if health care workers cannot easily identify patients that are infected – none of these health centers had the capability for Covid testing – they will apply a broad brush. 
I start by showing that health workers were very worried about getting infected: when asked how worried they were on a scale from 0–10, 1 in 5 workers chose the maximum score of 10. I also present evidence that some health centers stopped providing care to patients with symptoms associated with Covid-19, behavior that is suggestive of risk mitigation on the extensive margin. Health workers not willing (or unable) to turn away patients might attempt to reduce the risk of exposure by reducing the duration and intensity of contact with sick patients, i.e., an intensive margin response. The effects observed – health workers asking fewer questions, providing less health information, and reducing use of procedures like physical examinations that require close contact with patients – are suggestive of intensive margin risk reduction. To put this on a firmer footing, I devise two tests. The first test is motivated by two related facts: (i) health workers cannot readily identify who is infected, and (ii) exposure risk increases with the fraction of patients that are likely to be infected. This suggests that risk mitigation will be more likely, and thus the negative effects on quality more pronounced, when health workers are faced with patients they believe are more likely to be infected. In a survey administered to health workers in the sample, fever and cough were the two symptoms most commonly associated with Covid-19 infection. Consistent with risk mitigation, I find more pronounced negative effects when patients present with these symptoms. The main idea behind the second test is that patient screening (turning away suspicious patients) reduces the need for other Covid precautions. An illustration comes from malaria prevention. To reduce the risk of getting malaria, one can screen out mosquitoes, literally, by using a mosquito net. 
If one can exclude all (or most) mosquitoes, then that reduces the need for additional costly precautions such as burning a mosquito coil or using a spray (both are costly not just because of the financial costs, but also because of usage costs: they produce a bad odor). The same intuition applies here: if healthcare workers can screen patients, then the need for additional, costly measures is reduced. I argue that this will map non-linearly to the level of worry about getting infected. When individuals are not as worried about the virus they will be less likely to take any precautions – a phenomenon observed in the US and elsewhere. At the other extreme, those who are most worried will be more likely to screen patients, which reduces the need for other precautions. This suggests that the most pronounced changes in quality will be seen at moderate levels of worry. I find evidence in line with this. I examine and find little support for alternative explanations. I show that the results cannot be explained by changes in health worker composition: a health worker fixed effects specification produces similar results. Data on Covid case numbers in Nigeria cast some doubt on a second explanation: increased healthcare provider workloads/burdens during the pandemic. The idea is that overworked, fatigued providers might provide worse care (Salyers et al., 2017). As of November 2020, the end of my data series, Nigeria had recorded 67,412 confirmed cases and 1,173 deaths in a country of over 200 million people (Nigeria Centre for Disease Control, 2020).6 Though this is almost certainly an undercount, what seems clear is that Nigeria, like many other African countries, did not see the kinds of large numbers of hospitalizations and deaths seen in the US and Europe during the first wave of the pandemic.7 Using health center administrative data I can rule out increased provider workloads: I find no increase in monthly patient caseloads. 
I can also rule out a reduction in the number of available health workers in health facilities. If caseloads did not increase and staff strength did not decrease, this significantly undercuts the overburdened health worker hypothesis. The immediate contribution of this paper is to show that the quality of routine health care interactions declined sharply during (and because of) the pandemic, at least in the early months. More broadly the paper informs our understanding of provider behavior, and specifically how providers might respond to risk. We know that healthcare providers are influenced by both intrinsic and extrinsic incentives (Ashraf et al., 2014, Brock et al., 2018, Okeke, 2021, Bjorkman and Svensson, 2009, Kolstad, 2013); this paper highlights the role of job-related health risks. A salient dimension of this risk is the possibility of contracting an illness from a patient.8 During an infectious disease outbreak, as with the current Covid-19 pandemic, this risk not only rises but becomes increasingly salient. Health workers may respond to this risk in ways that may be socially costly. The key issue is that health workers do not fully internalize the costs of their responses. There is a connection here to the defensive medicine literature, which examines how exposure to litigation risk – and the fear of malpractice claims – leads healthcare providers to change how they practice (Kessler and McClellan, 1996, Dubay et al., 1999). Kessler and McClellan (1996), the canonical paper in this literature, show that they become more likely to administer precautionary treatments that deliver little medical benefit.9 This paper highlights the potential role of health/mortality risk, as against liability risk, and the defensive behavior that may arise as a result. This paper also connects to the literature on compensatory behavior in response to (information about) health-related risk. This has been shown in multiple contexts. 
Dupas (2011) showed that teenage girls in Kenya, when provided with information on the relative risk of HIV by age and gender, reduced unsafe sex with older men. In another HIV-related example, Godlonton et al. (2016) showed that, on learning about the relationship between circumcision and HIV transmission, uncircumcised men in Malawi became more likely to practice safer sex. Oster et al. (2013) showed that individuals who found out that they carried the genetic mutation for Huntington disease, a rare inherited degenerative disorder that lowers life expectancy, decreased investment in their human capital – completing less schooling and job training – relative to individuals with the same ex ante risk but a negative realization. Madajewicz et al. (2007) showed that individuals in Bangladesh, upon learning that their drinking water was contaminated by arsenic, became more likely to switch to a different well, even though this was more costly. Closer to this context, Shrestha (2020) showed that work migrants in Nepal adjusted their migration decisions in response to information about the mortality incidence of Nepali workers in their destination locations. Drawing the connection to this context, health workers appear to be adjusting their behavior in response to new information about the risk of caring for patients. As far as I am aware, this is the first empirical evidence of compensatory behavior by health workers in response to job-related health risk. The rest of the paper proceeds as follows: Section 2 provides relevant institutional details, Section 3 describes the data, Section 4 presents the analysis and results, in Section 5 I discuss the findings, and in Section 6 I offer concluding remarks.

Background

Covid-19 in Nigeria

The Covid-19 outbreak was declared a pandemic by the World Health Organization on March 11, 2020. In March 2020, Nigeria had registered only a few confirmed cases of Covid-19, primarily among international travelers in the city of Lagos, a commercial center with an international airport. In response, international flights were suspended on March 23. By April there was evidence of community spread, with cases confirmed in multiple states (Amzat et al., 2020). Stay-at-home orders were imposed around the country in April. Essential workers, such as health workers and security personnel, were exempt. These orders began to be lifted in May, and were replaced by less restrictive night-time curfews. By September/October 2020, all restrictions had largely been lifted and life in Nigeria had essentially returned to normal. International flights resumed in September 2020. By this point, only a few hundred daily cases were being reported nationwide (see Figure A.1). Based on the epidemic trajectory in Nigeria, this paper uses April 2020 as the start month. Figure A.1 shows three distinct waves of the epidemic in Nigeria. My data cover the entirety of the first wave of the pandemic, up to November 2020.

Study design

This study uses data collected in 288 public health centers distributed across four states (Nigeria has 36 states and one Federal Capital Territory) and three (out of six) geographic regions.10 To provide some context, Nigeria operates a tiered health care system with primary health centers forming the base of the pyramid. They are the point of entry for most patients into the health care system. These health centers provide a broad range of preventive, outpatient and inpatient services, and maternal and child health care services. Patients who need more advanced care are referred to General Hospitals (these occupy the middle of the pyramid), though patients can also seek care directly at these hospitals. The most advanced care is provided at Teaching or Specialist Hospitals (these occupy the apex of the pyramid). The participating health centers were selected with the help of government health officials. They were chosen so as to be broadly representative of each state. The list was finalized in the Fall of 2018. The health centers constitute approximately 12% of all available primary health facilities (including smaller health posts and dispensaries which provide only a limited range of services). Combined they provide health care services to more than one million individuals. These health centers are the main source of care for households living in the surrounding communities (these communities form what is known as the catchment or service area of the health center). These health centers were part of an ongoing research study on service delivery.11 As part of this study one-third of the health centers were randomly selected to participate in an intervention in which they were encouraged to improve service delivery and were provided with small discretionary grants. Another third of health centers received only the encouragement part of the intervention. The remaining third served as a control group. This paper utilizes data collected as part of this study. 
After the pandemic started, new modules were added to collect information about how the pandemic was affecting health centers. This paper will draw on these data. To account for the interventions in the parent study all of the analytical models in the paper include health center fixed effects, which hold fixed any facility-level factors including any interventions provided. I will also test for heterogeneity in the effect of the pandemic by intervention arm (I am unable to reject the null of homogeneous effects). Ethical approval for the research study was granted by RAND’s Human Subjects Protection Committee and the Ethics Committee of Aminu Kano Teaching Hospital, Kano, Nigeria. All health centers gave written consent to take part.

Data

The primary data used in this paper consist of acute care patient visits (patients visiting the health center for treatment of an illness). These data were collected over nearly two years. The data come from patients and caregivers, who were interviewed on-site shortly after the consultation ended. The research assistants who conducted these interviews were employed by a local university. Using a checklist, they verified whether the questions recommended by clinical guidelines were asked by the attending provider. They also collected information about whether the patient was physically examined by the treating health provider, whether the patient's blood pressure and temperature were measured during the visit, whether a stethoscope was used during the consultation, and whether the attending provider talked to the patient about their diagnosis and provided related health education. They also collected information about laboratory tests and whether any medicines were prescribed. These interviews were conducted with the aid of computer tablets. Lastly, they collected data on patient characteristics, including age, gender, and ownership of five household assets – a mobile phone, a refrigerator, a radio, a generator, and a car or truck – used as a measure of socioeconomic status, as well as information about the patient's symptoms and how long they had been sick. These interviews were conducted during visits to the health centers. The study team visited each health center four times between December 2018 and November 2020. The baseline visit (Visit 1) was conducted between December 2018 and March 2019. Subsequently the team visited each health center approximately once each quarter. See Table 1 for the visit schedule. During each visit they attempted to interview five patients in each health center. Generally these were the first five patients seen in the health center after the arrival of the research team. 
Visit 3 started a few months before the outbreak began in Nigeria and extended into the pandemic, and Visit 4 took place wholly during the pandemic (between August and October 2020). For context, at the time Visit 4 was carried out Covid restrictions had been lifted and life had mostly returned to normal. The same protocols were followed during each visit. For visits conducted during the pandemic the research assistants followed public health guidelines including wearing masks and maintaining appropriate distance during the interviews. All patients consented to the interview. Henceforth I will refer to this as the Patient Data.
Table 1

Schedule of data collection.

Visit no.   Timing                Interviews conducted during visit
                                  In-charge    Health Worker    Patient
1           Dec 2018 - Mar 2019   X            X                X
2           Aug 2019 - Oct 2019                                 X
3           Jan 2020 - Apr 2020                                 X
4           Aug 2020 - Oct 2020   X            X                X

Table shows the schedule of data collection and which interviews were conducted during each visit. Visit 1 is the baseline visit. For Visit 2, five visits were conducted after October; for Visit 3, four visits were conducted after April; and for Visit 4, one visit was conducted after October. The in-charge is the senior health provider responsible for managing the operation of the health center. April 2020 is the start month of the pandemic in Nigeria.

In addition to the patient interview data I will draw on several additional sources of data; notably data from surveys of health workers in each health center (conducted during Visits 1 and 4) and data from health center administrative records (collected during Visits 2, 3 and 4). The administrative data include data on staffing and monthly patient counts from patient registers. The health worker surveys were administered at baseline, and again during Visit 4. Each time the team surveyed all health workers present in the health center. Each health worker gave written consent. The Visit 4 questionnaire included additional modules to assess the impact of Covid. One module collected information about perceptions of Covid-associated mortality — specifically we asked how many people (out of 100) they thought would die if all 100 got infected with the virus. Another module measured how worried health workers were about getting infected. Specifically we asked: “On a scale of 0–10 where 0 is not worried at all and 10 is extremely worried, how worried are you about getting Coronavirus or Covid-19?” A third module collected information about knowledge of Covid symptoms. A fourth module, administered to the senior health worker in charge of the health center (the ‘in-charge’), collected information about the availability of personal protective equipment and any changes made in the health center in response to the pandemic: for example, whether they instituted masking requirements, changed patient intake protocols, or changed what services were offered. To recap, there are three main datasets: (i) Patient Data refers to the patient survey; (ii) Health Worker Data refers to the health worker survey; (iii) In-charge Data refers to the survey administered to the senior health worker in charge of the health center.

Key variables

The outcome of interest is the quality of the primary care encounter. I have multiple indicators of quality. The first indicator is the fraction of recommended questions asked by the attending health provider. This varies based on patients’ symptoms and age. For patients presenting with fever, for example, there are five recommended questions; for patients presenting with cough there are five questions; and for patients presenting with diarrhea there are six questions. Fever was by far the most common complaint. 68% of patients complained of fever, 17% complained of cough, and about 5% complained of diarrhea. There are two general questions and, for children under five years old, there are three additional questions (independent of their symptoms). See Appendix A. The other quality indicators include whether the attending health provider examined the patient during the encounter (literally whether they touched the patient), whether the patient’s blood pressure was measured during the visit, whether their temperature was measured, and whether a stethoscope was used at any point during the consultation. I combine these into a single procedures index by taking an average. Lastly, I have three measures of the quality of health provider communication: (i) whether they told the patient the diagnosis, (ii) whether they provided any health education related to the diagnosis and (iii) whether the provider explained whether or not to return for further treatment. Again I combine these into a single communication index by taking an average.
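As a concrete illustration, the two composite indices described above are simple averages of the underlying binary indicators. The sketch below shows the construction; the function name and the example indicator values are hypothetical, not taken from the study data:

```python
def quality_index(indicators):
    """Average a list of 0/1 indicators into a single score in [0, 1]."""
    return sum(indicators) / len(indicators)

# Hypothetical encounter: physically examined and temperature taken,
# but no blood pressure check and no stethoscope.
procedures = quality_index([1, 0, 1, 0])    # exam, BP, temperature, stethoscope

# Told the diagnosis and given health education, but not told whether to return.
communication = quality_index([1, 1, 0])    # diagnosis, education, return advice
```

Averaging in this way weights each component equally, so a one-component change moves the procedures index by 0.25 and the communication index by about 0.33.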

Data validity

Given that these outcomes are based on patient recall, some discussion of data validity is warranted. One does not expect patient recall to be perfect but, given that patients and caregivers were interviewed shortly after the encounter while events were still fresh in their minds, there is reason to believe that the data are reliable. It is worth emphasizing that many of these outcomes consist of discrete events that patients would be unlikely to miss or forget. It is hard to imagine that a patient would not know (or remember) whether a stethoscope was used during the interaction or whether they were examined by the health care provider. I do have some data that can be used to examine validity. During Visit 3 the interaction was observed by a member of the research team. The research assistant sat in a corner of the room where they could observe the interaction but did not otherwise say or do anything to interfere with the consultation. They recorded their observations on a structured form that collected much of the same information that was asked of patients. By comparing patient and observer reports for the same primary care encounter I can validate the accuracy of the patient recall data using the observer report as a gold standard. The results of this validation exercise are presented in Table A.1. I compare percentage agreement and also report Cohen’s kappa, a measure of inter-rater agreement. One does not expect perfect agreement but would hope to see substantial agreement. Overall, percent agreement is high (it ranges from 0.84 to 0.94) and the kappa statistic indicates substantial agreement between patient and observer reports (it ranges from 0.57 to 0.74). For reference, kappa values between 0.41 and 0.6 are considered moderate agreement, values between 0.61 and 0.8 are considered substantial agreement, and values between 0.81 and 1 are considered almost perfect agreement (Viera et al., 2005).
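For readers unfamiliar with the statistic, percent agreement and Cohen's kappa for two raters coding a binary outcome can be computed as follows. This is a generic sketch, not the paper's code, and the example ratings are made up:

```python
def percent_agreement(rater_a, rater_b):
    """Share of encounters on which the two reports agree."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for chance, based on each rater's marginal rates."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)          # observed agreement
    pa, pb = sum(rater_a) / n, sum(rater_b) / n        # marginal "yes" rates
    p_e = pa * pb + (1 - pa) * (1 - pb)                # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Made-up patient vs. observer reports of "stethoscope was used" (1 = yes)
patient  = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
observer = [1, 1, 0, 0, 1, 0, 1, 1, 1, 1]
```

For these made-up reports, percent agreement is 0.90 while kappa is about 0.78: kappa is lower because some of the raw agreement would be expected by chance alone, which is why the paper reports both.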

Descriptives

Table 2 Panel A presents summary statistics for the 288 participating health centers (the data are from the baseline visit). These are fairly small health facilities with about 14 beds on average. 70% of health centers offer inpatient care. Only 3% offer caesareans. They attend to about 677 patients each month on average (or approximately 23 per day). Panel B reports descriptive statistics for 1324 health workers who completed the survey administered during Visit 4, representing about 48.2% of all health workers on the staff register. The modal health care worker is a community health practitioner.12 Just over half are men and most are married. The average number of years of clinical experience is 12. 91% of surveyed providers had heard about Covid-19. Only 6% of providers had ever been tested for Covid-19. The average rating on the worry scale was 6 (out of 10).
Table 2

Health center and health worker summary statistics.

                                          Mean      Sd
A. Health Center Variables (N=288)
Total beds (inpatient/maternity)          14        14.1
Open 24 h a day, 7 days a week            0.632     0.483
Offers inpatient admission                0.701     0.458
Offers caesarean section                  0.0278    0.165
Has on-site laboratory                    0.743     0.438
Monthly number of outpatients             677       789
B. Health Worker Variables (N=1324)
Age                                       37.4      8.92
Male                                      0.555     0.497
Has spouse or partner                     0.815     0.388
Years of experience                       12.3      7.95
Qualifications:
  Doctor                                  0.00151   0.0389
  Nurse or midwife                        0.0612    0.24
  Community Health Practitioner           0.474     0.5
  Other qualification                     0.463     0.499
Aware of Covid                            0.905     0.294
Ever been tested for Covid                0.0626    0.242
Worried about getting infected (0–10)     6.07      3.09

Table shows means and standard deviations for health center and health worker variables. Health workers with other qualifications include laboratory technologists/technicians, pharmacists and pharmacy techs, dental techs, and health assistants. Source of data is the In-charge Data (Panel A) and Health Worker Data (Panel B).

Table 3 reports patient descriptive statistics. I have data for 5379 patients who visited the health center for treatment of an illness between January 2019 and November 2020. 4120 of these visits happened before, and 1259 during, the pandemic. I report variable means separately for visits occurring before and during the pandemic. I also report tests of differences in means (Columns 3 and 4). There are some differences in patient characteristics: for example, patients seen during the pandemic are younger and come from less poor households (measured by asset ownership); they are also less likely to report two or more symptoms, though there is no difference in the average duration of symptoms. They also appear to be more likely to present with fever and less likely to present with diarrhea. Most of these differences go away once I include health center and calendar month fixed effects. Since I have data on both demographic and clinical characteristics I am able to control for differences in patient characteristics and examine how their inclusion impacts the results.
Table 3

Patient and encounter summary statistics.

                                          Pre-pandemic   Pandemic   Unadjusted   Adjusted
                                          mean           mean       difference   difference
Panel A: patient characteristics
Age in years                              18.027         15.314     −2.714***    −1.192
Male                                      0.416          0.418      0.002        0.024
Number of household assets (out of 5)     1.728          1.923      0.195***     0.384***
Two or more symptoms                      0.514          0.376      −0.138***    −0.126***
Has been sick for more than 1 week        0.118          0.126      0.009        0.008
Complains of fever                        0.661          0.737      0.076***     −0.036
Complains of cough                        0.173          0.156      −0.016       0.028
Complains of diarrhea                     0.051          0.027      −0.024***    −0.003
Panel B: quality measures
Percent of recommended questions asked    0.825          0.707      −0.118***    −0.125***
Blood pressure was measured               0.484          0.218      −0.265***    −0.317***
Temperature was measured                  0.611          0.330      −0.280***    −0.283***
Physical examination was done             0.707          0.458      −0.248***    −0.322***
Stethoscope was used                      0.584          0.304      −0.280***    −0.228***
Patient was told the diagnosis            0.888          0.664      −0.224***    −0.305***
Health education was provided             0.878          0.656      −0.222***    −0.295***
Patient was told whether to return        0.818          0.524      −0.294***    −0.298***
Observations                              4120           1259

Variable means before and during the pandemic are shown. Differences in means are shown in the last two columns. The first are unadjusted differences; the second include health center and month fixed effects. Standard errors are clustered at the level of the health center. * p<0.1, ** p<0.05, *** p<0.01. Source of data is the Patient Data.

Panel B reports on the quality of the encounter. It provides the first evidence that quality significantly declined during the period of the pandemic for which I have data. The differences are stark. For example, the probability of a physical examination decreases by about 25 percentage points in the unadjusted model. Adjusting for fixed differences between health centers (by including health center fixed effects) and seasonality (by including calendar month fixed effects), if anything, increases the magnitude of these differences. I describe how I approach the analysis in the next section.

Analysis and results

Empirical strategy

The empirical approach is straightforward. The data consist of a repeated cross-section of patients receiving care in these health centers before, and during, the pandemic. I will compare the quality of primary care interactions occurring during the pandemic vs. before the pandemic in a given health center, after accounting for seasonality and controlling for patient demographic and clinical characteristics. I specify the following linear model:

y_iht = β·Pandemic_t + X_iht′γ + μ_h + λ_m + ε_iht

where y_iht denotes a given indicator of the quality of the encounter for patient i in health center h at time t. Pandemic_t is a dummy denoting whether the encounter occurred during the pandemic. April 2020 is used as the start month, so Pandemic_t = 1 if the visit occurred between April and November 2020. β thus identifies the effect of the pandemic. X_iht includes the following controls: a quadratic in patient’s age, a dummy for sex, symptom dummies, an indicator denoting presentation with two or more symptoms, an indicator for an illness that has lasted for longer than one week (a proxy for severity), and the number of assets owned by the household (a measure of socioeconomic status). When the dependent variable is adherence to history-taking guidelines I also control for the total number of indicated questions. The λ_m are calendar month fixed effects, included to flexibly control for seasonality. I also include indicators for the day of the week on which the visit occurred because quality might vary systematically by day of the week. For example it might be higher on Mondays than Fridays or on weekdays than on weekends (Becker, 2007, Palmer et al., 2015). The μ_h are health center fixed effects. Standard errors are clustered at the level of the health center. As an alternative strategy I estimate a double-difference specification. Visits between January and March 2020 are defined as unexposed, and visits between April and November 2020 are defined as exposed. 
I then calculate the change between exposed and unexposed encounters in 2020 (first difference) and take a second difference over the same change in 2019, the comparator period. Formally, $(\bar{y}_{e,2020} - \bar{y}_{u,2020}) - (\bar{y}_{e,2019} - \bar{y}_{u,2019})$, where $y$ represents the outcome of interest and $e$ and $u$ denote exposed and unexposed observations respectively. This is estimated using a linear probability model and the same set of controls. The model is specified as shown below:

$$y_{ihmt} = \alpha + \beta_1\,\text{Y2020}_t + \beta_2\,(\text{Exposed}_m \times \text{Y2020}_t) + X_{ihmt}'\gamma + \lambda_m + \mu_h + \varepsilon_{ihmt} \quad (2)$$

where $\text{Y2020}_t$ indicates a visit in 2020 (2019 is the reference period). The subscripts $m$ and $t$ denote month and year respectively; the calendar month fixed effects $\lambda_m$ absorb the main effect of the exposure-period months. $\beta_2$ is the double-difference estimate. The main advantage of this specification is that it helps to account for any unobserved shocks to quality in 2020 that are unrelated to the pandemic. Imagine, for example, that health workers went on strike in 2020 and the strike extended into the pandemic. Quality might then deteriorate, but not because of the pandemic. Alternatively, quality might have improved in the absence of the pandemic because of unobserved policies or programs. Provided that these shocks are not perfectly correlated with the pandemic, this specification helps to account for them. The main threat to identification in this context is a change in the type of patient visiting the health center for treatment during the pandemic. Specifically, if patients seen during the pandemic have different clinical characteristics, the kind of care they receive may change as a result. This threat is mitigated here because I have data on key clinical characteristics, including the nature and duration of patients' symptoms, and so can control for any such differences. I have only a crude proxy for severity (the number of symptoms and whether the illness has lasted for longer than one week), so to the extent that there is remaining unaccounted-for severity, the estimates may be biased downward or upward if patients during the pandemic are more or less severely ill along unmeasured dimensions.
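Ignoring covariates and fixed effects, the double difference reduces to arithmetic on four cell means. A minimal sketch (the function name and the numbers are illustrative, not estimates from the paper):

```python
# Double-difference from cell means: the exposed-minus-unexposed gap in
# 2020, minus the same gap in the 2019 comparator period. A seasonal dip
# common to both years cancels out; only the excess 2020 decline remains.
def double_difference(y_e20, y_u20, y_e19, y_u19):
    return (y_e20 - y_u20) - (y_e19 - y_u19)

# Illustrative means: quality dips by 0.05 in Apr-Nov of a normal year
# (2019) but by 0.25 over the same months in 2020.
dd = double_difference(y_e20=0.55, y_u20=0.80, y_e19=0.75, y_u19=0.80)
print(round(dd, 2))  # -0.2
```

The regression form used in the paper delivers the same quantity while additionally adjusting for patient characteristics and health center fixed effects.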
That said, one does not really expect many of these outcomes to vary significantly with patient characteristics. They are routine clinical procedures that should be completed in nearly all cases. All sick patients should have their blood pressure and temperature taken during a visit; they should also be examined and the health provider should talk to them about their diagnosis and treatment.
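The before/during comparison within a given health center amounts to a within (fixed-effects) estimator: demeaning the outcome and the pandemic dummy within each health center removes fixed differences between centers before OLS is run. A minimal pure-Python sketch with made-up data (the function name and toy numbers are hypothetical):

```python
# Sketch of the within (fixed-effects) estimator for a model like Eq. (1):
# quality = beta * pandemic + center_effect + noise. Demeaning within each
# health center removes the center effect; OLS on the demeaned data
# recovers beta.
from collections import defaultdict

def within_estimator(rows):
    """rows: list of (health_center, pandemic_dummy, quality)."""
    by_hc = defaultdict(list)
    for hc, x, y in rows:
        by_hc[hc].append((x, y))
    num = den = 0.0
    for obs in by_hc.values():
        xbar = sum(x for x, _ in obs) / len(obs)
        ybar = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - xbar) * (y - ybar)
            den += (x - xbar) ** 2
    return num / den

# Toy data: two health centers with different baseline quality levels,
# each showing a drop of roughly 0.3 during the pandemic.
rows = [
    ("A", 0, 0.9), ("A", 0, 0.9), ("A", 1, 0.6), ("A", 1, 0.6),
    ("B", 0, 0.5), ("B", 0, 0.5), ("B", 1, 0.2), ("B", 1, 0.2),
]
print(round(within_estimator(rows), 3))  # -0.3
```

Despite the different baseline levels in centers A and B, the within transformation isolates the common pandemic-period drop.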

Results

I start with a visual presentation of the results in Fig. 1. I graph trends for three indices of the quality of the primary care encounter by the month in which the encounter occurred: (i) a history-taking index: the fraction of recommended history-taking questions asked by the attending health provider, (ii) a procedures index: an average of the following indicators – a physical examination, a blood pressure check, a temperature check, and an examination with a stethoscope, and (iii) a communication index: an average of the following indicators – whether the health provider told the patient their diagnosis, whether they provided health education related to the diagnosis, and whether they explained to the patient whether or not to return for further treatment. For comparison I overlay trends for 2020 and 2019. The number of patients interviewed in each month is shown in Table A.2. On all three of these indices there is a clear decline during the pandemic. The fraction of recommended history-taking questions asked decreases by roughly 12 percentage points, the proportion of the routine procedures completed falls by about 26 percentage points, and there is a 24 percentage point decrease in the quality of communication. These are raw unadjusted results and so for a more rigorous examination I turn to the regression results.
Fig. 1

Effect of the pandemic on quality of primary care encounters. Note: The outcome variables are noted in the captions. History-taking denotes the fraction of recommended history-taking questions asked by the attending health provider. Procedures Index is an average of the following indicators: a physical examination, a blood pressure check, a temperature check, and an examination with a stethoscope. Communication Index is an average of the following indicators: whether the health worker explained their diagnosis to the patient, provided health education, and discussed when the patient should return for a follow-up. Figure shows raw (unadjusted) trends in these variables by month of the encounter. The number of observations per month is shown in Table A.2. Months with fewer than 50 observations are omitted from the figure. Dashed line denotes the start month of the epidemic in 2020. Source of data is the Patient Data.
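The three indices plotted above are simple averages of binary indicators recorded for each encounter. A sketch of the construction, with hypothetical field names:

```python
# Building encounter-level quality indices as simple averages of binary
# indicators, as described for Fig. 1. All dictionary keys here are
# hypothetical field names, not the study's actual variable names.
def index(encounter, fields):
    return sum(encounter[f] for f in fields) / len(fields)

PROCEDURE_FIELDS = ["physical_exam", "bp_check", "temp_check", "stethoscope"]
COMMUNICATION_FIELDS = ["told_diagnosis", "health_education", "told_return"]

enc = {"physical_exam": 1, "bp_check": 0, "temp_check": 1, "stethoscope": 0,
       "told_diagnosis": 1, "health_education": 1, "told_return": 0,
       "questions_asked": 6, "questions_indicated": 8}

print(index(enc, PROCEDURE_FIELDS))                  # 0.5
print(round(index(enc, COMMUNICATION_FIELDS), 3))    # 0.667
# History-taking index: fraction of indicated questions actually asked.
print(enc["questions_asked"] / enc["questions_indicated"])  # 0.75
```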

The regression results are presented in Table 4. The table shows four different specifications. The first specification (Model 1) adjusts only for seasonality. The second specification (Model 2) adds controls for patient demographic characteristics (age, sex, household asset ownership). The third specification (Model 3) adds controls for patients' clinical characteristics (symptom dummies, whether they present with two or more symptoms, and whether the illness has lasted for longer than one week). The last specification (Model 4) is the double-difference specification. All models include health center fixed effects, and standard errors are clustered at the health center level.
Table 4

Effect of the pandemic on quality of primary care encounters.

Var                                    | Mean  | Obs  | Model 1           | Model 2           | Model 3           | Model 4
Percent of recommended questions asked | 0.825 | 5140 | −0.138 (0.025)*** | −0.119 (0.024)*** | −0.123 (0.024)*** | −0.160 (0.035)***
Procedures Index                       | 0.596 | 5379 | −0.288 (0.030)*** | −0.268 (0.027)*** | −0.258 (0.026)*** | −0.206 (0.048)***
  Blood pressure was measured          | 0.484 | 5379 | −0.317 (0.033)*** | −0.278 (0.025)*** | −0.267 (0.025)*** | −0.245 (0.046)***
  Temperature was measured             | 0.611 | 5379 | −0.283 (0.039)*** | −0.271 (0.037)*** | −0.265 (0.036)*** | −0.180 (0.062)***
  Physical examination was done        | 0.707 | 5379 | −0.322 (0.037)*** | −0.318 (0.038)*** | −0.307 (0.036)*** | −0.227 (0.056)***
  Stethoscope was used                 | 0.584 | 5379 | −0.228 (0.045)*** | −0.204 (0.043)*** | −0.194 (0.043)*** | −0.173 (0.066)***
Communication Index                    | 0.861 | 5379 | −0.299 (0.032)*** | −0.284 (0.030)*** | −0.280 (0.029)*** | −0.317 (0.051)***
  Patient was told the diagnosis       | 0.888 | 5379 | −0.305 (0.033)*** | −0.289 (0.030)*** | −0.284 (0.029)*** | −0.378 (0.051)***
  Health education was provided        | 0.878 | 5379 | −0.295 (0.034)*** | −0.280 (0.031)*** | −0.278 (0.031)*** | −0.337 (0.055)***
  Patient was told whether to return   | 0.818 | 5379 | −0.298 (0.037)*** | −0.282 (0.035)*** | −0.277 (0.034)*** | −0.235 (0.058)***

Calendar month fixed effects           |       |      | X                 | X                 | X                 | X
Patient demographic characteristics    |       |      |                   | X                 | X                 | X
Patient clinical characteristics      |       |      |                   |                   | X                 | X

The dependent variables are in the rows. Means in the pre-pandemic period are shown in Column 1. The Procedures Index is an average of the four indicators beneath it. The Communication Index is an average of the three indicators beneath it. There are fewer observations in Row 1 because for a small subset of patients there were no applicable history-taking questions. The explanatory variable is the pandemic exposure indicator, denoting encounters during the pandemic (i.e., between April and November 2020). The estimating model for Models 1–3 is Eq. (1). Model 4 is the double-difference specification in Eq. (2). Demographic controls include patient's age, sex, and number of household assets owned. Clinical controls include presenting symptoms, illness duration, and the day of the week of the visit. Standard errors in parentheses are clustered at the level of the health center. * p<0.1, ** p<0.05, *** p<0.01. Source of data is the Patient Data.
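Inference throughout rests on standard errors clustered at the health center level. The mechanics can be sketched for a single-regressor OLS: residual-weighted scores are summed within each cluster before squaring, so correlated errors within a center are not treated as independent observations. A stripped-down illustration with made-up data (the paper's actual estimator includes many covariates, and production software also applies finite-sample corrections):

```python
# Cluster-robust (sandwich) variance for a one-regressor OLS slope,
# clustering by health center: score contributions are summed within each
# cluster before entering the "meat" of the sandwich.
def cluster_se(clusters):
    """clusters: list of lists of (x, y) pairs, one inner list per center."""
    allobs = [p for c in clusters for p in c]
    xbar = sum(x for x, _ in allobs) / len(allobs)
    ybar = sum(y for _, y in allobs) / len(allobs)
    sxx = sum((x - xbar) ** 2 for x, _ in allobs)
    beta = sum((x - xbar) * (y - ybar) for x, y in allobs) / sxx
    alpha = ybar - beta * xbar
    meat = 0.0
    for c in clusters:
        # Sum scores within the cluster, THEN square: this is what
        # distinguishes clustered from heteroskedasticity-robust SEs.
        score = sum((x - xbar) * (y - alpha - beta * x) for x, y in c)
        meat += score ** 2
    return beta, (meat / sxx ** 2) ** 0.5

clusters = [[(0, 0.9), (1, 0.6)], [(0, 0.7), (1, 0.5)], [(0, 0.8), (1, 0.4)]]
beta, se = cluster_se(clusters)
print(round(beta, 3), round(se, 3))  # -0.3 0.047
```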

All the models tell the same story: the quality of primary care encounters has dropped significantly during the pandemic. The effects are large, robust, and invariant to model specification. The results indicate that adherence to history-taking guidelines has decreased by between 12 and 16 percentage points. Physical examinations have decreased by between 23 and 31 percentage points; blood pressure checks by between 25 and 27 percentage points; temperature checks by between 18 and 27 percentage points; and examinations with a stethoscope by between 17 and 19 percentage points. There is a similarly large and statistically significant decline in the quality of provider communication: patients are 28–38 percentage points less likely to be told their diagnosis, 28–34 percentage points less likely to receive any health education related to the diagnosis, and 24–28 percentage points less likely to be given information about whether to return for further treatment.13 Two additional specifications are shown in Table A.3. First, I drop observations from Visit 1, which occurred prior to the rollout of the experimental interventions. Second, I interact dummies for each experimental intervention with a dummy denoting whether an observation is from the pre-(experimental)-intervention or post-(experimental)-intervention period. The results are the same. Given the identification strategy, I am cautious about attaching a strict causal interpretation to these findings. As Table 3 shows, there are some differences in the characteristics of patients interviewed before and during the pandemic though, as the results show, many of these differences go away once fixed differences between health centers and seasonality are accounted for. A strength of this study is that I have good data on patient demographic and clinical characteristics and can control for them.
As the results show, controlling for patient characteristics makes essentially no difference, suggesting that these results cannot be explained by changes in patient composition. One might wonder whether the results could instead be due to some change in study protocol during the pandemic. I note that the same study protocols were used throughout; the only difference was the implementation of safety protocols, such as wearing masks and maintaining appropriate distance, for interviews conducted during the pandemic. Because the interviews were conducted using computer tablets, I have granular insight into the interview process. Time and location stamps, for example, allow me to verify that these interviews were actually conducted and that research assistants did not simply hurry through the form. In fact, interviews conducted during the pandemic took longer, which makes sense. It is possible that there could be some recall error associated with the pandemic: if patients have a more pessimistic outlook as a result of the pandemic, they could be more likely to negatively perceive, and recall, the primary care encounter. This is hard to rule out, but it seems highly unlikely that it can explain such large and robust effects. If one accepts that these effects are real and that quality has indeed deteriorated, the natural follow-on question is why. What has happened during the pandemic to make the quality of primary care encounters deteriorate so sharply? I discuss some possible explanations in the next section, starting with the more conventional ones.

Mechanisms

Changes in health worker composition

The first explanation that I examine is changes in health worker composition during the pandemic. It is possible that the health care workers attending to patients during the pandemic were less skilled, which might explain the lower level of quality. For example, we know that in response to the pandemic some health workers came out of retirement and, in some cases, health workers in training were rushed into service.14 One can imagine that the clinical skills of retired workers may have deteriorated, and health workers hurried into service are certainly less experienced. It could also be the case that a sufficiently large number of frontline health workers were affected by the virus (they fell sick, had to quarantine for extended periods, or died) and their replacements were of lower quality. Alternatively, the most experienced health providers may have been reassigned to deal with Covid cases, leaving less experienced health care workers to deliver primary care. The most straightforward way to test this explanation is to hold health providers' characteristics fixed. I do this in Table A.5, where I report the results from a health worker fixed effects specification. These results come with some caveats: I can only identify the treating provider for interviews conducted during Visits 2–4, and there is measurement error arising from data linkage issues (often I am matching on partial name strings), which will likely attenuate the effects. This is why this is not one of the main specifications. That said, the results largely carry through, suggesting that I can rule out workforce compositional changes as the cause of these effects.

Provider overwork/fatigue

The second explanation that I examine is the possibility that health care workers were overworked because of a crush of sick patients, leading to compromised quality. This might be due to fatigue or burnout, or simply because health care workers had to ration effort. A priori, this does not seem like a persuasive explanation given what we know of the outbreak in Nigeria, but I can test it. For health care workers to be overworked, it must be the case that patient loads increased significantly or, equivalently, that the number of health care workers available to attend to patients decreased significantly. To examine whether health centers were inundated with patients during the pandemic, I use administrative data on monthly patient counts. I have approximately 20 months of data for each health center (from April 2019 to November 2020). I estimate a health center fixed effects regression model using log monthly patient counts as the outcome variable. The results show no evidence of a large increase in caseloads during the pandemic: the coefficient indicates a statistically insignificant 4% increase in monthly caseloads (Table A.6 Column 1). To test whether there was a decrease in the number of available health care workers I turn to the staffing data. I count the number of health workers available in each health center at each visit. By available I mean employed in the health center and available for duty. This measure accounts for both staff departures and additions. I have three observations for each health center. I use these data to estimate a count (Poisson) model. The model includes health center dummies and calendar month dummies. Average marginal effects are reported in Table A.6 Column 2. There is no evidence of a net decrease in the number of health care workers available during the pandemic; the results instead show a net gain of about 0.5 workers.
To account for the fact that health workers might be available in theory but not in practice, I estimate a second model where the dependent variable is the number of health workers physically present in the health center at each visit. Health workers might be absent because they are sick, not on duty, away for training, etcetera. Counting the number present gives us a measure of true availability. The results are in Column 3. Again these are average marginal effects from a Poisson model. The coefficient indicates a small decrease in the number of health workers physically present during the pandemic, equivalent to about a third of a worker, but the coefficient does not reach statistical significance. None of these results provide evidence of overworked health care providers.
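The average marginal effects reported from the Poisson models convert log-scale coefficients into expected changes in worker counts. A sketch of that conversion using hypothetical fitted values (the coefficient and linear indices below are made up, chosen only to mimic the reported net gain of about half a worker):

```python
# Average marginal effect (AME) of a dummy in a Poisson model: the mean,
# across observations, of the change in the predicted count when the
# dummy switches from 0 to 1, holding the rest of the linear index fixed.
import math

def poisson_ame(linear_indices_without_dummy, beta_dummy):
    effects = [math.exp(xb + beta_dummy) - math.exp(xb)
               for xb in linear_indices_without_dummy]
    return sum(effects) / len(effects)

# Hypothetical fitted linear indices (log expected worker counts) and a
# small positive pandemic coefficient.
xbs = [1.6, 1.8, 2.0, 2.2]
print(round(poisson_ame(xbs, beta_dummy=0.07), 2))  # 0.5
```

Because the Poisson mean is exponential in the index, the same coefficient implies a larger count change at centers with more staff, which is why the AME averages over observations.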

Salaries and effort

There is another channel that could produce these effects. Covid-19 has had well-documented economic effects, and this is a non-trivial concern: Nigeria's economy went into its deepest recession since the 1980s during the early months of the pandemic, and the price of oil, which represents over 80 percent of exports and about 50 percent of consolidated government revenues, dropped more than 60% between February and May 2020 (World Bank, 2020). If public sector budgets were impacted by the pandemic, leading to health care workers not being paid their salaries, or not being paid on time, one can imagine a scenario in which health care workers might continue to come to work but be considerably less willing to exert effort, a sort of gift-exchange/reciprocation mechanism (Akerlof, 1984). I can also test this pathway because the health worker surveys, administered during Visits 1 and 4, included information about payment of salaries in the month prior to the survey and the amount paid. 1289 and 1324 health workers, respectively, were surveyed during Visits 1 and 4. I estimate a health center fixed effects regression model controlling for health worker age, sex, and clinical qualifications (e.g., doctor, nurse, community health practitioner, etcetera). The results are in Table A.7. There is no statistical evidence that health workers were less likely to be paid their last month's salary, or were paid less, during the pandemic. These results suggest that I can also rule out this pathway.

Risk mitigation

The last explanation that I examine is that these findings reflect an attempt by health care workers to mitigate risk. As noted in the introduction, health care workers during the pandemic faced a difficult choice: helping patients vs. protecting themselves. Since the virus is spread by proximity, close contact with sick patients poses a significant risk to health providers. It is also noteworthy that during the early months of the pandemic there was considerable uncertainty about the mode of transmission of the virus, which raised the level of risk. The risk is greater when health care providers lack suitable protective equipment and when they are unable to distinguish between infected and non-infected patients (even when testing is available it does not provide instantaneous results). These were common situations during the early months of the pandemic and continued to characterize many developing country settings long into it. It makes sense that a rational health worker would try to reduce their risk of exposure. The conventional way of reducing this risk is by wearing adequate protective gear, but in settings where such gear is not widely available health care workers must employ alternative strategies. For example, on the extensive margin, they might reduce in-person contacts or turn away certain patients. Such behavior may be subtle, such as calling in sick or avoiding perceived higher-risk patients. On the intensive margin, they might spend less time with patients or avoid procedures that they believe carry more risk, such as procedures requiring that they spend time in close proximity to a patient. My findings (health care providers asking fewer questions, spending less time talking with patients, and scaling back on procedures like physical examinations that require close contact) are suggestive of intensive margin risk mitigation.
Also consistent with this narrative, I do not find similar negative effects when I examine two other quality measures that are unrelated to the duration or intensity of contact: whether a laboratory test was ordered and whether medicines were prescribed (see Table A.8).15 After the consultation a patient can be asked to wait outside the treatment room for the script to be brought to them, and lab tests are done elsewhere; the results can be reviewed, and any medicines prescribed, without the patient being in the room. These procedures therefore do not meaningfully affect the provider's exposure. To provide some context for the analysis that follows I begin with some descriptive facts. Health care workers lacked proper protective equipment. Only 3% of health centers had N95 masks in stock at Visit 4. 20% did not have basic surgical masks in stock. Of those that had surgical masks, 27% did not have enough supply to last a week. Only 2% had face shields.16 Importantly, none of these facilities had the capability to conduct on-site Covid-19 testing. This is relevant because it means that health care workers had no practical way of distinguishing between infected and non-infected patients (other than based on their symptoms). Health workers were acutely aware of the risk posed by the virus. They may even have overestimated this risk. The median perceived infection fatality rate, as reported by health workers during Visit 4, was 5% (the mean was 9.6%). For context, the infection fatality rate has been estimated to be around 1.5% for Nigeria (Onovo, 2021), and meta-analytic estimates put it at around 0.7% (Meyerowitz-Katz and Merone, 2020). These surveys were administered between August and November 2020, more than six months into the pandemic. Health workers were very worried about getting infected: when asked how worried they were on a scale from 0–10, the median score was 6, but one in five health workers chose the maximum score of 10. Fig.
2 shows that health worker perceptions of Covid-19 mortality are positively correlated with how worried they are about getting infected. For context, none of the health workers surveyed reported having ever tested positive for the virus, though this is more a reflection of low testing rates; 2.1% of surveyed workers reported that they had at one time been hospitalized for Covid-19.17 Perhaps more objectively, only two health worker deaths were reported during the pandemic compared to 11 before the pandemic.
Fig. 2

Correlation between health worker perceptions of Covid-19 mortality and level of worry about getting infected. Note: To measure perceptions of Covid-19 mortality, health workers were asked how many people (out of 100) they thought would die if all 100 got infected with the virus. This is grouped into the categories shown. The Y-axis shows responses to the following question: “On a scale of 0–10 where 0 is not worried at all and 10 is extremely worried, how worried are you about getting Coronavirus or Covid-19?” Mean scores are computed for various levels of perceived infection fatality (X-axis). Source of data is the Health Worker Data.

There is some evidence of extensive margin risk mitigation: about 36% of in-charges surveyed reported that health workers in their health center stopped providing outpatient care to patients with Covid-19 symptoms; 30% reported that they stopped admitting any patients with Covid-19 symptoms. This suggests that, in the absence of testing, health care providers used symptoms to identify, and avoid, patients that posed greater risk. These findings suggest two potential tests of the risk mitigation hypothesis. The first test is simple: if my findings are indicative of risk mitigation, the reduction in the quality of the encounter should be more pronounced when patients present with symptoms associated with Covid-19, since these patients pose greater risk to the provider. Fig. 3 shows that fever and cough are the two symptoms most frequently mentioned by health workers as being associated with Covid-19; 62% of patients presented with at least one of these two symptoms. Difficulty breathing was a close third, but this was not a common patient complaint: only nine patients in total reported this symptom. I create a dummy indicator for fever or cough and interact this with the pandemic exposure indicator.18 As predicted, I find a significantly larger decrease in the quality of the encounter when providers were faced with a patient with Covid symptoms (see Table 5). To help readers visualize this, Fig. 4 presents the results graphically. I aggregate the data at the health center level so that the explanatory variable is the fraction of patients with Covid symptoms; there is significant variation across health centers (see Figure A.2). One can clearly see that completion rates for these procedures decrease with the fraction of patients presenting with symptoms that health workers associate with Covid-19.
Fig. 3

Health worker knowledge of Covid-19 symptoms. Note: Health workers were asked to mention all the symptoms of Covid-19 that they knew. The proportion of health workers mentioning each symptom is shown. Source of data is the Health Worker Data.

Table 5

Effect heterogeneity by whether patient has Covid-19 symptoms.

                        | (1)               | (2)               | (3)
                        | History-taking    | Procedures Index  | Communication Index
During                  | −0.018 (0.026)    | −0.155*** (0.032) | −0.216*** (0.035)
During x Covid Symptoms | −0.142*** (0.027) | −0.126*** (0.032) | −0.079** (0.037)

Observations            | 5140              | 5379              | 5379
Mean of dep. var.       | 0.825             | 0.596             | 0.861

The dependent variables are noted in the column headers. History-taking denotes the fraction of recommended history-taking questions asked by the attending health provider. Procedures Index is an average of the following indicators: a physical examination, a blood pressure check, a temperature check, and an examination with a stethoscope. Communication Index is an average of the following indicators: whether the health worker explained their diagnosis to the patient, provided health education, and discussed when the patient should return for a follow-up. There are fewer observations in Column 1 because for a small subset of patients there were no applicable history-taking questions. During denotes encounters during the pandemic (i.e., between April and November 2020). During x Covid Symptoms interacts the pandemic exposure indicator with a dummy denoting a patient complaining of fever or cough, the two symptoms most commonly associated with Covid-19 by health workers. The results are from a health center fixed effects model. All models control for patient's age, sex, number of household assets owned, presenting symptoms, illness duration, the day of the week of the visit, and calendar month fixed effects. Standard errors in parentheses are clustered at the level of the health center. * p<0.1, ** p<0.05, *** p<0.01. Source of data is the Patient Data.

Fig. 4

Heterogeneity in the effect of the pandemic by fraction of patients with Covid-19 symptoms. Note: The dependent variables are noted in the captions. History-taking denotes the fraction of recommended history-taking questions asked by the attending health provider. Procedures Index is an average of the following indicators: a physical examination, a blood pressure check, a temperature check, and an examination with a stethoscope. Communication Index is an average of the following indicators: whether the health worker explained their diagnosis to the patient, provided health education, and discussed when the patient should return for a follow-up. This figure exploits heterogeneity in the fraction of patients with Covid-19 symptoms (fever or cough). To plot this figure I collapsed the data to the health center level and then regressed each dependent variable on an interaction between the pandemic exposure indicator and the fraction of patients with Covid-19 symptoms. To capture non-linearity this is specified as a quadratic. The model includes health center dummies. The figure shows coefficients and 95% confidence intervals from this regression. Standard errors are clustered at the health center level. Source of data is the Patient Data.
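Operationally, the first test adds an interaction between the pandemic indicator and a fever-or-cough dummy, and the implied effect for symptomatic patients is the sum of the two coefficients. A sketch using the history-taking coefficients reported in Table 5 (the helper function and dictionary names are hypothetical):

```python
# Reading Table 5: the pandemic effect for a patient WITHOUT Covid
# symptoms is the "During" coefficient alone; for a patient WITH fever
# or cough it is "During" plus the interaction term.
def covid_symptom_dummy(symptoms):
    return int(bool({"fever", "cough"} & set(symptoms)))

table5_history = {  # history-taking column of Table 5
    "during": -0.018,
    "during_x_covid": -0.142,
}
effect_no_symptoms = table5_history["during"]
effect_symptoms = table5_history["during"] + table5_history["during_x_covid"]

print(covid_symptom_dummy(["fever", "headache"]))  # 1
print(round(effect_no_symptoms, 3), round(effect_symptoms, 3))  # -0.018 -0.16
```

The contrast is striking: essentially no decline in history-taking for asymptomatic patients, versus a 16 percentage point decline for patients with Covid-like symptoms.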

The second test requires more setup. There is descriptive evidence that health care workers attempted to reduce risk by screening patients, i.e., identifying and turning away potentially infected patients. Intuitively, by eliminating high-risk patients from the pool they effectively lowered their risk, and if the remaining patients are low risk this in turn makes other precautions less necessary. Imagine, for example, that they could precisely identify and turn away all infected patients; this would effectively eliminate risk, rendering other precautions unnecessary. To fix ideas, imagine that one is hosting a dinner during Covid. One can screen out infected people by asking everyone to take a Covid test on the day of the event. If everyone present tested negative that day, the risk of exposure is significantly diminished and, as a result, there is less need to take additional precautions, such as masking, that make the event less enjoyable. There is an implicit assumption here that precautions are costly. This is intuitive: even something objectively low cost like masking has substantial hassle costs, e.g., hampering breathing or making conversations more difficult. Malaria prevention provides a good analogy: to reduce one's risk of getting malaria, one can screen out mosquitoes, literally, by using a mosquito net. If one can exclude all (or most) mosquitoes, that reduces the need to take additional precautions such as burning a mosquito coil or spraying insecticide (both of which have non-trivial usage costs, e.g., because of the odor). One would expect the likelihood of screening to increase with the level of worry about getting infected. Because turning away sick patients is so antithetical to being a health care worker, one would expect this behavior to be concentrated at the higher end of the worry distribution. This is a testable prediction. Recall that I have data on how worried health care workers were.
I aggregate this to the health center level by taking an average for all surveyed health workers – to allow for flexibility I categorize average scores as follows: [0-2), [2-4), [4-6), [6-8), [8-10] – and correlate this against three indicators of patient screening taken from the in-charge survey: (i) whether providers stopped attending to outpatients with Covid symptoms, (ii) whether they stopped admitting patients with Covid symptoms, and (iii) whether they stopped attending to all patients except emergencies. The results are in Table 6. As expected there is a strong correlation.
Table 6

Association between average level of worry and probability of patient screening.

                    | (1)                  | (2)                 | (3)                  | (4)
                    | Stopped attending to | Stopped admitting   | Stopped attending to | Number of
                    | outpatients with     | patients with Covid | all but emergency    | strategies used
                    | Covid symptoms       | symptoms            | cases                |
Level of worry 2–4  | −0.125 (0.121)       | 0.051 (0.119)       | −0.016 (0.073)       | −0.090 (0.236)
Level of worry 4–6  | 0.138 (0.109)        | 0.096 (0.107)       | −0.018 (0.065)       | 0.217 (0.212)
Level of worry 6–8  | 0.231** (0.109)      | 0.261** (0.107)     | 0.060 (0.066)        | 0.553** (0.213)
Level of worry 8–10 | 0.329*** (0.111)     | 0.390*** (0.109)    | 0.100 (0.067)        | 0.819*** (0.218)

Observations        | 273                  | 273                 | 273                  | 273
Mean of dep. var.   | 0.366                | 0.308               | 0.088                | 0.762

Health workers were asked how worried they were about getting infected with the virus on a scale from 0–10. Scores were averaged for all health workers in a health center and categorized as follows: [0–2), [2–4), [4–6), [6–8), [8–10]. The omitted group is [0–2). N < 288 because workers in some health centers reported not being aware of Covid-19. Dependent variables are in the table headers. Results are from a linear probability model controlling for health center characteristics such as size and whether they offered inpatient services. * p<0.1, ** p<0.05, *** p<0.01. Source of data for the independent variables is the Health Worker Data. Source of data for the dependent variables is the In-charge Data.
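The worry measure used in Table 6 is built by averaging worker-level scores within each health center and binning the average. A sketch of that categorization (bin edges follow the table note; the function name is hypothetical):

```python
# Aggregate worker-level worry scores (0-10) to the health-center level
# and assign the five categories from Table 6: [0-2), [2-4), [4-6),
# [6-8), and a closed top bin [8-10].
def worry_category(scores):
    avg = sum(scores) / len(scores)
    if avg >= 8:  # top bin is closed: [8-10]
        return "[8-10]"
    lo = int(avg // 2) * 2  # lower edge of the half-open 2-point bin
    return f"[{lo}-{lo + 2})"

print(worry_category([6, 7, 10]))  # avg 7.67 -> [6-8)
print(worry_category([9, 10]))     # avg 9.5  -> [8-10]
```

Binning, rather than entering the average linearly, is what lets the later analysis detect non-linear (U-shaped) patterns across the worry distribution.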

Since the most worried workers are the most likely to screen patients, the likelihood of alternative precautions should be lower at the higher end of the worry distribution. One might also expect it to be lower towards the bottom of the distribution, because the least worried workers will be less likely to take any precautions at all (as we have seen in the US and elsewhere): if they are not worried about getting infected, why take costly precautions? This implies a non-linear (U-shaped) effect: smaller reductions in the quality of primary care encounters at either end of the worry distribution, with the largest reductions in the middle. To test this prediction I re-estimate Eq. (1), interacting the pandemic indicator with the categorical worry variable. I report the results in Table A.9. The same results are presented visually in Fig. 5. The predicted non-linear relationship can be clearly seen in the data, providing additional evidence that these findings are likely a risk mitigation response.
Fig. 5

Effect heterogeneity by level of worry about getting infected. Note: The dependent variables are noted in the captions. History-taking denotes the fraction of recommended history-taking questions asked by the attending health provider. Procedures Index is an average of the following indicators: a physical examination, a blood pressure check, a temperature check, and an examination with a stethoscope. Communication Index is an average of the following indicators: whether the health worker explained their diagnosis to the patient, provided health education, and discussed when the patient should return for a follow-up. Figure shows coefficients and 95% confidence intervals from a health center fixed effects regression model in which the pandemic exposure variable is interacted with a categorical variable constructed by averaging health worker ratings – when asked how worried they were about getting infected with the virus on a scale from 0–10 – for all health workers in a health center. This is categorized as follows: [0-2), [2-4), [4-6), [6-8), [8-10]. The regression estimates are in Table A.9. The models control for patient’s age, sex, number of household assets owned, presenting symptoms, illness duration, the day of the week of the visit, and calendar month fixed effects. Standard errors in parentheses are clustered at the level of the health center. Source of data is the Patient Data.

I present some additional descriptive evidence in the Appendix. The model prediction is that the likelihood of other health worker Covid precautions – not only the intensity of patient interactions – will be lower at the higher, and potentially the lower, end of the worry distribution and higher in the middle. I have some additional data that I can use to examine this. As part of Visit 4 the research assistant recorded whether the health care provider in the consultation room was wearing a face mask or other face covering and whether they wore gloves before touching the patient, two typical precautions against Covid. I correlate each of these outcomes with the level of worry and plot the results in Figure A.3. The evidence is only correlational and provides a partial snapshot, but the findings nonetheless largely accord with the prediction. In particular, the lower rate of use at higher levels of worry is quite striking and counterintuitive at first glance, but is rationalized by the model.
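As a sketch of this descriptive exercise, the snippet below bins a binary precaution indicator (say, mask use) by worry score and compares usage rates across the bins. The data and the inverted-U pattern are simulated for illustration, not drawn from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated illustration (not the study's data): a binary mask-use indicator
# whose true probability peaks at mid-level worry, as the model predicts.
worry = rng.uniform(0, 10, 300)
p_mask = 0.2 + 0.6 * np.exp(-((worry - 5.0) ** 2) / 8.0)
mask_worn = rng.random(300) < p_mask

# Average the indicator within the same worry bins used in the paper.
bins = np.minimum((worry // 2).astype(int), 4)  # [0-2), ..., [8-10]
rates = np.array([mask_worn[bins == b].mean() for b in range(5)])
for label, r in zip(["[0-2)", "[2-4)", "[4-6)", "[6-8)", "[8-10]"], rates):
    print(f"worry {label}: mask-use rate {r:.2f}")
```

Under this simulation the usage rate is highest in the middle bins and falls off at both ends, the same qualitative shape the text describes for the observed mask and glove indicators in Figure A.3.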

Discussion

Drawing on data on public health service delivery in a large developing country, collected over a two-year period spanning the onset of the Covid-19 pandemic, this paper has shown that the quality of routine health care interactions between health care providers and patients deteriorated sharply during the first phase of the pandemic (April–November 2020). Specifically, I have shown that adherence to recommended history-taking guidelines decreased by about 15% and that rates of completion of routine procedures such as blood pressure checks and physical examinations fell by about 33%. I have also shown that the quality of communication by health care providers worsened significantly. The magnitude of these effects is concerning: deficits of this size pose major problems for proper diagnosis and treatment of health conditions. This paper has also examined some underlying explanations and presented evidence that these effects likely reflect risk mitigation on the part of health workers. In other words, they appear to be a fallout from health workers' attempts to limit their exposure to the virus. Since the virus is spread through physical contact, and any patient could potentially be carrying it, health workers appear to have compensated by reducing the duration (e.g., asking fewer questions) and intensity (e.g., avoiding procedures that required close proximity to the patient) of patient encounters. Alternative explanations, such as changes in health care worker composition during the pandemic or overworked and burned-out health providers, do not receive strong support in the data. The results in this paper have several policy implications. 
One of the well-known findings in the health worker performance literature is the existence of a gap between possible performance (what health workers know they should do) and actual performance (what they actually do), often referred to as the “know-do” gap (Das and Hammer, 2007, Leonard and Masatu, 2010, Mohanan et al., 2015). One takeaway from this paper is that the know-do gap widened into a gulf during the pandemic (at least during the early months covered by my data). It will be important to examine whether these deficits have persisted and whether they have widened or narrowed over time. A second implication of the findings is that the knock-on effects of the pandemic, particularly in developing countries, may far outstrip the direct effects – Covid-related hospitalizations and deaths – a point that has been made by others (Roberton et al., 2020, Bayani et al., 2021, Okeke et al., 2021). While attention is focused on the direct effects, another crisis may be brewing just beneath the surface, with effects that could conceivably extend beyond the end of the pandemic.
What can policymakers do? A first recommendation is to make protective equipment more widely available. If health care workers lack the means to adequately protect themselves, one should not be surprised that they find other ways to mitigate risk; in that sense, they are only acting rationally. Improving the supply of protective masks and other safety equipment should attenuate these effects by reducing exposure risk, though I stress that it is unlikely to eliminate them, because such equipment does not completely eliminate risk. Vaccinating health care workers should also continue to be an urgent priority. Getting all health care workers vaccinated will be challenging given continued issues with supply and, perhaps more important, with vaccine hesitancy, but as vaccination rates increase there should be less need for risk mitigation and one would expect these effects to shrink. That said, none of the available vaccines offers 100% protection, and, as we are finding out, viral mutations may undermine their effectiveness, suggesting that other complementary strategies may be needed. One approach may be to pay health workers risk bonuses. Some of these quality outcomes are hard to observe, and thus to contract for (Miller and Babiarz, 2013); they also often require intensive data collection, which is hard to do even at the best of times, suggesting that linking bonuses to performance may be impractical. A more feasible approach may be to provide unconditional bonuses that compensate providers for the additional risk. Such ‘gifts’ may help to crowd in health workers' intrinsic motivation (Brock et al., 2018). 
In closing I note that in order for health workers to do their jobs adequately, they must feel protected (Imai, 2020). This takes on literal connotations in the context of a life-threatening contagious disease outbreak. While the health care profession may disproportionately attract intrinsically motivated individuals (Kolstad and Lindkvist, 2012), this has its limits. Health workers are rational actors who will react in predictable ways to incentives. While each individual worker may believe that the potentially negative consequences of their actions are small, in aggregate these individual actions can add up to quite large effects. All of the policy solutions outlined above require additional spending, which may be a challenge, but the cost of doing nothing may be larger still. When thinking about additional spending amidst competing priorities, policymakers must take these effects into account. More can also be done within existing constraints. For example, policymakers may be able to get some mileage out of giving health providers accurate information about the virus so that they can re-optimize (Akesson et al., 2020, Banerjee et al., 2020). Information interventions are fairly low-cost to provide.

Conclusion

We are now two years into a global pandemic that has killed more than five million people worldwide and counting (World Health Organization, 2021). The direct health effects of the pandemic are clearly visible, but there is growing evidence that the knock-on effects may be even more significant, particularly in developing countries where health care systems are weak (Roberton et al., 2020, Green, 2020). This paper finds significant negative effects on the quality of regular health care interactions between health providers and patients. The findings call for urgent policy attention and for additional research especially since it is now becoming clear that ‘normal’ is still a long way off. As of the time of writing, several countries are re-entering lockdowns (Henley et al., 2021, Sauer, 2021). It is critical that we understand how, and in what ways, the pandemic is impacting welfare and come up with effective strategies to combat these effects.
References (40 in total)

1. Dubay L, Kaestner R, Waidmann T. The impact of malpractice fears on cesarean section rates. J Health Econ. 1999.

2. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005.

3. Giuntella O, Hyde K, Saccardo S, Sadoff S. Lifestyle and mental health disruptions during COVID-19. Proc Natl Acad Sci U S A. 2021.

4. Patel SY, Mehrotra A, Huskamp HA, Uscher-Pines L, Ganguli I, Barnett ML. Trends in Outpatient Care Delivery and Telemedicine During the COVID-19 Pandemic in the US. JAMA Intern Med. 2020.

5. Oster E, Shoulson I, Dorsey ER. Limited Life Expectancy, Human Capital and Health Investments. Am Econ Rev. 2013.

6. Ordinioha B, Onyenaporo C. Experience with the use of community health extension workers in primary care, in a private rural health care institution in South-South Nigeria. Ann Afr Med. 2010.

7. Akesson J, Ashworth-Hayes S, Hahn R, Metcalfe R, Rasooly I. Fatalism, beliefs, and behaviors during the COVID-19 pandemic. J Risk Uncertain. 2022.

8. Malik S, Ullah I, Irfan M, Ahorsu DK, Lin CY, Pakpour AH, Griffiths MD, Ur Rehman I, Minhas R. Fear of COVID-19 and workplace phobia among Pakistani doctors: A survey study. BMC Public Health. 2021.
