
Optimal Allocation of Interviews to Baseline and Endline Surveys in Place-Based Randomized Trials and Quasi-Experiments.

Donald P. Green, Winston Lin, Claudia Gerber.

Abstract

BACKGROUND: Many place-based randomized trials and quasi-experiments use a pair of cross-section surveys, rather than panel surveys, to estimate the average treatment effect of an intervention. In these studies, a random sample of individuals in each geographic cluster is selected for a baseline (preintervention) survey, and an independent random sample is selected for an endline (postintervention) survey.
OBJECTIVE: This design raises the question, given a fixed budget, how should a researcher allocate resources between the baseline and endline surveys to maximize the precision of the estimated average treatment effect?
RESULTS: We formalize this allocation problem and show that although the optimal share of interviews allocated to the baseline survey is always less than one-half, it is an increasing function of the total number of interviews per cluster, the cluster-level correlation between the baseline measure and the endline outcome, and the intracluster correlation coefficient. An example using multicountry survey data from Africa illustrates how the optimal allocation formulas can be combined with data to inform decisions at the planning stage. Another example uses data from a digital political advertising experiment in Texas to explore how precision would have varied with alternative allocations.

Keywords:  cluster-randomized experiment; place-randomized trial; quasi-experiment; repeated cross-section surveys; sample allocation

Year:  2018        PMID: 30301375      PMCID: PMC6293457          DOI: 10.1177/0193841X18799128

Source DB:  PubMed          Journal:  Eval Rev        ISSN: 0193-841X


Surveys are widely used to measure outcomes in randomized controlled trials (RCTs) and quasi-experiments. Although only endline (posttreatment) outcome data are required for the estimation of treatment effects in RCTs, baseline (pretreatment) survey data may be helpful for improving statistical precision and power. In panel surveys, a common set of respondents is tracked over time from baseline to endline, allowing researchers to assess how the trajectories of individual subjects’ outcomes in the treatment group compare with those in the control group. Optimizing the design of panel surveys for efficient estimation of average treatment effects (ATEs) has attracted increasing scholarly attention (McKenzie, 2012). As Gail, Mark, Carroll, Green, and Pee (1996) discuss, panel surveys have important strengths and are often desirable for statistical precision, but they can also have important drawbacks in some contexts. Maintaining contact with baseline respondents may be costly or difficult, especially when tracking subjects who frequently change address or phone number (Parker & Teruel, 2005). A further concern is that the baseline interview may prime subjects in ways that alter their reaction to the treatment, distort their posttreatment survey responses, or cause nonresponse rates in the endline survey to differ between treatment and control groups (Flay & Collins, 2005; Solomon, 1949).

When treatments are administered to a set of geographic clusters (Boruch, 2005; Gail et al., 1996), an alternative measurement design is to interview a random sample of individuals within each cluster at baseline and another random sample at endline. When researchers gather survey data using this repeated cross-section design with clusters of equal size, the ATE of the intervention may be estimated by comparing the average outcomes of treatment and control group clusters in the endline survey, adjusting for preexisting differences in the baseline survey.
A wide array of applications have used this design. Table 1 presents illustrative examples of repeated cross-section designs from a variety of substantive domains. For example, Ter Kuile et al. (2003) assessed the effects of bed nets on malaria among young children by randomly assigning 60 Kenyan villages to treatment and control. Random samples of children in each village were given medical exams at baseline, and new random samples were examined at endline. Another example is Gerber, Gimpel, Green, and Shaw (2011), which assessed the persuasive effects of political advertisements across 18 television markets by conducting a baseline survey within each market before the advertising campaign and drawing new samples within each market for the endline surveys. Indeed, the use of this design is common among experiments that assess the persuasive effects of political advertising, where automated phone surveys are conducted with distinct random samples of registered voters during baseline and endline periods. These automated surveys are directed at landline phone numbers associated with a particular address rather than a specific person, which makes it impractical to conduct panel surveys that track the same respondents over time. One of the empirical applications described below (Turitto, Green, Stobie, & Tranter, 2014) uses this design to assess the effects of digital advertising on behalf of a candidate for lieutenant governor of Texas. Although such studies are common, political campaigns rarely make the results public.
Table 1.

Examples of Place-Based Evaluations Using Repeated Cross-Section Surveys.

Study: Bloom and Riccio (2005)
Field/Topic: Jobs and public housing
Summary and main findings: Evaluates an employment initiative within public housing developments, implemented in six U.S. cities. Finds positive effects on earnings and employment for most housing projects that implemented the program correctly. The effects did not spark changes in overall social conditions or quality of life.
Design: RCT; random assignment of 16 housing developments (6 treated, 10 control).
Baseline survey: N = 2,123 for treatment group; N = 2,651 for control group.
Endline survey(s): Follow-up 5 years later, in 2003; N = 2,700–4,500 (300–500 per housing project).

Study: Gerber et al. (2011)
Field/Topic: Politics
Summary and main findings: Explores the impact of political radio and television advertising on public opinion among registered voters in Texas. Finds ephemeral effects on voting preferences.
Design: RCT; random assignment of 18 designated media markets to varying quantities of TV and radio ads.
Baseline survey: Conducted a few days before the launch of the media campaign; N = 150 per media market (N total = 2,998).
Endline survey(s): Two follow-ups, a week after the intervention and a second round 5 weeks later; N = 350 per week per media market (N = 7,022 for week 1). The number of surveys and survey responses varies by week.

Study: Ter Kuile et al. (2003)
Field/Topic: Public health: malaria
Summary and main findings: Studies the impact of insecticide-treated bed nets on malaria-associated morbidity in children under age 3 in Kenya. The nets reduced morbidity and improved weight gain in the treatment group.
Design: RCT; randomly allocated 27 of 60 villages to treatment.
Baseline survey: N = 889 (across 27 randomly selected villages out of 60).
Endline survey(s): Two rounds (14 and 22 months after the intervention); N = 980 in survey 1 and N = 910 in survey 2. The breakdown of treatment versus control interviews is unclear.

Study: Smith, Ping, Merli, and Hereward (1997)
Field/Topic: Public health: contraception
Summary and main findings: Studies the impact of a revised and holistic contraception program in China on a range of outcome indicators, such as data quality on births and infant mortality. The results are mixed.
Design: RCT (overlapping surveys); random assignment of 24 townships, with four townships assigned to the treatment condition.
Baseline survey: Control: N = 8,603.
Endline survey(s): Five randomly selected townships (11,759 interviews; 2,676 of these respondents were also interviewed at baseline).

Study: Cheadle et al. (1995)
Field/Topic: Public health: nutrition
Summary and main findings: Examines differences between evaluation tools when studying community-based nutrition programs. Identifies the “environmental indicator” as a good and low-cost evaluation tool compared with individual-level telephone and grocery surveys.
Design: RCT for two communities; quasi-experiment for one community. Three intervention communities and seven control communities.
Baseline survey: Random sample of stores from community clusters (15 stores per community); phone survey of individuals, N = 500 per community.
Endline survey(s): Two follow-ups after 2 years (1990: 21 stores per community; 1992: 26 stores per community); phone survey of individuals, N = 500 per community.

Study: Murray et al. (1994)
Field/Topic: Public health: cardiovascular disease
Summary and main findings: Investigates the impact of a 5- to 6-year heart health program in Minnesota on heart disease incidence, morbidity, and mortality. Mixed results.
Design: Quasi-experiment; nonrandom assignment of three communities to treatment and three to control.
Baseline survey: N = 300–500 per community.
Endline survey(s): After 2 years (half of cohort) and 4 years (other half), and both halves after 7 years; N = 300–500 per community.

Study: Farquhar et al. (1985)
Field/Topic: Public health: cardiovascular disease
Summary and main findings: Describes the research design for a long-term field study to assess the impact of community health education in California for the prevention of cardiovascular disease.
Design: Quasi-experiment; nonrandom allocation of five communities.
Baseline survey: N = 625 per community.
Endline survey(s): Two rounds (after the end of the campaign, and 3 years later).

Study: Green, Wilke, Cooper, and Baltes (2016)
Field/Topic: Social attitudes
Summary and main findings: Investigates the effects of exposure to video vignettes dramatizing the issues of violence against women, teacher absenteeism, and abortion stigma.
Design: Randomly allocated 28 rural trading centers to different messages.
Baseline survey: 1,107 surveys in 28 trading centers in Uganda.
Endline survey(s): Follow-up surveys 2 months after the videos were screened.

Note. RCT = randomized controlled trial.

When using the repeated cross-section design to estimate the ATE of an intervention, a resource allocation question arises: In order to maximize the precision of the estimated ATE, how much of the survey budget should be allocated to the baseline survey as opposed to the endline survey? To our knowledge, none of the studies listed in Table 1 discuss this allocation problem. This article begins by formalizing the allocation problem in a balanced experimental design (where equal numbers of clusters are assigned to treatment and control) and deriving a result that expresses the optimal allocation as a function of the budgeted number of survey interviews per cluster, the cluster-level correlation between the baseline measure and the endline outcome, and the intracluster correlation coefficient (ICC).
We then show how insights from the formal analysis can be applied in practice, using data from the Afrobarometer surveys (Afrobarometer, 2009, 2015) for an illustrative example. Next, we discuss survey allocation in an imbalanced design, where the expense associated with administering treatment leads researchers to assign more clusters to control than to treatment. In the concluding section, we summarize the main lessons and discuss possible extensions to address a wider range of design considerations.

Model and Notation

To keep the allocation problem tractable, we will make a number of simplifying assumptions. First, suppose that we are planning an experiment or quasi-experiment with J clusters and that we are willing to assume the clusters are randomly assigned to treatment or control—either because the study is in fact a cluster-randomized experiment or because we believe the treated and untreated clusters are similar enough that modeling treatment as cluster randomized is reasonable. (In many nonrandomized studies, this assumption is not reasonable, and our analysis would need to be extended to consider possible roles for baseline covariates in reducing bias.) Assume that baseline and endline interviews are equally costly and that our survey budget allows a total of S interviews.[1] One option is to allocate the entire budget to the endline survey, since treatment effects can be estimated without baseline data. Can precision be improved by allocating some interviews to a baseline survey and using the baseline data for blocking or covariate adjustment? If so, how many baseline and endline interviews should be conducted?[2] For now, we assume a balanced design in which J/2 clusters are assigned to treatment and J/2 to control; the main ideas carry over to the case of an imbalanced design, which we discuss later. We also assume that any attempt to use a baseline covariate to improve precision will be done via linear regression adjustment, not blocking.[3] However, we do not assume that the true relationship between the outcome and the covariate is linear.[4]

Our analysis assumes that the J clusters were randomly selected from a much larger superpopulation and that the goal is to estimate an ATE (defined below) in the superpopulation. In practice, many studies use clusters that are not randomly drawn from any superpopulation. Some researchers therefore prefer a “finite population” framework in which statistical inferences are limited to the actual clusters in the study.
Others defend the superpopulation framework on the grounds that it is useful to make inferences about “a hypothetical infinite population, of which the actual data are regarded as constituting a random sample” (Fisher, 1922, p. 311). In any case, the two frameworks tend to yield similar or even identical results, and the superpopulation framework often makes the mathematics easier. For a helpful discussion, see Reichardt and Gollob (1999).

Suppose each cluster j has a population of N_j individuals. Let Y_ij and X_ij denote the endline outcome and the baseline covariate, respectively, for individual i in cluster j. Using the potential outcomes framework (Holland, 1986; Neyman, 1923; Rubin, 1974), let Y_ij(1) and Y_ij(0) denote the values that Y_ij would take if cluster j were assigned to treatment or control, respectively. Averaging at the cluster level, let Ȳ_j(1) = (1/N_j) Σ_i Y_ij(1), Ȳ_j(0) = (1/N_j) Σ_i Y_ij(0), Ȳ_j = (1/N_j) Σ_i Y_ij, and X̄_j = (1/N_j) Σ_i X_ij. Assume that our goal is to estimate the ATE in the superpopulation of clusters, weighting each cluster equally: τ = E[Ȳ_j(1) − Ȳ_j(0)]. (Since each cluster j is randomly drawn from the superpopulation, the expectation in this formula is just the average over all clusters in the superpopulation.)

In each cluster j, the endline survey collects outcome data from a random sample of n_E individuals, and the baseline survey (if conducted) collects covariate data from an independent random sample of n_B individuals.[5] Thus, cluster j has sample mean outcome ȳ_j and sample mean covariate value x̄_j. For simplicity, we assume the sample sizes n_E and n_B are constant across clusters and are small relative to each cluster’s population size N_j. If n_B = 0 (i.e., no baseline interviews are conducted), we will estimate the ATE using the unadjusted treatment–control difference in mean outcomes, weighting each cluster equally. Letting T_j equal 1 if cluster j is assigned to treatment and 0 otherwise, this estimator is given by:

τ̂_unadj = (2/J) Σ_j T_j ȳ_j − (2/J) Σ_j (1 − T_j) ȳ_j.

If n_B > 0, we will use the estimated coefficient on T_j in an ordinary least squares regression of ȳ_j on T_j and x̄_j. Let τ̂_adj denote the regression-adjusted estimator.[6]

The allocation problem is to choose n_B and n_E to minimize the variance of the estimated ATE, subject to the constraint that n_B + n_E = n, where n = S/J is the budgeted number of survey interviews per cluster. Equivalently, the problem is to choose the proportion of baseline interviews π = n_B/n. To simplify the derivations and formulas, we will analyze the variances of τ̂_unadj and τ̂_adj when the treatment effect is homogeneous: Assume there is a constant τ such that Y_ij(1) − Y_ij(0) = τ for all i and j. Relaxing this assumption would complicate the analysis but would not necessarily be useful for study design, since the more complex formulas would involve quantities that are difficult to guess at the planning stage (such as the effect of treatment on the correlation between the covariate and the outcome).

We now define several quantities that affect the variance of the estimated treatment effect. The between-cluster variance of the potential outcomes, σ_b², is the variance of Ȳ_j(0) (or, equivalently, the variance of Ȳ_j(1), since we are assuming a homogeneous treatment effect):

σ_b² = E[(Ȳ_j(0) − μ_0)²],

where μ_0 = E[Ȳ_j(0)] and the expectations in these formulas are again just averages over all clusters in the superpopulation. The average within-cluster variance is given by:

σ_w² = E[(1/N_j) Σ_i (Y_ij(0) − Ȳ_j(0))²].

Define the covariate’s between-cluster variance σ_bX² and average within-cluster variance σ_wX² analogously. Assume σ_b², σ_w², σ_bX², and σ_wX² are all nonzero (as would be expected in most applications). The ICC of the potential outcomes is given by:

ICC = σ_b² / (σ_b² + σ_w²).

We define the covariate’s ICC analogously and assume that it equals the ICC of the potential outcomes (which may be a reasonable approximation if the covariate is a baseline version of the outcome). In what follows, it will be convenient to work with the quantity

K = (1 − ICC) / ICC = σ_w² / σ_b².

The between-cluster correlation between the covariate and each potential outcome, ρ, is the correlation between X̄_j and Ȳ_j(0) (or, equivalently, the correlation between X̄_j and Ȳ_j(1)):

ρ = Cov(X̄_j, Ȳ_j(0)) / (σ_bX σ_b).

Equivalently, ρ is the square root of the R² that would be obtained if we could run a regression of Ȳ_j(0) on X̄_j in the superpopulation of clusters.
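The identity linking K to the ICC can be made concrete in a few lines of code. In this sketch, the variance components are illustrative values (chosen to echo the economic optimism estimates used later in the article), not new results:

```python
# Check the identity K = (1 - ICC)/ICC = sigma2_w / sigma2_b for the
# variance components defined above. The numerical values are illustrative.

def icc(sigma2_b, sigma2_w):
    """Intracluster correlation coefficient of the (potential) outcomes."""
    return sigma2_b / (sigma2_b + sigma2_w)

def K_from_icc(icc_value):
    """The convenient ratio K = (1 - ICC)/ICC."""
    return (1 - icc_value) / icc_value

sigma2_b, sigma2_w = 0.367, 1.222  # illustrative between/within-cluster variances
r = icc(sigma2_b, sigma2_w)
assert abs(K_from_icc(r) - sigma2_w / sigma2_b) < 1e-12
print(round(r, 3), round(K_from_icc(r), 2))  # 0.231 3.33
```

Note that K falls as the ICC rises: outcomes that cluster strongly geographically carry more signal per interview at the cluster level.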

Results for Balanced Designs

The Appendix shows that the variance of τ̂_unadj is approximately

Var(τ̂_unadj) ≈ (4σ_b²/J)(1 + K/n),   (7)

while, for large enough J, the variance of τ̂_adj is approximately

Var(τ̂_adj) ≈ (4σ_b²/J)(1 + K/n_E)[1 − ρ² / ((1 + K/n_E)(1 + K/n_B))].   (8)

The factor 4σ_b²/J is what the variance of the treatment–control difference in mean outcomes (weighting each cluster equally) would be if we could observe each cluster’s population mean outcome Ȳ_j. The next factor, 1 + K/n, inflates the variance because each cluster’s sample mean outcome ȳ_j is a noisy estimate of Ȳ_j. Finally, linear regression adjustment improves asymptotic precision, multiplying the variance by a factor of approximately 1 − ρ²/[(1 + K/n_E)(1 + K/n_B)], in which the squared correlation ρ² between the population means X̄_j and Ȳ_j is attenuated by 1/[(1 + K/n_E)(1 + K/n_B)] because the sample means x̄_j and ȳ_j are noisy estimates of X̄_j and Ȳ_j. The approximation to the variance of τ̂_adj may be improved by multiplying formula (8) by the degrees-of-freedom correction factor (J − 2)/(J − 3) (Cox & McCullagh, 1982, p. 547). This factor is close to 1 when J is large.

The optimal allocation of interviews between the baseline and endline surveys is derived in the Appendix. If nρ ≤ K, allocating all interviews to the endline survey is optimal (unless baseline interviews are desired for reasons other than improving precision). On the other hand, if nρ > K, the proportion of baseline interviews π that minimizes the approximate variance of τ̂_adj is

π* = (ρ − K/n) / (1 + ρ),   (9)

and, for large enough J, allocating n_B = π*n and using τ̂_adj is more efficient than allocating all interviews to the endline survey and using τ̂_unadj. To interpret formula (9) and its requirement that nρ > K, note that:

π* = (ρ − K/n)/(1 + ρ) < ρ/(1 + ρ) < 1/2

(since K/n > 0 and ρ < 1). Thus, the proportion of interviews allocated to the baseline survey should always be less than one-half. Moreover, π* is an increasing function of ρ and n and a decreasing function of K (and hence an increasing function of the ICC).

Intuitively, the usefulness of collecting data on baseline covariates depends on both ρ (the strength of the true correlation between population mean covariate values and population mean potential outcomes at the cluster level) and the signal-to-noise ratio in the sample means (which improves with larger values of n and the ICC).[7] Related to the previous point, the condition nρ > K is needed in order for π* in formula (9) to be positive. Otherwise, the true covariate–outcome correlation ρ is not strong enough, relative to the measurement error in the sample means, to make it worthwhile to allocate any interviews to the baseline survey. For any given values of ρ and K, as n goes to infinity, π* approaches an upper limit of ρ/(1 + ρ).
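The closed-form optimum can be sanity-checked with a brute-force grid search over the approximate variance of the adjusted estimator. A minimal sketch under the assumptions of this section (homogeneous effects, equal ICCs for covariate and outcome); the numerical inputs are illustrative:

```python
# Grid-search check of the closed-form optimal baseline share.
# Symbols: J clusters, n interviews per cluster, K = (1 - ICC)/ICC,
# rho = cluster-level covariate-outcome correlation.

def adjusted_variance(pi, n, J, K, rho, sigma2_b=1.0):
    """Approximate variance of the regression-adjusted ATE estimator."""
    n_b, n_e = pi * n, (1 - pi) * n
    return (4 * sigma2_b / J) * ((1 + K / n_e) - rho**2 / (1 + K / n_b))

def optimal_share(n, K, rho):
    """Closed-form optimum: (rho - K/n)/(1 + rho), or 0 if n*rho <= K."""
    return max(0.0, (rho - K / n) / (1 + rho))

n, J, K, rho = 500, 20, 3.33, 0.65  # illustrative inputs
pi_grid = min((i / 1000 for i in range(1, 1000)),
              key=lambda p: adjusted_variance(p, n, J, K, rho))
assert abs(pi_grid - optimal_share(n, K, rho)) < 0.002
print(round(pi_grid, 3), round(optimal_share(n, K, rho), 3))
```

The variance curve is flat near its minimum, which foreshadows a practical point made in the empirical examples: getting the baseline share roughly right is usually good enough.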

Example Using Afrobarometer Data

When deciding how to allocate a survey budget between baseline and endline interviews, one does not know the values of ρ and the ICC, but it may be possible to form educated guesses using external data. This example illustrates the types of calculations involved, using data from the Afrobarometer, an ongoing series of cross-section public opinion surveys on democracy, governance, economic conditions, and related issues in African countries (Afrobarometer, 2009, 2015). The first round (wave) of Afrobarometer surveys was conducted in 12 countries from 1999 to 2001. More recent rounds have included over 35 countries, with representative samples of 1,200 or 2,400 noninstitutionalized adult citizens in each country. The Afrobarometer surveys are useful for illustrative purposes given the large number of randomized trials conducted in Africa that use surveys to measure outcomes, the large number of respondents in each country at each point in time, and the wide array of outcomes measured (which allows us to consider outcomes with different ICCs and different values of ρ). In order to simulate the country-level assignment typical of many quasi-experiments that estimate the effects of national policies on outcomes (e.g., Dorn, Fischer, Kirchgässner, & Sousa-Poza, 2007; Welsch, 2007), we use data from the 20 countries that were included in both the fourth (March 2008 to June 2009) and the fifth (October 2011 to September 2013) rounds of Afrobarometer surveys.[8] We focus on two outcome variables:

Economic optimism: “Looking ahead, do you expect the following to be better or worse: Economic conditions in this country in 12 months’ time?” (coded on a scale of 1 = “much worse” to 5 = “much better”).[9]

Inclination to protest: “Here is a list of actions that people sometimes take as citizens. For each of these, please tell me whether you, personally, have done any of these things during the past year. If not, would you do this if you had the chance: Attended a demonstration or protest march?” (coded on a scale of 0 = “no, would never do this” to 4 = “yes, often”).[10]

Consider the problem of allocating a survey budget in a cluster-randomized experiment or quasi-experiment where the main outcomes of interest resemble the economic optimism and protest inclination variables. Suppose it has already been decided that the experiment will include 20 clusters, with 10 clusters assigned to treatment and 10 to control, and the budget allows a total of n interviews per cluster. For illustrative purposes, we will show calculations for both n = 100 and n = 500. (The larger sample size is similar to those in several of the evaluations listed in Table 1, such as Gerber et al., 2011.) Should a baseline survey be fielded, and if so, how should the interviews be allocated between the baseline and endline surveys?

To apply formula (9), we have the number of interviews per cluster n, but we need to estimate K and ρ for the main outcomes. If the interval between the proposed baseline and endline surveys is approximately the same as that between the fourth and fifth rounds of Afrobarometer surveys, we can use the Afrobarometer data to estimate both K and ρ. Here, we use the analysis of variance (ANOVA) estimator of the ICC (Donner, 1986, p. 68; Ridout, Demétrio, & Firth, 1999, p. 138). The estimated ICCs for economic optimism are 0.180 and 0.231 in the fourth and fifth rounds of the survey, while for inclination to protest, the corresponding estimates are 0.0425 and 0.0458. These translate into estimates for K of 4.56 or 3.33 (economic optimism) and 22.5 or 20.8 (inclination to protest). The simplest way to estimate ρ is to just use the observed correlation between the fourth- and fifth-round country-level means of the relevant variable. These correlations are 0.578 for economic optimism and 0.681 for inclination to protest. However, ρ in formula (9) is the correlation between the cluster-level population means of the covariate and outcome in the absence of treatment, while the observed correlation between the sample means is expected to be somewhat attenuated (because the sample means are noisy estimates of the population means). A more refined estimate of ρ (derived in the Appendix) is

ρ̂ = r √[(1 + K_4/m_4)(1 + K_5/m_5)],   (10)

where r is the observed correlation between the country-level sample means, m_4 and m_5 are the harmonic means of the country-level sample sizes in the fourth- and fifth-round surveys, and K_4 and K_5 are the estimates of K given above. Using this method, we obtain ρ̂ ≈ 0.58 for economic optimism and ρ̂ ≈ 0.70 for inclination to protest. The refinement does not matter much in this example, but it can matter when the sample sizes in the external data source are smaller or the ICCs are smaller. For example, without changing the ICCs, if the country-level sample sizes had been much smaller, formula (10) would yield a substantially larger estimate of ρ for inclination to protest.

The boundary condition nρ > K is easily satisfied given our planned number of interviews per cluster (n = 100 or n = 500) and any of the above estimates of ρ and K, so we can use formula (9) to calculate π*, the optimal share of interviews to allocate to the baseline survey. The fourth- and fifth-round survey estimates of K are close enough that the choice between them hardly makes a difference; we use the fifth-round estimates in the remainder of this example. If n = 500, formula (9) yields π* ≈ 36% for economic optimism and π* ≈ 39% for inclination to protest, while if n = 100, the same formula yields π* ≈ 35% for economic optimism and π* ≈ 29% for inclination to protest.[11] As shown below, the precision of the estimated ATEs does not change dramatically as the proportion of baseline interviews π varies between 36% and 39% (for n = 500) or between 29% and 35% (for n = 100), so whether π is optimized for one outcome variable or the other (or a compromise between them) will not matter much in this example. Figure 1 shows how the proportion of baseline interviews π affects the standard error (SE) of the estimated ATE on economic optimism:
Figure 1.

Survey allocation and precision when the outcome variable is economic optimism. Near the top left corner, the filled triangle (for n = 100 interviews per cluster) and circle (for n = 500) show the standard error (SE) of the unadjusted estimate of average treatment effect when all interviews are allocated to the endline survey. The curves plot the SE of the regression-adjusted estimate against the share of interviews allocated to the baseline survey. The open circle on each curve marks the optimal baseline share. See text for details.

Near the top left corner, the two points marked with a filled triangle (for n = 100) and circle (for n = 500) show the SE of the unadjusted estimate τ̂_unadj when all interviews are allocated to the endline survey. To compute these SEs (0.275 and 0.272), we take the square root of formula (7) with J = 20, K = 3.33, and σ_b² set to 0.367, the unbiased estimate from the ANOVA (Donner, 1986, p. 68) applied to the fifth-round survey data. The two curves (dashed for n = 100 and solid for n = 500) show the approximate SE of the regression-adjusted estimate τ̂_adj, calculated by multiplying the asymptotic variance from formula (8) by the degrees-of-freedom correction (J − 2)/(J − 3) and then taking the square root, with n_B = πn, n_E = (1 − π)n, ρ = 0.58, and the same values as above for J, K, and σ_b². The open circle on each curve marks the optimal allocation from formula (9). The optimal baseline shares are approximately 35% (for n = 100) and 36% (for n = 500), achieving SEs of 0.241 and 0.230, respectively. However, both curves are relatively flat over a wide range of allocations: Virtually the same SEs could be achieved by allocating anywhere from 20% to 50% of the interviews to the baseline survey. Thus, it is not important for the allocation to be exactly optimal.

In Figure 1, when n = 500, the optimal allocation’s SE, 0.230, is about 15% lower than the SE that could be achieved without any baseline interviews, 0.272. Therefore, the minimum detectable effect (MDE; Bloom, 1995) is about 15% smaller under the optimal allocation than it would be without any baseline interviews. (When n = 100, the corresponding reduction is about 13%.) For example, for a two-sided test at the 10% significance level, the MDE with 80% power is 2.49 × 0.230 = 0.573 under the optimal allocation, while it is 2.49 × 0.272 = 0.677 without any baseline interviews. (The unit for these MDEs is a point on the 5-point scale from 1 = “much worse” to 5 = “much better” for the economic optimism question.)

Figure 2 plots the analogous calculations for the protest inclination outcome variable. (The SEs are much smaller because the estimate of the between-cluster variance σ_b² is only 0.0325 for this variable.) When n = 500, the optimal allocation (π* ≈ 39%) achieves an SE of 0.066, which is about 20% lower than the SE that could be achieved without any baseline interviews (0.082). However, as discussed in note 11, when n is reduced to 100, the usefulness of the baseline covariate data declines substantially because the ICC for inclination to protest is not very large. Now the optimal allocation (π* ≈ 29%) achieves an SE of 0.084, which is only about 6% lower than the SE that could be achieved without any baseline interviews (0.089). Again, it is not important for the allocation to be exactly optimal. When n = 500, all baseline allocations between 20% and 50% achieve approximately the same precision. When n = 100, a 50% baseline allocation results in an SE of 0.086, which is just slightly higher than the optimal SE (0.084).
Figure 2.

Survey allocation and precision when the outcome variable is inclination to protest. Near the top left corner, the filled triangle (for n = 100 interviews per cluster) and circle (for n = 500) show the standard error (SE) of the unadjusted estimate of average treatment effect when all interviews are allocated to the endline survey. The curves plot the SE of the regression-adjusted estimate against the share of interviews allocated to the baseline survey. The open circle on each curve marks the optimal baseline share. See text for details.

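The example’s calculations can be reproduced approximately in a few lines. The sketch below re-creates the economic optimism (Figure 1) numbers; the degrees-of-freedom correction factor (J − 2)/(J − 3) and the rounded value ρ = 0.58 are assumptions chosen to be consistent with the reported SEs rather than quantities taken verbatim from the text:

```python
# Approximate re-creation of the Figure 1 (economic optimism) calculations.
# Assumptions (not verbatim from the article): df correction (J-2)/(J-3),
# rho = 0.58. J, sigma2_b, and K are the values reported in the example.
from math import sqrt
from statistics import NormalDist

J, sigma2_b, K, rho = 20, 0.367, 3.33, 0.58

def se_unadjusted(n):
    """SE of the unadjusted estimator when all n interviews are endline."""
    return sqrt(4 * sigma2_b / J * (1 + K / n))

def se_adjusted(n, pi):
    """Approximate SE of the adjusted estimator with baseline share pi."""
    n_b, n_e = pi * n, (1 - pi) * n
    var = 4 * sigma2_b / J * ((1 + K / n_e) - rho**2 / (1 + K / n_b))
    return sqrt(var * (J - 2) / (J - 3))  # assumed df correction

pi_star = (rho - K / 500) / (1 + rho)  # optimal baseline share for n = 500
# MDE multiplier for a two-sided test at the 10% level with 80% power
z = NormalDist().inv_cdf(0.95) + NormalDist().inv_cdf(0.80)

print(round(se_unadjusted(500), 3))         # ~0.272, as reported
print(round(pi_star, 2))                    # ~0.36
print(round(se_adjusted(500, pi_star), 3))  # ~0.230
print(round(z * se_adjusted(500, pi_star), 2))  # MDE, roughly 0.57
```

Swapping in σ_b² = 0.0325, K = 20.8, and a larger ρ reproduces the flavor of the Figure 2 (inclination to protest) calculations.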

Imbalanced Designs

Many experiments and quasi-experiments use imbalanced designs with unequal-sized treatment and control groups. For example, if the intervention is very costly, the researchers may decide to assign clusters to treatment and clusters to control. In this section, we assume that J, , and the total number of survey interviews S have already been chosen, but we need to decide how to allocate the S interviews between the baseline and endline surveys and between the treatment and control group clusters. (As shown below, it turns out to be desirable to allocate more interviews per cluster to the group that has fewer clusters.) We assume here that the baseline survey, if any, will be administered after clusters are assigned but before treatment begins. Thus, for both the baseline survey and the endline survey, we have the option of allocating different numbers of interviews per cluster to the treatment and control groups. We also assume that if a baseline survey is conducted, we will estimate the ATE using the coefficient on in an ordinary least squares regression of on , , and the interaction . Including the interaction can improve asymptotic precision in imbalanced designs (Lin, 2013; Yang & Tsiatis, 2001). In our context, the interaction term allows the regression model to take into account the possibility that the correlation between and is stronger or weaker in the treatment group than the control group. For example, if we allocate more baseline and endline interviews per cluster to the treatment group than to the control group, then the cluster-level sample means and will be noisier estimates of the cluster-level population means and in the control group than in the treatment group. We would therefore expect the correlation between and to be stronger in the treatment group than in the control group. 
While it appears to be difficult to solve for an exact optimum, numerical calculations (such as those in the example below) suggest that the following allocation performs well in many scenarios: Allocate half the interviews to the treatment group and half to the control group.[12] The number of interviews per cluster will then differ between the treatment and control groups: There will be S/(2J_T) interviews per cluster in the treatment group and S/(2J_C) in the control group. For example, if the budget allows S = 20,000 interviews, and there are 30 clusters with 10 assigned to treatment and 20 to control, then allocate 10,000 interviews (1,000 per cluster) to the treatment group and 10,000 (500 per cluster) to the control group. Let n = S/(2 min(J_T, J_C)). If ρ ≤ (1 − ICC)/(n × ICC), allocate all interviews to the endline survey. If ρ > (1 − ICC)/(n × ICC), allocate a proportion π = [ρ − (1 − ICC)/(n × ICC)]/(1 + ρ) of interviews to the baseline survey. (Although π could be allowed to differ between the treatment and control groups, in many scenarios, there is little gain from such fine-tuning. The suggested baseline allocation here mimics the one we derived for the balanced design in Equation 9 and uses n, the number of interviews per cluster in the group that has fewer clusters.)
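The allocation rule just described can be packaged as a small planning calculator. The closed form coded here for the baseline share is our reconstruction rather than a quotation of Equation 9: it matches the behavior described in the text (the share is always below one-half and increases with n, ρ, and the ICC) and reproduces the 43% share computed for the Texas example below, but the paper's own equations should be consulted for the exact expressions.

```python
def baseline_share(n, rho, icc):
    """Suggested share of interviews to allocate to the baseline survey.

    n:   interviews per cluster in the arm with fewer clusters
    rho: cluster-level correlation between baseline and endline means
    icc: intracluster correlation coefficient

    Assumed closed form (consistent with the examples in the text):
    pi = (rho - (1 - icc) / (n * icc)) / (1 + rho), floored at zero.
    """
    boundary = (1 - icc) / (n * icc)
    if rho <= boundary:
        return 0.0  # a baseline survey would not improve precision
    return (rho - boundary) / (1 + rho)


def plan(S, J_T, J_C, rho, icc):
    """Allocate S interviews: half to each arm, then split by baseline share."""
    n_T, n_C = S / (2 * J_T), S / (2 * J_C)   # interviews per cluster, by arm
    n = S / (2 * min(J_T, J_C))               # per-cluster size in smaller arm
    pi = baseline_share(n, rho, icc)
    return {"per_cluster_T": n_T, "per_cluster_C": n_C, "baseline_share": pi}
```

For instance, plan(8000, 10, 20, 0.895, 0.0291), using the Texas example's values, returns 400 and 200 interviews per cluster and a baseline share of roughly 0.43.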

Example: A Digital Advertising Experiment

To illustrate the ideas discussed above, we consider an application to digital political advertising drawn from Turitto, Green, Stobie, and Tranter (2014). Ten of 30 noncontiguous midsized cities in Texas were randomly assigned to the treatment, a 7-day digital advertising campaign on behalf of David Dewhurst, the incumbent candidate for lieutenant governor in the 2014 Republican primary. Using a repeated cross-section design, a baseline survey of Republican voters was conducted during January 3–6 (just before the launch of the treatment), and an endline survey was conducted during January 14–17 (just after the treatment ended). These automated phone surveys asked respondents, “Thinking about the race for Texas Lieutenant Governor for a moment, if the primary election were held today, which of the following candidates would you vote for?” and presented a list of candidates in random order. The goal of the study was to estimate the effect of the treatment on the proportion of respondents who indicated that they would vote for Dewhurst. The baseline survey was designed to obtain approximately 100 interviews in each treatment group city and 50 interviews in each control group city, while the endline survey was designed to obtain approximately 300 interviews in each treatment group city and 150 interviews in each control group city. Thus, out of a total of approximately 8,000 interviews, half were allocated to the treatment group and half to the control group (as suggested above), with 25% allocated to the baseline survey and 75% to the endline survey. We can explore in hindsight how the precision of the estimated treatment effect would vary with alternative allocations of the survey interviews.[13] The budgeted total number of interviews is S = 8,000, with J_T = 10 cities in the treatment group and J_C = 20 cities in the control group. Our outcome variable is defined as 1 if the respondent indicated support for Dewhurst and 0 otherwise.
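The interview totals implied by these per-city targets can be checked with simple arithmetic:

```python
# Per-city interview targets from the study design.
J_T, J_C = 10, 20
base_T, base_C = 100, 50      # baseline interviews per city, by arm
end_T, end_C = 300, 150       # endline interviews per city, by arm

baseline = J_T * base_T + J_C * base_C   # 2,000 baseline interviews
endline = J_T * end_T + J_C * end_C      # 6,000 endline interviews
total = baseline + endline               # 8,000 interviews in all

treat_share = (J_T * (base_T + end_T)) / total   # share going to treatment
baseline_share = baseline / total                # share going to baseline
```

This confirms the allocation quoted in the text: half of the 8,000 interviews go to the treatment group, and one quarter go to the baseline survey.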
Using the endline survey data and the same methods as in the Afrobarometer example, we estimate the between-city variance component as 0.00575 and the ICC as 0.0291. The corresponding ICC estimate from the baseline survey is 0.0210. The observed correlation between the baseline and endline city-level means of the outcome variable is 0.609, and the harmonic means of the city-level sample sizes are 56.9 for the baseline survey and 175.7 for the endline survey. Applying Equation 10, we estimate the covariate–outcome correlation ρ as 0.895 (which suggests that city-level support for Dewhurst was fairly stable from early to mid-January). Next, we calculate the suggested proportion of baseline interviews π from Equation 11. The suggested number of interviews per city is S/(2J_T) = 400 for treatment group cities and S/(2J_C) = 200 for control group cities, so the parameter n in Equation 11 equals 400. The boundary condition is easily satisfied with the above estimates of ρ and the ICC. Equation 11 yields π ≈ 0.43. Figure 3 explores how alternative allocations of the survey interviews would affect the SE of the estimated treatment effect.[14] Each curve shows how the SE varies with the share of interviews allocated to the treatment group, holding the share allocated to the baseline survey (which is assumed to be the same across the treatment and control groups) constant at zero, 25% (the actual baseline share), 43% (the baseline share suggested above), or 50%. Comparisons within each curve show that the SE is minimized when half the interviews are allocated to the treatment group and half to the control group. Comparisons across the bottom three curves show that precision is only slightly better with the suggested 43% baseline share (yielding at best an SE of 2.28 percentage points, which implies a minimum detectable effect [MDE] of 5.68 percentage points) than with a 25% or 50% baseline share (yielding an SE of 2.35 or 2.30 percentage points at best, implying an MDE of 5.85 or 5.73 percentage points). Thus, the actual 25% baseline share appears to have been a reasonable choice.
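This hindsight exploration can be approximated with a simple components-of-variance model. The sketch below is our own, not the paper's formulas: each observed city mean is treated as the true city mean plus sampling noise, the between-city variance component and ICC are set to the estimates quoted above, and covariate adjustment is treated as subtracting the best linear prediction from the baseline means (estimation error in the adjustment and the treatment-by-covariate interaction are ignored). Under these assumptions the model reproduces the 3.1-percentage-point unadjusted SE and comes within roughly a tenth of a percentage point of the adjusted SEs, running slightly optimistic because of the ignored estimation error.

```python
import math

# Assumed variance components (from the estimates quoted in the text).
sigma_b2 = 0.00575            # between-city variance of the outcome
icc = 0.0291                  # endline intracluster correlation
sigma2 = sigma_b2 / icc       # implied total variance
sigma_w2 = sigma2 - sigma_b2  # within-city variance
rho = 0.895                   # disattenuated baseline-endline correlation

J_T, J_C, S = 10, 20, 8000

def se(pi, share_T=0.5):
    """Approximate SE of the ATE estimate when a fraction pi of interviews
    goes to the baseline survey and share_T of interviews goes to the
    treatment arm (sketch model; ignores adjustment estimation error)."""
    total = 0.0
    for J, share in ((J_T, share_T), (J_C, 1 - share_T)):
        n = S * share / J                              # interviews per city
        v_end = sigma_b2 + sigma_w2 / ((1 - pi) * n)   # var of endline mean
        if pi > 0:
            v_base = sigma_b2 + sigma_w2 / (pi * n)    # var of baseline mean
            cov = rho * sigma_b2                       # cov of the two means
            v_end -= cov**2 / v_base   # variance removed by adjustment
        total += v_end / J
    return math.sqrt(total)

print(round(se(0.0), 4))   # all-endline, unadjusted allocation
print(round(se(0.43), 4))  # suggested 43% baseline share
```

Consistent with Figure 3, the model's SE falls as the baseline share moves from zero toward the suggested 43%, and (for a fixed baseline share) is smallest when the two arms split the interviews evenly.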
Finally, the topmost (dotted) curve shows that precision would be noticeably worse if all interviews were allocated to the endline survey (the SE for the unadjusted estimate is at best 3.1 percentage points, implying an MDE of 7.7 percentage points).
Figure 3.

Survey allocation and precision in the digital advertising example. The top (dotted) curve plots the SE of the unadjusted treatment effect estimate against the treatment group’s share of interviews when all interviews are allocated to the endline survey. The other three curves plot the SE of the regression-adjusted estimate against the treatment group’s share of interviews, holding the baseline survey’s share constant at 25% (the actual allocation), 43% (the suggested allocation), or 50%. See text for details.


Discussion

When the outcomes of interest are relatively stable over time, a study design with repeated cross-section surveys can be an effective strategy for efficient estimation of ATEs. Our analysis is intended to sketch some of the key issues involved in cost-efficient allocation of survey interviews and to invite more complex formalizations of the allocation problem. For simplicity, we omitted a number of complications that researchers may want to consider in applications, such as multiple baseline or follow-up survey waves, fixed costs associated with each survey wave, asymmetric costs of interviews in treatment and control areas, use of multiple baseline covariates in regression adjustment, and motivations for conducting a baseline survey other than improving the precision of estimated ATEs. Also, we assumed that the estimand is an ATE that weights each cluster equally, but researchers may prefer to weight the clusters according to population size or other considerations. Furthermore, we took the numbers of clusters assigned to treatment and control as given, while a more sophisticated analysis would simultaneously optimize the allocation of clusters to treatment arms and the allocation of survey interviews, given information about treatment costs, survey costs, and the overall budget. Researchers may wish to use our framework as a starting point for more complicated analyses that consider such issues. Because we omitted such complications, the formulas we have given for optimal allocation will not necessarily be optimal in practice, but the analysis may be of heuristic value. In a cluster-randomized experiment with repeated cross-section surveys, the optimal share of interviews to allocate to the baseline survey is less than one-half and tends to increase with the cluster-level correlation between baseline and endline measures of the outcome variable, the ICC, and the total number of interviews per cluster. 
In many scenarios, a wide range of baseline allocations yields approximately the same statistical precision. This suggests that researchers have quite a bit of latitude to accommodate other design considerations, such as fielding a baseline survey in order to train enumerators or pretest a survey instrument.
References (9 in total)

1. Ridout MS, Demétrio CG, Firth D. Estimating intraclass correlation for binary data. Biometrics. 1999.
2. Judkins DR, Porter KE. Robustness of ordinary least squares in randomized clinical trials. Stat Med. 2015.
3. Solomon RL. An extension of control group design. Psychol Bull. 1949.
4. Gail MH, Mark SD, Carroll RJ, Green SB, Pee D. On design considerations and randomization-based inference for community intervention trials. Stat Med. 1996.
5. Cheadle A, Psaty BM, Diehr P, Koepsell T, Wagner E, Curry S, Kristal A. Evaluating community-based nutrition programs: comparing grocery store and individual-level survey measures of program impact. Prev Med. 1995.
6. Murray DM, Hannan PJ, Jacobs DR, McGovern PJ, Schmid L, Baker WL, Gray C. Assessing intervention effects in the Minnesota Heart Health Program. Am J Epidemiol. 1994.
7. Cox DR, McCullagh P. Some aspects of analysis of covariance. Biometrics. 1982.
8. Farquhar JW, Fortmann SP, Maccoby N, Haskell WL, Williams PT, Flora JA, Taylor CB, Brown BW, Solomon DS, Hulley SB. The Stanford Five-City Project: design and methods. Am J Epidemiol. 1985.
9. ter Kuile FO, Terlouw DJ, Phillips-Howard PA, Hawley WA, Friedman JF, Kolczak MS, Kariuki SK, Shi YP, Kwena AM, Vulule JM, Nahlen BL. Impact of permethrin-treated bed nets on malaria and all-cause morbidity in young children in an area of intense perennial malaria transmission in western Kenya: cross-sectional survey. Am J Trop Med Hyg. 2003.

