
Simulating the effect of evaluation unit size on eligibility to stop mass drug administration for lymphatic filariasis in Haiti.

Natalya Kostandova1, Luccene Desir2, Abdel Direny3, Alaine Knipes4, Jean Frantz Lemoine5, Carl Renand Fayette6, Amy Kirby1, Katherine Gass7.   

Abstract

BACKGROUND: The Transmission Assessment Survey (TAS) is a decision-making tool to determine when transmission of lymphatic filariasis is presumed to have reached a level low enough that it cannot be sustained even in the absence of mass drug administration. The survey is applied over geographic areas, called evaluation units (EUs); existing World Health Organization guidelines limit EU size to a population of no more than 2 million people.
METHODOLOGY/PRINCIPAL FINDINGS: In 2015, TASs were conducted in 14 small EUs in Haiti. Simulations, using the observed TAS results, were performed to understand the potential programmatic impact had Haiti chosen to form larger EUs. Nine "combination-EUs" were formed by grouping adjacent EUs, and bootstrapping was used to simulate the expected TAS results. When the combination-EUs were comprised of at least one "passing" and one "failing" EU, the majority of these combination-EUs would pass the TAS 79% - 100% of the time. Even in the case when both component EUs had failed, the combination-EU was expected to "pass" 11% of the time. Simulations of the mini-TAS, a strategy with a smaller sample size, and hence lower power, than the standard TAS, resulted in more conservative "pass" and "fail" decisions when implemented in the original EUs.
CONCLUSIONS/SIGNIFICANCE: Our results demonstrate the high potential for misclassification when the average prevalence of lymphatic filariasis in the combined areas differs with respect to the TAS threshold. Of particular concern is the risk of "passing" larger EUs that include focal areas where prevalence is high enough to be potentially self-sustaining. Our results reaffirm the approach that Haiti took in forming smaller EUs. Where baseline or monitoring data show a high or heterogeneous prevalence, programs should leverage alternative strategies like the mini-TAS in smaller EUs, or consider gathering additional data through spot-check sites to inform EU formation.

Year:  2022        PMID: 35089925      PMCID: PMC8827424          DOI: 10.1371/journal.pntd.0010150

Source DB:  PubMed          Journal:  PLoS Negl Trop Dis        ISSN: 1935-2727


Introduction

Lymphatic filariasis (LF) is a vector-borne disease caused by nematodes, or roundworms, that reside in lymphatic vessels and can lead to debilitating disability, as well as stigma, psychological problems, and lowered quality of life [1,2]. The cornerstone of the global LF program is prevention through Mass Drug Administration (MDA). The primary objective of MDA is to lower the level of microfilaraemia in infected people so that, even after MDA is stopped, transmission cannot continue [3]. The World Health Organization recommends annual MDA to all those living in areas at risk until transmission is no longer deemed to be ongoing. Of the 72 countries considered endemic for lymphatic filariasis, 50 are considered to require MDA, of which only three have yet to start MDA; 17 countries have been validated as having eliminated LF as a public health problem [4]. There are costs associated with implementing MDA; consequently, to maximize the use of scarce public health resources, it is important for programs to know when MDA can be stopped with minimal risk of recrudescence. A 2011 study of communes in Haiti that received MDA found the cost of MDA distribution in the first year of the national strategic plan, in just nine out of 55 communes, to be $264,970. Extending this cost to all of the communes in the program amounts to about $1,214,102 for just one year, not including the cost of albendazole [5]. In 2011, the World Health Organization (WHO) developed guidelines for determining when MDA can be stopped [3]. The geographic area on which a decision to stop MDA will be based is called an evaluation unit (EU), and is often made up of a combination of MDA implementation units (IUs). An EU should not exceed two million people [3].
An EU should be comprised of epidemiologically homogeneous areas that have received at least five rounds of MDA, with at least 65% of the population swallowing the drugs each round, and the prevalence of circulating filarial antigen (CFA) in all sentinel and spot-check sites in an EU must be less than 2% [3]. If all of these conditions are satisfied, a Transmission Assessment Survey (TAS) is carried out to determine whether MDA should be stopped [3]. The target population for TAS is children 6 to 7 years old. In areas where over 75% of children are enrolled in primary schools, school-based surveys can be used for TAS, whereas community-based surveys are required in areas with lower school enrollment [3]. The tests and critical thresholds used to determine if an EU can safely stop treatment differ based on the type of LF and its vector. In areas where Wuchereria bancrofti is the endemic parasite and the mosquito vector is Culex or Anopheles, the decision rule and critical cutoff are set to determine whether the upper one-sided 95% confidence limit around the CFA prevalence is less than 2%, in which case the EU 'passes' the TAS and can safely stop MDA. The TAS is an example of a modified Lot Quality Assurance Sampling method, with schools or communities serving as the primary sampling unit (PSU). When the total number of PSUs in the EU is small (e.g., <40), PSUs are selected via systematic sampling, while cluster sampling is used in larger EUs. The TAS guidelines provide a table, which takes into account the total population of 6 to 7 year olds in the EU, the sampling methodology, and the anticipated design effect, to determine the recommended sample size and critical cutoff value for the survey [3]. Upon completion of the survey, the observed number of positive tests is compared to a critical cutoff, designed to measure the target threshold with known error.
In the case of the TAS, the critical cutoff is designed to measure a threshold of 2% (1% where Aedes is the vector), with <5% chance of Type I error (falsely rejecting the null hypothesis that the prevalence is above the target threshold) and maintaining power of at least 75% when the true prevalence is less than half the threshold. Practically, if the observed number of positive cases in a TAS is greater than the critical cutoff, the EU ‘fails’ and continues MDA for at least two more rounds; if the observed number of positive cases is less than or equal to the cutoff, the EU is considered to ‘pass,’ and can stop MDA [3]. Haiti is one of four countries in the Americas endemic for LF, bearing 90% of LF disease burden in the region. The species endemic to Haiti is Wuchereria bancrofti and the primary vector is the Culex quinquefasciatus mosquito [6]. In 2001, the CFA prevalence among children aged 6 to 11 was between 0 and 45%, with over 88% of all communes showing prevalence greater than 1% and thus qualifying for MDA according to WHO guidance [3]. In 2000, with support from the Ministry of Public Health and the Population (MSPP), the National Program to Eliminate LF (NPELF) was started. Despite hurricanes, a devastating earthquake, and a cholera outbreak, by 2012, NPELF was able to implement MDA nationwide, reaching more than eight million people, with estimated coverage of 71% [7]. By 2019, 122 of the 140 communes in Haiti passed at least one TAS and no longer required MDA [8]. Despite the tremendous success of the TAS at enabling over a thousand EUs to stop MDA for the global LF program, some evidence suggests that the TAS, as it is currently designed, may not be an effective tool for stopping MDA in all settings [9]. The focality of LF infection, which increases as transmission is driven towards elimination, calls the liberal size allowance (up to two million population) for EUs into question. 
For example, the epidemiology and geographic distribution of LF is likely to be very different for people living in a densely populated area with homogeneous vector distribution, as opposed to those living in a sparsely populated area with varying altitudes, humidity, and vector distribution. As the heterogeneity of transmission increases, the ability of cluster surveys, such as the TAS, to capture the underlying variation diminishes and the likelihood that pockets of ongoing transmission will be missed is increased [10]. It is important to note that the current TAS guidance suggests grouping IUs is appropriate when they share similar epidemiological features; however, this advice does not seem to be universally followed by country programs. Although reducing the size of an EU may improve the chances of including pockets with persistent transmission of LF if they exist, reducing the size of an EU, and thus increasing the number of EUs overall, would increase costs dramatically. The mean cost of a community-based TAS, based on a 2013 study in 13 countries, is $38,513, whereas the average cost of a school-based TAS is $18,239 [11]. Given the limited resources available to LF elimination programs, the guidelines for EU size should balance good decision-making with programmatic feasibility. At the same time, the additional costs of TAS in smaller EUs should be weighed against the costs of additional rounds of MDA, as well as the costs of misclassifying EUs. In this study, TAS data from Haiti were used to perform simulations to explore the programmatic implications of EU size. In particular, the effect of using larger EUs for classifying an area as ready (or not) to stop MDA was explored by combining adjacent smaller EUs. In addition, the potential of using a TAS with a reduced sample size, referred to as a ‘mini-TAS’, in smaller EUs was considered as a potential cost-saving approach.
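To build intuition for the TAS pass/fail rule described above, the sketch below derives a critical cutoff under a plain binomial model. This is an illustration only, with hypothetical helper names; the actual WHO tables additionally account for cluster sampling, design effects, and finite populations, so the cutoffs produced here will not match the published values.

```python
def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), via the iterative pmf recurrence."""
    pmf = (1.0 - p) ** n  # P(X = 0)
    cdf = pmf
    for i in range(k):
        pmf *= (n - i) / (i + 1) * p / (1.0 - p)
        cdf += pmf
    return cdf

def critical_cutoff(n, threshold=0.02, alpha=0.05):
    """Largest cutoff c such that an EU whose true prevalence sits exactly at
    the threshold still 'passes' (observes <= c positives) with probability
    < alpha. Returns None if even c = 0 cannot control the error (n too small)."""
    if binom_cdf(0, n, threshold) >= alpha:
        return None
    c = 0
    while binom_cdf(c + 1, n, threshold) < alpha:
        c += 1
    return c

def power_at(n, c, p=0.01):
    """Probability of correctly passing when the true prevalence is p."""
    return binom_cdf(c, n, p)
```

For a sample of roughly 1,500 children, this simplified model yields a cutoff in the low twenties with power above 75% at 1% prevalence; the published TAS cutoffs for comparable sample sizes (e.g., 18 for about 1,500 children in Table 1) are lower, because clustering inflates the variance.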

Methods

Ethical statement

Ethical clearance was not required for this study, as it was a secondary analysis of programmatic data. No personally identifying individual-level data were used in this analysis.

Dataset

The dataset utilized in this study was a subset of data from a TAS-Soil-Transmitted Helminthiasis-Malaria survey conducted by the Haitian MSPP, IMA World Health, and Centers for Disease Control and Prevention (CDC) in 2015 in Haiti. The TAS was conducted in 14 EUs, with each unit comprised of one or more communes, third-level administrative divisions in Haiti, with the exception of one evaluation unit that was smaller than a commune. All EUs had completed the TAS eligibility requirements as established by WHO: at least 5 consecutive rounds of MDA with coverage over 65%; CFA prevalence at sentinel and spot-check sites of <2%; and a total population under two million people. The TASs were conducted using either a randomized cluster or systematic survey design targeting children 6–7 years old, with schools as the primary sampling unit. The immunochromatographic card test (ICT) was used to test for the presence of filarial antigens. The data collected included the names of each EU, the names and locations of each school, the ages and sex of the children tested, and the ICT results (positive, negative, indeterminate, and not available). Information from the Survey Sample Builder (http://www.ntdsupport.org/resources/transmission-assessment-survey-sample-builder) files for each EU was used to obtain information about the target population, total number of schools, and expected absentee rates for each EU. Henceforth these data will be referred to as the 'observed' data.

Forming combo-EUs

In order to explore the implications of EU size, and because in Haiti EUs tend to have substantially fewer than two million people, larger EUs were simulated by combining adjacent EUs. In this manner, nine unique combinations of adjacent EUs (hereafter referred to as 'combo-EUs') were formed. Each of these new combo-EUs represented an alternative EU that the NPELF could have designated as the basis for its stopping-MDA decision, as the combo-EUs would satisfy the TAS eligibility guidelines specified by WHO. The homogeneity criterion was not considered in forming combo-EUs, because the baseline prevalence estimates were several years old and because some other country programs disregard the homogeneity criterion when forming TAS EUs. Target populations for each combo-EU were determined by combining the target populations of each component EU contained in the combo-EU. The total number of schools in the combo-EU was taken to be the sum of schools in each component EU. The expected absentee rate for each individual evaluation unit varied from 10% to 15%; since each of the combo-EUs contained at least one EU with an expected absentee rate of 15%, all of the combo-EUs were assigned an expected absentee rate of 15%. Because the target population of each of the combo-EUs exceeded 1000 and the number of schools in each combination exceeded 40, cluster sampling was assumed, as recommended by the WHO TAS guidelines. The WHO TAS table was used to obtain the necessary TAS sample size for the combo-EUs [3]. The average number of students per school was estimated by dividing the total target population of the combo-EU by the number of schools in the combo-EU. Finally, the target TAS sample size was divided by this average number of students to obtain the number of schools that needed to be sampled for each combo-EU, with a minimum of 30 schools required.
If the sample size was not reached, additional children were sampled from a list of backup schools, selected proportionately from the EUs comprising the combo-EU.
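The school-allocation arithmetic just described can be sketched as follows. Note that the absentee-rate inflation step is an assumption on our part (the expected absentee rates are listed among the planning inputs, and inflating the target sample size this way is consistent with the school counts reported for several combo-EUs in Tables 2 and 3); `plan_combo_eu` is a hypothetical helper, not code from the study.

```python
import math

def plan_combo_eu(component_eus, target_sample_size, absentee_rate=0.15):
    """Sketch of combo-EU survey planning. component_eus: list of dicts with
    'name', 'target_population' (6-7 year olds), and 'n_schools'.
    Returns (number of schools to sample, per-EU proportional allocation)."""
    total_pop = sum(eu["target_population"] for eu in component_eus)
    total_schools = sum(eu["n_schools"] for eu in component_eus)
    # average number of students per school across the whole combo-EU
    avg_per_school = total_pop / total_schools
    # assumed: inflate the target sample size for expected absenteeism
    effective_target = target_sample_size / (1.0 - absentee_rate)
    # schools needed to reach the sample size, with a floor of 30
    n_to_sample = max(30, math.ceil(effective_target / avg_per_school))
    # allocate schools proportionally to each EU's share of all schools
    allocation = {eu["name"]: round(n_to_sample * eu["n_schools"] / total_schools)
                  for eu in component_eus}
    return n_to_sample, allocation

# Example: combo-EU I = EU #11 (1,336 children, 42 schools)
#                     + EU #12 (1,634 children, 48 schools), target n = 1,356
n, alloc = plan_combo_eu(
    [{"name": "EU11", "target_population": 1336, "n_schools": 42},
     {"name": "EU12", "target_population": 1634, "n_schools": 48}],
    target_sample_size=1356)
# -> 49 schools, split 23/26 across EU #11 and EU #12, matching the
#    counts reported for this combination in Table 3
```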

Passing or failing decision

In this study it was assumed that the programmatic decision for a combo-EU was to 'pass' the TAS if all component EUs passed the TAS (i.e., with the number of positive tests less than or equal to the critical cutoff), allowing MDA to be stopped. If, instead, any of the component EUs failed, the programmatic decision for the combo-EU was to fail, a conservative decision to avoid prematurely stopping MDA in areas with ongoing transmission. A TAS in each combo-EU was treated as a stratified cluster survey, with component EUs acting as strata and schools as clusters. Sampling weights were assigned to each child with a positive or negative ICT, with the weight for children in EU j defined as w_j = N_j / n_j, where N_j is the target population in EU j and n_j is the number of children with a valid (positive or negative) ICT result in the sample from EU j. The expected prevalence for the combo-EU was then obtained as a weighted average of each component EU's prevalence. To assess against the TAS critical cutoff, an upper one-sided 95% confidence interval was calculated for each expected prevalence, accounting for the stratified cluster sampling, using the R package survey. If the confidence interval around the expected prevalence in the combo-EU contained or exceeded the TAS threshold of 2%, then the expected decision for the combo-EU was to fail; otherwise, the expected decision for the combo-EU was to pass.
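The weighted point estimate can be sketched as below, with each tested child in EU j weighted by the EU's target population divided by its number of valid ICT results, as described above. This sketch covers only the point estimate; the upper one-sided confidence limit requires design-based variance estimation (done in the study with the R survey package), which is not attempted here.

```python
def combo_eu_prevalence(component_eus):
    """Weighted CFA prevalence for a combo-EU. Each tested child in EU j
    carries weight w_j = N_j / n_j, so the estimate is
    sum(w_j * x_j) / sum(w_j * n_j), which simplifies to a population-weighted
    average of the per-EU prevalences. component_eus: list of dicts with
    'N' (target population), 'n' (valid ICT results), 'x' (positive ICTs)."""
    weighted_positives = sum(eu["N"] / eu["n"] * eu["x"] for eu in component_eus)
    total_weight = sum(eu["N"] for eu in component_eus)  # sum of w_j * n_j
    return weighted_positives / total_weight

# Example: combo-EU I = EU #11 (N=1336, n=858, x=19)
#                     + EU #12 (N=1634, n=1037, x=15)
p = combo_eu_prevalence([{"N": 1336, "n": 858, "x": 19},
                         {"N": 1634, "n": 1037, "x": 15}])
# close to the 1.79% expected true prevalence reported for this combination
```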

Bootstrapping

To understand the distribution of TAS results that one might expect had larger EUs been formed, bootstrapping (that is, sampling with replacement from the observed data) was used to estimate the number of ICT positives if a TAS were conducted in each combo-EU. In the first step, the estimated number of schools required to meet the TAS sample size for a combo-EU was sampled with replacement from among all the schools in the observed TAS datasets for the component EUs. School selection was stratified by EU, and schools were bootstrapped independently from each EU, with the number of selected schools proportional to the total number of schools in the EU. For those component EUs that were originally sampled systematically, rather than through cluster sampling, additional bootstrapping of children within each school was performed in order to obtain the necessary sample size; in these schools, the number of children selected was equal to the average number of children per school in the combo-EU. For EUs with cluster sampling, bootstrapping was done only at the school level, and the results from all children that had been tested were retained. In some replicates, by chance, a disproportionate number of smaller schools was selected and the sample size fell short of the target; in this case, additional schools were sampled until the desired sample size was met. This is consistent with how the TAS is performed in the field, whereby additional randomly selected clusters are added if the target sample size is not met from the original sample of clusters. This bootstrap sampling was replicated 1000 times for each combo-EU, resulting in 1000 simulated TAS results.
The total number of positive ICT results in each of the bootstrap replicates was calculated based on the number of ICT-positive results in the observed TAS data for each selected school, and an upper one-sided 95% confidence interval was calculated for the combo-EU. If the upper bound of this confidence interval was greater than or equal to 2%, then the combo-EU was said to have failed; otherwise, the combo-EU passed. The proportion of replicates with upper one-sided 95% confidence bounds at or above 2% was calculated. It was necessary to drop EU #1 from the bootstrap simulations because an error in the original dataset, whereby schools 1 through 16 were all coded as "1," made it impossible to recreate the school-level results. A table assessing the reproducibility of TAS results for component EUs using the bootstrap is presented in Supporting Information (S2 Table).
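Under simplifying assumptions, the bootstrap procedure can be sketched as follows. The data structures and function name are hypothetical, and the sketch resamples at the school level only; the study additionally resampled children within schools for the systematically sampled EUs.

```python
import random

def bootstrap_tas_replicates(schools_by_eu, schools_to_draw, target_n,
                             n_reps=1000, seed=42):
    """schools_by_eu: EU name -> list of (children_tested, ict_positives)
    per observed school. schools_to_draw: EU name -> number of schools to
    resample from that EU (proportional to its share of all schools).
    Returns (children_tested, ict_positives) totals for each replicate."""
    rng = random.Random(seed)
    all_schools = [s for pool in schools_by_eu.values() for s in pool]
    replicates = []
    for _ in range(n_reps):
        # stratified resampling with replacement, independently within each EU
        sample = []
        for eu, k in schools_to_draw.items():
            sample.extend(rng.choices(schools_by_eu[eu], k=k))
        # if small schools were over-drawn and the child sample falls short,
        # add extra randomly chosen schools, mirroring field practice
        while sum(n for n, _ in sample) < target_n:
            sample.append(rng.choice(all_schools))
        replicates.append((sum(n for n, _ in sample),
                           sum(x for _, x in sample)))
    return replicates
```

Each replicate's positive count (or a design-based confidence bound on its prevalence) can then be compared against the decision threshold to estimate how often the combo-EU would pass or fail.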

Mini-TAS

The alternative to combining IUs into EUs would be for each IU to be its own EU, a decision that comes with significant cost implications due to the increase in the number of TASs that would be required. Although the Haitian program chose to adopt this strategy, other programs might find it difficult to assume this added cost up front. The ‘mini-TAS’ represents a modification to the TAS platform that can reduce the cost and other resources required while still maintaining its integrity as a decision-making tool for stopping MDA. Simulations were run to compare the trade-offs of using the mini-TAS, in place of the TAS, for making stop-MDA decisions when each IU represents its own EU. The mini-TAS is similar in design to the standard TAS. It is a 30-cluster survey designed to measure a threshold of 2% but requires testing roughly a quarter of the number of children of a standard TAS. This reduction in sample size, intended to reduce the time and cost associated with conducting a TAS, effectively reduces the power of the survey tool from 75% to 40%. The mini-TAS has been approved by WHO as a tool for confirmatory mapping of LF [12], and the details of its design have been well-documented [13]. The implications of conducting the mini-TAS were simulated in each EU in the observed Haiti dataset. The required sample size for the mini-TAS was based on the hypergeometric distribution so that each EU has no more than a 5% chance of being misclassified as passing when the true prevalence exceeds 2% (Type I error), and at least a 40% chance of correctly passing if the CFA prevalence is 1.0% (S1 Table). The bootstrapping approach was repeated as before, with replicates forced to achieve the desired sample size every time. For systematically sampled EUs, the number of children to sample from each school was calculated by multiplying the total mini-TAS sample size by the proportion of valid ICT results in the school. 
If this sample size was not reached, additional children were sampled at random until the desired sample size was reached. For cluster surveys, the original mini-TAS design uses probability-proportional-to-estimated-size sampling to select the school clusters. To achieve an equal probability of selection, it is therefore necessary to use a cluster-specific sampling interval that is inversely proportional to the estimated size of the school. This results in a fixed expected sample size across all schools (which reduces to: per-school sample size = total sample size / 30 clusters). To simulate this, at each school, the per-school sample size was first drawn without replacement; if the original dataset had fewer than this required number of children with valid ICT results within the school, additional children were sampled with replacement from that school until the required number was reached. The number of passing and failing replicates out of the 1000 total replicates obtained for each EU was calculated in a similar manner to the TAS simulations. An upper one-sided 95% confidence interval was calculated for each replicate, and the replicates were said to "pass" if the upper bound was less than 2%, and to "fail" if the upper bound was greater than or equal to 2%. All analyses were conducted in R [14]. The package survey [15] was used to calculate the upper one-sided 95% confidence bounds, accounting for the complex survey methodology.
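The hypergeometric sample-size logic described for the mini-TAS (no more than a 5% chance of passing at 2% prevalence; at least a 40% chance of passing at 1%) can be sketched as below. The parameters are taken from the text, but this is an illustrative search with hypothetical helper names, and it will not necessarily reproduce the published S1 Table values.

```python
from math import comb, ceil

def hypergeom_cdf(c, N, K, n):
    """P(X <= c) when drawing n children without replacement from a
    population of N containing K antigen-positive children."""
    return sum(comb(K, k) * comb(N - K, n - k) for k in range(c + 1)) / comb(N, n)

def mini_tas_design(N, threshold=0.02, design_prev=0.01,
                    alpha=0.05, min_power=0.40):
    """Smallest sample size n (with its cutoff c) such that an EU of
    population N passes (<= c positives) with probability < alpha when
    prevalence sits at the threshold, and with probability >= min_power
    when prevalence is at design_prev."""
    K_hi = ceil(threshold * N)     # positives in the population at the threshold
    K_lo = ceil(design_prev * N)   # positives at the design prevalence
    for n in range(20, N + 1):
        # largest cutoff keeping the chance of a false pass below alpha
        c = -1
        while hypergeom_cdf(c + 1, N, K_hi, n) < alpha:
            c += 1
        if c >= 0 and hypergeom_cdf(c, N, K_lo, n) >= min_power:
            return n, c
    return None
```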

Results

The TAS dataset

Information pertaining to the characteristics of the EUs and TAS results from the observed data is presented in Table 1. Fourteen EUs in total were sampled in the TAS, with the number of children aged 6–7 years in target grades ranging from 707 to 35,357 per EU. Four of these EUs had low baseline prevalence of infection (0.1–4.9% ICT positivity), one had medium baseline prevalence (5.0–9.9% ICT positivity), and nine had high baseline prevalence of infection (10.0% and over ICT positivity), based on estimates from 2001 [16]. The number of schools in the EUs ranged from 17 to 721, and the average number of students in target grades per school ranged from 29 to 64.
Table 1

Characteristics of individual Evaluation Units and Transmission Assessment Survey results.

Evaluation Unit # | Baseline prevalence of infection | Target population | Total schools in Evaluation Unit | Average # of students in target grades | Expected absentee rate | # Schools tested | Type of survey | # Children tested | # Positive results | Critical cutoff | Observed Transmission Assessment Survey decision
1* | Low | 14,813 | 367 | 40 | 10% | 36 | Cluster | 1494 | 0 | 16 | Pass
2* | Low | 35,357 | 721 | 49 | 10% | 46 | Cluster | 1659 | 3 | 18 | Pass
3 | High | 2,442 | 67 | 36 | 10% | 53 | Cluster | 1231 | 2 | 14 | Pass
4 | Medium | 6,821 | 120 | 57 | 10% | 45 | Cluster | 1528 | 0 | 18 | Pass
5 | High | 707 | 17 | 42 | 10% | 16 | Systematic | 364 | 1 | 3 | Pass
6 | Low | 18,977 | 333 | 57 | 10% | 42 | Cluster | 1617 | 2 | 18 | Pass
7* | High | 1,597 | 25 | 64 | 15% | 25 | Systematic | 551 | 0 | 6 | Pass
8* | Low | 20,833 | 441 | 47 | 15% | 47 | Cluster | 1587 | 2 | 18 | Pass
9 | High | 754 | 26 | 29 | 15% | 24 | Systematic | 587 | 0 | 6 | Pass
10 | High | 1,875 | 36 | 52 | 15% | 30 | Systematic | 672 | 0 | 7 | Pass
11 | High | 1,336 | 42 | 32 | 15% | 31 | Cluster | 858 | 19 | 9 | Fail
12 | High | 1,634 | 48 | 34 | 15% | 37 | Cluster | 1037 | 15 | 11 | Fail
13 | High | 9,299 | 199 | 47 | 15% | 32 | Cluster | 1984 | 19 | 20 | Pass
14 | High | 4,038 | 74 | 55 | 15% | 33 | Cluster | 1414 | 10 | 16 | Pass

Baseline prevalence of infection is based on estimates from 2001 [16]. Evaluation Units (EUs) with Immunochromatographic card test (ICT) positivity between 0.1 and 4.9% are classified as low baseline prevalence; those with 5–9.9% ICT positivity have medium prevalence, and those with 10% and higher positivity are high prevalence at baseline. Target population is the expected number of school children enrolled in 1st and 2nd grades of primary schools. The number of schools in the EU denotes the number of schools that exist in the evaluation unit. The number of schools tested is the number of schools that were selected in the TAS and for which there is at least one ICT result present in the data. The number of children tested is the number of positive and negative ICT results recorded in the EU. If the number of positive ICT results in the EU is greater than the critical cutoff, the EU is said to fail; otherwise, the EU passes. EUs marked with asterisks (*) were not considered for formation of combination-EUs because a combination-EU comprised of these adjacent units would have had a small enough number of positive results that failing would have been highly unlikely.

The number of schools visited per EU as part of the TAS spanned from 16 in EU #5 to 53 in EU #3. Four of the EUs had <40 schools and required systematic sampling, meaning all schools that were accessible were sampled. The remaining ten EUs were sampled through cluster surveys, with the number of schools visited ranging from 31 to 53. In the EUs where cluster surveys were conducted, all children in the target grades were tested for CFA using the ICT test, whereas in systematically sampled EUs, a set fraction of students in the target grades was tested. The total number of children tested per EU ranged from 364 in EU #5 to 1986 in EU #13. The distribution of positive ICT results per school within EUs is provided in Supporting Information, S3 Table.
Two of the EUs, EU #11 and EU #12, failed the TAS, that is, the number of positive ICT results exceeded the critical cutoff. EU #13 passed the TAS but came close to reaching the critical cutoff, with 19 positive ICT results, compared to a cutoff of 20. All other EUs passed the TAS, with the number of positive ICT results far below the critical cutoff. The EUs and the locations of schools where the surveys were conducted are displayed in Fig 1.
Fig 1

Sites of Transmission Assessment Surveys and Evaluation Units.

Red circles represent schools where schoolchildren in grades 1 and 2 were tested. The administrative division shapefile that served as a base map is available at https://data.humdata.org/dataset/777e8b06-337f-4295-80bc-ca1515244215/resource/9b57a285-e12f-4d1a-b167-676d96a2b4af/download/hti_adm_cnigs_20181129.zip; the shapefile with Evaluation Unit number as an attribute is available for download at https://doi.org/10.15139/S3/JUUSHC.

Of the potential combo-EUs, those comprised solely of EUs with no positive ICT results, or an extremely small number of positive results (3 or fewer, for EUs large enough to merit a cluster survey), such as EU #2 and EU #1, or EU #7 and EU #8, were not considered, as these combo-EUs would be expected to result in a passing decision; their inclusion would not be informative. This left nine combo-EUs for the simulations; a description of these combo-EUs is presented in Table 2.
Table 2

Characteristics of combination Evaluation Units, formed from adjoining Evaluation Units.

Evaluation Unit Combination | Component Evaluation Units (observed decision) | Target sample size | # Schools to be sampled | Programmatic decision | Expected true prevalence (upper 1-sided Confidence Interval) | Expected transmission assessment conclusion
A | 12 (Fail), 13 (Pass) | 1540 | 41 | Fail | 1.03% (1.58%) | Pass
B | 12 (Fail), 9 (Pass) | 909 | 33 | Fail | 0.99% (2.11%) | Fail
C | 12 (Fail), 13 (Pass), 9 (Pass) | 1540 | 43 | Fail | 0.96% (1.48%) | Pass
D | 11 (Fail), 5 (Pass) | 909 | 31 | Fail | 1.54% (2.70%) | Fail
E | 11 (Fail), 4 (Pass), 5 (Pass) | 1532 | 36 | Fail | 0.36% (0.61%) | Pass
F | 11 (Fail), 4 (Pass), 5 (Pass), 6 (Pass) | 1556 | 34 | Fail | 0.20% (0.36%) | Pass
G | 10 (Pass), 14 (Pass) | 1392 | 31 | Pass | 0.48% (0.83%) | Pass
H | 11 (Fail), 4 (Pass), 6 (Pass) | 1556 | 34 | Fail | 0.20% (0.36%) | Pass
I | 11 (Fail), 12 (Fail) | 1356 | 49 | Fail | 1.79% (2.80%) | Fail

Positive Immunochromatographic card tests (ICTs), Critical Cutoff, Decision, and # schools tested all refer to individual characteristics of the component Evaluation Units (EUs) that make up the combination EUs (combo-EUs). Target sample size is the number of children that should be selected via bootstrapping to achieve the desired power and alpha levels. The number of schools to be sampled is the expected number of schools (i.e., clusters) that need to be selected from the combo-EU in order to achieve the desired sample size, sampled proportionately to the total number of schools in the component EUs. The programmatic decision is to fail if at least one of the individual EUs is said to fail; if all individual EUs comprising the combo-EU pass, the desired conclusion is to pass. The expected true prevalence is the weighted average of prevalence in the EUs comprising the combo-EU. The expected Transmission Assessment Survey decision is to fail the combo-EU if the upper one-sided 95% confidence interval of the expected true prevalence is greater than or equal to 2%, and to pass otherwise.

As seen in Table 2, the expected TAS decision, based on the expected prevalence of positive ICT results from the weighted average of the component EUs, differed from the programmatic decision for five of the nine combo-EUs. That is, although the programmatic decision for the combo-EU was to fail if at least one of its component EUs had failed the TAS, in five of the combo-EUs that had at least one failing component EU, the upper one-sided 95% CI around the expected prevalence was less than 2%, indicating a passing result. Thus, for these combo-EUs, there was a discordance between the desired and expected decisions.

Combo-EU Bootstrapping

The results from the bootstrapping to obtain the distribution of likely TAS results for each combo-EU are shown in Table 3. When the combo-EUs were comprised of EUs with the same observed TAS decision (that is, with all component EUs failing, or all passing), the bootstrapping simulations produced the same decision in the majority of the replicates. In the case of combo-EU G, comprised of component EUs #10 and #14, both of which passed the TAS, 981 out of 1000 replicates also passed the TAS (1.9% failed).
Table 3

Results of bootstrap simulations of Transmission Assessment Surveys in combination Evaluation Units.

Combo-EU | Programmatic decision | Median bootstrap prevalence (upper 1-sided 95% CI) | Bootstrap expected conclusion | Component EUs (# of schools selected from each) | % of replicates failing TAS (out of 1,000)
-------- | --------------------- | -------------------------------------------------- | ----------------------------- | ----------------------------------------------- | ------------------------------------------
A | Fail | 0.97% (1.42%) | Pass | #12 (8), #13 (33) | 18.2%
B | Fail | 1.12% (2.08%) | Fail | #12 (22), #9 (12) | 61.6%
C | Fail | 0.96% (1.46%) | Pass | #12 (8), #13 (31), #9 (5) | 21.1%
D | Fail | 2.01% (3.41%) | Fail | #11 (22), #5 (9) | 93.2%
E | Fail | 0.08% (0.28%) | Pass | #11 (9), #4 (25), #5 (4) | 0.2%
F | Fail | 0.11% (0.28%) | Pass | #11 (3), #5 (2), #4 (8), #6 (22) | 0.0%
G | Pass | 0.58% (0.95%) | Pass | #10 (10), #14 (21) | 1.9%
H | Fail | 0.11% (0.27%) | Pass | #11 (3), #4 (9), #6 (23) | 0.0%
I | Fail | 1.78% (2.52%) | Fail | #11 (23), #12 (26) | 89.3%

Replicates are obtained by proportional sampling. The programmatic decision is to fail if at least one of the individual Evaluation Units (EUs) fails; if all individual EUs comprising the EU combination pass, the desired conclusion is to pass. The median bootstrap prevalence is the median prevalence of positive Immunochromatographic card test results across the 1,000 bootstrap replicates. The bootstrap expected conclusion is to fail the EU combination if the upper bound of the one-sided 95% confidence interval is greater than or equal to 2%, and to pass otherwise. The number of schools selected from each individual EU is proportional to that EU's share of the total number of schools in the EU combination. Additional schools were sampled if the desired sample size was not achieved.
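The proportional school allocation described in the footnote can be sketched as below. The school counts are hypothetical, and the rounding behavior is an assumption; the footnote only states that allocation is proportional, with additional schools sampled if the target sample size is not reached.

```python
# Hypothetical total school counts per component EU in a two-EU combination.
total_schools = {"EU_x": 45, "EU_y": 52}
n_to_sample = 49  # schools needed to reach the target sample size

grand_total = sum(total_schools.values())

# Each EU contributes schools in proportion to its share of all schools
# in the combination (rounding rule assumed; a top-up step would add
# schools if the rounded counts fall short of n_to_sample).
allocation = {eu: round(n_to_sample * k / grand_total)
              for eu, k in total_schools.items()}
```

With these hypothetical counts, the larger EU contributes proportionally more clusters, which is why a small failing EU can be under-represented in a combined sample.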

For combo-EU I, comprised of the two failing EUs (#11 & #12), the vast majority of bootstrap replicates (89.3%) also failed the TAS. For the seven combo-EUs comprised of component EUs with discordant TAS decisions, the programmatic decision is for the combo-EU to fail the TAS. However, as seen in Table 3, the rate at which these combo-EUs failed the TAS was highly variable. Combo-EU D, comprised of EUs #11 and #5, and combo-EU B, comprised of EUs #12 and #9, had the highest percentages of failing replicates, with 93.2% and 61.6% of replicates failing the TAS, respectively. For the remaining five combo-EUs with discordant TAS results, the rate of TAS failure ranged from 0% in the case of combo-EUs F and H to 21.1% for combo-EU C. The results of the mini-TAS simulations are presented in Table 4. The vast majority of the mini-TAS bootstrap replicates passed. In seven of the thirteen EUs, all of the bootstrap replicates would pass the mini-TAS, which is intuitive because the total number of positive ICTs in the full TAS sample was at or below the mini-TAS critical cutoff.
In two other EUs where the observed TAS decision was to pass, the mini-TAS would have resulted in a failing decision a small portion of the time (1.1% for EU #3 and 0.9% for EU #6). For the EU with the borderline passing TAS decision, EU #13, the mini-TAS would have failed 30% of the time. The two EUs that failed the TAS also failed in 100% of the mini-TAS replicates. EU #14, on the other hand, failed 100% of the mini-TAS replicates despite having passed the TAS.
Table 4

Results of mini-Transmission Assessment Survey (mini-TAS) simulations.

Evaluation Unit # | Observed TAS decision | Mini-TAS type | Mini-TAS sample size | Mini-TAS critical cutoff | % of replicates failing mini-TAS
----------------- | --------------------- | ------------- | -------------------- | ------------------------ | --------------------------------
2 | Pass | Cluster | 480 | 3 | 0.0%
3 | Pass | Cluster | 480 | 3 | 1.1%
4 | Pass | Cluster | 480 | 3 | 0.0%
5 | Pass | Systematic | 220 | 1 | 0.0%
6 | Pass | Cluster | 480 | 3 | 0.9%
7 | Pass | Systematic | 300 | 2 | 0.0%
8 | Pass | Cluster | 480 | 3 | 0.0%
9 | Pass | Systematic | 220 | 1 | 0.0%
10 | Pass | Systematic | 300 | 2 | 0.0%
11 | Fail | Cluster | 450 | 3 | 100.0%
12 | Fail | Cluster | 450 | 3 | 100.0%
13 | Pass | Cluster | 480 | 3 | 30.4%
14 | Pass | Cluster | 480 | 3 | 100.0%

Mini-TAS mimics the TAS procedure, with power reduced to 40%, effectively reducing sample size. One thousand replicates are obtained through bootstrapping; replicates were declared to “pass” the mini-TAS if the number of positive Immunochromatographic card Test results in the replicate was less than or equal to the critical cutoff; otherwise, the replicate was considered to have failed the mini-TAS.

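The mini-TAS pass/fail rule from the footnote can be sketched as follows. The per-child resampling and the example prevalence are illustrative assumptions, not the authors' exact bootstrap procedure (which resampled the observed TAS data); the sample size of 480 and cutoff of 3 match the cluster-design rows of Table 4.

```python
import random

random.seed(7)

def mini_tas(children, sample_size=480, cutoff=3):
    """Pass a replicate if the number of positive ICTs in the resample
    is at or below the critical cutoff; otherwise fail."""
    sample = random.choices(children, k=sample_size)
    return "pass" if sum(sample) <= cutoff else "fail"

# Hypothetical EU: 4,000 children, 20 of them ICT-positive (0.5% prevalence).
children = [1] * 20 + [0] * 3980

results = [mini_tas(children) for _ in range(1000)]
pct_fail = 100 * results.count("fail") / len(results)
```

Because the cutoff is low, even a modest underlying prevalence produces a non-trivial share of failing replicates, which is the conservative behavior described in the Discussion.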

Discussion

The TAS is a statistically robust decision-making tool that has been successfully implemented by program managers in many countries and used to guide important stop-MDA decisions for LF. While WHO provides strong guidance on how to conduct and interpret TASs, the best practices for forming survey evaluation units are vague, particularly with respect to recommended EU size. In this study, programmatic data from Haiti’s LF elimination program were used to simulate various EU formations and the resulting programmatic decisions about stopping MDA. The results suggest that there is a high potential for misclassifying areas where MDA should not be stopped when such implementation areas are combined with low-prevalence areas into a single EU. In fact, of the eight EU combinations for which the desired program conclusion was to fail the TAS, five would be expected to pass at least 79% of the time. For all combo-EU replicates, the bootstrap expected decision conformed to the decision based on the expected true prevalence, i.e., the weighted average of the prevalences of the component EUs. Unfortunately, in the majority of the combinations this decision differed from the programmatic decision, which would be to fail the combo-EU if any of its component EUs should fail the TAS. It should be noted that the two EUs that failed the TAS (EU #11 and EU #12) had below-average target populations; when they were combined with larger EUs that passed the TAS, the probability of sampling a high enough number of the positive ICTs from EU #11 and EU #12 was low. Only when the two failing EUs were combined with even smaller, low-prevalence EUs were the combination-EUs more likely to fail (e.g., combination-EU D). This high rate of disagreement between the bootstrap results and the desired programmatic decision is concerning. The TAS is used to assess whether MDA for LF can be stopped.
Falsely passing a combination EU in which one or more of the component EUs should have failed could have significant public health consequences and jeopardize elimination efforts. With MDA prematurely stopped, transmission would continue unabated for at least two years before a second TAS could be carried out and the program would have a chance to recognize the error. Once the error was identified, restarting MDA in an EU previously declared free of transmission would require significant human and financial resources and would incur a cost in political capital. This study suggests that prematurely stopping MDA might be the more likely form of misclassification when IUs are combined, a concerning conclusion. The financial and logistical challenges of conducting a TAS are significant, and the desire to combine IUs into a larger single EU to reduce that burden is understandable; however, it can be difficult to know which IUs are appropriate to combine. Although it might seem obvious that combining two IUs with discordant results (i.e., one pass and one fail) would lead to an incorrect decision for one of the component IUs, it is important to keep in mind that programs do not have this information in advance when determining whether to combine IUs. In its TAS manual, WHO advises that IUs can be combined if they have had at least five rounds of MDA and share “similar epidemiological features” [3]. The manual suggests that the epidemiological features of interest can include rates of MDA coverage and prevalence in sentinel and spot-check sites. Currently, the manual recommends at least one sentinel site per one million population, with at least one corresponding spot-check site [3]. As seen in S1 Fig, an ArcGIS-generated map of the distribution of positive ICT results from the TAS in the northern EUs, positive cases appear to cluster.
Because of the focality of LF, particularly towards the end stages of the program, as the size of the EU increases, so does the likelihood that the cluster sampling used in the TAS will miss a hotspot of ongoing transmission [10]. Although limiting the size of the EU is the best way to reduce the risk of undetected hotspots, an alternative strategy might be to increase the number of pre-TAS sentinel and spot-check sites prior to the selection of EUs. If the pre-TAS data suggest that some low level of infection remains (e.g., CFA between 1% and 2%), it might be prudent to restrict the corresponding IU to a single EU. One method for addressing the tradeoff between the improved decision-making power that comes with smaller EUs and the added costs and resources that more EUs represent is to use the mini-TAS in place of the TAS. Because the mini-TAS sample size is much smaller, a single team can typically complete sampling in two clusters (i.e., schools) per day, which may result in two- to three-fold savings in survey implementation costs [13]. While a cost-effectiveness analysis of switching to the mini-TAS approach is outside the scope of this study, published experiences with both tools in Tanzania suggest that the mini-TAS costs $9,598 per EU [13] while the cost of a TAS is $29,721 [11]. Based on our analysis, using a mini-TAS would tend to provide more conservative results that favor continuing MDA compared with the TAS (a consequence of reducing the power from 75% to 40%). In the nine high-performing EUs with zero or very few ICT positives during the TAS, the simulations suggest that the mini-TAS would be likely to agree with the TAS, and the EU would be classified as ‘passing’ >98% of the time (100% of the time for those with no positives, as expected, as well as for three of the EUs with a low number of positives). In the EUs that failed the TAS, it is reassuring to observe that they would likely fail the mini-TAS 100% of the time.
In EUs where the TAS results were borderline (EUs #13 & #14), the mini-TAS was more likely to fail the EU than the TAS, failing 30% of the time for EU #13 and 100% of the time for EU #14. This might have occurred because, out of 33 schools in this EU, eight had at least one positive ICT. With the low critical cutoff in the mini-TAS (three positive ICTs), it is likely that cluster sampling would have picked up enough of these positive results to trigger a failing decision. While some NTD practitioners might find this increase in failures concerning, others might argue that it is the more conservative decision, particularly in light of recent evidence that the TAS might not be sufficiently sensitive for detecting ongoing transmission in all settings [9,17]. Although our study focused on the issue of combining IUs to form EUs, in some countries dense populations and district structure might result in IUs that approach, or even exceed, two million population. In this case, the question is not about combining IUs but whether it makes sense to split IUs into smaller EUs when conducting the TAS. Here again there is an important trade-off between accurate decision-making and cost. Subdividing large IUs to form smaller EUs offers two advantages: 1) MDA can be stopped in the portions of the IU where treatment was successful, and 2) reducing the area over which disease prevalence is averaged decreases the risk that “transmission hotspots” go undetected [18]. Here too, leveraging the mini-TAS to make stop-MDA decisions in these smaller EUs might provide a strategy to maintain the robust design and decision-making power of the TAS while reducing the overall cost and material requirements to the program. It is important to note that the simulation approach taken here of directly combining data from two or more EUs may not be an appropriate way to estimate real-life TAS results.
In particular, it was difficult to identify the most appropriate way to combine observed TAS data from EUs that used discordant sampling methods (systematic vs. cluster sampling). As with any bootstrap sampling approach, this analysis was limited to the samples that had been obtained during the TAS. Where prevalence is heterogeneous, cluster-based surveys (such as the TAS) may miss small foci of infection by chance, and these foci would not be reflected in the subsequent bootstrap simulations. The results from these simulations suggest that epidemiological characteristics, rather than total population or geographic size, should be given the greatest consideration when forming EUs. Furthermore, these results suggest that the strategy adopted by the Haitian program, limiting EUs to a single IU (i.e., commune) in areas where baseline transmission intensity was high, was a wise and conservative approach that likely averted misclassification of EUs. This strategy makes sense, as areas with historically high transmission intensity are likely to be more vulnerable to recrudescence or to harboring pockets of focal transmission. Cluster sample surveys, such as the TAS, are limited in their ability to detect focal transmission. Restricting the total size of the areas at greatest risk increases the chance of detecting focal transmission and making the correct treatment decision. Ultimately, the decision of EU size rests on the availability of good information and financial resources. Where baseline information is available, we recommend that it factor into the decision to combine IUs in low-transmission settings, or to keep them separate in areas with historically high transmission. Providing a precise threshold to determine whether combining or splitting IUs is indicated is unrealistic, given the sparsity of most baseline data and the relevance of other epidemiologic factors.
Programs must also weigh the cost benefits of conducting fewer TAS evaluations against the increased risk of EU misclassification. The mini-TAS represents a potential compromise, as it provides a strategy to maintain the robust design and decision-making power of the TAS while reducing the overall cost and resource requirements. Ultimately, program managers should continue to make thoughtful decisions when forming EUs to improve the likelihood that appropriate stop-MDA decisions are made and to enable programs to reach their elimination goals as efficiently as possible.

Spatial distribution of positive Immunochromatographic card test results in Haiti Transmission Assessment Survey data. (TIF)

The administrative division shapefile that served as a base map is available at https://data.humdata.org/dataset/777e8b06-337f-4295-80bc-ca1515244215/resource/9b57a285-e12f-4d1a-b167-676d96a2b4af/download/hti_adm_cnigs_20181129.zip; the shapefile with Evaluation Unit number as an attribute is available for download at https://doi.org/10.15139/S3/JUUSHC.

Decision rules and sample size for mini-Transmission Assessment Surveys.

Table adapted from [13]. (PDF)

Comparison of observed 2015 Haiti Transmission Assessment Survey results in 13 Evaluation Units and simulated results using bootstrapping.

(PDF)

Distribution of positive Immunochromatographic card test results within Evaluation Units.

(PDF) Click here for additional data file. 1 Oct 2021 Dear Dr Gass, Thank you very much for submitting your manuscript "Simulating the effect of evaluation unit size on eligibility to stop mass drug administration for lymphatic filariasis in Haiti" for consideration at PLOS Neglected Tropical Diseases. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations. With the revised submission, please include the original data in a form that allows the results to be checked and replicated. The reference to an aggregate summary of the data provided in reference 12 is not sufficient. If there are concerns to make the original microdata available to readers, if possible, please aggregate the microdata to a level that allows the bootstrap analysis to be reproduced, e.g., in the form of the number of types of test result per school (positives, negatives, indeterminate). This would allow the authors to at least drop the age and sex-related data, which based on the provided information, did not seem to play a role in the bootstrap procedure. However, if age and sex did play a role in the bootstrap procedure, the microdata should be made available without aggregation. I further encourage the authors to reconsider dropping EU #1 from the bootstrap simulations ("because an error in the original dataset made it impossible to recreate the school-level results"). Although the nature of the error is not fully clear, it sounds like a technical problem. Given that bootstrapping is a well-established technique, it should be possible to resolve a technical issue and/or find help to resolve it. 
Or, if the authors can provide more details about the problem in their response, perhaps the reviewers can provide a suggestion. Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. When you are ready to resubmit, please upload the following: [1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out [2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file). Important additional instructions are given below your reviewer comments. Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments. Sincerely, Luc E. Coffeng, MD PhD Guest Editor PLOS Neglected Tropical Diseases Jennifer Keiser Deputy Editor PLOS Neglected Tropical Diseases *********************** With the revised submission, please include the original data in a form that allows the results to be checked and replicated. The reference to an aggregate summary of the data provided in reference 12 is not sufficient. If there are concerns to make the original microdata available to readers, if possible, please aggregate the microdata to a level that allows the bootstrap analysis to be reproduced, e.g., in the form of the number of types of test result per school (positives, negatives, indeterminate). 
This would allow the authors to at least drop the age and sex-related data, which based on the provided information, did not seem to play a role in the bootstrap procedure. However, if age and sex did play a role in the bootstrap procedure, the microdata should be made available without aggregation. I further encourage the authors to reconsider dropping EU #1 from the bootstrap simulations ("because an error in the original dataset made it impossible to recreate the school-level results"). Although the nature of the error is not fully clear, it sounds like a technical problem. Given that bootstrapping is a well-established technique, it should be possible to resolve a technical issue and/or find help to resolve it. Or, if the authors can provide more details about the problem in their response, perhaps the reviewers can provide a suggestion. Reviewer's Responses to Questions Key Review Criteria Required for Acceptance? As you describe the new analyses required for acceptance, please consider the following: Methods -Are the objectives of the study clearly articulated with a clear testable hypothesis stated? -Is the study design appropriate to address the stated objectives? -Is the population clearly described and appropriate for the hypothesis being tested? -Is the sample size sufficient to ensure adequate power to address the hypothesis being tested? -Were correct statistical analysis used to support conclusions? -Are there concerns about ethical or regulatory requirements being met? Reviewer #1: Methods were clearly defined and appropriate. Reviewer #2: (No Response) Reviewer #3: The objectives of the study are clearly defined. The authors test whether reasonable combinations of evaluation units would yield different programmatic decisions. The data and methods are appropriate for the questions, and the decisions to exclude some of the observed data from the deeper analysis are reasonable. 
One recommendation would be to add clarity on how additional clusters or children from clusters were drawn for the bootstrapping analysis. It seems odd that an analysis using totally sufficient samples could result in too small of a sample using essentially the same design. -------------------- Results -Does the analysis presented match the analysis plan? -Are the results clearly and completely presented? -Are the figures (Tables, Images) of sufficient quality for clarity? Reviewer #1: General comment re: presentation of results: I know “Desired programmatic decision” is defined, however it seems odd to state that any desired programmatic decision is to “Fail” – and to see that written in Table 3 just seems odd. If other language could be used to describe the same phenomena, I recommend using different language here (and in the text) so the layperson doesn’t actually ever think a program would desire to fail. Perhaps just drop the word “Desired”? Or replace it with "Purported"? Reviewer #2: (No Response) Reviewer #3: The tables and figures are clear and informative. The results are well described. -------------------- Conclusions -Are the conclusions supported by the data presented? -Are the limitations of analysis clearly described? -Do the authors discuss how these data can be helpful to advance our understanding of the topic under study? -Is public health relevance addressed? Reviewer #1: Conclusions and considerations for programmatic decisions are clearly stated. Reviewer #2: (No Response) Reviewer #3: The conclusions suit the results and the scope of the analysis. The authors clearly describe the limitations of their work and the underlying data. The relevance and applicability are well treated. -------------------- Editorial and Data Presentation Modifications? Use this section for editorial suggestions as well as relatively minor modifications of existing data that would enhance clarity. 
If the only modifications needed are minor and/or editorial, you may wish to recommend “Minor Revision” or “Accept”. Reviewer #1: Minor suggested revisions below: Line 15, replace Hispaniola Program with Hispaniola Initiative Line 91: Recommend moving sentence “The only guidance…” before the sentence that starts on line 88 “An EU should be comprised…” Line 149: First use of STH – but hasn’t been spelled out previously. Line 276: “Error” Reference source not found.” Is printed in lieueof reference. Line 281: Missing word “of” between “number students” Line 283: Recommend formatting Table 1 to emulate Tables 2 - 4 (e.g., same font size) Line 300: Recommend adding the words “passed the TAS but” btwn “EU #13” and “came close to” Line 304: Capitalize the “F” in “figure 1” Line 306. Transmission Assessment Surveys and Evaluation Units could be capitalized – see Table 3 – or review all figure and table titles throughout and be consistent. Line 337: Add “the” btwn “in majority” Line 343: Everywhere else, “card test” is lowercase except here. Choose one and be consistent throughout. Line 369: Everywhere else, “card test” is lowercase except here. Choose one and be consistent throughout. Line 373: Consider replacing “stopping-treatment decisions” with “stopping-MDA decisions” or “stop-MDA decisions” as is used elsewhere in manuscript. Line 374: Replace TAS surveys with “TASs” – which has been used elsewhere in manuscript. Line 466: “stop MDA decisions was earlier hyphenated as “stop-MDA decisions” – review manuscript in entirety and be consistent. Line 533: S1 Figure, spell out acronyms. Reviewer #2: (No Response) Reviewer #3: (1) The abstract does not mention the mini-TAS analysis. If there is room, please briefly note that you explored alternative designs as a compromise. 
(2) While the use of the upper 95% CI to determine the TAS result based on bootstrapping is functionally equivalent to the LQAS, most readers will be more familiar with decision rules and cutoffs when it comes to TAS data. It may be helpful to reiterate this in the results, that the simulations produced a frequency of positive ICTs above the cutoff. (3) The authors note in line 256 that the mini-TAS uses "population proportionate to estimated size sampling, which results in a fixed sample size across all schools." Is this not the other way around, that the PPES sampling uses a fixed fraction rather than a fixed number? (4) There is a reference error in line 276. (5) Table 1: perhaps include a column noting which EUs will be excluded from further analysis due to low number of positive children. (6) Around line 414-418, the authors note that "Because the mini-TAS sample size is much smaller, a single team can 417 typically complete sampling in two clusters (i.e., schools) per day, which may result in two- or three-fold 418 savings in survey implementation costs." With school-based sampling and good mobilization, two clusters per day is plausible in a regular TAS. What are the estimated savings in labor and supplies that come from the smaller number of clusters and overall smaller sample size of the mini-TAS when compared to a regular TAS, particularly one done on a tighter schedule? (7) Lines 444-446: did you notice a difference in the results when drawing from discordant sample designs (cluster v. systematic)? Perhaps in the variance? Intuitively, it seems like the very different population structures would affect the results in some way. (8) Lines 450-451: Would reducing the upper population threshold from 2 million to say 1 million or 500,000 make a difference? Please elaborate more on why epidemiological conditions (presumably, high baselines leading to persistent transmission after years of MDA) matter more than the other factors. 
Moreover, programs often only have baselines and perhaps an interim survey or two before Pre-TAS. Beyond increasing the frequency of interim monitoring or the number of sentinel and spot check sites, how should programs classify their baselines? I.e., does 5% go with 10%, does 10% go with 15%, etc. This may be an area of further work and another paper, but a rough +/- x% may be helpful. -------------------- Summary and General Comments Use this section to provide overall comments, discuss strengths/weaknesses of the study, novelty, significance, general execution and scholarship. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. If requesting major revision, please articulate the new experiments that are needed. Reviewer #1: Very well-written with clear explanation of relevance to programmatic decision-making. Reviewer #2: I have read the paper “Simulating the effect of evaluation unit size on eligibility to stop mass drug administration for lymphatic filariasis in Haiti” with great interest. It is a great piece of work with clear policy relevance for the global programme to eliminate lymphatic filariasis. I have some suggestions and minor comments SUGGESTIONS: 1. I miss an assessment of the reproducibility of TAS results with current EU-size and guidelines. Could you do 1000 bootstraps of the data within each component EU to assess how often that EU would be classified as passing or failing TAS? 2. Table 1: could you add information about the number of positives per school (e.g. no positives in x schools, 1 positive in y schools, etc), so that readers have complete information to reproduce the analyses? 3. Table 1: it is interesting to see how the number of children tested varies with population size. In the smallest EUs about half of the target population is being tested. This declines to about 5% in the largest EUs. 
Similarly, nearly all schools are surveyed in small EUs, while only 10% of schools is surveyed in the largest EUs. Explaining the rationale behind the survey sample builder is perhaps beyond the scope of this paper, but it might be useful for readers to be reminded about this and to understand under which circumstances this would be appropriate. MINOR COMMENTS 4. Line 123: I’d suggest to delete the word “safely”, as the appropriateness of the decision often remains to be seen 5. Lines 123-133: this text is written as if there was not any statement about homogeneity in WHO’s guidance for creating EUs. But there was a statement about this in the guidelines. Perhaps one or two sentences about why this has not always been effectively applied would be helpful in the stage. Was there a lack of guidance on how to define whether an area is sufficiently homogenous to be considered one EU? 6. Line 155-157: was the choice of EU-sizes in Haiti driven by information on homogeneity? 7. Line 171-173: state explicitly that the homogeneity criterion was not considered in forming combo-EUs 8. Line 291-292: EU numbers don’t match to the numbers in table 1. 9. Figure 1: the red is difficult to see (many points appear black to me, possibly because of the black border) 10. Line 353: did you intend to also mention the results for combo-EU B in this sentence (38.4%)? Replace “failing” by “passing” 11. Lines 354-356: it is a bit confusing that passing rates are provided in table 3 and that failing rates are discussed in the text. I suggest to harmonize this. In the discussion of the results presented in table 3, it may be useful to point out that target population was relatively small in the two component EUs that failed TAS in the observed data. When these EUs are combined with considerably larger EUs with low baseline prevalence, these component EUs make up a minor part of the total population. Only when combined with even smaller EUs, we still get a signal. 12. 
Lines 397-399: I don’t understand the first part of this sentence. How does combining IUs into larger EUs prevent “misclassifying well-performing EUs as failing”? Even with small EUs they should pass the criteria. Reviewer #3: This paper is relevant to the NTD community. The methods are appropriate and straightforward. It is well written and clearly presented. -------------------- PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No Reviewer #3: No Figure Files: While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Data Requirements: Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5. 
Reproducibility: To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

26 Nov 2021 - Submitted filename: Response to reviewers_PLOS NTDS.docx

21 Dec 2021

Dear Dr Gass,

Thank you very much for submitting your manuscript "Simulating the effect of evaluation unit size on eligibility to stop mass drug administration for lymphatic filariasis in Haiti" for consideration at PLOS Neglected Tropical Diseases. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, provided that you modify the manuscript according to the review recommendations. The reviewers and editor appreciate the thorough reply and revisions.
To be able to accept the manuscript for publication, I ask you to please check and address the last two points about line 198 and Table 3.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments. Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Luc E. Coffeng, MD PhD
Guest Editor
PLOS Neglected Tropical Diseases

Jennifer Keiser
Deputy Editor
PLOS Neglected Tropical Diseases

***********************

The reviewers and editor appreciate the thorough reply and revisions. To be able to accept the manuscript for publication, I ask you to please check and address the last two points about line 198 and Table 3.

Reviewer's Responses to Questions

Summary and General Comments: Use this section to provide overall comments, discuss strengths/weaknesses of the study, novelty, significance, general execution and scholarship.
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. If requesting major revision, please articulate the new experiments that are needed.

Reviewer #2: The authors have adequately addressed my comments on the previous version and I have no further comments.

Reviewer #3: Great work, and thank you for your clear and thorough response.

--------------------

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

--------------------

Key Review Criteria Required for Acceptance? As you describe the new analyses required for acceptance, please consider the following:

Methods
- Are the objectives of the study clearly articulated with a clear testable hypothesis stated?
- Is the study design appropriate to address the stated objectives?
- Is the population clearly described and appropriate for the hypothesis being tested?
- Is the sample size sufficient to ensure adequate power to address the hypothesis being tested?
- Were correct statistical analyses used to support conclusions?
- Are there concerns about ethical or regulatory requirements being met?

Reviewer #1: Looks great; the addition of the mini-TAS methods is appreciated.

Reviewer #3: Thank you for addressing our comments; this made the paper richer and improved my understanding of the work.

--------------------

Results
- Does the analysis presented match the analysis plan?
- Are the results clearly and completely presented?
- Are the figures (tables, images) of sufficient quality for clarity?
Reviewer #1: MUST UPDATE: The Table 3 numbers do not match the text. Table 3 was updated to present % fail, yet line 368 incorrectly reports "6.8% and 38.4% of replicates failing TAS, respectively" - these percentages, if you do the math, are for passing TAS. I recommend reviewing this complete sentence, and the Table 3 numbers, to decide what you want to present here: either the percentages from Table 3 should be input into this sentence, or the word "failing" should be replaced with "passing" (though the complete sentence implies that failure proportions are meant to be presented). Note also that Table 3 was updated to % fail while Table 4 still presents % pass, in the event the authors wish to be consistent.

Reviewer #3: The revisions addressed the comments well and improved the paper.

--------------------

Conclusions
- Are the conclusions supported by the data presented?
- Are the limitations of the analysis clearly described?
- Do the authors discuss how these data can be helpful to advance our understanding of the topic under study?
- Is public health relevance addressed?

Reviewer #3: Reviewers' concerns appear to have been well addressed.

--------------------

Editorial and Data Presentation Modifications? Use this section for editorial suggestions as well as relatively minor modifications of existing data that would enhance clarity. If the only modifications needed are minor and/or editorial, you may wish to recommend "Minor Revision" or "Accept".

Reviewer #3: Double-check the wording on line 198.

31 Dec 2021 - Submitted filename: Response to reviewers_12.31.2021.docx
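Reviewer #1's point about % pass versus % fail, and the earlier comments on merging passing and failing EUs, both turn on how pooling component EUs dilutes a focal signal. The sketch below illustrates that masking effect with a bootstrap over pooled individual test results; the sample sizes, prevalences, and the simple 2% positive-fraction pass rule are all illustrative assumptions, not the authors' actual simulation code or the TAS critical-cutoff procedure.

```python
import random

def simulate_combination_eu(eu_a, eu_b, max_positive_fraction=0.02,
                            n_boot=1000, seed=1):
    """Estimate how often a merged EU would 'pass' under a simplified
    decision rule: pass if a bootstrap resample's antigen-positive
    fraction is below max_positive_fraction. (The real TAS applies a
    critical cutoff from WHO's survey sample builder, not this rule.)

    eu_a, eu_b: lists of individual test results (1 = antigen positive,
    0 = negative) from the two component EUs being combined.
    """
    rng = random.Random(seed)
    pooled = eu_a + eu_b          # merging erases the EU boundary
    n = len(pooled)
    passes = 0
    for _ in range(n_boot):
        resample = [rng.choice(pooled) for _ in range(n)]
        if sum(resample) / n < max_positive_fraction:
            passes += 1
    return passes / n_boot

# Hypothetical components: a large low-prevalence EU that would pass on
# its own, merged with a smaller focal EU whose ~5% prevalence would fail.
eu_low = [1] * 3 + [0] * 1597    # ~0.2% positive, n = 1600
eu_high = [1] * 20 + [0] * 380   # ~5% positive,  n = 400
print(simulate_combination_eu(eu_low, eu_high))
```

In this configuration the pooled prevalence sits near 1.2%, well under the illustrative 2% rule, so the combined EU passes in nearly all replicates even though one component would fail on its own; this is the misclassification risk the paper quantifies.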
6 Jan 2022

Dear Dr Gass,

We are pleased to inform you that your manuscript "Simulating the effect of evaluation unit size on eligibility to stop mass drug administration for lymphatic filariasis in Haiti" has been provisionally accepted for publication in PLOS Neglected Tropical Diseases.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be coordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Neglected Tropical Diseases.

Best regards,

Luc E. Coffeng, MD PhD
Guest Editor
PLOS Neglected Tropical Diseases

Jennifer Keiser
Deputy Editor
PLOS Neglected Tropical Diseases

***********************************************************

24 Jan 2022

Dear Dr Gass,

We are delighted to inform you that your manuscript, "Simulating the effect of evaluation unit size on eligibility to stop mass drug administration for lymphatic filariasis in Haiti," has been formally accepted for publication in PLOS Neglected Tropical Diseases.
We have now passed your article on to the PLOS Production Department, which will complete the rest of the publication process. All authors will receive a confirmation email upon publication.

The corresponding author will soon receive a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any scientific or typesetting errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript. Note: proofs for Front Matter articles (Editorial, Viewpoint, Symposium, Review, etc.) are generated on a different schedule and may not be made available as quickly.

Soon after your final files are uploaded, the early version of your manuscript will be published online unless you opted out of this process. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting open-access publishing; we are looking forward to publishing your work in PLOS Neglected Tropical Diseases.

Best regards,

Shaden Kamhawi
co-Editor-in-Chief
PLOS Neglected Tropical Diseases

Paul Brindley
co-Editor-in-Chief
PLOS Neglected Tropical Diseases
Similar Articles (13 in total)

1.  Costs of integrated mass drug administration for neglected tropical diseases in Haiti.

Authors:  Ann S Goldman; Molly A Brady; Abdel Direny; Luccene Desir; Roland Oscard; Jean-Francois Vely; Mary Linehan; Margaret Baker
Journal:  Am J Trop Med Hyg       Date:  2011-11       Impact factor: 2.345

2.  Lymphatic filariasis in children: clinical features, infection burdens and future prospects for elimination. (Review)

Authors:  Ranganatha Krishna Shenoy; Moses J Bockarie
Journal:  Parasitology       Date:  2011-08-03       Impact factor: 3.234

3.  A comprehensive assessment of lymphatic filariasis in Sri Lanka six years after cessation of mass drug administration.

Authors:  Ramakrishna U Rao; Kumara C Nagodavithana; Sandhya D Samarasekera; Asha D Wijegunawardana; Welmillage D Y Premakumara; Samudrika N Perera; Sunil Settinayake; J Phillip Miller; Gary J Weil
Journal:  PLoS Negl Trop Dis       Date:  2014-11-13

4.  Costs of Transmission Assessment Surveys to Provide Evidence for the Elimination of Lymphatic Filariasis.

Authors:  Molly A Brady; Rachel Stelmach; Margaret Davide-Smith; Jim Johnson; Bolivar Pou; Joseph Koroma; Kingsley Frimpong; Angela Weaver
Journal:  PLoS Negl Trop Dis       Date:  2017-02-01

5.  Epidemiological assessment of eight rounds of mass drug administration for lymphatic filariasis in India: implications for monitoring and evaluation.

Authors:  Subramanian Swaminathan; Vanamail Perumal; Srividya Adinarayanan; Krishnamoorthy Kaliannagounder; Ravi Rengachari; Jambulingam Purushothaman
Journal:  PLoS Negl Trop Dis       Date:  2012-11-29

6.  The global programme to eliminate lymphatic filariasis: health impact after 8 years.

Authors:  Eric A Ottesen; Pamela J Hooper; Mark Bradley; Gautam Biswas
Journal:  PLoS Negl Trop Dis       Date:  2008-10-08

7.  Haiti National Program for the elimination of lymphatic filariasis--a model of success in the face of adversity.

Authors:  Roland Oscar; Jean Frantz Lemoine; Abdel Nasser Direny; Luccene Desir; Valery E Madsen Beau de Rochars; Mathieu J P Poirier; Ann Varghese; Ijeoma Obidegwu; Patrick J Lammie; Thomas G Streit; Marie Denise Milord
Journal:  PLoS Negl Trop Dis       Date:  2014-07-17

8.  The rationale and cost-effectiveness of a confirmatory mapping tool for lymphatic filariasis: Examples from Ethiopia and Tanzania.

Authors:  Katherine M Gass; Heven Sime; Upendo J Mwingira; Andreas Nshala; Maria Chikawe; Sonia Pelletreau; Kira A Barbre; Michael S Deming; Maria P Rebollo
Journal:  PLoS Negl Trop Dis       Date:  2017-10-04

9.  Safety and efficacy of co-administered diethylcarbamazine, albendazole and ivermectin during mass drug administration for lymphatic filariasis in Haiti: Results from a two-armed, open-label, cluster-randomized, community study.

Authors:  Christine L Dubray; Anita D Sircar; Valery Madsen Beau de Rochars; Joshua Bogus; Abdel N Direny; Jean Romuald Ernest; Carl R Fayette; Charles W Goss; Marisa Hast; Kobie O'Brian; Guy Emmanuel Pavilus; Daniel Frantz Sabin; Ryan E Wiegand; Gary J Weil; Jean Frantz Lemoine
Journal:  PLoS Negl Trop Dis       Date:  2020-06-08

10.  Identifying residual transmission of lymphatic filariasis after mass drug administration: Comparing school-based versus community-based surveillance - American Samoa, 2016.

Authors:  Meru Sheel; Sarah Sheridan; Katherine Gass; Kimberly Won; Saipale Fuimaono; Martyn Kirk; Amor Gonzales; Shannon M Hedtke; Patricia M Graves; Colleen L Lau
Journal:  PLoS Negl Trop Dis       Date:  2018-07-16
