| Literature DB >> 28207932 |
Michael G Buhnerkempe, Katherine C Prager, Christopher C Strelioff, Denise J Greig, Jeff L Laake, Sharon R Melin, Robert L DeLong, Frances M D Gulland, James O Lloyd-Smith.
Abstract
Identifying mechanisms driving pathogen persistence is a vital component of wildlife disease ecology and control. Asymptomatic, chronically infected individuals are an oft-cited potential reservoir of infection, but demonstrations of the importance of chronic shedding to pathogen persistence at the population level remain scarce. Studying chronic shedding using commonly collected disease data is hampered by numerous challenges, including short-term surveillance that focuses on single epidemics and acutely ill individuals, the subtle dynamical influence of chronic shedding relative to more obvious epidemic drivers, and poor ability to differentiate between the effects of population prevalence of chronic shedding vs. intensity and duration of chronic shedding in individuals. We use chronic shedding of Leptospira interrogans serovar Pomona in California sea lions (Zalophus californianus) as a case study to illustrate how these challenges can be addressed. Using leptospirosis-induced strands as a measure of disease incidence, we fit models with and without chronic shedding, and with different seasonal drivers, to determine the time-scale over which chronic shedding is detectable and the interactions between chronic shedding and seasonal drivers needed to explain persistence and outbreak patterns. Chronic shedding can enable persistence of L. interrogans within the sea lion population. However, the importance of chronic shedding was only apparent when surveillance data included at least two outbreaks and the intervening inter-epidemic trough during which fadeout of transmission was most likely. Seasonal transmission, as opposed to seasonal recruitment of susceptibles, was the dominant driver of seasonality in this system, and both seasonal factors had limited impact on long-term pathogen persistence.
We show that the temporal extent of surveillance data can have a dramatic impact on inferences about population processes, and that failure to identify both short- and long-term ecological drivers can have cascading impacts on understanding higher-order ecological phenomena, such as pathogen persistence.
Keywords: asymptomatic infection; birth pulse; critical community size; epidemic drivers; maintenance host; marine mammal stranding; partially observed Markov process; pathogen reservoir; seasonal transmission; subclinical shedding
Year: 2017 PMID: 28207932 PMCID: PMC7166352 DOI: 10.1111/1365-2656.12656
Source DB: PubMed Journal: J Anim Ecol ISSN: 0021-8790 Impact factor: 5.091
Figure 1. Weekly strands of California sea lions due to leptospirosis. Highlighted regions indicate the four stranding eras over which models were fit (grey boxes; 1984–1987, 1988–1991, 1991–1994 and 2004–2007). Note that the eras from 1995–1997, 2000–2003 and 2009–2011, while qualitatively similar to the four eras specified above, are not considered because they are preceded by relatively large outbreaks, so the criterion of initial population susceptibility is not satisfied.
Description of models, parameters and their values. For models SR, CR and CS, only parameters that differ from those in CSR are shown. For parameters that were fit to the data, the range of random start values is shown in brackets.

| Parameter | Description | Value |
|---|---|---|
| Model CSR | | |
| μ | Natural mortality rate | 0·0029 |
| – | Per capita susceptible recruitment rate from 1 March to 20 May | 0·019 |
| – | Rate of progression from exposure to acute infection | 1 |
| γ | Rate of recovery from acute infection | 0·5 |
| – | Initial fraction of the population that was susceptible | 0·8 |
| – | Initial fraction of the population that was exposed | 0·0025 |
| – | Initial fraction of the population that was in each of the four acute infection stages | 0·001 |
| – | Initial fraction of the population that was chronically infected | 0·005 |
| – | Initial fraction of the population that was recovered and immune to infection | 0·1885 |
| – | Total population size in 1984 | 70 240 |
| – | Total population size in 1988 | 98 681 |
| – | Total population size in 1991 | 150 338 |
| – | Total population size in 2004 | 220 349 |
| β1 | Transmission coefficient from 17 June to 4 August | [0, 50] |
| β2 | Transmission coefficient from 5 August to 17 November | [0, 50] |
| β3 | Transmission coefficient from 18 November to 16 June | [0, 50] |
| α | Disease-induced mortality rate | [0, 1] |
| ρ | Proportion of acutely infected individuals that become chronically infected | [0, 1] |
| ε | Proportional shedding intensity of chronic individuals compared to acutely infected individuals | [0, 1] |
| δ | Rate at which chronically infected individuals recover | [0, 2] |
| – | Probability of observing an individual dying of leptospirosis as a strand | [0, 1] |
| Model SR | | |
| – | Initial fraction of the population that was exposed | 0·0035 |
| – | Initial fraction of the population that was in each of the four acute infection stages | 0·002 |
| – | Initial fraction of the population that was chronically infected | 0 |
| ρ | Proportion of acutely infected individuals that become chronically infected | 0 |
| Model CR | | |
| β0 | Transmission coefficient for the entire year | [0, 50] |
| Model CS | | |
| – | Per capita susceptible recruitment rate for the entire year | 0·0040 |
Rate parameters have units 1/weeks.
Values of fit parameters are given in Tables S1–S4.
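The parameter table implies a compartmental model with susceptible (S), exposed (E), four serial acute-infection stages (I1–I4), chronic shedder (C) and recovered (R) classes, with chronic shedders transmitting at a fraction ε of the acute rate. A minimal deterministic sketch of that structure is below; it is not the authors' fitted model. The function and variable names are mine, and two details not fixed by the table are assumptions: each acute stage is left at rate 4γ (so the mean acute duration is 1/γ), and a fraction ρ of acute-stage exits enter the chronic class.

```python
def step(state, beta, p, dt=0.1):
    """One Euler step (dt in weeks) of a sketch of the compartmental model.

    Assumptions (mine, not fixed by the parameter table): force of infection
    is beta * (I_total + eps * C) / N; each of the four acute stages is left
    at rate 4*gamma; a fraction rho of final-stage exits become chronic.
    """
    S, E, I, C, R = state["S"], state["E"], state["I"], state["C"], state["R"]
    N = S + E + sum(I) + C + R
    lam = beta * (sum(I) + p["eps"] * C) / N          # force of infection
    stage_rate = 4 * p["gamma"]                        # per-stage exit rate
    out = dict(state)
    out["S"] = S + dt * (p["b"] * N - (lam + p["mu"]) * S)
    out["E"] = E + dt * (lam * S - (p["sigma"] + p["mu"]) * E)
    inflow = p["sigma"] * E                            # into first acute stage
    new_I = []
    for Ik in I:                                       # serial acute stages
        new_I.append(Ik + dt * (inflow - (stage_rate + p["mu"] + p["alpha"]) * Ik))
        inflow = stage_rate * Ik
    out["I"] = new_I
    out["C"] = C + dt * (p["rho"] * inflow - (p["delta"] + p["mu"]) * C)
    out["R"] = R + dt * ((1 - p["rho"]) * inflow + p["delta"] * C - p["mu"] * R)
    return out

# Fixed rates from the table; the last four values were fitted in the paper,
# so the numbers used here are purely illustrative.
p = {"mu": 0.0029, "b": 0.0, "sigma": 1.0, "gamma": 0.5,
     "alpha": 0.1, "rho": 0.1, "eps": 0.05, "delta": 0.05}
# Initial fractions from the CSR rows of the table (they sum to 1).
state = {"S": 0.8, "E": 0.0025, "I": [0.001] * 4, "C": 0.005, "R": 0.1885}
for _ in range(100):                                   # 10 weeks at dt = 0.1
    state = step(state, beta=3.0, p=p)
```

In the fitted models, beta would switch between β1, β2 and β3 with the calendar date, and the recruitment rate b would be nonzero only from 1 March to 20 May.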
Figure 2. Chronic shedding, seasonal transmission, and seasonal susceptible recruitment (CSR) fit to 2·5 years of stranding data starting in 1984. (a) Model fit showing the average number (solid red line) and 95% confidence interval (pink region) of weekly strandings predicted by CSR fit to data from the 1984–1987 stranding era (black line). (b) Force of infection from acutely infected (blue) and chronically infected (orange) individuals on a log scale. (c) Proportional force of infection from acutely (blue) and chronically infected (orange) individuals. (d) Predicted proportion of the population that is acutely infected (blue line) and chronically infected (orange line) over a 30-year period. Shaded regions represent the 95% confidence intervals from 100 simulations. Simulations began on 17 June, and axis ticks marking the years occur on this date.
Figure 3. Comparison of models with and without chronic shedding over multiple surveillance durations. (a) Stranding data for all four stranding eras at surveillance durations of 2·5, 1·5 and 0·5 years. (b) Differences in Akaike information criterion (AIC) scores (ΔAIC, beginning with the best models at 0 at the top) between chronic shedding, seasonal transmission, and seasonal susceptible recruitment (CSR) (filled circles on solid lines) and seasonal transmission and seasonal susceptible recruitment (SR) (open circles on dashed lines) at each surveillance duration in each stranding era. Models with a ΔAIC score of less than 2 (denoted by the dashed black line) are generally considered to have support comparable to the best model. (c) Probability of persistence for the model preferred between CSR and SR (as determined by ΔAIC in panel b). In all panels, colours denote the stranding era: 1984–1987 (green), 1988–1991 (blue), 1991–1994 (orange) and 2004–2007 (red).
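The ΔAIC comparison in Figure 3b is a standard calculation: each model's AIC (2k − 2 log L, with k fitted parameters) minus the minimum AIC across models. A small sketch, with hypothetical log-likelihoods and parameter counts chosen only for illustration:

```python
def delta_aic(loglik, k):
    """AIC = 2k - 2 log L; Delta-AIC is each model's AIC minus the minimum."""
    aic = [2 * ki - 2 * ll for ll, ki in zip(loglik, k)]
    best = min(aic)
    return [a - best for a in aic]

# Hypothetical fits: a CSR-like model with 10 parameters vs an SR-like
# model with 7 (numbers are illustrative, not from the paper).
print(delta_aic(loglik=[-250.0, -260.0], k=[10, 7]))  # [0.0, 14.0]
```

Under this illustration the second model's ΔAIC of 14 would put it well above the ΔAIC < 2 band used in Figure 3b.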
Figure 4. Likelihood space for chronic shedding parameters. Using the estimated parameter values for the CSR model fit to the 1984–1987 stranding era with 2·5 years of surveillance data, the average duration of chronic infection (1/δ) and the transmission efficiency of chronic shedding relative to acute shedding (ε) were varied, and the log-likelihood of the parameters was recalculated. This was done for three values of the probability of becoming a chronic shedder (ρ): (a) 1, (b) 0·01 and (c) 0·0001. Log-likelihood ranges from high (white) to low (dark grey). The contour line gives the 99% confidence region for a likelihood ratio test when all parameters except for the three chronic shedding parameters are fixed. We note that this is not a true profile confidence region due to computational constraints and is meant to be only illustrative.
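The 99% contour in Figure 4 corresponds to the usual likelihood-ratio cutoff: with three free chronic-shedding parameters (δ, ε, ρ), points whose log-likelihood is within half the chi-square 99% quantile (3 degrees of freedom) of the maximum fall inside the region. A stdlib-only sketch of that threshold, using the closed-form chi-square CDF for df = 3 (function names are mine):

```python
import math

def chi2_cdf_df3(x):
    """Closed-form chi-square CDF for 3 degrees of freedom."""
    return (math.erf(math.sqrt(x / 2))
            - math.sqrt(2 / math.pi) * math.sqrt(x) * math.exp(-x / 2))

def chi2_quantile_df3(p, lo=0.0, hi=50.0):
    """Invert the df = 3 CDF by bisection."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if chi2_cdf_df3(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

crit = chi2_quantile_df3(0.99)   # ~11.34
# A parameter point lies inside the 99% region when
#   loglik(point) >= max_loglik - crit / 2.
```

As the figure caption notes, the published contour is not a true profile region, so this cutoff is only the idealized likelihood-ratio version of it.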
Figure 5. Comparison of models with chronic shedding that had seasonal transmission, seasonal susceptible recruitment or both. Panels (a–d) give results for models fit to the 1984–1987, 1988–1991, 1991–1994 and 2004–2007 stranding eras, respectively. Models CSR (solid black line), CS (dotted red line) and CR (dashed blue line) were compared. Because of the importance of chronic infection over longer surveillance durations (Fig. 3), all models were fit with 2·5 years of stranding data. Preferred models for each dataset, as determined by Akaike information criterion (AIC) (Tables S1–S4), are shown above each panel. Simulations began on 17 June, and axis ticks marking the years occur on this date.