
A test for constant fatality rate of an emerging epidemic: with applications to severe acute respiratory syndrome in Hong Kong and Beijing.

K F Lam1, J V Deshpande2, E H Y Lau1, U V Naik-Nimbalkar2, P S F Yip1, Ying Xu1.   

Abstract

The etiology, pathogenesis, and prognosis for a newly emerging disease are generally unknown to clinicians. Effective interventions and treatments at the earliest possible times are warranted to suppress the fatality of the disease to a minimum, and inappropriate treatments should be abolished. In this situation, the ability to extract most information out of the data available is critical so that important decisions can be made. Ineffectiveness of the treatment can be reflected by a constant fatality over time while effective treatment normally leads to a decreasing fatality rate. A statistical test for constant fatality over time is proposed in this article. The proposed statistic is shown to converge to a Brownian motion asymptotically under the null hypothesis. With the special features of the Brownian motion, we are able to analyze the first passage time distribution based on a sequential tests approach. This allows the null hypothesis of constant fatality rate to be rejected at the earliest possible time when adequate statistical evidence accumulates. Simulation studies show that the performance of the proposed test is good and it is extremely sensitive in picking up decreasing fatality rate. The proposed test is applied to the severe acute respiratory syndrome data in Hong Kong and Beijing.


Year:  2007        PMID: 18047531      PMCID: PMC7188335          DOI: 10.1111/j.1541-0420.2007.00935.x

Source DB:  PubMed          Journal:  Biometrics        ISSN: 0006-341X            Impact factor:   2.571


1. Introduction

One of the key statistics for monitoring a newly emerging disease is the fatality rate. Traditionally, the fatality rate is assumed to be constant throughout the epidemic and is generally estimated as the ratio of the cumulative number of deaths to the cumulative number of cases. It is well known that this estimator is insensitive to changes in fatality over the course of the epidemic (Last, 1995). Studies have shown that this traditional estimator considerably underestimated the fatality of severe acute respiratory syndrome (SARS), especially in the early part of the epidemic (Yip, Lam, et al., 2005; Yip, Lau, et al., 2005). The consequences of this underestimation could be very serious, because the implementation of suitable preventive measures at the early stage of an outbreak may depend heavily on the estimated fatality rate. Moreover, the fatality rate of a newly emerging epidemic may vary over time, possibly due to improvements in clinical treatments and hospital management. Sometimes, changes in weather or environmental conditions over the course of the epidemic may also have an effect on the fatality. It is important to keep track of the fatality rate over the course of the epidemic so that the effectiveness of any implemented health policy or clinical treatment can be assessed and evaluated. An estimator of the time‐varying fatality rate based on a counting process and a kernel function was proposed by Yip, Lam, et al. (2005). An estimator derived from the chain multinomial model was considered by Yip, Lau, et al. (2005). In this article, we propose a test of the null hypothesis of constant fatality rate against the alternative hypothesis of decreasing fatality rate. Given the urgency of an epidemic outbreak, effective interventions and treatments at the earliest possible time are warranted to suppress the fatality of the disease to a minimum. 
Hence, a desirable property of the test is the ability to reject the null hypothesis as soon as we have enough evidence of a decreasing fatality as information accumulates over time. This property is particularly important in practice when combating a newly emerging disease. For example, the etiology, pathogenesis, clinical features, clinical management, and prognosis of the SARS epidemic were almost unknown to clinicians. Therefore, appropriate and effective interventions should be undertaken at the earliest possible time during the outbreak, since any delay will reduce the impact of efforts to control the epidemic. In this situation, the ability to extract the most information out of the available data is critical for making focused and effective decisions. It is therefore desirable to be able to reject the null hypothesis of constant fatality rate at the earliest possible time during the outbreak. A constant fatality rate is very likely a result of the ineffectiveness of the current health policy or treatment in combating the disease or virus. During the SARS epidemic in 2003, the World Health Organization (WHO) suggested that simple methods for calculating case‐fatality ratios from aggregate data will not give reliable estimates during the course of an epidemic (WHO, 2003). This statement is not entirely true. In the case of a constant fatality rate, a simple estimator based on the ratio of the cumulative number of deaths to the sum of the cumulative numbers of deaths and recoveries is shown to be the maximum likelihood estimator (MLE). This estimator is demonstrated to be more efficient in Section 3 and is preferred if the assumption of constant fatality is valid. If the fatality rate is known to vary over time, a real‐time fatality rate estimator should be used (Yip, Lam, et al., 2005; Yip, Lau, et al., 2005); such estimators have been shown to capture changes and monitor the fatality rate in a timely manner. 
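The contrast drawn above between the traditional estimator (cumulative deaths over cumulative cases) and the constant-fatality MLE (deaths over deaths plus recoveries) can be illustrated with made-up daily totals; because some cases are still hospitalized, the case count always exceeds the number of resolved cases, so the traditional ratio never exceeds the MLE:

```python
# Toy comparison of the two crude fatality estimators discussed above.
# All daily totals below are hypothetical, for illustration only.
cum_cases = [100, 250, 600, 900, 1100]      # cumulative confirmed cases
cum_deaths = [2, 10, 35, 70, 100]           # cumulative deaths
cum_recoveries = [0, 20, 150, 450, 700]     # cumulative recoveries

for c, d, r in zip(cum_cases, cum_deaths, cum_recoveries):
    traditional = d / c                               # biased downward early on
    mle = d / (d + r) if d + r else float("nan")      # MLE under constant fatality
    print(f"cases={c:5d}  traditional={traditional:.3f}  mle={mle:.3f}")
```

The gap between the two is widest early in the outbreak, which is exactly when the underestimation described above is most dangerous.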
The proposed test, together with the real‐time fatality rate estimator, can help to assess the effectiveness of existing health policies and interventions and to decide whether new treatments are needed, especially in the absence of large-scale clinical trials for a newly emerging epidemic. The setup of the problem, the proposed test, and its asymptotic properties are discussed in the next section. The proposed method is applied to the Hong Kong and Beijing SARS data in Section 3. Results of a simulation study assessing the performance of the test are reported in Section 4.

2. A Test for Constant Fatality Rate

Consider a population initially consisting of n healthy individuals subject to infection during an epidemic. When an individual is infected, he/she will be admitted to and treated in a hospital some time after the incubation period; this delay is usually longer at the beginning of the epidemic and shorter after the epidemic has taken off, when the community is more aware of the disease. Inpatients are assumed to either die at a rate of γ1(t) or recover at a rate of γ2(t) on day t, where t is the calendar time since the beginning of the epidemic rather than the time since individual onset as in traditional survival analysis. By focusing on the calendar time instead of the survival time, we gain more insight into the trend of the fatality rate at the population level over the course of the epidemic. The population real‐time or time‐varying fatality rate is defined as

π(t) = γ1(t) / {γ1(t) + γ2(t)},     (1)

which can be viewed as the conditional probability of death given that an event of death or recovery occurs at time t. This fatality rate is a measure of the combined effect of the processes of death and recovery, which is similar to the setting of the competing risks model (Yip and Lam, 1992; Kalbfleisch and Prentice, 2002, Chapter 7). The data consist of the processes {N1(t)} and {N2(t)}, the cumulative numbers of deaths and recoveries up to time t, and {I(t)}, the number of inpatients just before time t, 0 ≤ t ≤ τ, where τ is a prespecified time measured in days. We decide to observe the process till time τ in the hope that τ is large enough to cover the entire anticipated length of the epidemic. Now suppose that N1(t) and N2(t) are counting processes with intensity processes γ1(t)I(t) and γ2(t)I(t), respectively, that satisfy the multiplicative intensity model, where Ft is the σ‐field representing the history of the epidemic process up to time t that satisfies the usual regularity conditions (Andersen et al., 1993, p. 60). Define

Mi(t) = Ni(t) − ∫0t γi(s) I(s) ds,  i = 1, 2.

These are zero-mean martingales with respect to the filtration {Ft} (Andersen et al., 1993, pp. 72–82). 
Also, define J(s) = I{I(s−) ≥ 1}, the indicator function of having at least one inpatient at time s−. We assume that no two events occur simultaneously. It is of interest to test the null hypothesis that the fatality rate π(t) as defined in (1) is constant over the period [0, τ] against the alternative hypothesis that π(t) decreases with t. We note that π(t) is constant over time if and only if the ratio γ1(t)/γ2(t) is. Thus we seek a statistical test of the equivalent hypotheses

H0: γ1(t) = c0 γ2(t) for all t in [0, τ]  versus  H1: γ1(t)/γ2(t) is decreasing in t,

where c0 is an unknown constant. The test process is built from the increments of N1 and N2 over the at-risk periods; note that there is no contribution to the process during any period in which J(s) = 0. Using integration by parts, the process may be rewritten as the sum of a martingale term and a drift term; the drift term, the last term in (3), is 0 under H0 and is strictly positive under H1. Hence, the proposed test statistic is consistent against any decreasing fatality rate. We assume that, as n → ∞, n⁻¹N1(t), n⁻¹N2(t), and n⁻¹I(t) converge in probability to deterministic limits for 0 < t < τ. Using the Rebolledo martingale central limit theorem (Andersen et al., 1993, p. 83), we get that as n → ∞ and under H0, the normalized process converges weakly to a process {U(t), 0 ≤ t ≤ τ} in D[0, τ] with the Skorohod topology, where U is a continuous zero‐mean Gaussian martingale whose covariance function involves Γ(t) = ∫0t γ2(s) ds. Let Γ̂(t) = ∫0t {J(s)/I(s)} dN2(s), which is just the usual Nelson–Aalen estimator of Γ(t). Then one can easily show, using the same theorem, that the plug-in variance estimator is consistent as n → ∞. At some prespecified time τ, we can establish a straightforward test statistic V(τ) in (5) by standardizing the process at τ by its estimated standard deviation. Under the usual mild regularity conditions, the statistic V(τ) has a standard normal distribution asymptotically under the null hypothesis. The performance of the test is very good in terms of unbiasedness and power. However, the test statistic in (5) does not possess the desirable property of being able to reject the null hypothesis as soon as enough statistical evidence has accumulated; the decision can only be made at time τ. Moreover, the variance estimator is not necessarily a nondecreasing function of time, even though a decrement in it is very unlikely empirically. To get around this problem, we simply replace it by its running maximum, ĥ(t), with deterministic limit h(t). 
Note that h is a nondecreasing, continuous, nonrandom function on [0, τ]. It follows from (4) that ĥ(s) converges in probability to h(s) uniformly in s as n → ∞, and that the pair converges weakly to (U, h) (Billingsley, 1999, Theorem 3.9). Using an argument similar to Billingsley (1999, p. 151), it can be shown that the time-transformed process converges weakly to a Gaussian process as n → ∞; furthermore, its covariance structure implies that the limiting process is a Brownian motion. Technically, for the above results to hold, we require that no two events occur simultaneously and that the processes N1(t), N2(t), and I(t) are observed continuously. In practice, however, the observations are usually taken on a daily basis and the exact transition times are generally unobservable. In such situations, we propose to randomly impute the death and discharge (recovery) times within the day, or simply to impute the order of deaths and discharges on that day, because the test statistic requires only the order of occurrence of the events rather than their exact times. Also, we assume that all inpatients enter the system at the beginning of each day. From our empirical experience, randomizing the sequence of deaths and discharges within a day has minimal effect on the behavior of the proposed test statistic. As the processes are observed on a daily basis in practice, for ease of presentation we define a partition 0 = t0 < t1 < … < tτ = τ of [0, τ] at the ends of consecutive days. A sequential test for constant fatality rate can then be based on the values of the process at these time points. The idea is to compute a statistic at the end of the tth day and compare it with its corresponding stopping boundary ct for t = 1, … , τ. The null hypothesis of constant fatality rate is rejected at the end of day t if the statistic exceeds ct for the first time. The set of stopping boundaries (c1, … , cτ) is chosen to preserve the overall probability of committing a type I error at the prespecified level α (Armitage, McPherson, and Rowe, 1969). 
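The random within-day imputation described above can be sketched as follows; the daily event counts and the seed are arbitrary choices for illustration:

```python
import random

# The test statistic needs only the ORDER of deaths and discharges within a
# day, so ties are broken by randomly shuffling each day's events, as the
# text proposes. 'D' marks a death and 'R' a recovery (discharge).
def impute_daily_order(deaths, recoveries, seed=2003):
    """deaths[t], recoveries[t]: counts on day t. Returns one event sequence per day."""
    rng = random.Random(seed)
    ordered = []
    for d, r in zip(deaths, recoveries):
        events = ["D"] * d + ["R"] * r
        rng.shuffle(events)            # random imputation of the within-day order
        ordered.append(events)
    return ordered

days = impute_daily_order([1, 0, 2], [3, 1, 2])
print(days)
```

Only the order varies between imputations; the daily counts of each event type are preserved exactly.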
Adjustments are made to the level of significance of each comparison in the repeated significance tests using the α‐spending function idea proposed in Lan and DeMets (1983). A monotone increasing function α(t) with α(0) = 0 and α(τ) = α is specified. A popular choice of the α‐spending function, proposed by Lan and DeMets (1983), is the O'Brien–Fleming-type function

α(t) = 2 − 2Φ( z(α/2) / √(t/τ) ),

where Φ(·) is the distribution function of a standard normal random variable and z(α/2) satisfies Φ(z(α/2)) = 1 − α/2. This α‐spending function will be used throughout this article. The probability that H0 is rejected on the tth day is then α(t) − α(t − 1). With the set of α(t)'s obtained from the α‐spending function, we have a recursive relationship for the boundaries. As the process converges in distribution to a Brownian motion, the asymptotic joint distribution of its values at the daily time points is Gaussian, with Cov{W(s), W(t)} = s for s ≤ t, and the increments are independent. The values of the stopping boundaries can thus be obtained recursively by numerical integration. The value of a stopping boundary is largely affected by the accumulated variance; hence the standardized statistic V(t) is considered in the following discussion and is compared with correspondingly standardized stopping boundaries, which makes a plot of the statistics and boundaries over t smoother. On a day t on which the accumulated variance does not increase (e.g., no deaths or recoveries occur), the value of the statistic remains unchanged; we simply skip the comparison at the end of day t and move forward to day t + 1.
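The Lan–DeMets spending function referenced above, in its common O'Brien–Fleming-type form α(t) = 2 − 2Φ(z(α/2)/√(t/τ)), can be tabulated with nothing beyond the standard library (`statistics.NormalDist`, Python 3.8+):

```python
from statistics import NormalDist

# O'Brien-Fleming-type alpha-spending function of Lan and DeMets (1983):
#   alpha(t) = 2 - 2 * Phi(z_{alpha/2} / sqrt(t / tau)),
# which rises from alpha(0+) ~ 0 to alpha(tau) = alpha, spending very
# little of the type I error early and most of it late in the epidemic.
def obf_spending(t, tau, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)          # z_{alpha/2}
    return 2.0 * (1.0 - NormalDist().cdf(z / (t / tau) ** 0.5))

tau = 60
for t in (15, 30, 45, 60):
    print(f"day {t:2d}: cumulative alpha spent = {obf_spending(t, tau):.4f}")
```

The increment α(t) − α(t − 1) is the type I error available for the comparison at the end of day t, from which the stopping boundary for that day can be solved numerically.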

3. Applications

The proposed test is applied to the Hong Kong and Beijing SARS data. There were 1755 SARS cases in total, with 299 deaths, in Hong Kong (Department of Health, HKSAR, 2003), and 2521 cases with 193 deaths in Beijing (Beijing Center for Disease Control and Prevention, 2003). The daily numbers of deaths, recoveries, and inpatients in the two regions are used. For Hong Kong, the available data spanned March 12, 2003 to June 30, 2003, a total of 110 days. For Beijing, the available data spanned April 21, 2003 to July 1, 2003, a total of 72 days. Only 29 and 27 inpatients were left in the hospitals in Hong Kong and Beijing, respectively, at the end of these periods. When more than one death or discharge occurs on the same day, we impute the ranks of the death and discharge times within that day at random. For simplicity, we set τ = 60 in both cases, and the overall level of significance is set at α = 0.05. For easy reference, we plot the standardized statistics V(t) (represented by stars) and the stopping boundaries (represented by solid lines) against the corresponding day for the Hong Kong and Beijing data in the left and right panels of Figure 1, respectively. The null hypothesis of constant fatality is not rejected for the Hong Kong data throughout the said period. The same conclusion was drawn based on the proposed sequential tests with τ = 110 and on the test statistic in (5). However, the null hypothesis is rejected on day 22 (May 12, 2003) for the Beijing data. In other words, the SARS fatality rate in Hong Kong was concluded to be constant throughout the period, while that in Beijing showed a decreasing trend over time, possibly due to improvements in treatment, including the use of Chinese medicine. 
Beijing had also implemented very strict quarantine measures to control SARS infections, and a specialized hospital for SARS patients was built in seven days and opened on May 1, 2003 (Pang et al., 2003). This helped to control the spread of the epidemic in hospital settings, because most hospitals were not well equipped to deal with SARS patients and infection within hospitals could otherwise have become serious.
Figure 1

Plot of the standardized statistic V (t) (star) and the stopping boundaries (solid line) against the corresponding day for outbreak of SARS in Hong Kong (left panel) and Beijing (right panel).

In the case of constant fatality, a simple estimator for π = π(t) at time t is given by π̂(t) = N1(t)/{N1(t) + N2(t)}, which is shown to be the MLE in the Appendix. The estimate π̂(t) of the constant fatality rate and the estimated time‐varying fatality rate of Yip, Lam, et al. (2005) were obtained for the Hong Kong and Beijing data. The estimates and their associated 95% pointwise confidence limits are plotted in the left and right panels of Figure 2, respectively. The conclusion that the null hypothesis of constant fatality rate is not rejected for the Hong Kong data is further supported by the plot in the left panel of Figure 2. The estimate π̂(t) has been very stable since April 22, while the estimated time‐varying fatality rate simply fluctuates around it over t. The two sets of estimates agree closely with each other in general. The 95% pointwise confidence intervals based on π̂(t) are much narrower than those based on the time‐varying fatality estimates; hence the simple estimator should be more efficient if the constant-fatality assumption is valid. The plot in the right panel of Figure 2 shows that constant fatality is highly unlikely to be the case for the Beijing data. The patterns of the two estimates differ considerably. The estimate π̂(t) does not appear to be constant itself, but exhibits a decreasing trend over time. It has a pattern similar to that of the time‐varying estimate, but its value is consistently about 10–13% greater, mainly due to the inertia of the MLE, which is strongly influenced by the higher number of deaths in the early part of the epidemic. In Beijing, there were more confirmed cases after May 1, but fewer deaths afterward. The proposed test statistic for constant fatality seems to work well in these applications. 
A large-scale simulation study investigating the empirical performance of the proposed test is reported in the next section.
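The constant-fatality MLE, the ratio of cumulative deaths to cumulative deaths plus recoveries, can be computed directly from the Section 3 totals. A minimal sketch follows; the 95% interval here is the standard binomial normal approximation, which is not necessarily the interval construction used in the paper, and the Hong Kong recovery count is approximated as cases minus deaths minus remaining inpatients:

```python
# MLE of a constant fatality rate, pi_hat = N1 / (N1 + N2), with a
# standard normal-approximation 95% interval (illustrative only; the
# paper's exact interval construction may differ).
def constant_fatality_mle(n_deaths, n_recoveries, z=1.96):
    n = n_deaths + n_recoveries
    p = n_deaths / n
    half = z * (p * (1 - p) / n) ** 0.5
    return p, max(0.0, p - half), min(1.0, p + half)

# Hong Kong SARS totals from Section 3: 299 deaths among 1755 cases,
# with 29 inpatients still hospitalized at the end of the period.
p, lo, hi = constant_fatality_mle(299, 1755 - 299 - 29)
print(f"pi_hat = {p:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

This end-of-epidemic figure is close to the stable level the left panel of Figure 2 shows for Hong Kong.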
Figure 2

The estimated time‐varying fatality rate (dashed line) and the estimate of the constant fatality rate (solid line), with their corresponding 95% pointwise confidence limits, for the outbreak of SARS in Hong Kong (left panel) and Beijing (right panel).


4. Simulation Study

We simulate several scenarios to assess the performance of the proposed test. In the simulation, we assume that data with exact death and recovery times are not available, mimicking the practical situation in which only daily counts are observed. Because the test statistic is determined by the death and recovery processes and not by the infection process, simulating the infection times is unnecessary; our focus is on the fatality rate. We therefore simply use the observed daily numbers of inpatients from the Hong Kong SARS epidemic in the simulation. The daily numbers of deaths and recoveries are simulated according to a multinomial distribution with probabilities q1(t) and q2(t) on day t, respectively. The value of τ is taken to be 60 days, and the overall level of significance is set at α = 0.05 throughout. In each set of simulations, 10,000 data sets are generated, and the proportion of data sets for which the null hypothesis is rejected gives the empirical size or power. Results are tabulated in Tables 1 and 2.
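The daily multinomial sampling just described can be sketched as follows. Each of the I(t) inpatients independently dies with probability q1(t), recovers with probability q2(t), or stays, which is equivalent to one trinomial draw per day; the inpatient series below is hypothetical, whereas the paper uses the observed Hong Kong series:

```python
import random

# One simulated epidemic under the null: per-patient Bernoulli trials with
# probabilities q1 (death) and q2 (recovery), equivalent to a trinomial
# (death / recovery / stay) draw for each day's inpatient count.
def simulate_daily_counts(inpatients, q1, q2, seed=123):
    rng = random.Random(seed)
    deaths, recoveries = [], []
    for i_t in inpatients:
        d = r = 0
        for _ in range(i_t):
            u = rng.random()
            if u < q1:
                d += 1
            elif u < q1 + q2:
                r += 1
        deaths.append(d)
        recoveries.append(r)
    return deaths, recoveries

d, r = simulate_daily_counts([50, 80, 120, 100], q1=0.008, q2=0.04)
print(d, r)
```

Repeating this 10,000 times and recording the fraction of replicates in which the sequential test rejects gives the empirical size (under the null) or power (under a decreasing-fatality alternative) reported in Tables 1 and 2.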
Table 1


Simulation results for the empirical significance levels of the proposed test under different scenarios: (A) constant death and recovery probabilities; (B) linearly decreasing/increasing death probability with proportional recovery probability; and (C) exponentially decreasing death probability with proportional recovery probability.

Scenario | q1(t) | q2(t) | avg. N1(τ) | avg. N2(τ) | π | Empirical size (%)
(A) | 0.008 | 0.01 | 273.3646 | 341.7217 | 0.4444 | 4.93
(A) | 0.008 | 0.02 | 273.2267 | 683.2609 | 0.2857 | 4.91
(A) | 0.008 | 0.04 | 273.3955 | 1367.3130 | 0.1667 | 4.99
(A) | 0.008 | 0.08 | 273.4382 | 2735.7120 | 0.0909 | 5.13
(A) | 0.01 | 0.01 | 341.6004 | 341.6844 | 0.5000 | 4.78
(A) | 0.01 | 0.02 | 341.4107 | 683.5364 | 0.3333 | 4.90
(A) | 0.01 | 0.04 | 341.9348 | 1367.5020 | 0.2000 | 4.97
(A) | 0.005 | 0.01 | 170.7891 | 341.5622 | 0.3333 | 4.94
(A) | 0.005 | 0.05 | 170.9705 | 1709.1450 | 0.0909 | 5.20
(B) | 0.005 + 0.0001t | q1(t) | 294.9111 | 294.6624 | 0.5000 | 4.69
(B) | 0.005 + 0.0001t | 2 q1(t) | 294.9808 | 589.2811 | 0.3333 | 4.87
(B) | 0.005 + 0.0001t | 4 q1(t) | 295.0944 | 1179.5370 | 0.2000 | 5.06
(B) | 0.005 + 0.0001t | 8 q1(t) | 295.0426 | 2360.1870 | 0.1111 | 5.18
(B) | 0.1 − 0.0008t | q1(t) | 2426.8190 | 2426.9260 | 0.5000 | 5.09
(B) | 0.1 − 0.0008t | 2 q1(t) | 2426.7940 | 4853.7840 | 0.3333 | 4.83
(B) | 0.1 − 0.0008t | 4 q1(t) | 2426.4980 | 9706.9090 | 0.2000 | 5.07
(B) | 0.1 − 0.0008t | 8 q1(t) | 2426.6430 | 19415.1500 | 0.1111 | 5.04
(C) | 0.01 exp(−0.01t) | 2 q1(t) | 239.4667 | 479.4933 | 0.3333 | 4.96
(C) | 0.01 exp(−0.01t) | 8 q1(t) | 239.7724 | 1918.8120 | 0.1111 | 5.36
(C) | 0.02 exp(−0.01t) | 2 q1(t) | 479.3387 | 958.8171 | 0.3333 | 4.90
(C) | 0.02 exp(−0.01t) | 8 q1(t) | 479.6293 | 3836.9340 | 0.1111 | 5.17
(C) | 0.1 exp(−0.04t) | 4 q1(t) | 916.0915 | 3664.2280 | 0.2000 | 5.11
(C) | 0.2 exp(−0.02t) | 4 q1(t) | 3422.5490 | 13689.1600 | 0.2000 | 5.05
Table 2


Simulation results for the empirical powers of the proposed test under different scenarios: (A) stepwise increase in recovery probability; (B) constant death probability and linearly increasing recovery probability; (C) linearly decreasing death probability and constant recovery probability; and (D) linearly increasing recovery probability and linearly increasing/decreasing death probability.

Scenario | q1(t) | q2(t) | avg. N1(τ) | avg. N2(τ) | π(0) | 1 − π(τ)/π(0) | Power (%) | avg. rej. day
(A) | 0.008 | 0.04 + 0.04 I(t ≥ 30) | 274.0853 | 2315.7870 | 0.1667 | 0.4545 | 98.55 | 35.7
(A) | 0.008 | 0.04 + 0.04 I(t ≥ 40) | 274.1753 | 1944.7080 | 0.1667 | 0.4545 | 99.64 | 44.0
(A) | 0.008 | 0.04 + 0.04 I(t ≥ 50) | 273.3781 | 1608.6220 | 0.1667 | 0.4545 | 95.39 | 53.5
(A) | 0.008 | 0.04 + 0.04 I(t ≥ 55) | 273.6446 | 1484.8250 | 0.1667 | 0.4545 | 61.16 | 57.3
(A) | 0.01 | 0.02 + 0.02 I(t ≥ 30) | 341.7206 | 1157.8750 | 0.3333 | 0.4000 | 98.29 | 35.8
(A) | 0.01 | 0.02 + 0.02 I(t ≥ 40) | 342.1126 | 971.2872 | 0.3333 | 0.4000 | 99.83 | 44.0
(A) | 0.01 | 0.02 + 0.02 I(t ≥ 50) | 341.5698 | 803.1143 | 0.3333 | 0.4000 | 94.85 | 54.1
(A) | 0.01 | 0.02 + 0.02 I(t ≥ 55) | 341.8031 | 742.0484 | 0.3333 | 0.4000 | 58.74 | 57.3
(A) | 0.01 | 0.05 + 0.04 I(t ≥ 30) | 341.9205 | 2657.4600 | 0.1667 | 0.4000 | 97.25 | 36.0
(A) | 0.01 | 0.05 + 0.04 I(t ≥ 40) | 341.7803 | 2286.5690 | 0.1667 | 0.4000 | 99.48 | 44.6
(A) | 0.01 | 0.05 + 0.04 I(t ≥ 50) | 341.6964 | 1950.2510 | 0.1667 | 0.4000 | 92.24 | 54.3
(A) | 0.01 | 0.05 + 0.04 I(t ≥ 55) | 341.9452 | 1827.1990 | 0.1667 | 0.4000 | 52.05 | 57.2
(B) | 0.05 | 0.05 + 0.0008t | 1709.7360 | 2703.1530 | 0.5000 | 0.3243 | 98.52 | 40.3
(B) | 0.05 | 0.05 + 0.001t | 1710.1950 | 2951.0670 | 0.5000 | 0.3750 | 99.89 | 37.3
(B) | 0.05 | 0.05 + 0.004t | 1710.2290 | 6677.1620 | 0.5000 | 0.7059 | 100.00 | 26.0
(B) | 0.005 | 0.005 + 0.002t | 171.1493 | 2653.7260 | 0.5000 | 0.9230 | 99.06 | 37.6
(B) | 0.005 | 0.005 + 0.004t | 171.2511 | 5136.4080 | 0.5000 | 0.9600 | 99.51 | 36.6
(B) | 0.005 | 0.005 + 0.008t | 171.2199 | 10104.5500 | 0.5000 | 0.9796 | 99.48 | 35.9
(B) | 0.005 | 0.005 + 0.01t | 171.2208 | 12587.5500 | 0.5000 | 0.9836 | 99.42 | 35.8
(C) | 0.05 − 0.0004t | 0.05 | 1212.7220 | 1709.3670 | 0.5000 | 0.3158 | 96.92 | 44.3
(C) | 0.05 − 0.0002t | 0.05 | 1460.6430 | 1708.7680 | 0.5000 | 0.1364 | 42.13 | 48.5
(C) | 0.05 − 0.0001t | 0.05 | 1585.4610 | 1709.6150 | 0.5000 | 0.0638 | 15.36 | 48.4
(D) | 0.005 + 0.0001t | 0.01 + 0.005t | 294.9207 | 6550.5110 | 0.3333 | 0.9314 | 91.40 | 40.5
(D) | 0.02 + 0.001t | 0.02 + 0.008t | 1925.6180 | 10617.9800 | 0.5000 | 0.7241 | 99.11 | 33.8
(D) | 0.1 − 0.001t | 0.05 + 0.005t | 2178.4020 | 7919.4180 | 0.6667 | 0.7805 | 100.00 | 20.5
(D) | 0.3 − 0.004t | 0.2 + 0.005t | 5295.1120 | 13048.2300 | 0.6000 | 0.8214 | 100.00 | 20.3
Different scenarios with constant fatality rate, that is, under the null hypothesis, namely (A) constant death and recovery probabilities; (B) linearly decreasing/increasing death probability with proportional recovery probability; and (C) exponentially decreasing death probability with proportional recovery probability, are simulated to assess the empirical significance level of the test. Table 1 shows that the empirical sizes of the test are very close to α = 0.05 in all cases. The proposed test is seen to be nearly unbiased. Different scenarios with decreasing fatality rate, namely (A) constant death probability and piecewise constant recovery probability with a sudden increment at day t0; (B) constant death probability and linearly increasing recovery probability; (C) linearly decreasing death probability and constant recovery probability; and (D) linearly increasing recovery probability and linearly increasing/decreasing death probability, are simulated to assess the empirical power of the proposed test. The simulation results are tabulated in Table 2. The empirical powers are clearly very high (over 90%) in general, unless the decrement in the fatality rate is extremely slow, as in the last case of scenario (C). 
From the results for scenario (A), we may conclude that the test is extremely sensitive to changes in the fatality rate: on average, the null hypothesis may be rejected within a week of a sudden drop in the fatality rate. The column 1 − π(τ)/π(0) represents the relative decrement in the fatality rate from the start to time τ, the end of the epidemic. The power of the test increases with the relative decrement. Moreover, the empirical powers remain quite high in general even when the relative decrement is only around 20%. Considering all scenarios, the proposed test statistic seems to be very powerful in general, and is sensitive enough to pick up various forms of changes in the fatality rate. Therefore, the performance of the proposed test is very satisfactory, and the test will be useful for monitoring the fatality rate of a newly emerging disease.

5. Discussion

We have presented a testing procedure that is able to detect a decreasing fatality rate from a macro perspective, especially when the decrease in death probability or increase in recovery probability happens within a more intense period in the midst of an epidemic. When an effective new treatment or an intervention policy is implemented, it should cause a marked drop in the fatality rate. By formulating a model with a time‐varying fatality rate, the proposed test is a useful tool for justifying or evaluating the effectiveness of the adopted health policies and/or clinical management. A main advantage of this test is that, as information accumulates over time, we are able to reject the null hypothesis as soon as there is enough evidence of decreasing fatality. Hence, more effective intervention measures can be put in place to reduce fatality. Applications of V(τ) to the Hong Kong (with τ = 110) and Beijing (with τ = 72) data lead to essentially the same conclusions, with asymptotic p‐values 0.3082 and < 0.0001, respectively. The performance of the test statistic V(τ) was also studied in a simulation study not reported here. Its performance is very similar to that of the proposed sequential test, but the shortcoming of V(τ) is that it is evaluated at a prespecified time τ: we are not able to reject H0 before time τ even when strong statistical evidence has already accumulated. However, this could also be a merit in some other applications. The proposed sequential tests can easily be modified by using different times of interim analysis, say by making a comparison every three days instead of every day. This version has the advantage that the environmental and weather conditions will be more homogeneous within each period under consideration, so that some uncontrollable variation can be grouped together or marginalized, as the fatality rate may also be affected by temperature or humidity. 
More work needs to be done to explore the feasibility of the modified test in practice. As of January 29, 2007 (WHO, 2007), there had been a total of 164 deaths among 270 human cases of H5N1 avian influenza, giving an estimated traditional fatality rate greater than 50%. If the strain acquires human-to-human transmissibility, its virulence to humans may or may not be the same as the historical level, when transmission was limited to avian-to-human only. In the case of an outbreak due to human-to-human transmission, the proposed test would be an extremely useful tool, providing a formal statistical method to test the hypothesis of decreasing fatality, presumably due to some effective interventions, in the sense that the alternative hypothesis can be accepted as soon as there is enough statistical evidence, so that prompt and decisive action can be taken to suppress the fatality at the earliest time. A delay in the implementation of an effective intervention may result in a greater loss of life globally. Nevertheless, we hope this test will never be used in practice.
