
Bayesian sample size determination for diagnostic accuracy studies.

Kevin J Wilson1, S Faye Williamson2, A Joy Allen3,4, Cameron J Williams1,3,4, Thomas P Hellyer4, B Clare Lendrem3,4.   

Abstract

The development of a new diagnostic test ideally follows a sequence of stages which, among other aims, evaluate technical performance. This includes an analytical validity study, a diagnostic accuracy study, and an interventional clinical utility study. In this article, we propose a novel Bayesian approach to sample size determination for the diagnostic accuracy study, which takes advantage of information available from the analytical validity stage. We utilize assurance to calculate the required sample size based on the target width of a posterior probability interval and can choose to use or disregard the data from the analytical validity study when subsequently inferring measures of test accuracy. Sensitivity analyses are performed to assess the robustness of the proposed sample size to the choice of prior, and prior-data conflict is evaluated by comparing the data to the prior predictive distributions. We illustrate the proposed approach using a motivating real-life application involving a diagnostic test for ventilator associated pneumonia. Finally, we compare the properties of the approach against commonly used alternatives. The results show that, when suitable prior information is available, the assurance-based approach can reduce the required sample size when compared to alternative approaches.
© 2022 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

Keywords:  Bayesian assurance; binomial intervals; contingency tables; power calculations; sensitivity; specificity

Year:  2022        PMID: 35403239      PMCID: PMC9325402          DOI: 10.1002/sim.9393

Source DB:  PubMed          Journal:  Stat Med        ISSN: 0277-6715            Impact factor:   2.497


INTRODUCTION

Diagnostic accuracy studies evaluate the ability of a diagnostic test (the index test) to correctly identify patients with and without a target condition. This is typically achieved by prospectively comparing results from the index test to the true disease status obtained from the best available reference standard for a cohort of patients. The two main measures used to assess intrinsic diagnostic accuracy are sensitivity and specificity. For a test to proceed to the next stage of evidence development, it is important that these measures are estimated to an appropriate degree of accuracy. This hinges on the sample size chosen for the diagnostic accuracy study. Too small a sample size will lead to imprecise estimates with wide corresponding intervals, which are non‐informative to the decision maker and contribute to research waste. Conversely, too large a sample size may delay the results of the study due to longer recruitment times and resource limitations, in addition to financial and ethical implications. Consequently, choosing a sample size which strikes a balance between accuracy and efficiency is a crucial step in the design of any diagnostic accuracy study. Traditional sample size calculations are based on a hypothesis‐testing framework. The idea is to choose a sample size such that the probability of rejecting the null hypothesis when there is a clinically relevant difference is greater than a required power (typically 80% or 90%) with a specified type I error rate (typically 5% for a two‐sided test). However, a sample size which captures the precision of the measure of interest, by targeting a desirable width of the corresponding confidence interval, can be more appropriate in certain circumstances. This is pertinent in early clinical diagnostic studies, where the aim is to estimate test accuracy with sufficient precision, which is the approach adopted here. 
In this article, we consider the sample size problem from a Bayesian perspective and propose a novel approach, referred to as the Bayesian assurance method (BAM), to determine sample sizes for diagnostic accuracy studies. In doing so, we explore whether utilizing information from the preceding laboratory study will reduce the sample size in the diagnostic accuracy study, and thus lead to a more efficient development process. This may be important if there is a need to deploy accurate diagnostic tests rapidly, such as in response to the COVID‐19 pandemic, where early detection of infectious individuals is critical to outbreak containment. Another relevant area is rare diseases, where there are a limited number of patients available, or where there are practical or ethical issues with conducting large studies. This extends to (rare) disease subgroups, in which the sensitivity and specificity of a diagnostic test can vary. The BAM shares characteristics with seamless and adaptive designs, in that it utilizes data from one stage to inform decisions in the subsequent stages in order to improve efficiency and flexibility. Seamless designs, which aim to combine separate studies, and adaptive designs, which allow for prespecified modifications to the design based on accruing data, are well‐established in interventional studies, yet have received little attention in the context of diagnostics. However, the flexibility offered by these designs is just as important in diagnostic accuracy studies. Motivated by the desire to accelerate diagnostic research, Vach et al and Zapf et al discuss the utility of seamless and adaptive designs, respectively, in developing diagnostics. Zapf et al advocate the development and implementation of adaptive designs for diagnostics, and highlight this as a promising area for future research, to which this article contributes. 
The BAM can be used to choose the sample size according to both sensitivity and specificity criteria simultaneously, rather than separately as in most existing methods. Criteria for combining sensitivity and specificity to define the success of a diagnostic test, and how this affects the sample size required, are discussed by Vach et al. Korevaar et al suggest specifying a joint hypothesis on the sensitivity and specificity based on predefined minimally acceptable criteria. Branscum et al proposed an approach to choose the sample size based on the predictive probability that the posterior probability of the sensitivity and specificity both being within prespecified limits is high. Although the assurance approach in this article is related to that taken by Branscum et al, there are some key differences. For example, they required the estimated sensitivity and specificity, along with the upper and lower limits for both intervals, to be specified in advance, and focused only on a two‐sided approach, whereas we assure the widths of the intervals directly, requiring only the prior distributions for the parameters, and consider both the one‐ and two‐sided cases. Several existing approaches consider binomial confidence intervals based on a normal approximation to determine the sample size (referred to as the Wald interval) or some adjustment to it, for example, the Agresti‐Coull (AC) interval. An alternative is to use an exact binomial interval (known as the Clopper‐Pearson [CP] interval). A description of commonly used intervals for proportions is provided in Newcombe (Chapter 3). Zhou et al (Chapter 4) recommend the Zhou et al interval for values of sensitivity or specificity close to zero or one. Another recommended interval is the equal‐tailed Jeffreys interval, constructed using a Bayesian approach with a non‐informative Jeffreys prior (ie, Beta(1/2,1/2)) for the binomial proportion. 
Wei and Hutson provide a sample size calculation based on the conditional expectation of interval width given a hypothesized proportion. We compare the BAM to some of these approaches in Section 6. Sample size determination from a Bayesian perspective is typically based on assurance, which is considered an alternative to power. Assurance, and modifications to it, can be referred to as the probability of success and the expected/average power, among others; a review is provided in Kunzmann et al (Section 5). Unlike power, which is conditional on the true (but unknown) parameter value, the distinguishing property of assurance is that it is an unconditional probability which incorporates parameter uncertainty through a prior distribution and integration over the parameter range. This is formally defined in Section 3. The use of assurance for sample size calculations has occurred predominantly within clinical trials. In this article, we use assurance to represent the probability of obtaining the desired accuracy (based on a target interval width) in our estimates of sensitivity and/or specificity. The sample size is then taken to be the minimum which yields the required assurance. We describe inference for a standard diagnostic accuracy study in Section 2. The BAM is presented and further described in Section 3, with issues such as prior sensitivity and prior‐data conflict addressed in Section 4. As a motivating case study, we use the BAM to redesign a diagnostic accuracy study of a test for ventilator associated pneumonia (VAP) in Section 5, and assess the properties of the BAM, in comparison to some standard approaches, in Section 6.

INFERENCE IN A DIAGNOSTIC ACCURACY STUDY

We consider a diagnostic accuracy study to assess an index test under development. In the study, we observe the numbers of individuals in a contingency table (Table 1A).
TABLE 1

(A) A contingency table for a typical diagnostic accuracy study. (B) The contingency table for the biomarker selection study based on the biomarker IL‐1. (C) The contingency table for the diagnostic accuracy study based on the biomarker IL‐1

(A)
                 Disease    No disease    Total
Test positive    n1,1       n1,2          n1,T
Test negative    n2,1       n2,2          n2,T
Total            nT,1       nT,2          nT

(B)
                 VAP        No VAP        Total
Test positive    16         35            51
Test negative    1          20            21
Total            17         55            72

(C)
                 VAP        No VAP        Total
Test positive    51         55            106
Test negative    2          42            44
Total            53         97            150
The number of individuals with and without the disease is assumed to be known, based on a reference test. The intrinsic accuracy of the index test can be measured by its sensitivity and specificity, defined as the probability of a positive test given disease and the probability of a negative test given no disease, respectively. There are two approaches used to model the numbers of individuals in the cells of the table: assuming either binomial or multinomial likelihoods. In the first case, n1,1 | λ ~ Binomial(nT,1, λ) and n2,2 | θ ~ Binomial(nT,2, θ), where λ is the sensitivity and θ is the specificity of the index test. The conjugate prior distributions are λ ~ Beta(aλ, bλ) and θ ~ Beta(aθ, bθ). If we assume in the prior that the sensitivity and specificity are independent, then their posterior distributions are λ | n1,1 ~ Beta(aλ + n1,1, bλ + n2,1) and θ | n2,2 ~ Beta(aθ + n2,2, bθ + n1,2). The independence assumption will often be reasonable since the diagnostic thresholds for the test are fixed at this stage, and the sensitivity and specificity consider mutually exclusive populations of patients. In the second case, we consider the vector n = (n1,1, n1,2, n2,1, n2,2) and assume n ~ Multinomial(nT, q), where q = (q1,1, q1,2, q2,1, q2,2) is a vector containing the probabilities of each cell of the contingency table. Here, the sensitivity and specificity are given by λ = q1,1/(q1,1 + q2,1) and θ = q2,2/(q1,2 + q2,2). A typical form for the prior distribution is a Dirichlet distribution, which provides conjugacy. That is, q ~ Dirichlet(α), where α = (α1,1, α1,2, α2,1, α2,2). It can be shown that the two approaches are equivalent in terms of inference for the sensitivity and specificity (see the Appendix). In this article, we will use the binomial form as it allows for the direct specification of the priors for the sensitivity, specificity, and prevalence. We will assume conjugate beta priors, as detailed above, throughout the rest of the article.
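The conjugate updates above reduce to simple parameter addition. A minimal sketch, using the counts from Table 1C and illustrative flat Beta(1, 1) initial priors (not necessarily the choices made in the paper):

```python
# Conjugate beta-binomial update: Beta(a, b) prior plus binomial data
# gives a Beta(a + successes, b + failures) posterior.

def beta_posterior(a, b, successes, failures):
    """Return the posterior (a', b') of a Beta(a, b) prior after binomial data."""
    return a + successes, b + failures

# Table 1C: disease column (51 test positive, 2 test negative),
# no-disease column (55 test positive, 42 test negative).
sens_post = beta_posterior(1, 1, 51, 2)    # sensitivity: P(test+ | disease)
spec_post = beta_posterior(1, 1, 42, 55)   # specificity: P(test- | no disease)
prev_post = beta_posterior(1, 1, 53, 97)   # prevalence of disease
print(sens_post, spec_post, prev_post)     # (52, 3) (43, 56) (54, 98)
```

The same function applies to the multinomial/Dirichlet form cell by cell, which is why the two parameterizations give equivalent inference for sensitivity and specificity.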

SAMPLE SIZE DETERMINATION

Assurance

Assurance is a Bayesian alternative to power for choosing a sample size. Consider a two‐armed clinical trial in which a hypothesis test is to be conducted with H0: δ = 0 vs H1: δ > 0, where δ represents the difference in the effect of two treatments. A typical power calculation would choose a sample size to provide a certain statistical power at a particular assumed value δ* for δ, often taken to be the minimal clinically relevant difference. In this case, the power is P(reject H0 | δ = δ*) and would increase with sample size. In practice, the choice of δ* is relatively arbitrary. As the true effect size is unknown, this can result in conditioning on an event which is extremely unlikely. One approach to mitigate this is to conduct a sensitivity analysis, varying the value of δ* and choosing a sample size which is robust to small perturbations. In the Bayesian context, we can take an alternative approach, and represent our uncertainty over δ using a prior distribution π(δ). The assurance is the expected power of the hypothesis test with respect to this prior, A(n) = ∫ P(reject H0 | δ) π(δ) dδ. We choose to make the dependence on the sample size n explicit for the assurance A(n). Assurance is not restricted to the case where we will perform a hypothesis test at the end of a trial. If we perform a Bayesian analysis instead, then we may declare the trial a success and the new treatment superior if, for example, the posterior probability that δ > 0 exceeds a prespecified threshold. In this case, the assurance is the unconditional probability that the trial results in a successful outcome. We use assurance to choose a sample size to estimate the sensitivity, specificity, or both, of the index test to a certain degree of accuracy. We initially focus on the sensitivity of the index test, and consider two cases: assuring the width of the posterior probability interval (two‐sided), and assuring the width of the lower half of the posterior probability interval (one‐sided).
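The contrast between power (conditional on one assumed parameter value) and assurance (power averaged over a prior) can be sketched with a one-sample test of a proportion; the hypothesized value 0.8, the Beta(18, 2) prior, and the normal approximation below are all illustrative, not taken from the paper:

```python
import random
from statistics import NormalDist

def power_wald(n, lam, lam0=0.8, alpha=0.05):
    """Approximate power of a one-sided test of H0: lambda = lam0 against
    lambda > lam0 at true value lam (normal approximation to the binomial)."""
    z = NormalDist().inv_cdf(1 - alpha)
    crit = lam0 + z * (lam0 * (1 - lam0) / n) ** 0.5   # rejection threshold
    se1 = (lam * (1 - lam) / n) ** 0.5 or 1e-12        # SE under the alternative
    return 1 - NormalDist(lam, se1).cdf(crit)          # P(estimate exceeds crit)

def assurance(n, prior_draw, n_sims=20000, seed=1):
    """Assurance = expected power, averaging power over draws from the prior."""
    rng = random.Random(seed)
    return sum(power_wald(n, prior_draw(rng)) for _ in range(n_sims)) / n_sims

prior = lambda rng: rng.betavariate(18, 2)   # prior centred near 0.9
print(round(power_wald(100, 0.9), 3), round(assurance(100, prior), 3))
```

The first number conditions on a single "true" sensitivity; the second integrates the same power function over the prior, which is the unconditional probability described above.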

Two‐sided case

Considering the inference from Section 2, a symmetric posterior probability interval for λ is (λL, λU), where the limits of the interval are defined such that P(λ < λL | n1,1) = P(λ > λU | n1,1) = α/2 for a 100(1 − α)% interval. The accuracy of the estimation of λ can be considered as the width of this interval, λU − λL, and a successful diagnostic accuracy study would produce an interval with a width smaller than some target, w*. Suppose the number of individuals with the disease in the study, nT,1, is fixed. There are three possibilities: no values of n1,1 lead to an interval with width smaller than w*, all values of n1,1 lead to an interval with width smaller than w*, or some values of n1,1 lead to an interval with width smaller than w*. To investigate the third case, consider the posterior variance of λ, Var(λ | n1,1) = (aλ + n1,1)(bλ + nT,1 − n1,1) / [(aλ + bλ + nT,1)^2 (aλ + bλ + nT,1 + 1)]. For a fixed sample size nT,1, the denominator of this fraction is constant. The numerator is quadratic in n1,1 and the squared term has a negative coefficient. Thus, the posterior probability interval will be narrower than w* when n1,1 ≤ c1 and when n1,1 ≥ c2, for two critical numbers of individuals c1 ≤ c2. We define this set as S(nT,1) = {n1,1 : n1,1 ≤ c1 or n1,1 ≥ c2}.
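The behaviour described above, where the interval is widest for central counts and beats the target only for sufficiently extreme counts, can be checked numerically. A Monte Carlo sketch with illustrative prior parameters (beta quantiles are approximated by sampling rather than an exact quantile function):

```python
import random

def post_width(a, b, x, m, level=0.95, draws=4000, seed=0):
    """Monte Carlo width of the central `level` posterior interval for the
    sensitivity after x positives among m diseased patients (Beta(a, b) prior)."""
    rng = random.Random(seed)
    s = sorted(rng.betavariate(a + x, b + m - x) for _ in range(draws))
    lo = s[int(draws * (1 - level) / 2)]
    hi = s[int(draws * (1 + level) / 2) - 1]
    return hi - lo

# With m = 40 diseased patients and a Beta(2, 2) prior (illustrative values),
# the interval is widest for central counts and narrowest at the extremes.
m, w_target = 40, 0.25
in_set = [x for x in range(m + 1) if post_width(2, 2, x, m) < w_target]
print(round(post_width(2, 2, 0, m), 2), round(post_width(2, 2, m // 2, m), 2))
print(in_set)  # counts x whose interval beats the target width
```

The printed set consists of a low run and a high run of counts, matching the two critical numbers c1 and c2 described above.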

One‐sided case

We consider a posterior probability interval for λ of the form (λL, 1], where the lower limit of the interval is defined such that P(λ < λL | n1,1) = α for a 100(1 − α)% interval. We consider the distance between the lower limit of the interval and a central point estimate of λ, that is, λM − λL, where λM is the posterior median. A successful diagnostic accuracy study would result in this interval having a width smaller than some target, w*. By the same logic as the two‐sided case, the posterior probability interval will be narrower than w* when n1,1 ≤ c1 and when n1,1 ≥ c2, for two critical numbers of individuals c1 ≤ c2. Thus, we consider the set S(nT,1) = {n1,1 : n1,1 ≤ c1 or n1,1 ≥ c2} for the one‐sided case, with c1 and c2 determined by the interval (λL, λM).
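The one-sided criterion can be approximated in the same Monte Carlo style as the two-sided case, measuring the distance from the posterior median down to the lower interval limit; the prior parameters and counts below are illustrative:

```python
import random

def one_sided_width(a, b, x, m, level=0.95, draws=4000, seed=0):
    """Distance from the posterior median down to the lower limit of a
    one-sided `level` posterior interval (Monte Carlo, Beta(a, b) prior)."""
    rng = random.Random(seed)
    s = sorted(rng.betavariate(a + x, b + m - x) for _ in range(draws))
    return s[draws // 2] - s[int(draws * (1 - level))]

# Illustrative: 36 of 40 diseased patients test positive, Beta(2, 2) prior.
print(round(one_sided_width(2, 2, 36, 40), 3))
```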

Evaluating the assurance

We can obtain an expression for the assurance for a sample size nT, conditional on a fixed number of diseased individuals nT,1. This is denoted by A(nT | nT,1) and defined as

A(nT | nT,1) = Σ_{n1,1 ∈ S(nT,1)} (nT,1 choose n1,1) [Γ(aλ + bλ) / (Γ(aλ) Γ(bλ))] [Γ(aλ + n1,1) Γ(bλ + nT,1 − n1,1) / Γ(aλ + bλ + nT,1)],    (1)

where Γ represents the gamma function. A derivation is given in Section A of the supplementary material. As the number of individuals with the disease, nT,1, will not be known in advance, we need to sum over the possible values nT,1 can take. If we have a random sample from the target population, then nT,1 | p ~ Binomial(nT, p), where p is the prevalence of the disease. Let p ~ Beta(ap, bp) for some chosen values of ap, bp. The unconditional assurance is then the sum over nT,1 of P(nT,1) A(nT | nT,1), where P(nT,1) is the (beta‐binomial) probability of observing nT,1 individuals in the disease group. The assurance can thus be expressed as

A(nT) = Σ_{nT,1 = 0}^{nT} [(nT choose nT,1) B(ap + nT,1, bp + nT − nT,1)/B(ap, bp)] A(nT | nT,1),    (2)

where B(x, y) = Γ(x)Γ(y)/Γ(x + y) is the beta function. This is derived in Section A of the supplementary material. All that remains is to find the values of c1, c2. For each fixed sample size, nT, and number of diseased individuals, nT,1, the posterior parameters aλ + n1,1 and bλ + nT,1 − n1,1 depend only on n1,1 and, hence, the width of the interval will be a function of n1,1 in both cases. Therefore, S(nT,1) is empty for nT,1 < n*, where n* is a number below which the interval can never achieve the desired width, and S(nT,1) = {0, 1, …, nT,1} for nT,1 > n**, where n** is a number above which the width of the interval is always below w*. Hence, A(nT | nT,1) = 0 for all nT,1 < n* and A(nT | nT,1) = 1 for all nT,1 > n**. To estimate the specificity of the index test to a given accuracy of w*θ, we can derive the assurance in the same way, which results in an assurance analogous to that in Equation (2). The details are given in Section A of the supplementary material. Finally, suppose we wish to estimate both the sensitivity and specificity to a particular accuracy. Consider different accuracy targets, w*λ and w*θ, for the sensitivity and specificity, respectively. In this case, the assurance for the sample size conditional on nT,1 (and hence nT,2, since nT,2 = nT − nT,1) is given by the product of the conditional assurances for the two measures, where Sλ(nT,1) contains the values of n1,1, determined by critical values c1λ and c2λ, that give a posterior interval narrower than w*λ for the sensitivity, and Sθ(nT,2) contains the values of n2,2, determined by critical values c1θ and c2θ, that give a posterior interval narrower than w*θ for the specificity. 
To find the unconditional assurance, we sum over the possible values of nT,1, as in Equation (2), to give Equation (3). The proposed BAM is now summarized via the following steps:

1. Choose whether we wish to assure our estimate of the sensitivity λ, the specificity θ, or both.
2. Choose a target width for each accuracy measure, a one‐ or two‐sided posterior interval, and a level for the interval.
3. Specify the prior distributions for the chosen accuracy measure(s) and the prevalence p. We detail how to do this in the next section.
4. Use Equation (2) or (3) (or see Section A of the supplementary material) to calculate the assurance for sample sizes nT = 1, 2, ….
5. Choose the minimum sample size to give the desired assurance.

Example: Suppose we wish to estimate both sensitivity and specificity to within the same target width, with a given posterior probability using a two‐sided interval, that is, w*λ = w*θ = w*. We specify prior distributions for λ, θ, and p, and use Equation (3) to evaluate the assurance for sample sizes nT = 1, 2, …. To achieve the desired accuracy with a probability of at least A*, say, we choose the smallest value of nT which gives rise to an assurance greater than A*.
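The steps above can be sketched end-to-end by simulation, replacing the exact beta-binomial sums of Equation (2) with Monte Carlo draws; the priors, target width, and sample sizes below are illustrative, not the paper's:

```python
import random

def bam_assurance(n, sens_prior, prev_prior, w_target=0.16, level=0.95,
                  n_sims=400, draws=500, seed=7):
    """Monte Carlo estimate of the assurance for sensitivity (Equation (2)):
    the probability that the central posterior interval is narrower than
    w_target, averaging over the prevalence and sensitivity priors."""
    rng = random.Random(seed)
    a, b = sens_prior
    c, d = prev_prior
    hits = 0
    for _ in range(n_sims):
        p = rng.betavariate(c, d)                       # prevalence draw
        lam = rng.betavariate(a, b)                     # sensitivity draw
        n1 = sum(rng.random() < p for _ in range(n))    # diseased patients
        x = sum(rng.random() < lam for _ in range(n1))  # true positives
        s = sorted(rng.betavariate(a + x, b + n1 - x) for _ in range(draws))
        lo = s[int(draws * (1 - level) / 2)]
        hi = s[int(draws * (1 + level) / 2) - 1]
        hits += (hi - lo) < w_target
    return hits / n_sims

# Assurance increases with the total sample size (illustrative priors):
results = {n: bam_assurance(n, (5, 2), (2, 5)) for n in (100, 300, 600)}
print(results)
```

Step 5 then amounts to scanning nT upward and returning the first value whose estimated assurance exceeds the chosen target.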

PRIOR SPECIFICATION AND MODEL CHECKING

A diagnostic accuracy study is part of an extensive development process for the diagnostic test, see Figure 1 in Reference 24. Its main purpose is to estimate performance characteristics of the test, particularly the sensitivity and specificity, in the target population in a clinically relevant setting. Prior to the diagnostic accuracy study is the analytical validity phase, in which the test may still be under development and the data generated may be used to support regulatory approvals. The validation conducted during this stage may test individuals from the target population. Consequently, the data produced can be used to inform the prior distributions in the diagnostic accuracy study. This assumes that the observations in the two stages are exchangeable, which may not always be reasonable. Therefore, in Section B of the supplementary material, we detail how the BAM can be used under weaker assumptions.

Specifying prior distributions

Consider the analytical validity testing. Suppose that a random sample of mT individuals was taken and the numbers in the cells of the contingency table were m1,1, m1,2, m2,1, m2,2. Using the inferential approach in Section 2, priors for the sensitivity, specificity, and prevalence would be λ ~ Beta(aλ, bλ), θ ~ Beta(aθ, bθ), and p ~ Beta(ap, bp), respectively. The corresponding posterior distributions (excluding conditioning statements) would be λ ~ Beta(a′λ, b′λ), θ ~ Beta(a′θ, b′θ), and p ~ Beta(a′p, b′p), where a′λ = aλ + m1,1, b′λ = bλ + m2,1, a′θ = aθ + m2,2, b′θ = bθ + m1,2, a′p = ap + mT,1, and b′p = bp + mT,2. These latter beta distributions can be used as priors for the diagnostic accuracy study. Although this does not negate the necessity of choosing the initial prior values (aλ, bλ), (aθ, bθ), and (ap, bp), these will have a small effect on the sample size chosen if sufficient data are available from the analytical validity stage. This is explored further in the next section. The approach taken here is equivalent to using a power prior with the parameter quantifying the heterogeneity between the diagnostic study population and analytic validity population set equal to one (representing homogeneous populations). In cases of heterogeneity between the two populations, a power prior could be used with this parameter taking a value in the range (0, 1). For full details, see Reference 25. In cases where it is controversial to use data from the analytical validity stage when inferring the sensitivity and specificity of the test, we could use a weaker prior in the analysis, but retain the original prior in the design to inform the sample size calculations. This is illustrated in Section B of the supplementary material.
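The power-prior idea reduces to downweighting the earlier study's counts before adding them to the initial beta parameters. A minimal sketch, using the sensitivity counts from Table 1B and a flat initial prior for illustration:

```python
def power_prior(a0, b0, successes, failures, delta=1.0):
    """Beta(a0, b0) initial prior updated with earlier-study counts
    downweighted by the power-prior parameter delta in [0, 1]
    (delta=1: full borrowing, delta=0: ignore the earlier study)."""
    return a0 + delta * successes, b0 + delta * failures

# Illustrative: Table 1B sensitivity counts (16 true positives, 1 false
# negative) with a flat Beta(1, 1) initial prior, at two borrowing levels.
print(power_prior(1, 1, 16, 1, delta=1.0))   # (17.0, 2.0): homogeneous populations
print(power_prior(1, 1, 16, 1, delta=0.5))   # (9.0, 1.5): partial borrowing
```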

Prior sensitivity

The choice of initial prior parameters, (aλ, bλ), (aθ, bθ), and (ap, bp), may have little effect on the assurance if sufficient data are observed at the analytic validity stage. We explore this using a local sensitivity analysis and investigate the following two questions: How does the optimal sample size, n*T, change when varying the prior parameters? How does the assurance at n*T, A(n*T), change when varying the prior parameters? In particular, we vary the prior parameters for each measure in turn over a range of values around their initial values, and record the smallest and largest values of the optimal sample size n*T and assurance A(n*T). If these values do not differ by much, then the optimal sample size is relatively robust to the initial prior choice. To determine an appropriate range of prior parameter values, we explore the sensitivity on a grid {(x, y) : d[(x, y), (a, b)] = ε}, where d represents the distance between a prior with parameters (x, y) and the original prior with parameters (a, b). That is, d[(x, y), (a, b)] = H[Beta(x, y), Beta(a, b)], where Beta(a, b) represents the beta prior distribution with parameters (a, b), and (a, b) is one of (aλ, bλ), (aθ, bθ), and (ap, bp). We use the Hellinger distance H which, for the beta distribution, can be expressed as H[Beta(x, y), Beta(a, b)] = sqrt{1 − B[(x + a)/2, (y + b)/2] / sqrt[B(x, y) B(a, b)]}, where B is the beta function. To conduct the grid search, it is sensible to work in polar co‐ordinates. Therefore, we set x − a = r cos(φ) and y − b = r sin(φ), where r ≥ 0 and φ ∈ [0, 2π). We search over φ in the range [0, 2π), solving for the value of r which gives the correct value of ε. To find the values of x and y, we convert back via x = a + r cos(φ) and y = b + r sin(φ). From this grid search, we can then find the corresponding n*T and A(n*T) for each prior on the grid. We suggest a sensible choice of ε in Section 5.2.
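The Hellinger distance between two beta distributions has the closed form given above and can be computed stably on the log scale; the parameter values below are illustrative:

```python
import math

def log_beta_fn(a, b):
    """log of the beta function B(a, b), via log-gamma for stability."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def hellinger_beta(a1, b1, a2, b2):
    """Hellinger distance between Beta(a1, b1) and Beta(a2, b2):
    H^2 = 1 - B((a1+a2)/2, (b1+b2)/2) / sqrt(B(a1, b1) * B(a2, b2))."""
    log_bc = log_beta_fn((a1 + a2) / 2, (b1 + b2) / 2) \
             - 0.5 * (log_beta_fn(a1, b1) + log_beta_fn(a2, b2))
    return math.sqrt(max(0.0, 1.0 - math.exp(log_bc)))

print(round(hellinger_beta(17, 2, 17, 2), 3))   # identical priors: distance 0
print(round(hellinger_beta(17, 2, 12, 2), 3))   # a perturbed prior
```

Evaluating this function along rays from (a, b), and solving for the radius r at which it equals ε, gives the grid of perturbed priors described above.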

Prior‐data conflict

Label the counts in the table from the diagnostic accuracy study n1,1, n1,2, n2,1, n2,2, as in Section 2. The posterior distributions for the sensitivity and specificity (omitting the conditioning) will be λ ~ Beta(a′λ + n1,1, b′λ + n2,1) and θ ~ Beta(a′θ + n2,2, b′θ + n1,2), respectively, where a′λ, b′λ, a′θ, and b′θ are the parameters of the priors formed in Section 4.1. The inference for the sensitivity and specificity is in the form of a weighted average of the prior and the observations, with weights determined by the relative sample sizes of each. The prior is made up of a weighted average of the observations in the analytical validity stage and the original prior. If all of the elements are in broad agreement, then the posterior distribution will provide an accurate summary of the properties of the index test in the population of interest. However, it could be the case that the prior and observations are not in agreement, which is known as prior‐data conflict. For example, if the two studies are carried out at different times or in different locations, the spectrum of disease in the target population may not be the same. In this case, it is important to investigate why the differences are there and what action should be taken. We can evaluate prior‐data conflict by comparing the observations to the prior predictive distributions of the parameters. We consider the prior predictive distributions of the number of observations in the disease group, nT,1, and, conditional on this, the number who test positive of those with the disease, n1,1, and the number who test negative of those without the disease, n2,2. These are beta‐binomial distributions of the form P(X = x) = (N choose x) B(a + x, b + N − x) / B(a, b), where X is nT,1, n1,1, and n2,2 in turn, N is the corresponding sample size, that is, nT, nT,1, and nT,2, respectively, and (a, b) are the beta distribution parameter values for the prevalence, sensitivity, and specificity, respectively. We can then plot the prior predictive distributions and calculate probabilities of the form P(X ≥ x_obs), for an observed number of individuals x_obs. If the observed value lies in the body of the associated prior predictive distribution, then that prior is consistent with the data. Otherwise, this provides evidence of prior‐data conflict.
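The beta-binomial prior predictive check can be computed directly. For illustration, the prevalence check from Section 5 is sketched below with a Beta(30, 99) prior (the value implied by updating a uniform prior with the counts reported there; the paper's exact prior may differ) against the 53 VAP patients observed among 150:

```python
import math

def beta_binom_pmf(k, n, a, b):
    """Prior predictive (beta-binomial) probability of k successes in n
    trials when the success probability has a Beta(a, b) prior."""
    log_b = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.exp(math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1)
                    + log_b(a + k, b + n - k) - log_b(a, b))

def upper_tail(k_obs, n, a, b):
    """P(X >= k_obs) under the prior predictive, for conflict checking."""
    return sum(beta_binom_pmf(k, n, a, b) for k in range(k_obs, n + 1))

# Small tail probability signals the observation sits far into the upper
# tail of the prior predictive, i.e. possible prior-data conflict.
print(round(upper_tail(53, 150, 30, 99), 3))
```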

A BIOMARKER TEST FOR VAP

Using published results, we consider the development of a biomarker test for VAP. The development of the test involved four stages: an exploratory study to look at possible biomarkers for VAP diagnosis, a single center observational study to choose suitable biomarkers, a multicenter diagnostic accuracy study to develop biomarker cut‐offs and validate accuracy, and a randomized controlled trial of clinical utility. At each stage, the target population was patients on a ventilator with suspected VAP. The reference standard test was the growth of pathogens above a threshold concentration, in colony forming units per milliliter, of bronchoalveolar fluid. All patients with suspected VAP receive antibiotics, although only 20% to 60% of patients will have VAP confirmed by the reference standard, leading to overuse of antibiotics. Microbiology culture and sensitivities take up to 72 hours to return results to clinicians, which delays the opportunity to discontinue antibiotics in patients who do not have infection. A rapid, highly sensitive biomarker test could allow for early stopping of antibiotics. We consider planning the diagnostic accuracy study. The sample size was originally chosen to reduce the width of the 95% confidence interval for the post‐test probability of VAP to 0.16, and resulted in nT = 150. Estimates from the single center observational study were used to calculate the sample size. The estimated sensitivity and prevalence in the single center observational study were 16/17 ≈ 0.94 and 17/72 ≈ 0.24, respectively, for the most promising biomarker, IL‐1 (Table 1B). If instead the sample size had been chosen based on a confidence interval for the sensitivity, using the Wald interval, a larger sample size of 196 would have been required.

Choosing the sample size using assurance

To use assurance to determine the sample size, we require the prior parameters for the sensitivity, (aλ, bλ), and the prevalence, (ap, bp), before the biomarker selection study. In the initial exploratory study, there were 55 patients, 12 of whom were confirmed by the reference test to have VAP. Assuming exchangeability, a suitable prior for the prevalence is p ~ Beta(13, 44), the result of updating a uniform prior with these counts. The most promising biomarker gave an estimated sensitivity of 0.93. Since it was unclear which biomarker(s) would be used in the final test, it is not reasonable to make an exchangeability assumption for the test results in the two stages. A more suitable prior for the sensitivity is more diffuse but with a mean around this value. These priors are represented by the dashed lines in Figure 1.
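The prevalence prior can be traced through the two earlier studies by repeated conjugate updating; this sketch assumes a flat Beta(1, 1) starting point (the paper's exact initial prior may differ) and uses the counts reported in the text and Table 1B:

```python
# Chain of conjugate updates for the prevalence prior, starting from a
# flat Beta(1, 1) (illustrative assumption).
a, b = 1, 1
a, b = a + 12, b + (55 - 12)   # exploratory study: 12 VAP of 55
a, b = a + 17, b + (72 - 17)   # biomarker selection study: 17 VAP of 72
print((a, b), round(a / (a + b), 3))  # prior for the accuracy study, and its mean
```

Each study's counts simply accumulate into the beta parameters, so the prior entering the diagnostic accuracy study carries the weight of both earlier samples.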
FIGURE 1

Left: The prior distributions for the sensitivity (red) and the prevalence (black) for the biomarker selection study (dashed lines) and the diagnostic accuracy study (solid lines). Right: The assurance curve showing the assurance achieved at different sample sizes for the diagnostic accuracy study

In the biomarker selection study, the contingency table is provided in Table 1B for the most promising biomarker, IL‐1. We assume that these patients are exchangeable with those in the diagnostic accuracy study as they are randomly sampled from the same population. Therefore, the prior distributions for the diagnostic accuracy study are the posterior distributions from the biomarker selection study; for the prevalence this is p ~ Beta(30, 99) (see Section 2). These are illustrated by solid lines in the left‐hand side of Figure 1. Suppose we would like to estimate the sensitivity of the test to within 0.16 in a 95% symmetric probability interval and choose a sample size to give 80% assurance. Based on the priors above, we use the BAM to obtain the required sample size. This is significantly smaller than the original sample size of nT = 150 (which would give an assurance of 88%). The full assurance curve is provided in the right‐hand side of Figure 1. Note that the assurance curve has a different shape to a power curve, and is monotonically increasing between 0 and 1. To assess the sensitivity of the sample size and assurance to the prior distribution, we use the approach outlined in Section 4.2. In particular, we conduct a grid search for both the sensitivity and prevalence priors using a value of ε = 0.035 (equivalent to a mean shift in a standard normal random variable of 0.1). The resulting values of the beta distribution parameters are provided in Section B.3 of the supplementary material for the sensitivity and prevalence. The corresponding smallest and largest values of the assurance and sample size are provided in Table 2.
TABLE 2

The smallest and largest values of the assurance, A(nT), and the smallest and largest sample sizes, nT, found in the local sensitivity analysis

Measure        min{A(nT)}    max{A(nT)}
Sensitivity      0.73          0.86
Prevalence       0.80          0.81

Measure        min{nT}       max{nT}
Sensitivity       82           130
Prevalence       104           108
Changes to the prevalence prior have little effect on the sample size or the assurance at the chosen sample size. The effect is slightly larger for the sensitivity prior but, even for the most extreme prior, a sample size of 130 would be sufficient (which is considerably less than the sample size of 150 in the study). The results from the diagnostic accuracy study with the 150 patients are summarized in Table 1C for the biomarker IL‐1. The resulting posterior distributions for the sensitivity and prevalence are obtained by updating the priors with these counts, as in Section 2. The corresponding 95% posterior probability interval for the sensitivity is narrower than 0.16, and so we meet the target on the width of the interval. To assess possible prior‐data conflict, we use the approach detailed in Section 4.3 and compare the observations to the prior predictive distributions. The prior predictive distributions of the number of patients with VAP (left) and the number of patients with VAP who tested positive (right) are provided in Figure 2, with the observation shown as a red dashed line. A color version of this figure can be found in the electronic version of the article.
FIGURE 2

The prior predictive distributions of the number of patients with VAP (left) and the number of VAP patients who test positive (right) together with the observations (red)

We see the number of patients correctly diagnosed with VAP lies within the main body of the prior predictive distribution. The observed number of patients with VAP lies in the body of the distribution, but is closer to the upper tail, in the 99th percentile. The observed number of patients correctly diagnosed lies in the 76th percentile. This provides some evidence of prior‐data conflict for the number of patients with VAP, so we may choose a prior on the prevalence which is not based on the single center observational study. The posterior mean and 95% posterior probability interval for the prevalence are 0.296 and (0.244, 0.351), respectively. The same quantities using a flat Beta(1, 1) prior are 0.355 and (0.281, 0.433), respectively, and this choice would not affect the inference on the sensitivity. However, if we believe the sub‐populations with VAP are different between the two stages we may also consider an alternative prior for the sensitivity.

ALTERNATIVE APPROACHES

In this section, we compare properties of the proposed BAM to alternative commonly used methods. Assume we wish to obtain the number of individuals with the disease, nT,1, required to estimate the sensitivity to within a particular degree of accuracy. The alternative methods are based on a hypothesis test of H0: λ = λ0 against the two‐tailed alternative H1: λ ≠ λ0, conducted at a significance level of α. We take the value of λ0 to be the maximum likelihood estimate of the sensitivity using the analytical validity data. The sample size can be chosen according to a desired power of 1 − β to detect a difference of size d. As discussed in Section 1, there are several possible approaches; we consider the following. The first is based on a normal approximation. In this case, to achieve a power of 1 − β we choose the sample size in the disease group as nT,1 = (z_{1−α/2} + z_{1−β})^2 λ0(1 − λ0) / d^2, where z_q is the q‐th quantile of a standard normal distribution. We construct a 100(1 − α)% confidence interval based on this normal approximation, known as the Wald interval. The second approach is based on an exact binomial test to give the CP interval. The third approach combines the normal approximation with an adjustment to the hypothesized value as the center of the interval to give the AC interval. In practice, the standard way of obtaining the required sample size is to use the appropriate sample size formula (if available), or in‐built functions within statistical software (eg, the binDesign function from the binGroup R package). However, these often give rise to unreliable sample sizes and, in our investigation, are shown to perform poorly over the range of parameter values considered; see Section E of the supplementary material. We instead rely on simulation. That is, we choose the smallest sample size to give the correct proportion of intervals below the desired target width w*, based on simulating confidence intervals repeatedly and finding the power empirically. 
The total number of individuals to recruit is found by scaling the disease-group sample size with respect to the estimated prevalence, that is, dividing it by the estimated prevalence. The same procedure is used to obtain the number of individuals without the disease required to estimate the specificity to a certain degree of accuracy; in this case, we divide by one minus the estimated prevalence.
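As a concrete illustration of the simulation-based procedure, the Python sketch below finds the smallest disease-group size for which the Wald interval meets the target width with the desired probability, then scales by the prevalence. The sensitivity, target width, power, and prevalence estimate used here are illustrative values, not those of the study:

```python
import math
import random

Z = 1.959964  # upper 2.5% point of the standard normal, for 95% intervals

def wald_width(x, n):
    """Width of the Wald (normal-approximation) interval for x/n."""
    p_hat = x / n
    return 2 * Z * math.sqrt(p_hat * (1 - p_hat) / n)

def sample_size_wald(sens, target_width, power, n_sims=500, n_max=1000, seed=1):
    """Smallest disease-group size for which the simulated Wald interval
    is narrower than target_width with probability at least power."""
    rng = random.Random(seed)
    for n in range(5, n_max):
        hits = 0
        for _ in range(n_sims):
            x = sum(rng.random() < sens for _ in range(n))  # binomial draw
            hits += wald_width(x, n) <= target_width
        if hits / n_sims >= power:
            return n
    return None

# Illustrative values only: sensitivity 0.8, target width 0.18, power 0.8,
# and an assumed prevalence estimate of 0.3 from analytical validity data.
n_disease = sample_size_wald(0.8, 0.18, 0.8)
n_total = math.ceil(n_disease / 0.3)  # scale to the total number to recruit
print(n_disease, n_total)
```

The CP and AC variants follow the same loop with their interval widths substituted for `wald_width`.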

Comparison of sample sizes

In this section, we compare the sample sizes required for a diagnostic accuracy study using the methods outlined above. We fix the significance level and the power/assurance, and aim to estimate the sensitivity to within 0.18 in a two-sided interval, varying the sensitivity and the prevalence over ranges of values. For the proposed BAM, we consider three prior sample sizes of 25, 50, and 75 to represent "small," "medium," and "large" analytical validity studies. The results for all scenarios and methods are illustrated in Figure 3.
FIGURE 3

A comparison of the sample sizes required based on power calculations (dashed) using a Wald interval (dark blue), Clopper‐Pearson (red), Agresti‐Coull (green), assurance (black, solid), and assurance based on non‐informative analysis priors (light blue). In each plot, there are three black curves relating to prior sample sizes of (from top to bottom) 25, 50, and 75

Note that the power calculations are based on the true parameter values, whereas the assurance calculation uses beta priors for the sensitivity and the prevalence. An assurance calculation with non-informative priors for the analysis is also considered; this is based on a design prior from the "small" analytical validity study to represent a reasonable "worst case" scenario. In Figure 3, we observe similar patterns across the frequentist approaches (the colored lines) for each prevalence. CP always results in the largest sample size, with Wald and AC giving similar, slightly smaller, sample sizes. In comparison to assurance, the frequentist methods produce larger sample sizes when the prevalence is high. In some scenarios they result in smaller sample sizes: for example, when the prior sample size is 25 and the prevalence is below 0.5, when the prior sample size is 50 and the prevalence is below 0.3, and when the prior sample size is 75 and the prevalence is around 0.2. However, as the sensitivity increases, the required sample size based on assurance falls more quickly than for the frequentist approaches, which are known to perform poorly as the sensitivity approaches one. Further details, including an assessment of different target interval widths, are provided in Section C of the supplementary material. The message is consistent across the parameter combinations considered: assurance for the sensitivity reduces the required sample size in the majority of cases, particularly in moderate-to-high prevalence populations and when a highly accurate test is required.
High prevalence situations are common in secondary care, where patients have already been triaged (such as in suspected stroke), or in cancer pathways by the time an invasive test, such as a biopsy, is used. When the BAM is applied to even lower prevalences of 0.1, 0.05, and 0.01, the sample sizes required for a sensitivity of 0.9, based on a medium analytical validity study, are 681, 1643, and 2770, respectively. Such low prevalences may be the case in large-scale geographic prevalence surveys, for example.

Comparison of interval widths

A smaller sample size will not be useful if the corresponding interval estimates are very wide. Therefore, we conduct a simulation study, outlined below, to assess the width of the intervals resulting from each approach. First, we sample values of the sensitivity and prevalence from uniform distributions. These are used to sample the analytical validity results from their respective binomial distributions, based on a "medium" analytical validity sample size. From these data, we find estimates of the sensitivity and prevalence for the power calculations and set the prior distributions for the assurance calculations. We then find the required sample size for each method, sample the results of the diagnostic accuracy study from their respective binomial distributions, and use these to calculate 95% intervals for the sensitivity. Finally, we calculate the width of the intervals. Repeating this process 100 times gives the distributions of interval widths, which are shown in Figure 4 for a power/assurance of 0.5 (left) and 0.8 (right).
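The sampling steps of this study can be sketched as follows. The uniform ranges, the analytical validity size of 50, and the flat Beta(1, 1) starting prior used to build the assurance priors are illustrative assumptions for this sketch, not the paper's exact settings:

```python
import random

rng = random.Random(42)
n_av = 50  # assumed "medium" analytical validity sample size

# Step 1: draw true sensitivity and prevalence from uniform distributions
# (the ranges here are illustrative choices).
sens = rng.uniform(0.5, 0.95)
prev = rng.uniform(0.1, 0.5)

# Step 2: simulate the analytical validity results from their binomials.
n_disease = sum(rng.random() < prev for _ in range(n_av))
x_correct = sum(rng.random() < sens for _ in range(n_disease))

# Step 3: point estimates of sensitivity and prevalence for the power
# calculations, and conjugate beta priors for the assurance calculation
# (updating a flat Beta(1, 1), an assumption of this sketch).
sens_hat = x_correct / n_disease if n_disease else 0.5
prev_hat = n_disease / n_av
sens_prior = (1 + x_correct, 1 + n_disease - x_correct)
print(sens_hat, prev_hat, sens_prior)
```

From here, each method's sample size routine is applied and the diagnostic accuracy study results are simulated in the same binomial fashion.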
FIGURE 4

The width of 95% confidence or posterior probability intervals based on 100 simulations for the Wald interval (Wald), Clopper‐Pearson (CP), Agresti‐Coull (AC), assurance (BAM), and assurance using a non‐informative analysis prior (Non‐inf). The power/assurance used to choose the sample size was 0.5 (left) and 0.8 (right). The horizontal line is at the desired target width

For both power/assurance levels, the approaches produce intervals with a similar distribution of widths. When the power/assurance is 0.5, the median width for each approach lies approximately at the target width; when it is 0.8, the target width lies around, or slightly above, the upper quartile for each method. Thus, the different sample sizes observed in the previous section do not come at the expense of less precise inference. The simulations were repeated with other target interval widths; the corresponding results are provided in Section D of the supplementary material. The main conclusions remain: for a power/assurance of 0.5, all of the distributions are approximately centered on the target width, and for a power/assurance of 0.8, each approach produces intervals whose widths include the target width in the upper 25% of their empirical distribution. In addition, we have investigated the properties of the BAM when assuring both sensitivity and specificity together, in terms of the sample size required and the resulting interval widths; this is provided in Section F of the supplementary material.

DISCUSSION

In this article, we have proposed the novel BAM to determine sample sizes for diagnostic accuracy studies. Bayesian assurance fulfills a similar role to power and, as we have shown, can offer benefits when suitable prior information is available. In particular, representing uncertainty in unknown test characteristics using prior distributions, and utilizing information from different stages of the development pathway, allows a wider range of evidence to be seamlessly incorporated into the design and analysis of a diagnostic accuracy study. Consequently, we have shown that this has the potential to reduce the sample size, thus increasing efficiency in evidence development. If no prior information is available or accessible from earlier stages of development, expert elicitation can be used to form the necessary prior distributions. Elicited distributions can include opinions from multiple experts, or be combined with data from other sources. The larger the prior sample size, the more informative the prior distribution will be, which, as shown in Figure 3, typically corresponds to a smaller sample size in the diagnostic accuracy study. If it is not appropriate to use an informative prior for the analysis (eg, to mitigate researcher bias), a skeptical or flat prior can be used instead. The BAM has the flexibility to allow distinct prior distributions in the design and analysis stages, as illustrated in Section B of the supplementary material. The proposed BAM can be used regardless of whether the final analysis is frequentist or Bayesian. Some assurance calculations may not result in closed-form solutions (eg, if a Bayesian analysis uses a non-conjugate analysis prior), in which case simulation and numerical methods are required. Thus, calculating assurance can be challenging and, unlike power, is not available in standard software packages. To increase accessibility of the BAM, R code is provided and an R Shiny application is currently under development.
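When no closed form is available, assurance can be computed by nested Monte Carlo: draw the parameter from the design prior, simulate the study data, and check whether the resulting posterior interval meets the target width. The following is a minimal Python sketch of that idea for a generic beta-binomial setup, with illustrative prior parameters; it is not the authors' implementation, which is provided in R:

```python
import random

def posterior_interval_width(a_post, b_post, rng, level=0.95, draws=1000):
    """Monte Carlo width of an equal-tailed posterior probability
    interval for a Beta(a_post, b_post) posterior."""
    s = sorted(rng.betavariate(a_post, b_post) for _ in range(draws))
    lo = s[int(draws * (1 - level) / 2)]
    hi = s[int(draws * (1 + level) / 2) - 1]
    return hi - lo

def assurance(n, a_design, b_design, a_analysis, b_analysis,
              target_width, rng, n_sims=200):
    """Proportion of simulated studies of size n whose posterior
    interval for the sensitivity is narrower than target_width."""
    hits = 0
    for _ in range(n_sims):
        theta = rng.betavariate(a_design, b_design)       # design-prior draw
        x = sum(rng.random() < theta for _ in range(n))   # simulated data
        width = posterior_interval_width(a_analysis + x,
                                         b_analysis + n - x, rng)
        hits += width <= target_width
    return hits / n_sims

# Illustrative setup: an informative Beta(20, 5) design prior (as if from
# a hypothetical analytical validity study), flat Beta(1, 1) analysis prior.
rng = random.Random(7)
for n in (40, 80, 120):
    print(n, assurance(n, 20, 5, 1, 1, 0.18, rng))
```

The required sample size is then the smallest n whose assurance reaches the chosen level; distinct design and analysis priors drop in directly, as in the flexibility discussed above.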
This work focuses on assuring sensitivity and specificity as measures of diagnostic accuracy. We have also shown how the BAM can be used to assure sensitivity and specificity jointly, for which no existing approaches are available, to our knowledge. The assurance calculations can be modified to obtain sample sizes for other quantities, such as positive and negative predictive values or the area under the curve. Moreover, the assurance calculations could be extended to allow for multiple categorical results, or results in the form of continuous measures, which is an area of further work. In this article, we considered the evaluation of a single diagnostic test, but further work could explore how the proposed method extends to multiple tests. To reflect standard practice in diagnostic accuracy studies, we have inherently assumed that the sampling plan will be produced prior to the study, carried out accordingly, and the data analyzed at the end of the study. Future work could extend the approach so that it can be applied sequentially, participant-by-participant (or in blocks), to monitor the width of the posterior interval until the desired value is attained, at which point the study would terminate. This would reduce the sample size required. However, it would require a change in the way that diagnostic accuracy studies are routinely implemented.