Literature DB >> 35332674

SCI: A Bayesian adaptive phase I/II dose-finding design accounting for semi-competing risks outcomes for immunotherapy trials.

Yifei Zhang1,2, Beibei Guo3, Sha Cao2,4, Chi Zhang4,5, Yong Zang2,4.   

Abstract

An immunotherapy trial often uses a phase I/II design to identify the optimal biological dose by monitoring the efficacy and toxicity outcomes simultaneously in a single trial. The progression-free survival rate is often used as the efficacy outcome in phase I/II immunotherapy trials. Patients who develop disease progression in such trials are generally seriously ill and are often treated off the trial for ethical reasons. Consequently, the occurrence of disease progression terminates follow-up for the toxicity event but not vice versa, and the issue of semi-competing risks arises. Moreover, this issue can become more intractable with late-onset outcomes, which occur when a relatively long follow-up time is required to ascertain progression-free survival. This paper proposes a novel Bayesian adaptive phase I/II design accounting for semi-competing risks outcomes for immunotherapy trials, referred to as the SCI design. To tackle the issue of semi-competing risks in the presence of late-onset outcomes, we re-construct the likelihood function based on each patient's actual follow-up time and develop a data augmentation method to efficiently draw posterior samples from a series of Beta-binomial distributions. We propose a concise curve-free dose-finding algorithm to adaptively identify the optimal biological dose using accumulated data without making any parametric dose-response assumptions. Numerical studies show that the proposed SCI design yields good operating characteristics in dose selection, patient allocation, and trial duration.
© 2022 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.


Keywords:  adaptive design; immunotherapy; late-onset outcome; phase I/II clinical trial; semi-competing risks


Year:  2022        PMID: 35332674      PMCID: PMC9481656          DOI: 10.1002/pst.2209

Source DB:  PubMed          Journal:  Pharm Stat        ISSN: 1539-1604            Impact factor:   1.234


INTRODUCTION

Immunotherapy (IT) has changed the direction of cancer care. Unlike cytotoxic agents, which attack tumor cells directly, IT boosts the body's natural defenses to fight cancer and represents the most promising new cancer treatment approach since the development of the first chemotherapies in the late 1940s. There are several types of IT, including monoclonal antibodies, non-specific immunotherapies, oncolytic virus therapy, T-cell therapy, and cancer vaccines. IT has already led to major treatment breakthroughs for several cancers, including brain cancer, breast cancer, bladder cancer, and other types of cancer. The emergence of IT has challenged the assumption that both the efficacy and toxicity response rates increase monotonically with the dose, an assumption on which most phase I dose-finding trials rely to identify the maximum tolerated dose (MTD) solely based on toxicity outcomes. Indeed, IT enhances the immune system's innate power to stop the growth of cancer cells and prevent cancer from spreading to other parts of the body. Consequently, IT is not necessarily administered at its MTD to achieve an optimal therapeutic effect, making the conventional phase I trial design targeting the MTD inappropriate for IT trials. Instead, the primary goal of a dose-finding trial for IT is to identify the optimal biological dose (OBD), which achieves the optimal overall therapeutic effect among all the candidate doses. Finding the OBD for IT requires monitoring both the toxicity and efficacy outcomes in a single trial, which is typically referred to as a phase I/II clinical trial. Because the biological mechanisms of cytotoxic agents (e.g., radiotherapy and chemotherapy) and IT differ, the definitions of the efficacy responses also differ. In general, a patient responds favorably to a cytotoxic agent if he/she has achieved at least partial remission of the tumor after receiving the treatment.
On the other hand, because IT aims to stop the growth of tumors rather than kill the cancer cells, stable disease is often considered a positive signal, and only the occurrence of disease progression is treated as a negative efficacy response. As a result, the progression-free survival rate rather than the objective tumor response rate is often used as the efficacy outcome in IT trials. Using the progression-free survival rate as the response outcome in a phase I/II dose-finding trial raises several essential practical issues. First of all, patients developing disease progression in oncology trials are typically seriously ill. Indeed, many patients will develop cancer-related death soon after experiencing disease progression. Even if not, they will be treated off the trial with other second-line drugs, because it is highly unethical to continue treating a seriously ill patient with an ineffective drug. Consequently, if the toxicity event does not happen before the disease progression, it will be censored at the disease progression time. In other words, the occurrence of disease progression terminates the observation of toxicity, but not vice versa. In the statistical community, this kind of data is often referred to as semi-competing risks data. Secondly, the evaluation of the progression-free survival rate requires a relatively long follow-up time compared with evaluating the objective tumor response rate, which may cause a late-onset outcome issue in addition to the semi-competing risks. The late-onset issue arises when at least some of the patients' response outcomes are missing due to incomplete follow-up at the interim analysis time and may be ascertainable long after that time. The missing data caused by late-onset outcomes in a dose-finding trial are not missing at random and should be appropriately handled to avoid biased statistical inference.
The naive solution of temporarily suspending the trial to allow all enrolled patients to be fully evaluated before a new dose assignment is often unrealistic because it may result in an unfeasibly long trial, waste resources, and cause a tremendous administrative burden. Besides, it is ethically undesirable to delay a new patient's treatment while waiting for previous patients' outcomes. We develop a novel Bayesian adaptive dose-finding design accounting for semi-competing risks outcomes for IT trials in the presence of the late-onset issue. We first decompose the joint toxicity-efficacy probability into the product of marginal and conditional probabilities and then re-construct the likelihood function based on each patient's actual follow-up time. We also develop a data augmentation method to efficiently draw the posterior samples for the parameters of interest from a series of Beta-binomial distributions, which substantially reduces the computational burden. We propose a curve-free dose-finding algorithm to adaptively identify the OBD based on a toxicity-efficacy tradeoff function, making no parametric model assumptions on the dose-response relationships. Our research is motivated by a phase I/II IT trial conducted at Indiana University Melvin and Bren Simon Comprehensive Cancer Center. This trial aims to find the OBD of a novel programmed cell death protein 1 (PD-1) immune checkpoint inhibitor for patients with acute myeloid leukemia (AML) and high-risk myelodysplasia. This inhibitor acts to inhibit the association of programmed death-ligand 1 with its receptor PD-1, which is involved in suppressing the patient's immune system. Five dose levels (0.1, 0.3, 0.5, 0.8, 1.0 mg/kg) of the inhibitor will be investigated, and the prepared doses will be administered by slow injection over 10 min. A maximum of 60 patients will be accrued to the trial. The primary efficacy outcome is the 1-year progression-free survival rate.
The co-primary toxicity outcome is the rate of treatment-related grade 3 and 4 hematological and non-hematological toxicity, which will also be ascertained over a 1-year period. Any patient experiencing disease progression will be treated off the trial with the second-line treatment azacitidine, confounding the evaluation of the toxicity outcome. Hence, the issue of semi-competing risks arises. In addition, a new cohort of patients will enter the trial every 6 months for dose allocation, before all the toxicity and efficacy outcomes become available for the previous patients in the trial. Therefore, this IT trial is also subject to the late-onset issue. Designing a rigorous and efficient dose-finding trial handling late-onset semi-competing risks toxicity-efficacy outcomes is the challenge of this study. Many phase I/II clinical trial designs for IT have been proposed in the literature. For categorical outcomes, Messer et al. proposed a toxicity evaluation design for IT trials, which is based on safety hypothesis testing and an algorithm similar to a 3 + 3 design. Liu and Johnson developed a robust phase I/II dose-finding design that models the toxicity and efficacy using a flexible Bayesian dynamic model and borrows information across doses without imposing stringent parametric assumptions on the shapes of the dose-toxicity and dose-efficacy curves. Lin and Yin proposed the STEIN design as the first toxicity-efficacy interval design for optimal dose-finding trials considering both efficacy and toxicity outcomes. Liu et al. developed a latent variable model to capture the correlations between the toxicity and efficacy outcomes in a phase I/II trial. Zhou et al. developed a utility-based Bayesian optimal interval (U-BOIN) design to identify the OBD. Han et al. developed a two-stage nonparametric dose-finding design (TSNP) for IT trials. For late-onset outcomes, Liu et al.
treated the late-onset outcomes as missing data and proposed a data augmentation algorithm to impute the missing data under the Bayesian framework. The work was further extended by Jin et al. to the phase I/II trial design considering both toxicity and efficacy outcomes. Zhang and Zang proposed a conditional weighted likelihood (CWL) method to address the jointly late-onset toxicity-efficacy outcomes issue without making any parametric model assumption. To the best of our knowledge, this paper presents the first phase I/II clinical trial design for IT that accounts for semi-competing risks late-onset outcomes. We have previously developed another Bayesian adaptive design for phase I/II clinical trials with late-onset outcomes. The differences between the previous design and the new design proposed in this paper are as follows. (1) The previous design considers competing-risks late-onset outcomes, where the occurrence of either disease progression or dose-limiting toxicity terminates the follow-up for the other event; in this paper, we focus on semi-competing risks late-onset outcomes, in which only the occurrence of disease progression can terminate a patient's follow-up early. (2) The previous paper uses a piece-wise cause-specific hazard model to characterize the late-onset competing risks outcome; in this paper, we propose to use the uniform distribution to approximate the late-onset semi-competing risks outcome. (3) The previous design uses a parametric continuation-ratio model to capture the correlation between the toxicity and efficacy outcomes; in this paper, we decompose the joint distribution into marginal and conditional distributions without any parametric model assumption.

PROBABILITY MODEL

We first investigate the bias caused by semi-competing risks. Let Y be the response outcome. In the presence of semi-competing risks, Y has four levels: Y = 0 if no adverse event occurs; Y = 1 if the disease progression (DP) event occurs first; Y = 2 if only the dose-limiting toxicity (DLT) event occurs; and Y = 3 if the DLT event occurs first, followed by the DP event. Let d_1 < ... < d_K be the dose levels under investigation; we can define pi_k(y) = Pr(Y = y | d_k), which can be naturally estimated from the observed data. We use Y_P and Y_T to denote the binary DP and DLT outcomes: Y_P = 1 if DP occurs, 0 otherwise, and similarly Y_T = 1 if DLT occurs, 0 otherwise. Let p_k(y_T, y_P) = Pr(Y_T = y_T, Y_P = y_P | d_k) be the joint probability of interest. The p_k(y_T, y_P) are the real parameters of interest, whereas the pi_k(y) are the parameters subject to the semi-competing risks issue: a patient whose DLT would occur only after DP is recorded in category Y = 1, so the observed category probabilities mix the true joint probabilities. Therefore, when the issue of semi-competing risks arises, it distorts the estimates of the joint probabilities involving the toxicity outcome. In terms of the marginal estimates, define p_k^P = Pr(Y_P = 1 | d_k) as the DP rate and p_k^T = Pr(Y_T = 1 | d_k) as the DLT rate. If the semi-competing risks issue is ignored, the DLT rate is estimated by pi_k(2) + pi_k(3), which omits the patients whose DLT is censored by an earlier DP; the estimate of the DLT rate is therefore biased downward. Since the toxicity outcome (DLT) is essential for almost any phase I and phase I/II dose-finding trial, the issue of semi-competing risks must be appropriately addressed. The semi-competing risks problem is often nested with the late-onset issue in clinical trials because of the long follow-up time of the response outcomes. We now investigate the late-onset issue. In Figure 1, we provide an illustrative example of the late-onset mechanism: a new patient is enrolled in the trial at the beginning of every month and, once enrolled, is followed for 2 months to ascertain the DP and DLT information.
When the second patient enters the trial, the first patient has only been followed for 1 month and has experienced neither DP nor DLT, so the late-onset issue arises. When the third patient enters the trial, the first patient has been fully followed with Y_T = 0 and Y_P = 0. However, the second patient is still under follow-up and is subject to the late-onset issue. The third patient's toxicity-efficacy information is missing when the last patient comes in, due to the late-onset issue. The second patient has by then been fully followed and has developed DP at 1.5 months of follow-up. However, his/her DLT status is missing; that is due to the semi-competing risks issue rather than the late-onset issue, and that information would remain unobservable even if the second patient had been fully followed. Because the semi-competing risks can distort the estimates and the late-onset issue is also non-ignorable, a new method is needed to solve these entangled problems, which is developed in the following sections.
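The downward bias in the naive DLT estimate can be seen in a small simulation. The sketch below is illustrative only — the exponential event-time distributions, rates, and follow-up window are assumptions, not values from the paper. It counts how often a latent DLT that would occur within the follow-up window is missed because DP occurs first and censors it:

```python
import random

random.seed(1)

U = 2.0            # total follow-up window (months), assumed
n = 100_000        # simulated patients
true_dlt = 0       # latent DLT times falling inside [0, U]
naive_dlt = 0      # DLT events actually observed (no earlier DP)

for _ in range(n):
    t = random.expovariate(0.25)   # latent time-to-DLT (assumed rate)
    p = random.expovariate(0.40)   # latent time-to-DP (assumed rate)
    if t <= U:
        true_dlt += 1
        if t < p:                  # DLT observed only if DP has not occurred yet
            naive_dlt += 1

true_rate = true_dlt / n
naive_rate = naive_dlt / n
print(f"true DLT rate ~ {true_rate:.3f}, naive DLT rate ~ {naive_rate:.3f}")
```

Under these assumed rates, the true 2-month DLT rate is about 0.39 while the naive estimate that ignores semi-competing risks drops to about 0.28, illustrating the distortion described above.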
FIGURE 1

Illustration of the late‐onset mechanism. The solid circle indicates that the event does not happen at the end of the follow‐up. The hollow circle indicates that the event is missing. The solid triangle indicates that the event has happened

Let U_T and U_P be the fixed total follow-up times for DLT and DP. We define u_ik^T and u_ik^P as the actual follow-up times for DLT and DP for the ith patient assigned to dose level k at an interim analysis time; by trial design, u_ik^T <= U_T and u_ik^P <= U_P. Let t_ik^T and t_ik^P be the underlying true DLT occurring time and true DP occurring time for the ith patient assigned to dose k. To circumvent the missing data problem caused by semi-competing risks and late-onset outcomes, we propose to approximate the conditional probability that a pending DLT event occurs within the remaining window (u_ik^T, U_T]. This approximation assumes that the time-to-toxicity outcome is uniformly distributed over the entire follow-up period [0, U_T]. According to previous studies, this uniform scheme is remarkably robust and yields desirable operating characteristics. Along the same lines, we further approximate the corresponding conditional probabilities for the pending DP outcomes. Using indicator functions that record, for each patient, whether each outcome has been observed, has not occurred, or is still pending at the interim analysis time, the complete likelihood function in the presence of late-onset semi-competing risks outcomes can be expressed as a product of Bernoulli contributions based on each patient's actual follow-up time (Equation 4). We develop a data augmentation method to derive the posterior distributions of the parameters of interest based on the likelihood function (4), assigning the parameters Beta prior distributions. Denoting the missing values of the binary outcomes by latent variables, the data augmentation method is summarized as follows: 1. Impute each missing binary outcome from its conditional Bernoulli distribution. 2. Given all the imputed data, sequentially draw the posterior samples of the parameters from their conjugate Beta distributions. The derivation of the data augmentation method is given in the Appendix. With these posterior samples, we can easily calculate the posterior distribution of the joint probability p_k(y_T, y_P), which will be used in the following dose-finding design.
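To make the two-step augmentation concrete, here is a minimal single-dose sketch in my own notation. The data, prior, and chain settings are hypothetical, and the imputation weight is the standard uniform time-to-event expression, not necessarily the paper's exact formula: censored DLT indicators are imputed from their conditional Bernoulli distributions, then the DLT probability is drawn from its conjugate Beta posterior.

```python
import random

random.seed(7)

U = 2.0                                  # total follow-up window (months)
# Observed DLT indicators; None marks an outcome censored by DP or late-onset.
y = [1, 0, 0, None, None, 0, 1, None]
u = [2.0, 2.0, 2.0, 1.0, 0.5, 2.0, 2.0, 1.5]   # actual follow-up times
a0, b0 = 1.0, 1.0                        # Beta(1, 1) prior on the DLT rate
n_iter, burn = 2000, 500
draws, p = [], 0.5                       # posterior draws; initial DLT rate

for it in range(n_iter):
    # Step 1: impute censored outcomes.  Under a uniform time-to-DLT on
    # [0, U], P(DLT by U | no DLT by u, p) = p (U - u)/U / (1 - p u / U).
    y_imp = []
    for yi, ui in zip(y, u):
        if yi is None:
            w = p * (U - ui) / U / (1.0 - p * ui / U)
            yi = 1 if random.random() < w else 0
        y_imp.append(yi)
    # Step 2: draw the DLT probability from its conjugate Beta posterior.
    s = sum(y_imp)
    p = random.betavariate(a0 + s, b0 + len(y_imp) - s)
    if it >= burn:
        draws.append(p)

post_mean = sum(draws) / len(draws)
print(f"posterior mean DLT probability ~ {post_mean:.2f}")
```

Because both conditional draws are from standard distributions (Bernoulli and Beta), each Gibbs iteration is essentially free, which is the computational advantage the augmentation provides.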

UTILITY FOR RISK–BENEFIT TRADEOFF

The goal of a phase I/II trial is to identify the OBD, which yields the best risk-benefit tradeoff among all the candidate doses. There are various ways to define the OBD (e.g., marginal probabilities, tradeoff contour, utility function); the utility function is arguably the most convenient and popular one, as it maps the multidimensional outcomes into a single index measuring the desirability of a dose in terms of the risk-benefit tradeoff. The utility function has been used to define the OBD in a variety of phase I/II designs. Moreover, the utility function-based definition is flexible and incorporates other definitions as special cases. We use the utility function-based definition in this paper. To construct the utility function, we need to elicit from physicians the utility values u(y_T, y_P) associated with the joint outcomes (Y_T = y_T, Y_P = y_P) for y_T, y_P in {0, 1}. The utility values are required to satisfy u(0, 0) >= u(y_T, y_P) >= u(1, 1) because no adverse event is the most desirable outcome and experiencing both DLT and DP is the least desirable. Constructing the utilities requires close collaboration between clinicians and statisticians and should be customized to best reflect clinical needs and practice. In our experience, the process of eliciting utility values is quite natural. It can be done by simply explaining to the clinicians what the utilities represent in the decision process and asking them to specify all values after fixing the ones for the worst and best outcomes. In particular, we can first fix the two boundaries as u(0, 0) = 100 and u(1, 1) = 0, and then elicit the remaining utility values, which must lie between 0 and 100, using the two boundaries as a reference. After eliciting the utility values, the utility function for dose level d_k can be constructed as the expected utility, summing u(y_T, y_P) weighted by the joint probabilities p_k(y_T, y_P). During the trial, given the interim data, we can calculate the posterior mean utility of each dose. In addition to the utility, we also need to construct an admissible set to safeguard patients from overly toxic and insufficiently efficacious doses.
We define phi_T as the maximum tolerable DLT rate and phi_P as the maximum tolerable DP rate. We first define the admissible set of toxicity as the set of doses for which the posterior probability that the DLT rate exceeds phi_T stays below a cut-off value. If any dose is found to be overly toxic during an interim analysis, then all doses no lower than that dose should be excluded from the trial, because the toxicity rate generally increases with the dose level. An analogous restriction based on phi_P screens out insufficiently efficacious doses, and the admissible set consists of the doses passing both restrictions; the cut-off values can be calibrated through simulation studies. Finally, the dose that yields the highest posterior mean utility within the admissible set is the identified OBD. Based on the aforementioned discussion, the toxicity rate will be underestimated, and risky doses may be included in the admissible set, if the issue of semi-competing risks is ignored. In terms of the utility function, the impact of semi-competing risks is summarized in the following theorem. Theorem 1: The utility function is overestimated if the issue of semi-competing risks is ignored. The proof of Theorem 1 is given in the Appendix. As semi-competing risks can distort both the admissible set and the posterior mean utility, the probability model and data augmentation algorithm developed in this paper are necessary.
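As a concrete sketch of these two computations for a single dose, the snippet below estimates the posterior mean utility and the toxicity-admissibility check by Monte Carlo. The counts, prior, utility values, bound, and cut-off are all hypothetical placeholders, and a Dirichlet posterior over the four joint outcomes stands in for the paper's Beta-binomial augmentation:

```python
import random

random.seed(3)

phi_T = 0.30          # maximum tolerable DLT rate (hypothetical)
c_T = 0.90            # cut-off on Pr(DLT rate > phi_T) (hypothetical)
# u(y_T, y_P) in the order (0,0), (0,1), (1,0), (1,1); boundaries 100 and 0.
utilities = [100.0, 40.0, 30.0, 0.0]
counts = [10, 4, 3, 1]                 # hypothetical complete-data counts
prior = [1.0, 1.0, 1.0, 1.0]           # Dirichlet(1,1,1,1) prior

n_draws = 4000
util_sum, exceed = 0.0, 0
for _ in range(n_draws):
    # One Dirichlet posterior draw via normalized Gamma variates.
    g = [random.gammavariate(c + a, 1.0) for c, a in zip(counts, prior)]
    tot = sum(g)
    p = [gi / tot for gi in g]
    util_sum += sum(u * pi for u, pi in zip(utilities, p))
    exceed += (p[2] + p[3] > phi_T)    # marginal DLT rate in this draw

mean_utility = util_sum / n_draws
pr_too_toxic = exceed / n_draws
admissible = pr_too_toxic <= c_T       # exclude only on a strong toxicity signal
print(f"posterior mean utility ~ {mean_utility:.1f}, "
      f"Pr(DLT rate > {phi_T}) ~ {pr_too_toxic:.2f}, admissible: {admissible}")
```

The same posterior draws serve both decisions: the utility summary for ranking doses and the tail probability for the admissibility screen.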

DOSE‐FINDING ALGORITHM

With the utility function and admissible set constructed, we propose the following dose-finding algorithm to find the OBD in a phase I/II trial. The proposed probability model does not model untried doses. As a result, the dose-finding procedure is restricted to tried doses and may miss the global optimal dose. To solve this problem, we propose an additional dose-escalation rule. Specifically, let d_h be the highest dose that has been tried so far. If, during any dose-finding step, the identified OBD is d_h, indicating that the true OBD may be higher than d_h, we perform a dose escalation and allocate the next cohort of patients at dose level d_{h+1}, if possible, to further explore the dose-response profiles. Once the trial ends, we select the dose with the highest posterior mean utility within the admissible set as the recommended OBD. The algorithm proceeds as follows.
1. The first cohort of patients is treated at the lowest dose level or another physician-specified dose level.
2. Update the posterior distribution of all the parameters of interest at the current dose level.
3. Use the updated posterior distribution to construct the admissible set and identify the OBD. If the admissible set is empty, terminate the trial early and conclude that no dose can be selected as the OBD.
4. Treat the next cohort of patients at the identified OBD.
5. Repeat steps 2-4 until the maximum sample size is reached. Dose skipping is not allowed when a dose escalation occurs.
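Putting the steps together, the loop below sketches a simplified version of such an algorithm under complete (no late-onset) outcomes. The true probabilities, utilities, bound, and cut-off are hypothetical, and an independent Dirichlet posterior per dose stands in for the paper's augmented posterior. It illustrates the escalation rule: when the best admissible dose is the highest dose tried, the next cohort moves one level up without skipping.

```python
import random

random.seed(11)

# Per dose: P(no event), P(DP only), P(DLT only), P(both) -- hypothetical.
true_joint = [
    [0.80, 0.10, 0.08, 0.02],
    [0.65, 0.12, 0.18, 0.05],
    [0.50, 0.15, 0.25, 0.10],
    [0.35, 0.20, 0.30, 0.15],
    [0.25, 0.25, 0.30, 0.20],
]
utilities = [100.0, 40.0, 30.0, 0.0]     # u(0,0), u(0,1), u(1,0), u(1,1)
phi_T, c_T = 0.35, 0.90                  # DLT bound and exclusion cut-off
cohort, n_max = 3, 60
counts = [[1.0] * 4 for _ in range(5)]   # Dirichlet(1,1,1,1) prior per dose
n_pat = [0] * 5

def posterior_summary(k, n_draws=1000):
    """Posterior mean utility and Pr(DLT rate > phi_T) for dose k."""
    utils, exceed = 0.0, 0
    for _ in range(n_draws):
        g = [random.gammavariate(a, 1.0) for a in counts[k]]
        tot = sum(g)
        p = [gi / tot for gi in g]
        utils += sum(u * pi for u, pi in zip(utilities, p))
        exceed += (p[2] + p[3] > phi_T)
    return utils / n_draws, exceed / n_draws

dose = 0                                 # start at the lowest dose
while sum(n_pat) < n_max:
    for _ in range(cohort):              # treat one cohort, observe outcomes
        y = random.choices(range(4), weights=true_joint[dose])[0]
        counts[dose][y] += 1
        n_pat[dose] += 1
    tried = [k for k in range(5) if n_pat[k] > 0]
    summaries = {k: posterior_summary(k) for k in tried}
    admissible = [k for k in tried if summaries[k][1] <= c_T]
    if not admissible:
        break                            # no acceptable dose: stop early
    best = max(admissible, key=lambda k: summaries[k][0])
    # Escalation rule: if the highest tried dose looks best, go one level up.
    dose = best + 1 if (best == max(tried) and best < 4) else best

obd = max(admissible, key=lambda k: summaries[k][0]) if admissible else None
print(f"patients per dose: {n_pat}, selected OBD index: {obd}")
```

Because only tried doses are ever scored, the escalation clause is what lets the design climb past a locally good low dose toward the global optimum, one level at a time.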

TRIAL APPLICATION

As an essential step toward applying the proposed design to the motivating AML trial, we first evaluate its operating characteristics. Reporting operating characteristics is often required in trial protocols when a new design is involved. We considered a phase I/II trial with five doses and a maximum sample size of 60, in cohorts of size 3. We specified an upper bound and cut-off value for the DP rate, and an upper bound and cut-off value for the DLT rate. We elicited the utility values for the four joint outcomes and specified independent non-informative Beta priors for the model parameters. Table 1 shows the simulation results under different dose-response curves, including the OBD selection probability, the average number of patients treated at each dose level, the average percentages of patients experiencing DP and DLT, the average sample size, and the average trial duration across 5000 simulated trials. The results for the true OBD are highlighted in Table 1. We specify a fixed total follow-up time of 2 months for every patient and an inter-arrival time of 1 month: a new cohort of patients enters the trial every month. To generate the survival outcomes, we first simulate the time-to-DLT outcome from a Weibull distribution. Then, we simulate two independent time-to-DP outcomes conditional on Y_T = 0 or 1, also from Weibull distributions. The parameters of the Weibull distributions are calibrated to match the pre-specified response rates. We compare the proposed SCI design with two alternatives. The first uses only the data observable at the interim analysis; we call it the observed-data design. Any patient with a missing DP or DLT outcome is ignored under the observed-data design. The second suspends the trial at the interim analysis in the presence of late-onset outcomes and reopens it only after all the patients have been completely followed; we call it the complete-data design. In scenario 1, none of the doses is acceptable, because each is either overly toxic or insufficiently efficacious.
All the designs under comparison terminate the trial early with no dose selected with probability close to 1. When the true OBD does exist (scenarios 2-6), the complete-data design, as expected, always yields the best performance in terms of OBD selection and patient allocation because it uses fully observed data. However, compared with the SCI and observed-data designs, the complete-data design roughly doubles the trial duration when the true OBD exists, making it impractical. It is also highly unethical to repeatedly suspend the trial and make the patients already enrolled wait until all the data become observable. The SCI design outperforms the observed-data design across all the scenarios under consideration, and the difference can be substantial. For example, under scenario 2, the true OBD selection rate for the SCI design is 49.3%, about 18 percentage points higher than that of the observed-data design. Also, the average number of patients treated at the OBD is 8.5 higher under the SCI design than under the observed-data design. The results for scenarios 3-6 are similar.
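The Weibull calibration step used to generate the outcomes can be sketched directly: fix an assumed shape parameter and solve the Weibull distribution function for the scale that reproduces a pre-specified event rate within the 2-month follow-up window (the shape value and target rate below are illustrative, not the paper's):

```python
import math
import random

def weibull_scale(target_rate, follow_up=2.0, shape=1.5):
    """Scale making P(T <= follow_up) equal target_rate for a Weibull(shape)."""
    # P(T <= U) = 1 - exp(-(U / scale)**shape)  =>  invert for the scale.
    return follow_up / (-math.log(1.0 - target_rate)) ** (1.0 / shape)

scale = weibull_scale(0.30)              # e.g. target a 30% 2-month DLT rate
implied = 1.0 - math.exp(-((2.0 / scale) ** 1.5))

random.seed(5)
t = random.weibullvariate(scale, 1.5)    # one simulated time-to-DLT
print(f"scale = {scale:.3f}, implied 2-month rate = {implied:.3f}")
```

Repeating this inversion for each dose level and outcome ties the simulated event times exactly to the pre-specified response rates in each scenario.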
TABLE 1

Operating characteristics of the SCI, observed‐data, and complete‐data designs based on 5000 replicates

Design          Metric        Dose 1    Dose 2    Dose 3    Dose 4    Dose 5

Scenario 1      (DP, DLT)     (0.95,0.25) (0.90,0.40) (0.85,0.50) (0.45,0.70) (0.40,0.75)
                Utility       22.0      21.1      21.3      38.2      39.2
SCI             % Selected    0.0       0.0       0.2       0.2       0.0       None: 99.6
                # Patients    6.9       7.0       6.0       4.8       2.5       DLT/DP: 46.9/77.7   N: 27.2   Duration: 9.1
Observed data   % Selected    0.1       0.1       0.1       0.3       0.0       None: 99.4
                # Patients    6.0       6.4       6.5       8.6       5.7       DLT/DP: 53.3/69.6   N: 33.1   Duration: 11.1
Complete data   % Selected    0.0       0.0       0.0       0.3       0.0       None: 99.7
                # Patients    3.5       3.9       3.8       5.2       2.3       DLT/DP: 51.3/71.4   N: 18.7   Duration: 12.5

Scenario 2      (DP, DLT)     (0.35,0.03) (0.32,0.25) (0.30,0.28) (0.28,0.35) (0.28,0.40)
                Utility       72.4      64.9      64.9      63.2      61.0
SCI             % Selected    *49.3*    21.1      15.9      8.7       4.7       None: 0.3
                # Patients    *23.7*    12.5      10.3      7.6       5.8       DLT/DP: 19.4/31.9   N: 59.9   Duration: 21.0
Observed data   % Selected    *31.4*    23.9      19.8      15.6      9.0       None: 0.3
                # Patients    *15.2*    12.8      11.9      11.1      9.0       DLT/DP: 24.3/30.6   N: 59.9   Duration: 21.0
Complete data   % Selected    *50.1*    18.5      14.0      12.1      4.5       None: 0.8
                # Patients    *23.0*    11.8      9.7       9.4       5.8       DLT/DP: 20.3/31.8   N: 59.7   Duration: 39.8

Scenario 3      (DP, DLT)     (0.45,0.05) (0.25,0.08) (0.23,0.27) (0.20,0.40) (0.20,0.50)
                Utility       64.1      77.6      70.3      66.4      61.8
SCI             % Selected    17.8      *50.9*    22.8      6.7       1.2       None: 0.6
                # Patients    11.4      *23.6*    12.7      7.7       4.2       DLT/DP: 18.8/27.7   N: 59.7   Duration: 20.9
Observed data   % Selected    18.5      *39.1*    30.3      10.4      1.3       None: 0.4
                # Patients    10.6      *17.6*    15.1      9.9       6.7       DLT/DP: 22.4/26.6   N: 59.9   Duration: 21.0
Complete data   % Selected    11.2      *57.3*    21.3      8.6       1.1       None: 0.5
                # Patients    8.9       *25.6*    12.8      8.0       4.4       DLT/DP: 19.0/26.6   N: 59.8   Duration: 39.8

Scenario 4      (DP, DLT)     (0.70,0.10) (0.45,0.20) (0.30,0.22) (0.40,0.28) (0.45,0.40)
                Utility       43.9      57.9      67.6      58.0      49.8
SCI             % Selected    1.7       22.4      *48.5*    18.2      3.5       None: 5.7
                # Patients    6.6       13.2      *21.4*    11.1      5.6       DLT/DP: 23.1/41.5   N: 57.8   Duration: 20.2
Observed data   % Selected    0.2       21.7      *43.7*    24.7      7.0       None: 2.7
                # Patients    4.8       11.9      *20.9*    14.0      7.7       DLT/DP: 24.2/40.8   N: 59.3   Duration: 20.7
Complete data   % Selected    1.6       19.4      *53.4*    17.4      3.1       None: 5.1
                # Patients    5.0       11.8      *24.9*    11.0      5.4       DLT/DP: 23.4/39.8   N: 57.9   Duration: 38.6

Scenario 5      (DP, DLT)     (0.60,0.05) (0.48,0.10) (0.45,0.12) (0.20,0.15) (0.35,0.25)
                Utility       53.1      59.8      61.2      78.0      62.8
SCI             % Selected    4.9       12.3      12.9      *57.7*    11.7      None: 0.5
                # Patients    7.1       9.4       9.3       *25.3*    8.8       DLT/DP: 14.0/35.2   N: 59.9   Duration: 20.9
Observed data   % Selected    4.2       13.5      17.0      *43.2*    21.6      None: 0.5
                # Patients    6.2       9.3       10.6      *21.3*    12.5      DLT/DP: 14.7/35.9   N: 59.9   Duration: 21.0
Complete data   % Selected    2.5       8.5       7.8       *69.6*    10.9      None: 0.7
                # Patients    5.2       7.7       7.6       *30.5*    8.8       DLT/DP: 14.6/32.2   N: 59.7   Duration: 39.8

Scenario 6      (DP, DLT)     (0.45,0.10) (0.40,0.15) (0.35,0.20) (0.30,0.23) (0.10,0.25)
                Utility       62.0      63.5      64.9      67.2      80.4
SCI             % Selected    13.1      14.7      11.9      15.3      *43.9*    None: 1.1
                # Patients    10.6      9.9       9.0       10.2      *19.7*    DLT/DP: 19.7/28.6   N: 59.4   Duration: 20.8
Observed data   % Selected    13.3      16.5      18.0      19.0      *32.9*    None: 0.3
                # Patients    9.2       10.3      11.0      11.6      *18.0*    DLT/DP: 20.0/28.9   N: 59.9   Duration: 21.0
Complete data   % Selected    9.4       8.8       9.7       11.4      *60.2*    None: 0.5
                # Patients    8.2       8.1       7.9       8.5       *27.0*    DLT/DP: 20.7/25.0   N: 59.7   Duration: 39.8

Notes: The probability pairs in parentheses are the probabilities of DP occurring and DLT occurring at each dose level. "None" is the percentage of trials with no dose selected. DLT/DP (%) is the percentage of patients experiencing DLT and the percentage experiencing DP. N is the average total number of patients, and Duration is the average trial duration in months. Values marked with an asterisk (*) are the OBD selection rate and patient allocation at the true OBD.

We also conducted two sensitivity analyses to investigate the robustness of the proposed SCI method. In Figure 2, we report the OBD selection percentages of the observed-data, SCI, and complete-data designs under different data-generating distributions, namely the Weibull, Lognormal, and Gamma. Figure 3 reports the OBD selection percentages of all three designs under different inter-arrival times. The results show that the SCI design performs robustly under different data-generating distributions and inter-arrival times. In conclusion, the pattern of the design comparison remains consistent across all the tables and figures.
FIGURE 2

Sensitivity analysis with different time‐to‐event generating functions

FIGURE 3

Sensitivity analysis with different inter-arrival times


CONCLUSION

We develop a phase I/II clinical trial design to identify the OBD of IT in the presence of semi-competing risks. We propose reconstructing the likelihood based on the actual follow-up time and develop a data augmentation method to derive the posterior distributions of the parameters of interest. Simulation results confirm the desirable performance of the proposed SCI design. The complete-data design significantly prolongs the trial duration and should not be considered. Given that the proposed SCI design is more complicated than the naive observed-data design, the latter can be considered as an alternative when the late-onset issue is mild. The issue of semi-competing risks has been intensively studied in the statistical community, and many sophisticated survival analysis models have been proposed, such as the multi-state model and the latent failure times model, which treat the data as time-to-event. The benefits of treating the semi-competing risks data as time-to-event and using survival modeling approaches include incorporating the length of time into the analysis, handling censored data, and estimating hazard ratios for association studies, among others. However, the high learning cost impedes the application of these complicated models in practical trial conduct. Moreover, the performance of the survival models relies on a series of parametric model assumptions, which are hard to justify and verify in a dose-finding trial with a limited sample size. In contrast, the SCI design treats the censored data as missing and imputes them from the multinomial distribution for categorical data, which is more transparent and user-friendly for the clinical community. The only model assumption of the SCI design is the uniform distribution on the censored data, which previous studies have shown to be a fair and reasonable assumption.
Indeed, the censoring status is dynamically updated across the interim decision-making times. As the trial goes on, more and more patients change their status from censored to observed, so the uniform distribution assumption is gradually relaxed and is eventually removed for the final optimal dose selection. The SCI design makes no parametric assumption on the dose-response curve or the toxicity-efficacy correlation and is therefore robust against parametric model misspecification. Another feature of the SCI design is that it provides a unified solution for both semi-competing risks and late-onset outcomes. The proposed data augmentation method treats the data censored by either the semi-competing risks or the late-onset issue as the same type of missing data and imputes them using one Gibbs sampling algorithm. As a result, the SCI design can be directly applied to a mixture of semi-competing risks and/or late-onset data. One possible extension of the SCI design is to incorporate the immune response. The immune response measures IT's biological efficacy in activating the immune system and is closely correlated with the toxicity and efficacy outcomes. By accounting for the immune response, we may further improve the performance of the SCI design. Also, IT is often used together with other drugs. Therefore, it is important to extend the SCI design to identify the optimal dose pair for drug-drug combination trials.

CONFLICT OF INTEREST

The authors declare no conflict of interest.
References (29 in total)

1.  Regression modeling of semicompeting risks data.

Authors:  Limin Peng; Jason P Fine
Journal:  Biometrics       Date:  2007-03       Impact factor: 2.571

2.  STEIN: A simple toxicity and efficacy interval design for seamless phase I/II clinical trials.

Authors:  Ruitao Lin; Guosheng Yin
Journal:  Stat Med       Date:  2017-08-07       Impact factor: 2.373

3.  Robust EM Continual Reassessment Method in Oncology Dose Finding.

Authors:  Ying Yuan; Guosheng Yin
Journal:  J Am Stat Assoc       Date:  2011-09-01       Impact factor: 5.033

4.  De-novo and acquired resistance to immune checkpoint targeting. (Review)

Authors:  Nicholas L Syn; Michele W L Teng; Tony S K Mok; Ross A Soo
Journal:  Lancet Oncol       Date:  2017-12       Impact factor: 41.316

5.  A Bayesian Phase I/II Trial Design for Immunotherapy.

Authors:  Suyu Liu; Beibei Guo; Ying Yuan
Journal:  J Am Stat Assoc       Date:  2018-06-28       Impact factor: 5.033

6.  Using joint utilities of the times to response and toxicity to adaptively optimize schedule-dose regimes.

Authors:  Peter F Thall; Hoang Q Nguyen; Thomas M Braun; Muzaffar H Qazilbash
Journal:  Biometrics       Date:  2013-08-19       Impact factor: 2.571

7.  Using Data Augmentation to Facilitate Conduct of Phase I-II Clinical Trials with Delayed Outcomes.

Authors:  Ick Hoon Jin; Suyu Liu; Peter F Thall; Ying Yuan
Journal:  J Am Stat Assoc       Date:  2014       Impact factor: 5.033

8.  Utility-based optimization of combination therapy using ordinal toxicity and efficacy in phase I/II trials.

Authors:  Nadine Houede; Peter F Thall; Hoang Nguyen; Xavier Paoletti; Andrew Kramar
Journal:  Biometrics       Date:  2009-08-10       Impact factor: 2.571

9.  A utility-based Bayesian optimal interval (U-BOIN) phase I/II design to identify the optimal biological dose for targeted and immune therapies.

Authors:  Yanhong Zhou; J Jack Lee; Ying Yuan
Journal:  Stat Med       Date:  2019-10-17       Impact factor: 2.373

10.  A Bayesian adaptive phase I/II clinical trial design with late-onset competing risk outcomes.

Authors:  Yifei Zhang; Sha Cao; Chi Zhang; Ick Hoon Jin; Yong Zang
Journal:  Biometrics       Date:  2020-08-08       Impact factor: 1.701

