| Literature DB >> 35401935 |
Tianyu Zhan1, Alan Hartford2, Jian Kang3, Walter Offen4.
Abstract
In confirmatory clinical trials, it has been proposed to use a simple iterative graphical approach to construct and perform intersection hypotheses tests with a weighted Bonferroni-type procedure to control Type I errors in the strong sense. Given Phase II study results or other prior knowledge, it is usually of main interest to find the optimal graph that maximizes a certain objective function in a future Phase III study. In this article, we evaluate the performance of two existing derivative-free constrained methods, and further propose a deep learning enhanced optimization framework. Our method numerically approximates the objective function via feedforward neural networks (FNNs) and then performs optimization with available gradient information. It can be constrained so that some features of the testing procedure are held fixed while optimizing over other features. Simulation studies show that our FNN-based approach has a better balance between robustness and time efficiency than some existing derivative-free constrained optimization algorithms. Compared to the traditional stochastic search method, our optimizer has moderate multiplicity adjusted power gain when the number of hypotheses is relatively large. We further apply it to a case study to illustrate how to optimize a multiple testing procedure with respect to a specific study objective.Entities:
Keywords: Clinical trial optimization; Constrained optimization; Deep neural network; Family-wise error rate control; Graphical approach
Year: 2020 PMID: 35401935 PMCID: PMC8992139 DOI: 10.1080/19466315.2020.1799855
Source DB: PubMed Journal: Stat Biopharm Res ISSN: 1946-6315 Impact factor: 1.586
Figure 1.A motivating example of a graphical approach for multiplicity control of two doses and two endpoints.
Figure 2.Feedforward neural networks with two hidden layers.
Parameter specifications for simulations.
| Scenario | Marginal power | Correlation structure | Correlation magnitude |
|---|---|---|---|
|
| (0.8,0.8,0.6,0.6,0.4,0.4) | Compound symmetry | 0 |
|
| 0.3 | ||
|
| 0.5 | ||
|
| (0.9,0.9,0.8,0.8,0.6,0.6) | Compound symmetry | 0.3 |
|
| AR(1) | ||
|
| Banded Toeplitz | ||
|
| (0.9,0.8,0.7,0.6,0.5,0.4) | Compound symmetry | 0.3 |
|
| (0.9,0.9,0.7,0.7,0.6,0.6) | ||
|
| (0.95,0.95,0.8,0.8,0.6,0.6) |
Optimal identified by FNN, ISRES, COBYLA, and SSM with the maximum solution highlighted in bold.
| Scenario | Optimal | Convergence time (min) | ||||||
|---|---|---|---|---|---|---|---|---|
| FNN | COBYLA | ISRES | SSM | FNN | COBYLA | ISRES | SSM | |
|
|
| 55.5% | 47.8% | 53.4% | 25.0 | 13.8 | – | 3.6 |
|
|
| 57.4% | 48.7% | 55.6% | 29.2 | 18.2 | – | 4.5 |
|
|
| 58.8% | 48.2% | 54.0% | 30.8 | 20.2 | – | 3.8 |
|
|
| 74.4% | 68.9% | 72.6% | 30.2 | 18.3 | – | 4.4 |
|
|
| 73.8% | 69.7% | 72.7% | 28.0 | 17.4 | – | 4.5 |
|
|
| 73.7% | 69.7% | 72.9% | 28.0 | 15.2 | – | 4.6 |
|
|
| 63.7% | 55.6% | 61.5% | 30.7 | 18.7 | – | 4.1 |
|
|
| 71.6% | 65.7% | 69.3% | 30.6 | 20.3 | – | 4.4 |
|
|
| 79.2% | 74.6% | 78.0% | 34.3 | 23.8 | – | 4.7 |
Figure 3.The optimal objective function identified by FNN, COBYLA, ISRES, and SSM.
Figure 4.FNN residuals of estimating the working objective function.
Optimal and identified by FNN, ISRES, COBYLA, and SSM.
| Method |
|
|
|
|
|
|
|---|---|---|---|---|---|---|
| FNN | 78.0% | 95.0% | 86.7% | 78.4% | 54.6% | 48.5% |
| ISRES | 77.4% | 95.0% | 86.6% | 75.2% | 56.8% | 47.6% |
| COBYLA | 77.2% | 95.0% | 84.7% | 80.8% | 54.5% | 48.4% |
| SSM | 76.6% | 95.0% | 84.0% | 79.0% | 54.4% | 49.7% |
Figure 5.Optimal graph identified by FNN, ISRES, COBYLA, and SSM.