
Tutorial for $DESIGN in NONMEM: Clinical trial evaluation and optimization.

Robert J Bauer, Andrew C Hooker, France Mentré.

Abstract

This NONMEM tutorial shows how to evaluate and optimize clinical trial designs, using algorithms developed in design software such as PopED and PFIM 4.0. Parameter precision and model parameter estimability are assessed through the Fisher Information Matrix (FIM), which provides the expected model parameter uncertainty. Model parameter non-identifiability may be uncovered by very large standard errors or by an inability to invert the FIM. Because evaluation of the FIM is more efficient than clinical trial simulation, more designs can be investigated, and the design of a clinical trial can be optimized. This tutorial provides simple and complex pharmacokinetic/pharmacodynamic examples of obtaining optimal sample times, doses, or the best division of subjects among design groups. Robust design techniques accounting for likely variability among subjects are also shown. A design evaluator and optimizer within NONMEM allows any control stream first developed for trial design exploration to be used subsequently for estimation of parameters from simulated or clinical data, without transferring the model to another software package. Conversely, a model developed in NONMEM can be used for design optimization. In addition, the $DESIGN feature can be used on any model file and dataset combination to retrospectively evaluate the model parameter uncertainty one would expect given that the model generated the data, which is particularly useful if outliers in the actual data prevent a reasonable assessment of the variance-covariance matrix. The NONMEM trial design feature is suitable for standard continuous data, whereas more elaborate trial designs, or designs with noncontinuous data types, can still be handled in dedicated optimal design software such as PopED and PFIM.
© 2021 The Authors. CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals LLC on behalf of American Society for Clinical Pharmacology and Therapeutics.


Year:  2021        PMID: 34559958      PMCID: PMC8674001          DOI: 10.1002/psp4.12713

Source DB:  PubMed          Journal:  CPT Pharmacometrics Syst Pharmacol        ISSN: 2163-8306


INTRODUCTION

The use of nonlinear mixed-effect models for the design and analysis of clinical trials has dramatically increased as the implementation of model informed drug development (MIDD) has accelerated. One key aspect of designing trials in drug development that will be analyzed with models is the assessment of model fit: specifically, that the relevant models will be identifiable and have enough precision in the parameter estimates to be useful in the next MIDD step. One way to assess model identifiability and parameter precision is through the use of clinical trial simulations (CTS). By assuming a model and design and simulating many clinical trials, one can evaluate whether a model can be fit to the data/design combination and whether the parameters have enough precision for future model-based steps. This process is very useful, and many different types of evaluations can be made, but it can be slow, especially if estimation time is long, so that potentially few designs are evaluated. A second, faster approach to assess parameter precision and model identifiability given a model/design combination is through an assessment of the Fisher Information Matrix (FIM), which gives an evaluation (a lower bound estimate) of the expected model parameter uncertainty via the curvature of the expected likelihood surface. Because the evaluation of the FIM is more efficient than CTS, more designs can be investigated, and the possibility of optimizing the design of a clinical trial can be considered. For example, one could try to find the optimal sampling time points, dose levels, and number of time points appropriate for a clinical trial. Very complex designs, considering the best sample times and doses jointly, may be found with optimal design, possibilities that could be missed in trial-and-error clinical trial simulations. A number of methodological advances and software tools have been developed to evaluate and optimize study designs using the FIM.
As one would hope, the different software tools will generally give the same answer to the same calculations. However, the tools do differ, with some methods and approaches only implemented in certain software. In addition, for NONMEM users, the implementation of models in these software programs requires learning a new modelling language (often implemented in R or MATLAB). In this work, we present the $DESIGN tool in NONMEM, which implements basic evaluation and optimization of the FIM. The tool is meant as an on-ramp to design evaluation and optimization, with more advanced design calculations available in other tools. The design algorithms in NONMEM have been modeled after PopED by Hooker et al. (https://andrewhooker.github.io/PopED) and PFIM by Mentre et al. (http://www.pfim.biostat.fr), as described in the references. These articles outline the theory of clinical trial evaluation, of which a brief description is given in the section Design Theory in Supplementary Materials. A description of the options for the $DESIGN record is given in the Introduction to NONMEM 7 manual (intro7.pdf) supplied with the NONMEM software, version 7.5 and higher. In general, to evaluate or optimize a trial design using the FIM, you will need a model, a set of model parameter values, a design, and the tasks that are to be performed (and a design space, if one wants to optimize the design). At its most basic, $DESIGN can be used with any NONMEM model file and dataset combination to retrospectively evaluate the model parameter uncertainty one would expect given that the model generated the data (simply by updating the model parameter estimates and replacing the $ESTIMATION line with a $DESIGN line in a NONMEM control file). In a more advanced setting, one can prospectively evaluate or optimize a design. Several optimization algorithms, several optimization criteria, and tools to define design constraints are available.
The present tutorial is composed of seven examples of increasing complexity illustrating the main features of $DESIGN in NONMEM for users not specialized in optimal design. It is our hope that the examples demonstrate how a design calculation can be done. We would also like to make clear the assumptions made in the calculations (model, parameters, and allowed design space), and the examples will attempt to illustrate how to relax these assumptions.

GENERAL TRIAL DESIGN LAYOUT IN NONMEM

Population designs are usually composed of one or several elementary designs and the number of subjects associated with each elementary design. Each elementary design is defined by design elements, such as the number of samples and their allocation in time; an elementary design may also contain a dosing protocol, response measurements, and covariates. In NONMEM, the data file pointed to by the $DATA record in an NMTRAN control stream contains the layout of the elementary designs, where the ID data item refers to the elementary design number. Thus, an elementary design is actually a subject template of records in a data file, indicating its dosing protocol, sample times, covariates, etc., just as if this data set were being used for simulation. A very basic elementary design structure will be introduced in the first example. By default, each elementary design represents one subject, and elementary designs are equally weighted. We shall see how this can be altered by using GROUPSIZE to multiply the weighting of the entire template data set, and, in a later example, by using the STRAT and STRATF data items to alter the weighting among elementary designs. Any design elements that are data-record specific (such as dose, time, or covariates) are placed in the data file, much as one would do for clinical trial simulation. It is most economical to place these design elements in the data file, rather than specifying these record-specific details in the control stream. NONMEM users who are accustomed to using the data file as a template for simulation will find the same approach used for specifying data-record specific design elements for optimal design/evaluation. The $DESIGN record in the NMTRAN control stream specifies general design evaluation or optimization directives; options pertaining to general components are introduced here, and the most important options are introduced with each example below.
A design criterion is selected, which is generally some measure of the standard errors that would result from a particular design. The most typical is the negative logarithm of the determinant of the FIM (equivalently, the logarithm of the determinant of the variance-covariance matrix of the estimates); the smaller this value, the lower the standard errors. The $DESIGN record sets up the design criterion and can be used either for finding the best design elements (optimal design) or for evaluating design elements proposed by the user (design evaluation). The diagram in Figure 1 shows the flow of input, decision making, and output for the $DESIGN step in NONMEM for evaluation or optimization.
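In symbols, these are the standard optimal-design relations (notation introduced here for illustration, with theta the model parameters and xi the design):

```latex
% Cramer-Rao lower bound: the expected covariance of the estimates
% is bounded below by the inverse of the Fisher Information Matrix
\operatorname{Cov}(\hat{\theta}) \;\succeq\; \mathrm{FIM}(\theta,\xi)^{-1}

% default design criterion: the smaller Phi, the lower the expected SEs
\Phi(\xi) \;=\; -\log\det \mathrm{FIM}(\theta,\xi)
         \;=\; \log\det \mathrm{FIM}(\theta,\xi)^{-1}
```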
FIGURE 1

Diagram representing the input and output components, design elements, and elementary designs involved in conducting a clinical trial design or optimization


EXAMPLE 1: WARFARIN PHARMACOKINETIC DESIGN EVALUATION

We first consider a simple example of design evaluation, using warfarin.ctl (control stream shown in Table 1). We see that most of the records will be familiar to NONMEM users. Note, however, the new record specific to trial design, $DESIGN.
TABLE 1

Control Stream of warfarin.ctl (example 1)

The control stream file allows the user to provide the model and parameter values; in particular, the $DESIGN record allows specification of the design evaluation criteria. The data file specified in the $DATA record serves as the description of the elementary designs.

The csv file specified in the $DATA record for this problem is very simple, containing a single elementary design with a bolus dose record and three observation records. Although the csv file contains only one subject (that is, one subject type or elementary design), the number of subjects may be specified with GROUPSIZE in the $DESIGN record. Here, we request the variance-covariance expected from a design with 32 subjects, all with the same elementary design (identical time values, doses, and covariates) defined in the csv file. We define the three following sampling times for each patient: 1, 4, and 8. There can be more than one elementary design in the csv file, each with its own design (different doses, sample times, etc.), as shown in a later example. Each elementary design is multiplied by the GROUPSIZE scale factor. The FIMDIAG = 1 option requests evaluation of the FIM with block-diagonal modality, as described in the introduction (the default in NONMEM is FIMDIAG = 0, but FIMDIAG = 1 is usually considered the most appropriate setting). The problem may be executed with the standard NONMEM script, where warfarin.ctl is the control stream and warfarin.res is our NONMEM report file. This model uses MU referencing of its thetas. MU referencing is not required, but it can help in evaluating the FIM analytically, providing greater significant-digit precision and speed.
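As a sketch of how these pieces fit together (the column layout and dose amount are illustrative placeholders, not the actual Table 1 contents; exact option spellings are documented in intro7.pdf):

```
; warfarin.csv -- a single elementary design (subject template):
; one bolus dose record followed by three observation records
; ID,TIME,AMT,DV,MDV
   1,0,70,0,1        ; dose record (AMT value is a placeholder)
   1,1,0,0,0         ; observation at time 1
   1,4,0,0,0         ; observation at time 4
   1,8,0,0,0         ; observation at time 8

; in warfarin.ctl: evaluate the FIM for 32 copies of this template,
; using the block-diagonal FIM form
$DESIGN GROUPSIZE=32 FIMDIAG=1
```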
The report file (Table 2) will contain the computed FIM and its derived components, including the expected standard errors (SEs) of the parameter estimates, the design criterion objective function (in this case, the default criterion −log(det(FIM))), and the expected shrinkage of the individual random effects in the model given the design. These values are also reported in separate files for easy import into other software tools (Figure 1). For example, the .coi file (Supplementary Materials) contains the FIM table, which is sometimes useful for more detailed analysis.
TABLE 2

Final results in NONMEM report file for example 1

Shown are standard errors for population parameters, and shrinkage information.

From Table 2, we see that the design itself is quite poor, with a very high SE for the variance of CL (about seven times the parameter value, 730% relative SE), rendering this variability practically nonestimable given the design. A correspondingly large expected shrinkage of the individual ETA (empirical Bayes estimate) associated with this parameter (94.8%) can also be seen. The reported values for this example are very similar to results computed by both PFIM and PopED (results not shown). Note that design evaluation of an existing dataset is possible simply by using a bare $DESIGN record with the realized dataset substituted for the elementary design dataset. For standard NONMEM analyses, $DESIGN can be used before parameter estimation, with some reasonable initial estimates of the parameters, to explore whether the parameters of this model and particular data set are expected to be estimable. In addition, after estimation, using the parameters obtained from, say, a first-order conditional estimation analysis, $DESIGN can be used to see what RSEs of the parameters one would expect given that the model generated the data.
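As a minimal sketch of this retrospective use (the $ESTIMATION line shown is only an illustrative example of what is being replaced):

```
; after updating $THETA/$OMEGA/$SIGMA to the final estimates, replace
;   $ESTIMATION METHOD=COND INTER
; with a design evaluation step:
$DESIGN
; NONMEM then reports the expected SEs/RSEs and shrinkages
; for this model/dataset combination
```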

EXAMPLE 2: WARFARIN PHARMACOKINETIC DESIGN OPTIMIZATION

In this example, we attempt to optimize the trial design proposed in example 1, adding the following specifications. First, additional data items describing the design space (TSTRAT, TMIN, and TMAX) are needed and are added to the elementary design (we shall see later how they are used); these are then referred to in the $INPUT record of the control stream (the complete control stream warfarin2.ctl and data file are in Supplementary Materials). Next, we add several options to the $DESIGN record. We specify MAXEVAL>0 to indicate that a design optimization is to be performed. If we do not specify the method (NELDER/FEDOROV/STGR/RS/DISCRETE), the default is NELDER (the Simplex method in the PFIM software). We also ask that intermittent results be printed every 20 iterations (PRINT = 20). Next, we specify the design elements that are to be optimized. For this example, we ask that the design element (DESEL) TIME be optimized. The optimization of TIME will occur for each unique value of the stratification variable (DESELSTRAT) TSTRAT, with the possible values of each time bounded (as specified by DESLMIN and DESLMAX) by the values listed in columns TMIN and TMAX. Times that share the same TSTRAT value are constrained to be equal during the optimization. If the TSTRAT value is 0, then the TIME of that particular record is not optimized. We shall see how TSTRAT is used as we consider more complex examples in this tutorial. Looking at the design space columns in the csv file, we notice that the initial times are given in the TIME column, with three observation records per subject, to be optimized between TMIN and TMAX. All observation rows have unique TSTRAT values, so the times of these records are varied independently during optimization. The initial dosing row is not varied during optimization (TSTRAT = 0).
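Collecting the options just described, the $DESIGN record for this optimization might look like the following sketch (MAXEVAL is set to an arbitrary positive value here; exact spellings should be checked against warfarin2.ctl and intro7.pdf):

```
$DESIGN GROUPSIZE=32 FIMDIAG=1
        MAXEVAL=1000 PRINT=20        ; optimize (default method NELDER), print every 20 iterations
        DESEL=TIME                   ; design element to be optimized
        DESELSTRAT=TSTRAT            ; records sharing a TSTRAT value share one optimized time
        DESLMIN=TMIN DESLMAX=TMAX    ; per-record bounds taken from the TMIN/TMAX columns
```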
Finally, because the results of interest include the optimized sampling times, we need a $TABLE record that outputs these new times. When the problem is run, intermittent output showing the improvement of the objective function is displayed, just as in any typical optimization in NONMEM. Because the improvement in the SEs is what matters, these are output in the raw output file (warfarin2.ext, Supplementary Materials), rather than the population parameters themselves, which do not change. The progress of the changing TIME values is not shown, but when the analysis is completed, the $TABLE record outputs the final TIME values to the file warfarin2.tab (Supplementary Materials). The −log(det(FIM)) (the default optimality criterion) and the SEs are obtained from the raw output file (.ext) or the report file (warfarin2.res, Supplementary Materials); shrinkage is obtained from warfarin2.shk or the report file; and the final sample times are obtained from the $TABLE file (warfarin2.tab). The initial/evaluation values and the optimized values of these components are compiled in Table 3 for comparison. The −log(det(FIM)) has improved considerably with optimization, as have the RSEs and shrinkages for most parameters.
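A minimal form of such a $TABLE record might be (file name as in this example; formatting options omitted):

```
$TABLE ID TIME FILE=warfarin2.tab   ; writes the optimized sample times after the $DESIGN step
```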
TABLE 3

Optimized Results for warfarin (example 1, example 2, and example 3), compared to the starting values

Item | Evaluated (warfarin, example 1, 3 sample times) | Optimized (warfarin2, example 2, 3 sample times) | Optimized (warfarin2b, 5 sample times) | Optimized (warfarin2c, 5 sample times, only 3 modeled as distinct) | Evaluated (warfarin3b, 3 distinct sample times from warfarin2b, 2 samples spread between them)
−log(det(FIM)) | −39.518 | −47.533 | −51.5977 | −51.5977 | −51.567
%RSE(CL) | 36.9 | 5.00 | 4.78 | 4.78 | 4.77
%RSE(V) | 4.95 | 3.39 | 2.71 | 2.71 | 2.72
%RSE(KA) | 15.7 | 15.7 | 13.9 | 13.9 | 13.9
%RSE(var(CL)) | 731 | 27.2 | 26.0 | 26.0 | 26.0
%RSE(var(V)) | 42.4 | 49.5 | 29.5 | 29.5 | 29.7
%RSE(var(KA)) | 27.0 | 31.0 | 25.7 | 25.7 | 25.7
%RSE(var(sigma)) | 28.1 | 56.6 | 17.7 | 17.7 | 17.7
%Shrinkage(CL) (EVBSHRINKVR) | 95.2 | 7.72 | 3.75 | 3.75 | 3.8
%Shrinkage(V) | 26.2 | 38.8 | 14.5 | 14.5 | 15.3
%Shrinkage(KA) | 7.14 | 16.7 | 2.48 | 2.48 | 2.6
Sample Time TSTRAT = 1 | 1 | 1.55 | 0.13 | 0.13 | 0.13
Sample Time TSTRAT = 2 | 4 | 3.69 | 7.01 | 7.109 (3x) | 4.0
Sample Time TSTRAT = 3 | 8 | -- | 7.05 | -- | 7.0
Sample Time TSTRAT = 4 | -- | -- | 7.11 | -- | 12.0
Sample Time TSTRAT = 5 | -- | 186.8 | 159.3 | 159.9 | 160

The −log(det(FIM)) and SEs are obtained from the raw output file (.ext) or the report file; shrinkage is obtained from the .shk file or the report file; and sample times are obtained from the $TABLE file (.tab, optimization) or the data file (.csv, evaluation).

The three optimal time points are 1.55 (early in the rising phase of the curve), 3.69 (near the peak concentration), and 187 (late in the declining phase). Of note, the RSE of the residual variance SIGMA(1,1) is 57%, rather high. This is because we have only three data points per subject, and the degrees of freedom of the three data points are used up in evaluating the three pharmacokinetic (PK) parameters; no degrees of freedom remain for assessing the residual variance, as one has when there are more data points than parameters. Will things be better if we allow five data points per patient? We proceed to do so with an additional example, warfarin2b.ctl (available in Supplementary Materials), in which there are five time samples per subject, each with its own TSTRAT index so they vary independently. We find that the RSEs of all of the parameters are somewhat reduced, as expected with more data points per subject, and the RSE of the residual variance is greatly improved, now being 17.7% (see Table 3). However, we notice that there are only three distinct times, even though five times were allowed to be optimized independently. The three points are the boundary positions (0.13 and 159) and a third point at 7.0 (with some small variation among the replicates). Thus, only three distinct positions were needed to obtain information about the three PK parameters, with the two additional sample replications at the middle point (where concentrations are high) providing more information for the residual variance.
We verify this further by allowing five data points but with just three of them distinct (by having the middle three points share the same TSTRAT index), and note that the identical objective function value (−51.5977) is obtained (warfarin2c, see Supplementary Materials). Nonetheless, we recognize that the design is computed assuming specific parameter values in the model, which will differ somewhat upon estimation; the design would therefore be more robust if we could spread the five data points out, so that a common set of five time positions fits most potential model parameter values without compromising too much on the FIM value at the assumed true parameter values. The next example shows one way of selecting distinct times for the clustered samples.

EXAMPLE 3: EVALUATE DIFFERENCES IN SAMPLE TIMES DUE TO UNCERTAINTY IN POPULATION PARAMETERS

In the previous example, we noticed that some of the optimized times cluster together, indicating that this repetition can be exploited by spreading points out to make the design more robust to variations in the PK parameters. We may want to determine how differences in sample times arise from uncertainty in the population parameters. Thus, we use $SIML TRUE=PRIOR with prior information to randomly generate thetas (and omegas and sigmas, if desired), and optimal time points are obtained for each of these theta sets. The prior information may be selected based on some understanding of the likely variability in thetas, omegas, and sigmas, or may come from firm experience with previous studies, not from the previous optimal design runs of Table 3. We add prior information and simulation instructions (complete control stream priortrue.ctl in Supplementary Materials). In this example, 1000 sub-problems are generated, each with its own theta set. The variance on the thetas corresponds to about a 30% coefficient of variation, and by observing the optimal times listed in priortrue.tab, we get a sense of the differences in optimal time points to be expected from the likely variability of the thetas. A FORTRAN program, summary.f90, has been created (available in Supplementary Materials) to provide a convenient summarization of the results in priortrue.tab (STD = standard deviation; RSTD = relative standard deviation, or %STD/MEAN). The full output from summary.exe contains many percentiles, allowing a large selection of dispersed time samples. The program may be compiled once and then executed (a third argument, a FORTRAN-style format, is optional). For this example, we need only two time samples for dispersing the clustered set near 7.0 h, so we use the 2.5 and 97.5 percentile positions to obtain coverage of the possible thetas that may occur.
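A sketch of the added prior and simulation records (the theta means and prior variances shown are placeholders chosen to give roughly 30% CV; the seed is arbitrary, and the exact record syntax should be checked against priortrue.ctl):

```
$THETAP  0.15 FIX  8.0 FIX  1.0 FIX      ; prior means for the thetas (placeholder values)
$THETAPV BLOCK(3) FIX                    ; prior variances, ~30% CV (placeholders)
 0.002
 0.0   5.8
 0.0   0.0  0.09
$SIML (123456) TRUE=PRIOR SUBPROBLEMS=1000  ; 1000 sub-problems, thetas drawn from the prior
```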
Of particular interest is to get a sense of the lower and upper bounds around the 7-h time, such as the 2.5% and 97.5% values of the cluster points (whose mean is 7.122, thus representing the 7-h time). In this case, the values obtained from the summary table are 4.09 and 11.13; we can take the floor of 4.09 (→4) and the ceiling of 11.13 (→12) and use these as recommended times around the 7.0-h clustered time to improve robustness. We have done this, for example, for warfarin3b, where we evaluate with time samples 0.13, 4.0, 7.0, 12.0, and 160.0. The results of the design evaluation at these sample times are shown in the last column of Table 3. We may also wish to make the theta variance equal to the OMEGA intersubject variance, to take into account likely fluctuations of the PK profiles among subjects and make the sampling robust to between-subject variability. We would thus make the $THETAPV variances equivalent to the $OMEGA variances (priortrue2.ctl). The result is summarized in summary2.tab (Supplementary Materials); the median times are 0.13, a triplet clustered near 6.9, and 159.9. The clustered triplet with median 6.9 h has a 2.5% value of 1.5 h and a 97.5% value of 23 h. Therefore, suggested robust time samples would be 0.13, 1.5, 7.0, 23.0, and 160.0. The criterion value calculated for these times is −51.374 (warfarin3c, Supplementary Materials), very similar to the optimal −51.5977. This process of determining how variability in the thetas translates into variability of the ideal sample times (or other design elements) provides a robust means of assessing the sensitivity of design elements to parameters. For this example, we considered only the influence of variability in the thetas on sample times. A more advanced assessment could include the influence of variability in the omegas, but the predominant influence comes from the thetas, especially when using an FO assessment of the FIM.
There are other ways to account for uncertainty in parameter values and to perform robust optimal designs in NONMEM (such as serial correlation models, as recommended in ref. 15), but they are beyond the scope of the present tutorial.

EXAMPLE 4: WARFARIN PHARMACOKINETIC/PHARMACODYNAMIC DESIGN OPTIMIZATION, DETERMINING BEST DISCRETE TIMES USING A SIMPLEX ALGORITHM

We now consider a model with two responses, the PK/pharmacodynamics (PD) of warfarin, as in ref. 16, in which warfarin PD measurements (prothrombin complex activity) are driven by the warfarin PK concentrations. Note that the problems of the present example have only proportional errors, in keeping with what was done in ref. 16. For optimal design, residual errors are best modeled with combined proportional and additive error (the additive part may be fixed to avoid specifying a design point for it), as shown in the earlier examples, or with additive error alone. The PD model for this system is a turnover model with inhibition of the Rin parameter based on the warfarin concentration (control stream warfarin_pkpd_opt.ctl in Supplementary Materials); the relevant portion of the control stream defines the parameters and the ordinary differential equations. We wish to create an elementary design in which four PK observations and four PD observations are obtained, with different sample times for the PK and PD samples. This is shown in Table 4 (warfarin_pkpd.csv).
TABLE 4

Elementary designs for problems warfarin_pkpd and warfarin_pkpd2
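The structure of this data template can be sketched as follows (the initial TIME values, dose amount, and column order are illustrative placeholders rather than the actual Table 4 contents; the TSTRAT indices follow those reported in Table 5):

```
; warfarin_pkpd.csv -- one elementary design: 4 PK (CMT=2) and 4 PD (CMT=3) samples
; ID,TIME,AMT,CMT,TSTRAT
   1,0,70,1,0        ; dose record (AMT is a placeholder), not optimized
   1,0,0,3,0         ; predose PD sample; TSTRAT=0 => time not optimized
   1,0.5,0,2,1       ; PK sample, optimized independently (TSTRAT=1)
   1,6,0,2,2         ; PK sample (TSTRAT=2)
   1,12,0,2,3        ; PK sample (TSTRAT=3)
   1,144,0,2,7       ; PK sample (TSTRAT=7)
   1,24,0,3,4        ; PD sample (TSTRAT=4)
   1,48,0,3,5        ; PD sample (TSTRAT=5)
   1,120,0,3,6       ; PD sample (TSTRAT=6)
```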

Notice that there is one PD (CMT = 3) record at time 0 whose TSTRAT index is 0: this PD sample will be obtained predose, and its TIME value will not be optimized. The other seven records contain initial time positions, each with its own TSTRAT index, so that their TIME values are varied as independent design elements during design optimization. The CMT data item is 2 to indicate a PK sample and 3 to indicate a PD sample. After optimization with the default Nelder (Simplex) method, the results (Table 5) show that some times are repeated, and just six distinct times are found (including the predose PD sample).
TABLE 5

The Optimized Results of warfarin_pkpd_opt and warfarin_pkpd_opt2 (example 4)

Item | Optimized (warfarin_pkpd_opt) | Evaluated at selected discrete and distinct times (warfarin_pkpd_eval)
−log(det(FIM)) | −118.27 | −117.56
%RSE(KA) | 12.7 | 12.6
%RSE(CL) | 3.83 | 3.83
%RSE(V) | 2.80 | 2.64
%RSE(RIN) | 6.05 | 6.05
%RSE(IC50) | 2.34 | 2.26
%RSE(KOUT) | 1.76 | 1.76
%RSE(var(KA)) | 22.7 | 22.5
%RSE(var(CL)) | 21.7 | 21.4
%RSE(var(V)) | 31.1 | 28.8
%RSE(var(RIN)) | 19.6 | 19.6
%RSE(var(IC50)) | 31.9 | 31.4
%RSE(var(KOUT)) | 19.7 | 19.7
%RSE(sigma1) | 16.4 | 15.6
%RSE(sigma2) | 19.6 | 36.4
%Shrinkage(KA) | 12.3 | 1.8
%Shrinkage(CL) | 9.23 | 8.10
%Shrinkage(V) | 34.0 | 29.9
%Shrinkage(RIN) | 0.0272 | 0.0357
%Shrinkage(IC50) | 35.8 | 35.0
%Shrinkage(KOUT) | 0.319 | 0.420
Sample Time TSTRAT = 1 (CMT = 2) | 0.49 | 0.5
Sample Time TSTRAT = 2 (CMT = 2) | 5.028 | 6.0
Sample Time TSTRAT = 3 (CMT = 2) | 5.035 | 9.0
Sample Time TSTRAT = 4 (CMT = 3) | 18.0 | 24.0
Sample Time TSTRAT = 5 (CMT = 3) | 34.855 | 96.0
Sample Time TSTRAT = 6 (CMT = 3) | 34.862 | 120.0
Sample Time TSTRAT = 7 (CMT = 2) | 145.0 | 144.0

Item | Optimized (warfarin_pkpd_opt2) | Evaluated at selected discrete times (warfarin_pkpd_eval2)
−log(det(FIM)) | −117.52 | −116.3
%RSE(KA) | 12.6 | 12.3
%RSE(CL) | 3.86 | 3.85
%RSE(V) | 2.61 | 2.55
%RSE(RIN) | 6.05 | 6.05
%RSE(IC50) | 2.42 | 2.42
%RSE(KOUT) | 1.76 | 1.77
%RSE(var(KA)) | 22.6 | 21.8
%RSE(var(CL)) | 21.4 | 21.3
%RSE(var(V)) | 30.4 | 29.5
%RSE(var(RIN)) | 19.6 | 19.6
%RSE(var(IC50)) | 38.0 | 38.1
%RSE(var(KOUT)) | 19.7 | 19.9
%RSE(sigma1) | 13.9 | 13.5
%RSE(sigma2) | 27.7 | 58.9
%Shrinkage(KA) | 11.2 | 9.37
%Shrinkage(CL) | 7.73 | 7.45
%Shrinkage(V) | 34.8 | 32.3
%Shrinkage(RIN) | 0.0499 | 0.122
%Shrinkage(IC50) | 47.5 | 47.8
%Shrinkage(KOUT) | 0.629 | 1.42
Sample Time TSTRAT = 1 (CMT = 2,3) | 0.49 | 0.5
Sample Time TSTRAT = 2 (CMT = 2,3) | 4.806 | 3.0
Sample Time TSTRAT = 3 (CMT = 2,3) | 4.807 | 6.0
Sample Time TSTRAT = 4 (CMT = 2,3) | 90.6 | 96
Sample Time TSTRAT = 5 (CMT = 2,3) | 0.49 | 0.5
Sample Time TSTRAT = 6 (CMT = 2,3) | 70.6 | 72
Sample Time TSTRAT = 7 (CMT = 2,3) | 80.0 | 96
Sample Time TSTRAT = 8 (CMT = 2,3) | 135 | 145

The −log(det(FIM)) and SEs are obtained from the raw output file (.ext) or the report file; shrinkage is obtained from the .shk file or the report file; and sample times are obtained from the $TABLE file (.tab, optimization) or the data file (.csv, evaluation).

The six distinct times are PK: (0.49, 5, 145) and PD: (0, 18, 34.9). The final criterion value was −118.27. The original paper used the Fedorov algorithm in PFIM 4.0, which selects the best set of time points from a list. The time points permitted were, PK: 0.5, 1, 2, 3, 6, 9, 12, 24, 36, 48, 72, 96, 120; PD: 0, 24, 36, 48, 72, 92, 120, 144. Using this prespecified discrete grid of sampling times, we manually select four PK and four PD times closest to those obtained by our continuous Nelder (Simplex) method, without repeats; these could be PK: (0.5, 6, 9, 144) and PD: (0, 24, 36, 48). A design evaluation at these discrete times yields a criterion value of −117.56 (second column of values in Table 5), similar to the ideal −118.27 from the continuous sample time assessment, so very little was compromised in rounding the sampling times. Another design consideration would be to require that four PK/PD samples be taken in one elementary design (PK and PD restricted to the same times) and another four PK/PD samples in a second elementary design (two elementary designs, or two subject types). The data template describing this is shown in Table 4 (warfarin_pkpd2.csv), where two elementary designs are templated. Notice that each PK/PD sample pair (one CMT = 2 record and the following CMT = 3 record) shares the same initial TIME value and the same TSTRAT index. This ensures that the TIMEs of a PK/PD sample pair move together during the optimization.
Because the two designs are to have their own PK/PD times, the four PK/PD samples of design 1 (ID = 1) are numbered 1 to 4, and those of design 2 (ID = 2) are numbered 5 to 8. If we desired the same times for one or more samples across designs, the TSTRAT index would be the same for those times across elementary designs. The final optimized times are shown in Table 5, along with the other components. The closest preset distinct sets of PK/PD times are: elementary design (subject template) 1: 0.5, 3, 6, 96; elementary design (subject template) 2: 0.5, 72, 96, 145. This continuous sample-time finding method is comparable to the discrete sample-time finding method of Table 3 of design opt.iden of ref. 16, where it was determined that 22 subjects should have design 1 and 10 subjects design 2. We could also have optimized the proportionate representation of elementary designs 1 and 2, using the STRAT and STRATF options (to be introduced below), but we already have overparameterization, as evidenced by the time clumping that has occurred. Thus, our representation is equal proportions for elementary designs 1 and 2, for a more continuous time sample set.

EXAMPLE 5: DS‐OPTIMALITY EXAMPLE FOR FINDING THE BEST SAMPLE TIMES (USING UNINT) FOR A TWO‐COMPARTMENT PROBLEM

For this example, we now use the Ds-optimality criterion, in which some parameters (structural and variance fixed effects) are declared “uninteresting” for design optimization. For this problem, OFVTYPE = 6 is selected, which accounts for interesting versus uninteresting parameters. In the previous examples, we had been implicitly using the default D-optimality criterion, −log(det(FIM)), OFVTYPE = 1, for optimization of the design. The various OFVTYPE option values are listed in Table S1 and in the description of the $DESIGN record in intro7.pdf of the NONMEM documentation. For the present example, we use a two-compartment PK model (optdesign2.ctl, Supplementary Materials), and we specify that the Omega values are UNINTed (that is, declared uninteresting) and that only the thetas and the residual proportional error are of interest. The OFVTYPE = 6 objective function is structured as −log(det(FIM)) + log(det(FIMuninteresting)). Final optimal times are listed in Table 6, as are the RSEs and shrinkages. Comparing with the results when the UNINT designations are removed and OFVTYPE = 1 is used (second column of values in Table 6), and when UNINT is replaced with FIXED, again with OFVTYPE = 1 (third column of values in Table 6), we notice that the RSEs of the V2 omega and the constant sigma under the UNINT setting are quite large. When the omegas are included (by removing their UNINT designations), these RSEs are considerably reduced. For a GROUPSIZE of 100, the RSE of var(Q) is still quite large. RSEs decrease by a factor of sqrt(N), so if we want no RSE larger than 20%, then, seeing that var(Q) has the largest RSE at 46.7%, we should require 100 × (46.7/20) × (46.7/20) ≈ 545 subjects.
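The OFVTYPE = 6 objective can be sketched numerically. Using a synthetic, randomly generated FIM (purely for illustration; the partition into interesting and uninteresting parameters is arbitrary here), subtracting the log-determinant of the uninteresting block is algebraically equivalent to taking −log det of the Schur complement, i.e., the information about the interesting parameters after accounting for the uninteresting ones:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 12))
FIM = M @ M.T                      # synthetic full FIM, 6 parameters (illustration)

uni = [4, 5]                       # pretend these are the UNINTed parameters
inter = [0, 1, 2, 3]               # say, thetas plus the proportional sigma
FIM_uu = FIM[np.ix_(uni, uni)]

# OFVTYPE = 6 objective as stated in the text:
ds_criterion = -np.log(np.linalg.det(FIM)) + np.log(np.linalg.det(FIM_uu))

# Equivalent form: -log det of the Schur complement of the uninteresting block
S = (FIM[np.ix_(inter, inter)]
     - FIM[np.ix_(inter, uni)] @ np.linalg.inv(FIM_uu) @ FIM[np.ix_(uni, inter)])
schur_criterion = -np.log(np.linalg.det(S))
```

The Schur-complement form makes explicit why the uninteresting parameters still influence the design: they remain in the FIM and degrade (or support) the precision of the interesting ones.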
TABLE 6

The Optimized Results, for optdesign2, example 5

| Item | UNINT on Omegas (optdesign2, OFVTYPE = 6) | No UNINT (optdesign2c, OFVTYPE = 1) | FIXED on Omegas (optdesign2d, OFVTYPE = 1) |
|---|---|---|---|
| Objective function | −42.182 | −103.694 | −42.327 |
| %RSE(CL) per subject | 1.01 | 1.04 | 1.04 |
| %RSE(V1) | 1.34 | 1.34 | 1.34 |
| %RSE(Q) | 3.30 | 3.77 | 3.49 |
| %RSE(V2) | 1.24 | 1.13 | 1.30 |
| %RSE(var(CL)) U/F | 17.6 | 17.5 | — |
| %RSE(var(V1)) U/F | 26.2 | 28.0 | — |
| %RSE(var(Q)) U/F | 37.6 | 46.7 | — |
| %RSE(var(V2)) U/F | 94.2 | 33.3 | — |
| %RSE(SigmaProp) | 12.9 | 20.8 | 11.5 |
| %RSE(SigmaConst) U/F | 72.9 | 23.4 | — |
| %SHK(CL) | 17.9 | 17.8 | 17.8 |
| %SHK(V1) | 42.5 | 41.8 | 43.0 |
| %SHK(Q) | 58.1 | 63.3 | 56.0 |
| %SHK(V2) | 66.3 | 53.1 | 65.8 |
| Sample Time TSTRAT = 1 | 0.010058 | 0.0105 | 0.0103 |
| Sample Time TSTRAT = 3 | 1.3126 | 1.5868 | 1.4347 |
| Sample Time TSTRAT = 5 | 1.3198 | 4.7893 | 4.9097 |
| Sample Time TSTRAT = 7 | 5.0292 | 22.295 | 4.9146 |
| Sample Time TSTRAT = 9 | 25.005 | 25.0 | 25.0 |

The −log(det(FIM)) and SEs are obtained from the raw output file (.ext) or the report file, shrinkage is obtained from the .shk file or the report file, and sample times are obtained from the $TABLE file (.tab, optimization) or the data file (.csv, evaluation). Notice that some of the RSEs are high. RSEs decrease by a factor of sqrt(number of subjects).

There are small variations in the optimal sample times among the UNINT, FIXED, and full representations in the FIM, but all report four relatively distinct times.
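The sqrt(N) scaling noted in the table footnote gives a quick sample-size rule: since RSE is proportional to 1/sqrt(N), the group size needed to bring the worst RSE down to a target is N = N0 × (RSE0/target)². A minimal sketch reproducing the calculation from this example:

```python
def required_subjects(n0, worst_rse, target_rse):
    """Group size needed so an RSE scaling as 1/sqrt(N) meets a target."""
    return n0 * (worst_rse / target_rse) ** 2

# var(Q) has an RSE of 46.7% at N = 100; target a 20% RSE:
n = required_subjects(100, 46.7, 20.0)   # about 545 subjects, as in the text
```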

EXAMPLE 6: OPTIMIZING FOR BEST TIMES AND GROUP SIZES TO ELEMENTARY DESIGNS: TARGET MEDIATED DRUG DISPOSITION EXAMPLE

Here, we optimize the sampling times and also the number of individuals for the various elementary designs. Target mediated drug disposition (TMDD) problems have complex PK and PD profiles, so it may be a challenge to design a clinical trial by intuition alone. Consider NONMEM example 6, which describes the PK of antibodies (Abs). If no specific target is present, an Ab has a dose-linear bi-exponential decay profile, with a central-compartment volume of distribution close to serum volume (Vc = 45 ml/kg) and slow distribution (K12, K21; on the order of several hours) into extravascular spaces. Linear kinetic elimination (K10) of serum Ab is very slow, owing to interaction with the nonsaturable (high-capacity) FcRn receptor (which binds the Fc portion of Ab) on endothelial cells and recycles most of the antibody back to the blood. The beta-phase half-life is typically 18–30 days in humans. With specific target present, the Fab'2 portion of the Ab binds, for example, a cell-surface receptor with high affinity (KMc of, say, 0.01–1 nM). These receptors have a natural rate of production by their cells, expressed in K03, and a natural rate of internalization, K30. When Ab interacts with receptor, the rate of internalization of the Ab-receptor complex is expressed as Vm, and this results in increased removal of receptor as well as increased clearance of Ab. The three differential equations are coded in the $DES record; the complete control stream, based on the NONMEM example 6 problem, is in Supplementary Materials (tmdd2.ctl). Clearance at low concentrations (and therefore at low doses) is fast, with transient downmodulation of receptor, and such data best determine the concentration of half-maximal receptor-mediated clearance. Higher concentrations have slower clearance, and best determine the maximum downmodulation of receptor as well as the linear clearance component. Therefore, we consider a trial with two elementary designs, one with a dose of 0.3 mg/kg and another with 10 mg/kg.
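The exact $DES code ships with tmdd2.ctl (Supplementary Materials). As a language-neutral sketch, one plausible Michaelis-Menten-internalization form of the three differential equations, reusing the parameter names from this example but with made-up values, can be integrated as follows; the structural form and numbers below are assumptions for illustration, not a transcription of the NONMEM example:

```python
from scipy.integrate import solve_ivp

# Hypothetical values; names follow the example (VC, K10, K12, K21, VM, KMC,
# K03, K30) but both the numbers and the exact structural form are assumed.
VC, K10, K12, K21 = 45.0, 0.03, 0.5, 0.3
VM, KMC, K03, K30 = 5.0, 0.1, 1.0, 0.2

def tmdd(t, y):
    a1, a2, r = y                      # central drug, peripheral drug, receptor
    c = a1 / VC                        # central concentration
    rmcl = VM * r * c / (KMC + c)      # saturable, receptor-mediated removal
    da1 = -(K10 + K12) * a1 + K21 * a2 - rmcl
    da2 = K12 * a1 - K21 * a2
    dr = K03 - K30 * r - rmcl          # synthesis, turnover, internalization
    return [da1, da2, dr]

y0 = [0.3, 0.0, K03 / K30]             # 0.3 mg/kg bolus; receptor at baseline
sol = solve_ivp(tmdd, (0.0, 49.0), y0, rtol=1e-8, atol=1e-10)
```

Plotting sol.y[0] for the two dose levels makes the fast low-concentration clearance and the slower high-concentration clearance described above visible, which is the intuition behind using two elementary designs.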
Whereas the 10 mg/kg PK/PD profile would traverse high and then low concentrations if sampled long enough, it is desired that the sampling period last no more than 49 days, so the 0.3 mg/kg group, which traverses mid and low concentrations, is included to reach the lower concentrations at earlier sample times than the 10 mg/kg group. Five PK/PD blood sample pairs are taken, optimized separately for each dose. In addition, we wish to know what proportion of subjects should receive each dose, so STRAT and STRATF design elements have been added. This is comparable to determining the best group size, as described in an example of ref. 6. STRAT and STRATF are data items that should be defined in the data file as follows:

STRATF = data item containing the fraction representation for the associated elementary design
STRAT = data item containing the stratification index pertaining to the STRATF design element

Because STRAT and STRATF describe a property of the entire elementary design (all data records belonging to the same ID), only the STRAT and STRATF of the first record of the ID are considered. As an example, suppose we have a STRAT index of 4 whose STRATF value is 0.4. All elementary designs that have the same STRAT of 4 on their first record will share the STRATF value of 0.4, and this represents their weight of influence on the FIM. If STRAT and STRATF are specified, and there is at least one STRAT value greater than 0, then the STRATF values are optimized and represent the weight of the contribution of each elementary design to the information matrix. For STRAT values less than or equal to 0, the STRATF values are not optimized and remain fixed at their initial values, but they are still used as weights to the information matrix. It is up to the user to ensure that the initial STRATF values among unique STRAT indices sum to one.
If the value of STRATF is less than 0.0, that elementary design is not included in the assessment. The effective group size for an elementary design with STRATF greater than zero is then GROUPSIZE × STRATF. The data file laying out the design is shown in Supplementary Materials, tmdd2.csv. For this problem, the optimization is repeated several times. For the NELDER method this can be an advantage, because each initiation of NELDER re-initializes the starting vertex positions of the simplex to cover a larger region, alleviating its tendency toward a local minimum. The optimized sample times (TIME) and proportions (STRATF, first line of each elementary design/ID) are listed in Supplementary Materials, table tmdd2.tab. The following best proportions of subjects per dose and best times (constrained between 0.01 and 49 days) were determined: 0.3 mg/kg: 0.59 of subjects, times 0.01, 0.17, 0.48, 2.5, 5; 10 mg/kg: 0.41 of subjects, times 0.01, 0.29, 3.25, 24.3, 49.
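The STRATF bookkeeping described above can be sketched as follows: each elementary design contributes its FIM scaled by its STRATF fraction, and effective group sizes are GROUPSIZE × STRATF. The FIMs below are random placeholders, purely to show the weighting arithmetic, not TMDD-derived matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
def placeholder_fim(seed_mat):
    return seed_mat @ seed_mat.T       # symmetric positive-definite stand-in

FIM_03mgkg = placeholder_fim(rng.standard_normal((4, 8)))   # elementary design 1
FIM_10mgkg = placeholder_fim(rng.standard_normal((4, 8)))   # elementary design 2

GROUPSIZE = 100
stratf = np.array([0.59, 0.41])        # optimized fractions reported in the text

# Population FIM as a STRATF-weighted sum over elementary designs
total_fim = GROUPSIZE * (stratf[0] * FIM_03mgkg + stratf[1] * FIM_10mgkg)
effective_sizes = GROUPSIZE * stratf   # about 59 and 41 subjects
```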

EXAMPLE 7: BAYES OPTIMAL DESIGN TO FIND BEST SAMPLING TIMES AND BEST DOSE: TMDD EXAMPLE

We now use the previous example to illustrate the Bayesian Fisher information (OFVTYPE = 8) method, as described in refs. 8, 17, 18, 19 and in the Design Theory in Supplementary Materials. The Bayesian FIM is essentially the FIM for maximum a posteriori estimation, and its information content (although using population information as a prior) is localized to the data/elementary designs of a particular individual rather than the entire population. We evaluate a design for Bayesian estimation of individual parameters, given population parameters. In this case, the conditional variance-covariances of the individual's parameters, identified as ETC(,), are optimized, and the final values are listed in the .phi table (Supplementary Materials). If just one subject (or subject type) is in the data file, the criterion is that subject's Bayesian FIM; if there is more than one subject (or subject type), the criterion is the Bayesian FIM of all the subjects averaged together. The control stream example optex6d17_8.ctl is given in Supplementary Materials. For this example, five PK sample times and five PD sample times are sought, along with a best dose. To evaluate for the best dose, we add additional DESEL* options, and the appropriate DMIN, DSTRAT, and DMAX data items specifying dose optimization are needed in the data file containing the elementary design. As usual, the user-requested $TABLE file optex6d17_8.tab reports the final sample times and dose, but now the table optex6d17_8.bfm (Supplementary Materials) reports the intermediate posterior variance-covariance. Final conditional posterior variances are reported in optex6d17_8.phi (Supplementary Materials). The raw file (optex6d17_8.ext, Supplementary Materials) shows only the starting and final values of the SEs of the population parameters, as extra information. The average shrinkage information is reported as usual in the main report file (optex6d17_8.res, Supplementary Materials).
If we compare the EBVSHRINKVR between this empirical Bayes method and the general population analysis method (OFVTYPE = 1), they are similar (Table 7). The SEs of population parameters are slightly better assessed with OFVTYPE = 1, as expected, because OFVTYPE = 8 only assesses time points that most favor the FIM of the posterior density. Notice also that, whether Bayes or population optimization is performed, only eight distinct time points arise from the total of 10 requested. This is expected, as the dimension of the Bayes FIM is eight (with some modulatory information in the covariances), one for each individual parameter with intersubject variability; for the population analysis (OFVTYPE = 1), OMEGAs and SIGMAs were fixed to make a fair comparison with the Bayes analysis.
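A rough numerical sketch of the idea, with made-up dimensions and values: the MAP/Bayesian information for one subject combines the subject's data FIM with the prior information OMEGA⁻¹, and the conditional variance-covariance (the ETC(,) values in the .phi file) is its inverse. Everything below is illustrative, not the TMDD model itself:

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((5, 8))        # 5 informative samples, 8 etas (assumed)
FIM_ind = G.T @ G                      # data-only FIM: singular (rank 5 < 8)

OMEGA = np.diag(np.full(8, 0.09))      # prior intersubject variances (assumed)
FIM_bayes = FIM_ind + np.linalg.inv(OMEGA)   # MAP information: data + prior
ETC = np.linalg.inv(FIM_bayes)         # conditional (posterior) covariance

bayes_criterion = -np.log(np.linalg.det(FIM_bayes))
```

The prior term keeps FIM_bayes invertible even when the data alone cannot identify all eight etas, which is why the Bayesian criterion remains well defined with fewer distinct sampling times than parameters.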
TABLE 7

The Optimized Results, optex6d17_8, example 7

| Item | OFVTYPE = 8 (optex6d17_8) | OFVTYPE = 1 (tmdd2b) |
|---|---|---|
| Bayes Objective function | −43.335 | −42.893 |
| %RSE(VC) (per subject) | 26.8 | 26.7 |
| %RSE(K10) | 29.4 | 29.1 |
| %RSE(K12) | 35.5 | 35.5 |
| %RSE(K21) | 36.7 | 35.8 |
| %RSE(VM) | 26.2 | 27.1 |
| %RSE(KMC) | 37.2 | 33.7 |
| %RSE(K03) | 29.1 | 28.5 |
| %RSE(K30) | 30.9 | 30.9 |
| %SHK(VC) | 8.66 | 8.50 |
| %SHK(K10) | 20.9 | 20.0 |
| %SHK(K12) | 37.1 | 37.5 |
| %SHK(K21) | 39.8 | 38.5 |
| %SHK(VM) | 8.08 | 13.3 |
| %SHK(KMC) | 50.8 | 41.2 |
| %SHK(K03) | 18.1 | 16.4 |
| %SHK(K30) | 28.7 | 30.6 |
| Optimal Dose | 5.334 mg/kg | 5.189 mg/kg |
| Fixed Sample Time predose (CMT = 3) | 0 | 0 |
| Sample Time (CMT = 3) | 0.01 (redundant with fixed predose) | — |
| Sample Time (CMT = 1) | 0.01 | 0.01 |
| Sample Time (CMT = 3) | 0.2733, 0.2734 | 0.240 |
| Sample Time (CMT = 1) | 0.627 | 0.561 |
| Sample Time (CMT = 3) | 2.12 | 1.93, 1.97 |
| Sample Time (CMT = 1) | 2.78 | 3.13 |
| Sample Time (CMT = 1) | 44.7 | 41.7 |
| Sample Time (CMT = 3) | 47.7 | 47.00, 47.01 |
| Sample Time (CMT = 1) | 49.0 | 48.5 |

The −log(det(FIM)) and SEs are obtained from the raw output file (.ext) or the report file, shrinkage is obtained from the .shk file or the report file, and sample times are obtained from the $TABLE file (.tab, optimization) or the data file (.csv, evaluation).


CONCLUSIONS

To our knowledge, $DESIGN in NONMEM is the first optimal design tool implemented in a pharmacometric software package primarily developed for model fitting. This allows the user to easily evaluate designs using approaches based on the FIM, with already implemented models, parameter values, and sometimes designs. Using an optimal design approach instead of CTS for design evaluation is time-efficient, as many more designs can be evaluated. Furthermore, “true” design optimization can presently be done extensively only by using these optimal design approaches. Note, however, that SEs and RSEs predicted by the FIM, although often close to the truth, are theoretically only lower bounds of the true uncertainty for any given dataset, especially for designs of limited size. Because the FIM is presently computed using a linear approximation, we suggest that users apply this tool to define one or a few designs that should then be evaluated by clinical trial simulation. Presently, $DESIGN in NONMEM can only be used for models with continuous data, as the FIM is more complex to evaluate for discrete or time-to-event models. We have used the Nelder method of search for these examples, but there is a risk of reaching local minima; other search algorithms, such as random search or stochastic gradient search, are also available in NONMEM's $DESIGN feature and can be used, sometimes in combination with the Nelder search method. We have illustrated in the current tutorial seven examples of basic/standard optimal design, but more specifications are available for design evaluation and optimization in the documentation. Other dedicated tools, such as PopED and PFIM, may offer more advanced methods if needed.

DISCLAIMER

As Editor‐in‐Chief of CPT: Pharmacometrics and Systems Pharmacology, France Mentre was not involved in the review or decision process for this paper.

CONFLICT OF INTEREST

R.J.B. is a paid employee of ICON plc., and is the present developer of NONMEM. A.H. and F.M. receive no fees and no research grant from ICON plc to perform this work.

AUTHOR CONTRIBUTIONS

R.B. wrote the text of the main article and provided examples 3,5,6, and 7. A.H. contributed to the abstract and introduction, provided comments and instructions on analysis of the examples, and provided examples 1 and 2. F.M. wrote the Conclusions, provided comments and instructions on analysis of the examples, and provided examples 1,2, and 4. All authors contributed to editing and updating the final text. Supplementary Material Click here for additional data file.
REFERENCES

1. Retout S, Duffull S, Mentré F. Development and implementation of the population Fisher information matrix for the evaluation of population pharmacokinetic designs. Comput Methods Programs Biomed. 2001.

2. Nyberg J, Ueckert S, Strömberg EA, Hennig S, Karlsson MO, Hooker AC. PopED: an extended, parallelized, nonlinear mixed effects models optimal design tool. Comput Methods Programs Biomed. 2012.

3. Nyberg J, Bazzoli C, Ogungbenro K, Aliev A, Leonov S, Duffull S, Hooker AC, Mentré F. Methods and software tools for design evaluation in population pharmacokinetics-pharmacodynamics studies. Br J Clin Pharmacol. 2015.

4. Ogungbenro K, Graham G, Gueorguieva I, Aarons L. The use of a modified Fedorov exchange algorithm to optimise sampling times for population pharmacokinetic experiments. Comput Methods Programs Biomed. 2005.

5. Lalonde RL, Kowalski KG, Hutmacher MM, et al. Model-based drug development. Clin Pharmacol Ther. 2007.

6. Bazzoli C, Retout S, Mentré F. Design evaluation and optimisation in multiple response nonlinear mixed effect models: PFIM 3.0. Comput Methods Programs Biomed. 2009.

7. Hennig S, Nyberg J, Fanta S, Backman JT, Hoppu K, Hooker AC, Karlsson MO. Application of the optimal design approach to improve a pretransplant drug dose finding design for ciclosporin. J Clin Pharmacol. 2011.

8. Milligan PA, Brown MJ, Marchant B, et al. Model-based drug development: a rational approach to efficiently accelerate drug development. Clin Pharmacol Ther. 2013.

9. Duffull SB, Hooker AC. Assessing robustness of designs for random effects parameters for nonlinear mixed-effects models. J Pharmacokinet Pharmacodyn. 2017.
