
Metaheuristics for pharmacometrics.

Seongho Kim1, Andrew C Hooker2, Yu Shi3, Grace Hyun J Kim3, Weng Kee Wong3.   

Abstract

Metaheuristics is a powerful class of optimization tools increasingly used across disciplines to tackle general-purpose optimization problems. Nature-inspired metaheuristic algorithms are a subclass of metaheuristic algorithms and have proved particularly flexible and useful for solving complicated optimization problems in computer science and engineering. A common practice is to hybridize a metaheuristic with another suitably chosen algorithm for enhanced performance. This paper reviews metaheuristic algorithms and demonstrates some of their utility in tackling pharmacometric problems. Specifically, we provide three applications using one of their most celebrated members, particle swarm optimization (PSO), and show that PSO can effectively estimate parameters in complicated nonlinear mixed-effects models and gain insights into statistical identifiability issues in a complex compartment model. In the third application, we demonstrate how to hybridize PSO with sparse grid, an often-used technique for evaluating high-dimensional integrals, to search for D-efficient designs for estimating parameters in nonlinear mixed-effects models with a count outcome. We also show that the proposed hybrid algorithm outperforms its competitors, whether sparse grid is replaced by adaptive Gaussian quadrature to approximate the integral or PSO is replaced by three notable nature-inspired metaheuristic algorithms.
© 2021 The Authors. CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals LLC on behalf of American Society for Clinical Pharmacology and Therapeutics.


Year:  2021        PMID: 34562342      PMCID: PMC8592519          DOI: 10.1002/psp4.12714

Source DB:  PubMed          Journal:  CPT Pharmacometrics Syst Pharmacol        ISSN: 2163-8306


INTRODUCTION

Nature‐inspired metaheuristic algorithms have been gaining popularity over the last two decades and have recently attained dominant status in both academic and industrial applications. Collectively, they now form an important component of artificial intelligence. A main appeal is their simplicity, ease of implementation, and ability to provide a quality solution to an optimization problem very quickly. Another appeal is that these methods often make no assumptions about the function to be optimized, with researchers from a widening range of disciplines reporting high success rates in finding optimal or nearly optimal solutions to all types of complex optimization problems in computer science and engineering, where there are frequently hundreds or even thousands of variables to optimize. In the statistical literature, we are also encouraged by recent successes of metaheuristic algorithms in tackling different types of optimal design problems for different applications. For instance, particle swarm optimization (PSO), or a modified version of it, was used to find various types of optimal designs for biomedical applications and optimal designs under a nondifferentiable criterion, such as a minimax or a standardized maximin criterion, where the latter involves solving three‐ or four‐level nested optimization loops over very different search spaces.
PSO‐based algorithms were also shown capable of solving discrete optimization problems; for example, modified versions of PSO found (a) optimal supersaturated designs with a large number of factors, (b) optimal repeated measurements over time in a Michaelis‐Menten type model, and (c) a class of two‐stage adaptive designs that extends the celebrated Simon two‐stage designs for a phase II trial; the extended designs solved complicated optimization problems involving 10 integer variables subject to multiple nonlinear constraints from the type I and type II error specifications, and allow none or only one of three alternative hypotheses to be tested at stage 2, depending on the number of responders from stage 1. Other types of metaheuristic algorithms were also recently used to solve optimization problems in statistics; these include using the Imperialist Competitive Algorithm to find Bayesian optimal designs for a pharmacometric model and a differential evolution (DE) algorithm to find optimal designs in chemometrics. Optimal designs for nonlinear models with many interacting factors in a regression setup were also found using DE and a variant of PSO called quantum PSO. Despite the usefulness and simplicity of metaheuristic algorithms, they seem underutilized in pharmacometrics research. Our main aim in this paper is to demonstrate that metaheuristics is also useful in tackling optimization problems in pharmacometrics, including pharmacokinetics (PKs) and pharmacodynamics (PDs). The intent here is not to give a comprehensive review of the many nature‐inspired metaheuristic algorithms in the literature; instead, we focus on one member, the widely used PSO. PSO was proposed by Eberhart and Kennedy and was motivated by the movement of swarms or flocks of birds. The most basic form of PSO is described in a supplemental file, and as with all such algorithms, many modifications have been proposed since its initial version.
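The canonical PSO velocity and position updates can be sketched in a few lines. The following is a minimal illustrative implementation, not the authors' code; the coefficient values (inertia w, cognitive c1, social c2) and swarm settings are commonly used defaults, here as assumptions.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over the box `bounds` with the canonical PSO update.

    bounds: sequence of (lower, upper) pairs, one per dimension.
    w, c1, c2: inertia, cognitive, and social coefficients (common defaults).
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    d = len(bounds)

    x = rng.uniform(lo, hi, size=(n_particles, d))        # positions
    v = np.zeros((n_particles, d))                        # velocities
    pbest = x.copy()                                      # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                  # global best

    for _ in range(n_iter):
        r1 = rng.random((n_particles, d))
        r2 = rng.random((n_particles, d))
        # Velocity update: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())
```

On a smooth test function such as the 2-D sphere, this basic form converges quickly to the global minimum; the many published variants modify the inertia schedule, topology, or velocity clamping.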
Research has shown that PSO is as effective at finding the true global optimum as the genetic algorithm (GA; another well‐known nature‐inspired metaheuristic algorithm), but PSO requires significantly fewer function evaluations and, consequently, less central processing unit (CPU) time. Another reason for the popularity of PSO is that the default values for the tuning parameters have been shown to work well for a large class of problems, including parameter estimation in nonpharmacometric settings. There are comprehensive reviews of PSO and its many variants and their various applications, including model selection and parameter estimation. Hybridization is a common and often effective procedure to enhance the performance of algorithms by combining two or more of them to tackle a complex optimization problem. Optimization problems in pharmacometrics can be quite complex due, in part, to model nonlinearity, multiple mixed effects, multiple objectives in the optimization, or user‐imposed constraints. For example, practical considerations may require that observations from a subject not be taken too close together, or there may be varying cost structures across sites. For such problems, metaheuristics has good potential to give satisfactory solutions where traditional methods fail. For example, hybrid algorithms were proposed for generating optimal designs for discriminating among multiple nonlinear models under various error distributional assumptions, and PSO was hybridized with random forest to predict disease progression in patients with idiopathic pulmonary fibrosis. In what follows, we present various applications of PSO that efficiently address optimization issues in pharmacometrics problems. The section Parameter estimation via PSO tackles parameter estimation problems in a complicated nonlinear mixed‐effects model.
In the section Using PSO to gain insights into statistical identifiability issues, we discuss how PSO can provide insights into statistical identifiability issues in pharmacometric models with many compartments and parameters. The section Design construction for pharmacometrics models reviews the background for finding efficient designs for pharmacometric problems and considers a nonlinear mixed‐effects model for a count outcome variable, where we wish to find a D‐optimal design to estimate all the parameters in the model. Algorithms for finding optimal designs for such models receive limited discussion in the pharmacometrics literature, and we propose one here that demonstrates how PSO can be suitably hybridized with sparse grid (SG) for enhanced performance. Because computing the expected information matrix for nonlinear mixed‐effects models involves complicated integrals, and SG has been shown to be effective for approximating integrals, we hybridize PSO with SG, call the resulting algorithm SGPSO, and show that it can effectively find efficient designs for nonlinear mixed‐effects regression models with count outcomes. Although PSO and SG have each been used separately to find optimal designs, this is the first time they are hybridized for enhanced performance in the search for efficient designs in a pharmacometric setting. We also compare various hybridization schemes and show that SGPSO seems to outperform the other schemes for the four‐parameter Poisson model. The Discussion section summarizes the highlights of our review.

PARAMETER ESTIMATION VIA PSO

The nonlinear mixed‐effects model (NLMEM) is a useful statistical framework for analyzing longitudinal data in pharmacometrics. At the first stage of an NLMEM, observations are described by a nonlinear subject‐specific model, y_ij = f(t_ij, θ_i) + ε_ij for i = 1, …, N and j = 1, …, n_i, where N is the number of subjects, n_i is the number of observations from the i th subject, y_ij is the observation (e.g., drug concentration) at time t_ij, and f is a nonlinear function of a subject‐specific parameter vector θ_i. The residual error term ε_ij is an independently and identically distributed normal random variable with mean zero and variance σ². Different error structures and distributions are discussed in great detail by Davidian and Giltinan. At the second stage, the subject‐specific parameter vector is modeled as θ_i = A_i β + B_i b_i, where A_i and B_i are known design matrices for a p‐dimensional fixed‐effects vector β and a q‐dimensional random‐effects vector b_i, respectively. The random‐effects vector b_i is assumed independent and identically distributed as the multivariate normal distribution with q zero‐mean components and variance‐covariance matrix Ω. Alternative distributional assumptions for random effects are discussed in Davidian and Giltinan. The parameters Ψ = (β, σ², Ω) in the NLMEM described by the above equations are typically estimated by maximum likelihood based on the marginal density of y_i, p(y_i | Ψ) = ∫ p(y_i | b_i; β, σ²) p(b_i; Ω) db_i, where y_i is the vector of observations (e.g., drug concentrations) of the i th subject, p(y_i | b_i; β, σ²) is the conditional density of y_i given the random‐effects vector b_i, and p(b_i; Ω) is the marginal distribution of b_i. In general, this integral does not have a closed form because f is nonlinear in b_i, so several different methods are used to approximate the marginal likelihood. The objective function, the (negative log‐) likelihood, is then numerically optimized with respect to the parameters Ψ. Parameter estimation for NLMEMs is a critical step in pharmacometrics and thus in drug development. Several software tools have been developed for estimating pharmacometric parameters.
Of those, four are widely used in pharmacometrics: NONMEM (ICON Development Solutions, Ellicott City, MD, USA), Monolix (Lixoft, Orsay, France), Phoenix (Certara, St. Louis, MO, USA), and nlmixr. NONMEM, Monolix, and Phoenix are commercial software tools, but Monolix is the only one of these that provides a free license for noncommercial activities by academic and nonprofit organizations. The software nlmixr is an open‐source R package. The estimation approaches used in these tools include, but are not limited to, First Order Conditional Estimation, Stochastic Approximation Expectation‐Maximization (SAEM), Monte Carlo Parametric Expectation‐Maximization, Importance Sampling Parametric Expectation‐Maximization, and Quasi‐Random Parametric Expectation‐Maximization. Their optimization routines are generally based on expectation‐maximization (EM)‐like or gradient‐based methods. A notable challenge for EM‐like or gradient‐based methods is getting stuck at saddle points or local optima; their initial values therefore need to be close to the true optimum to achieve a global optimum. Moreover, EM‐like and gradient‐based methods are vulnerable to the numerical singularities that often occur when the model is nonlinear and/or the parameter domain has many dimensions. These challenges have stimulated the use of metaheuristics, with their gradient‐free approach, for solving global optimization problems in pharmacometric analysis. Below we discuss four metaheuristic algorithms that have been used in the pharmacometrics literature: Simulated Annealing (SA), GA, Bacterial Foraging Optimization (BFO), and PSO. We briefly review them in the context of the following applications and elaborate further on PSO, our focus example of a nature‐inspired metaheuristic algorithm in this paper. For population parameter estimation, a recent application using PSO was suggested by Kim and Li.
The strategy, P‐NONMEM, is a hybrid approach using the global metaheuristic search strategy of PSO and the local estimation strategy available in NONMEM. In this algorithm, initial values (particles) are generated randomly by PSO only for the fixed‐effects and variance parameters. NONMEM is then run for each particle to find a local optimum for all parameters, including fixed‐effects, random‐effects, and variance parameters, while the fixed‐effects and variance parameters are guided by PSO toward the global optimum. They demonstrated that the approach converged to a global optimum more reliably than using NONMEM or PSO alone. The P‐NONMEM algorithm is summarized briefly as follows. Step I: Initialization. A swarm of P particles, each holding a set of parameters (β, σ², Ω), is initialized randomly from a multivariate uniform distribution, x_p^(0) ~ U(R), p = 1, …, P, where P is the size of the initial swarm, x_p^(0) is the pth particle for (β, σ², Ω), and R is the range of the corresponding random vector. Step II: NONMEM estimation. For particle p at the kth iteration, a NONMEM‐based local estimation is performed, with the current position x_p^(k) of the particle used as the initial value. The estimates are obtained, along with the random‐effects estimates, through NONMEM, and the current position is then updated with the converged estimate x̂_p^(k), that is, x_p^(k) ← x̂_p^(k). Step III: Local and global best positions. The goodness of fit (GOF) for the pth particle at the kth iteration, GOF_p^(k), is calculated given the updated current position and the random‐effects estimates. GOF_p^(k) is then compared with the previous local and global best values, and the local and global bests are updated as follows: the local best position is set to x_p^(k) and its GOF value to GOF_p^(k) if GOF_p^(k) improves on the previous local best, and the global best position and its GOF value are updated in the same way. Step IV: Convergence. If the iteration count reaches the user‐specified maximum K, or all particles converge to the global best, P‐NONMEM stops; otherwise, it proceeds to Step V.
Step V: PSO update. The current positions are updated to x_p^(k+1) = x_p^(k) + v_p^(k+1) by PSO, where v_p^(k+1) is the updated velocity, and the algorithm returns to Step II until convergence. For parameter estimation in the nonpopulation setting, a number of different hybrid metaheuristic algorithms have been developed. Türksen and Tez developed two hybrid algorithms, called GANMS and PSONMS, to estimate the parameters of compartment models. GANMS and PSONMS combine a derivative‐free local optimization method, the Nelder‐Mead simplex (NMS), with GA and PSO, respectively. The local search of GA and PSO was replaced with NMS so that both GA and PSO contribute only to the global search. They compared the performance in terms of parameter estimation and suggested that PSONMS performs better than GA, PSO, and GANMS. They also used GA to develop an approach for bias‐corrected point estimates and bias‐corrected accelerated confidence interval estimates for two‐compartment models. To do this, they applied a bootstrap method to point estimates obtained by a derivative‐based nonlinear least squares (NLS) approach or by GA. Their comparison showed that the parameter estimates obtained by GA are nearly unbiased compared with those obtained by NLS. Recently, Pan et al. developed a support vector regression model with PSO (PSO‐SVR) to simultaneously characterize the PKs and PDs of Moutan Cortex and Moutan Cortex charcoal, which are widely used in traditional Chinese medicine. In PSO‐SVR, PSO was used to find parameters relating the chemical components to drug efficacy by optimizing the support vector regression model. They further compared PSO‐SVR with a back‐propagation neural network, a flexible and accurate type of artificial neural network, and demonstrated that PSO‐SVR is the better approach for pharmacometrics.
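The global‐plus‐local pattern shared by P‐NONMEM and PSONMS can be sketched generically: a metaheuristic explores the parameter space while a derivative‐free local optimizer refines candidate solutions. In the illustrative sketch below (not the published implementations), scipy's Nelder‐Mead stands in for the local estimation step, and the objective is a hypothetical least‐squares fit of a one‐compartment‐style exponential decay.

```python
import numpy as np
from scipy.optimize import minimize

def pso_nms(f, bounds, n_particles=20, n_outer=10, n_pso=10, seed=1):
    """Hybrid global/local search: PSO explores, Nelder-Mead polishes.

    Mirrors the pattern of P-NONMEM and PSONMS, with Nelder-Mead standing
    in for the local estimation step (e.g., NONMEM's local optimizer)."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])

    for _ in range(n_outer):
        # Local polish of the current global best (Step II analogue).
        i = int(np.argmin(pbest_f))
        res = minimize(f, pbest[i], method="Nelder-Mead")
        if res.fun < pbest_f[i] and np.all((res.x >= lo) & (res.x <= hi)):
            pbest[i], pbest_f[i] = res.x, res.fun
        g = pbest[np.argmin(pbest_f)]
        # PSO exploration (Step V analogue).
        for _ in range(n_pso):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            m = fx < pbest_f
            pbest[m], pbest_f[m] = x[m], fx[m]
            g = pbest[np.argmin(pbest_f)]
    i = int(np.argmin(pbest_f))
    return pbest[i], float(pbest_f[i])
```

For example, fitting A·exp(−k·t) to noiseless concentration data recovers (A, k) closely: the PSO sweep avoids poor starting regions and the simplex step supplies fast final convergence.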
Toney used SA, as implemented in the biochemical system simulator COPASI (http://www.copasi.org), to determine free energy profiles for alanine racemase and triosephosphate isomerase with complex kinetic mechanisms (such as Michaelis‐Menten kinetics). Sowparnika et al. used bacterial foraging‐oriented PSO (BFOA‐PSO) to develop an automatic drug infusion control system for use during cardiovascular surgery. BFOA‐PSO is a hybrid that uses BFO to determine a solution through its elimination and dispersal stage and PSO for the exchange of social information. Ye et al. used PSO to estimate the parameters of a three‐compartment toxicokinetic model for the absorption, distribution, metabolism, and elimination of the trichothecene T‐2 toxin in shrimp.

USING PSO TO GAIN INSIGHTS INTO STATISTICAL IDENTIFIABILITY ISSUES

Parameter estimation typically involves two types of identifiability issues: mathematical and statistical identifiability. Mathematical identifiability, also called structural or deterministic identifiability, is the ability to identify model parameters from noise‐free data free of design constraints (unlimited data). Statistical identifiability, also known as numerical identifiability, is the identifiability of parameters estimated from noisy data under design constraints. Mathematical identifiability is largely not a challenging problem in pharmacometric parameter estimation, as pharmacometric models are developed from differential equations with solid theoretical and mathematical foundations. The critical identifiability problem in pharmacometric modeling stems mainly from statistical identifiability due to a lack of samples and high residual variability. For example, in pharmacometric studies, the Michaelis‐Menten (MM) equation is often used to describe intrinsic clearance, v = Vmax · Cu / (Km + Cu), where Vmax is the maximum enzyme activity; Km, an inverse measure of the affinity between drug and enzyme; and Cu, the unbound drug concentration. Km is also called the MM constant and has units of concentration. However, when Km is much larger than the concentration (i.e., Km >> Cu), v becomes approximately equal to (Vmax/Km) · Cu. In addition, when Km is much smaller than the concentration (i.e., Km << Cu), v is approximately equal to Vmax. In other words, if the concentration is either much less than or much greater than Km, one will not be able to estimate both Vmax and Km separately because of a lack of statistical identifiability. Numerous methods have been presented to deterministically detect unidentifiable parameters, such as Laplace transforms, similarity transformation approaches, Volterra and generating power series approaches, differential algebra approaches, and alternating conditional expectation algorithms, but less has been done for statistical identifiability.
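A quick numerical check, using hypothetical values, illustrates this point: two (Vmax, Km) pairs that share the same ratio Vmax/Km produce nearly indistinguishable rates whenever the concentration stays well below both Km values, so no amount of such data can separate the two parameters.

```python
import numpy as np

def mm_rate(c, vmax, km):
    """Michaelis-Menten rate v = Vmax * C / (Km + C)."""
    return vmax * c / (km + c)

# Two hypothetical parameter sets with the same ratio Vmax/Km = 0.5
# but tenfold different individual values.
c = np.linspace(1e-4, 0.05, 100)       # concentrations far below both Km
v1 = mm_rate(c, vmax=5.0, km=10.0)
v2 = mm_rate(c, vmax=50.0, km=100.0)
rel_diff = float(np.max(np.abs(v1 - v2) / v2))
# In this regime v ≈ (Vmax/Km) * C, so the two curves nearly coincide:
# the data identify the ratio Vmax/Km, not Vmax and Km separately.
```

Only data spanning concentrations comparable to Km (where the curve bends) would distinguish the two parameter sets.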
One practical approach to evaluating statistical identifiability is local sensitivity analysis. It uses the first partial derivatives of the differential equations with respect to the parameters and depends on the nonsingularity of the Fisher information matrix, corresponding to the Taylor series method and the differential algebra method. However, if the estimate is far from the actual value or the model has very complex dynamics, local sensitivity analysis is likely to lead to wrong conclusions. Thus, a global sensitivity analysis for robust experimental design was proposed by Yue et al. based on the modified Morris method. Although Bayesian approaches avoid mathematical and statistical identification problems thanks to priors, poor statistical identifiability can still lead to poor convergence in parameter estimation. To speed up the convergence rate of Bayesian approaches, the switching Markov chain Monte Carlo (MCMC) method was introduced by Kim and Li to estimate parameters regardless of the statistical identifiability situation. It is a Bayesian approach that randomly switches between two schemes, single‐component MCMC and group‐component MCMC. However, all of the aforementioned approaches still require initial guesses or prior knowledge regarding the underlying relationships among the parameters. For this reason, Kim and Li recently introduced an optimization approach using PSO that not only handles identifiability as a whole but also does not require preprocessing to obtain initial guesses or prior knowledge. Their algorithm, called LPSO, combines PSO with a derivative‐free local optimization method, a simplex search algorithm, to enhance the convergence rate of the local best. Consequently, LPSO converges to a global optimum much faster than PSO and can be applied to parameter estimation regardless of statistical or numerical singularity.
They further introduced several convergence diagnostic measures to detect when to stop LPSO and to indicate whether the pharmacometric model is statistically identifiable. LPSO was, in fact, used to aid the estimation of parameters in a complex PK model. In this work, the objective was to integrate dynamic positron emission tomography and conventional plasma PK studies to characterize the plasma and tissue PKs of 1‐(2‐deoxy‐2‐fluoro‐beta‐d‐arabinofuranosyl) uracil (FAU) and 1‐(2‐deoxy‐2‐fluoro‐beta‐d‐arabinofuranosyl) 5‐methyluracil monophosphate (FMAU). An 8‐compartment model was used to simultaneously characterize the plasma and tissue pharmacokinetics of FAU and its active metabolite FMAU. The parameter estimation was performed mainly using SAEM in combination with MCMC, as implemented in Monolix (Lixoft, Orsay, France). In the study there were 12 subjects and 20 parameters to be estimated, comprising compartment volumes (V), clearances (CL), maximum velocities (Vmax), and MM constants (Km). There are six differential equations that include the MM equation, which can cause issues with statistical identifiability. The complexity of the PK model and the lack of literature‐based historical data made it difficult for SAEM to estimate the PK parameters in the NLMEM. In particular, SAEM‐based parameter estimation became very difficult because the statistical identifiability of each PK parameter was unknown and reasonable initial values and/or boundaries were not available. To resolve these difficulties, LPSO was applied not only to find good initial values and/or boundaries for each PK parameter but also to enable the researchers to investigate statistical identification problems for the MM equation in more depth.

DESIGN CONSTRUCTION FOR PHARMACOMETRICS MODELS

Background

NLMEMs are increasingly used to understand drug effects, and finding an efficient design at minimum cost can be challenging. If the goal is to estimate parameters in a model, D‐optimality is appropriate, and a D‐optimal design provides the most accurate estimates of the parameters among all designs using minimal resources. There are numerical approaches for finding optimal designs, but for complicated models such as NLMEMs, many current algorithms can be slow or unable to find a D‐efficient design. In pharmacometrics, the problem can be especially computationally challenging for models with a discrete end point. At the design stage, the NLMEMs in pharmacometric experiments contain unknown parameters, so the objective function cannot be optimized directly. A simple way to overcome this problem is to replace the unknown parameters with nominal values from previous experiments on similar drugs or from a pilot study. The objective function can then be optimized, and the resulting designs are called locally optimal because they depend on the nominal values. Typically, the design criterion is formulated in terms of the expected Fisher information matrix (FIM), which measures the worth of the design. For NLMEMs, the FIM does not have an analytic form, so the integral has to be approximated before the criterion is optimized. A common way to approximate the FIM is to use a first‐order (FO) linearization of the model around the expectation of the random effects. This has been shown to work well, and the method is available in several software programs. Although this method is generally efficient, FO has limitations when applied to NLMEMs with discrete outcomes. Recent work has suggested that adaptive Gaussian quadrature (AGQ) can be useful for such models with a small number of random effects, but the calculation of the FIM can become slow, especially as the number of random effects in the model increases (the curse of dimensionality).
The SG technique for numerical integration has been shown to be efficient for evaluating high‐dimensional integrals, although it is less effective for low‐dimensional ones. Of particular relevance is the work of Plumlee, who provided an excellent background with technical details of SG and compelling examples, and asserted that "The computational savings can be several orders of magnitude when the input is located in a high‐dimensional space." See also the introductory information and illustrative applications of SG at https://www.ucl.ac.uk/uctpjyy/downloads/SparseGrid.pdf, which includes instructions on how to implement and use the SG codes from CRAN. Our findings here support the comparative advantages of SG for computing the FIM when we want to approximate a high‐dimensional integral (Figure 1).
FIGURE 1

CPU times (seconds) required by the AGQ and SG for evaluating two to four parameter Poisson‐type models in the section Nonlinear mixed‐effects Poisson‐type models at different accuracy levels. AGQ, adaptive Gaussian quadrature; CPU, central processing unit; SG, sparse grid

Following convention, the population log‐likelihood depends on data derived from the population design, which is made up of a collection of elementary designs, where each elementary design may be shared by multiple individuals. As an example, a population design Ξ comprises Q elementary designs ξ_1, …, ξ_Q. If N_q subjects are assigned to ξ_q, then the population design can be written as Ξ = {(ξ_1, N_1), …, (ξ_Q, N_Q)} with the constraint that N_1 + … + N_Q = N. If all N subjects have identical elementary designs ξ, then the population design is simply Ξ = (ξ, N). Assuming independence across the subjects, and with Ψ denoting the unknown model parameters in the NLMEM, the population FIM M_F(Ψ, Ξ) for a design Ξ is the sum of the elementary matrices from each subject, that is, M_F(Ψ, Ξ) = Σ_q N_q M_F(Ψ, ξ_q), where the elementary matrices are the expectation, across all possible datasets, of the negative second derivative of the log‐likelihood surface, M_F(Ψ, ξ) = E[−∂² log L(Ψ; y) / ∂Ψ ∂Ψᵀ]. Our criterion is D‐optimality; that is, we seek a design that maximizes the determinant of the FIM after it is normalized by the number of parameters to be estimated. When nominal values Ψ₀ for the model parameters are available, we replace Ψ in the FIM by Ψ₀, and the FIM for the design Ξ becomes M_F(Ψ₀, Ξ). We then optimize max over Ξ in X of |M_F(Ψ₀, Ξ)|^(1/P), where |·| denotes the determinant, X is the set of all designs on the design space, and P is the number of estimated parameters in the model. This normalized FIM determinant is the D‐criterion value; larger values indicate better designs. To compare the performance of two designs ξ_1 and ξ_2 for estimating the model parameters, we use the ratio (|M_F(Ψ₀, ξ_1)| / |M_F(Ψ₀, ξ_2)|)^(1/P), which is the D‐efficiency of the design ξ_1 relative to the design ξ_2. As an example, if the above ratio is 0.5 or 50%, the design ξ_1 needs twice as many observations to do as well as the design ξ_2.
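Given an information matrix, the D‐criterion and D‐efficiency computations reduce to a few lines. This is a generic sketch, independent of how the FIM itself is approximated.

```python
import numpy as np

def d_criterion(fim):
    """Normalized D-criterion value |M|^(1/P), P = number of parameters."""
    fim = np.asarray(fim, float)
    p = fim.shape[0]
    return np.linalg.det(fim) ** (1.0 / p)

def d_efficiency(fim1, fim2):
    """D-efficiency of design 1 relative to design 2: (|M1|/|M2|)^(1/P).

    A value of 0.5 means design 1 needs twice as many observations to
    estimate the parameters as well as design 2."""
    return d_criterion(fim1) / d_criterion(fim2)
```

Note that the FIM scales linearly with the number of subjects, so doubling the sample size doubles the normalized criterion value; this is exactly why an efficiency of 0.5 translates into "twice as many observations."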
In the population design setting, where the outcome is usually continuous, traditional optimization methods have been used to search for efficient designs. Hybridized heuristic methods, which tend to work well on a wider set of problems, have also been used to find efficient designs. For more complex design problems, multiple algorithms have been applied in sequence. However, apart from Ueckert, who used an MCMC approach, algorithms for finding efficient designs for NLMEMs with a count end point seem to be less discussed in the design literature. We next present a few specific models with a count end point and show that a hybridization of SG and PSO, called SGPSO, can find D‐efficient designs for these types of models.

Nonlinear mixed‐effects Poisson‐type models

Inspired by previous work, we consider Poisson NLMEMs with various dose levels as an example system. The responses recorded over time are non‐negative integer counts. The simplest model investigated is a two‐parameter model that describes the response probability at dose level d through two subject‐specific parameters, b1 and b2, which follow a log‐normal distribution; we assume fixed effects β = (1, 0.5) with a given random‐effects covariance Ω. A more complicated version is a three‐parameter model with subject‐specific parameters b1, b2, and b3 following a log‐normal distribution; we assume β = (1, 0.5, 0.1) with a given Ω. We also consider a four‐parameter model in which b1, b2, and b3 follow a log‐normal distribution and the fourth parameter follows a normal distribution with mean 0; here β = (1, 0.5, 0.1). For each model, we assume designs with 20 subjects and 90 observations per subject at dose levels between 0 and 1. The reference design provides 30 observations at each of the three doses 0, 0.4, and 0.7. In terms of population design notation, we have Ξ = (ξ, 20) with one elementary design ξ (i.e., Q = 1).
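To make the setup concrete, the sketch below simulates counts from a simplified Poisson NLMEM. The dose‐response intensity used here is hypothetical, chosen only for illustration; the specific functional forms of the paper's two‐, three‐, and four‐parameter models are not reproduced.

```python
import numpy as np

def simulate_poisson_nlmem(doses, n_subjects, beta, omega, seed=0):
    """Simulate counts from a simplified Poisson NLMEM.

    Hypothetical intensity (illustration only, not the paper's model):
        lambda_i(d) = b1_i * exp(-b2_i * d),
    where subject parameters are log-normal around the fixed effects,
        b_j_i = beta[j] * exp(eta_j_i),  eta_j_i ~ N(0, omega[j]**2)."""
    rng = np.random.default_rng(seed)
    doses = np.asarray(doses, float)
    counts = np.empty((n_subjects, len(doses)), dtype=int)
    for i in range(n_subjects):
        b1 = beta[0] * np.exp(rng.normal(0, omega[0]))
        b2 = beta[1] * np.exp(rng.normal(0, omega[1]))
        lam = b1 * np.exp(-b2 * doses)       # Poisson intensity per dose
        counts[i] = rng.poisson(lam)
    return counts
```

With 20 subjects observed at the reference doses (0, 0.4, 0.7), this produces a 20 x 3 matrix of non‐negative counts, the kind of data for which the FIM integrals below must be approximated.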

A comparison of using AGQ and SG for Poisson NLMEM

We compare the performance of AGQ and SG for approximating FIMs of Poisson NLMEMs using models with discrete outcomes, different numbers of parameters, and varying numbers of random effects. Following Ueckert et al., we use quasi‐random Monte Carlo (QRMC) to integrate out the random effects. All other conditions were fixed between SG and AGQ, including using 1,000 QRMC samples and the same one‐dimensional three‐point Gaussian quadrature. The comparison criteria are the CPU times required to compute the FIM and the numbers of node points required at different accuracy levels. An SG constructed with a given accuracy level integrates complete polynomials up to a corresponding total order exactly. SG limits the total order of all the one‐dimensional monomials used to approximate the multidimensional integral, using one‐dimensional polynomials of lower degrees, so it is less accurate but requires far fewer nodes when the integral is high‐dimensional. For example, if the integral is nine‐dimensional, SG requires 5,965 nodes, whereas the product rule for AGQ requires 1,953,125 nodes. Consequently, the choice of accuracy level is a compromise between computational efficiency (having fewer nodes to evaluate the integral) and approximation accuracy. A more complete and technical definition of accuracy level is given in Shi. Figure 1 shows the CPU times required by AGQ and SG to compute the FIM at different accuracy levels for the two‐ to four‐parameter Poisson‐type models. Regardless of the accuracy level, AGQ outperforms SG for the two‐dimensional model, but SG outperforms AGQ for the three‐dimensional model, and much more so for the four‐dimensional model, where SG requires about a third of the time required by AGQ at the same accuracy level. Figure 2 shows the criterion values of the designs generated by AGQ and SG for the two‐ to four‐parameter models at different accuracy levels.
They are the determinants of the normalized FIMs, and their values get closer as the accuracy level increases; at higher accuracy levels, the difference becomes indistinguishable. When the model has more parameters, higher accuracy levels are required for the two criterion values to be close. The figure also shows that AGQ outperforms SG in terms of the consistency of criterion values across accuracy levels and model dimensions. Thus, the SG method should be seen as a way to decrease computational time, and care should be taken with calculations at low accuracy levels.
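The node‐count gap driving these savings is easy to reproduce for the product‐rule side; the sparse‐grid count of 5,965 quoted above comes from the text, since computing it requires a sparse‐grid construction not shown here.

```python
def product_rule_nodes(n_points_1d: int, dim: int) -> int:
    """A full tensor-product (product-rule) quadrature with n points per
    dimension needs n**dim nodes: the curse of dimensionality that
    sparse grids are designed to avoid."""
    return n_points_1d ** dim

# Growth of the product rule with five one-dimensional points:
for d in (2, 4, 9):
    print(d, product_rule_nodes(5, d))
```

At nine dimensions the product rule needs 1,953,125 node evaluations, roughly 330 times the 5,965 sparse‐grid nodes cited in the text, which is why SG wins as the number of random effects grows.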
FIGURE 2

Criterion values of the generated designs for the two to four parameter models in the section Nonlinear mixed‐effects Poisson‐type models at different accuracy levels


Algorithms with different hybridizations for finding D‐efficient designs for the four‐parameter Poisson NLMEM

This subsection compares results using different hybridization schemes to search for a D-efficient design for the four-parameter Poisson NLMEM of the previous subsection. The FIM is approximated by either QRMC-SG (SG-PSO) or QRMC-AGQ (AGQ-PSO). We used 1000 QRMC samples and the same one-dimensional three-point Gaussian quadrature for AGQ and SG. The accuracy level used for SG in this example is four. PSO was then used to optimize the dose levels and observation allocations based on D-optimality for two scenarios. The first is a "Fix allocation" scenario, where we fix the observation allocation to be the same as in the reference design and find the three dose levels that maximize the D-criterion value. The second is the "Flexible" scenario, where we search for the optimal allocation scheme without restriction. The PSO parameters used for this example were as follows: the maximum number of iterations was 100, the inertia parameter was set to linearly decrease from 0.9 to 0.4, and the swarm size was 40 particles. Table 1 reports the results for the two design scenarios. All generated designs are more efficient than the reference design, with relative D-efficiencies ranging from 131% to 135%. The SG-PSO approach finds designs essentially equivalent to those from AGQ-PSO, but AGQ-PSO required 1.7 times the CPU time of SG-PSO.
TABLE 1

D‐optimality criterion values of the AGQ‐PSO‐ and SG‐PSO‐generated designs (at accuracy level 4) for the four‐parameter Poisson‐type model, their D‐efficiencies relative to the reference design in parentheses (relative efficiency, RE) and CPU time in seconds to find the generated designs

Design                     Dose level   No. of observations   D-criterion (RE)    CPU time (s)
Reference design           0            30                    57.691              N/A
                           0.4          30
                           0.7          30
Fix allocation, AGQ-PSO    0            30                    73.098 (126.7%)     27,164.47
                           0.25         30
                           0.88         30
Fix allocation, SG-PSO     0            30                    75.694 (131.2%)     16,203.58
                           0.25         30
                           0.94         30
Flexible, AGQ-PSO          0            19                    76.699 (132.9%)     27,455.37
                           0.21         29
                           0.78         42
Flexible, SG-PSO           0            19                    76.699 (132.9%)     16,259.36
                           0.21         29
                           0.78         42

The fixed allocation scheme requires the algorithm to find the three best dose levels with an equal number of observations at each dose, whereas the flexible scheme allows the algorithm to determine both the three best dose levels and the optimal number of observations at each dose. In both cases, the total number of observations per individual is 90.

Abbreviations: AGQ, adaptive Gaussian quadrature; CPU, central processing unit; N/A, not applicable; PSO, particle swarm optimization; RE, relative efficiency; SG, sparse grid.
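The PSO settings reported above (swarm size 40, 100 iterations, inertia linearly decreasing from 0.9 to 0.4) can be sketched in a minimal one-dimensional PSO. This is an illustrative sketch only: the cognitive and social coefficients are not stated in the text, so the common default c1 = c2 = 2.0 is assumed, and a toy objective maximized at x = 0.3 stands in for the D-criterion.

```python
import random

def pso_maximize(f, lo, hi, n_particles=40, n_iter=100, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO sketch with linearly decreasing inertia (0.9 -> 0.4)."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest, pbest_f = x[:], [f(xi) for xi in x]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for t in range(n_iter):
        w = 0.9 - (0.9 - 0.4) * t / (n_iter - 1)  # linearly decreasing inertia
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])    # cognitive pull
                    + c2 * rng.random() * (gbest - x[i]))      # social pull
            x[i] = min(max(x[i] + v[i], lo), hi)               # stay in the design space
            fi = f(x[i])
            if fi > pbest_f[i]:
                pbest[i], pbest_f[i] = x[i], fi
                if fi > gbest_f:
                    gbest, gbest_f = x[i], fi
    return gbest, gbest_f

# Toy stand-in for the D-criterion, maximized at x = 0.3.
best_x, best_f = pso_maximize(lambda x: -(x - 0.3) ** 2, 0.0, 1.0)
```

In the actual search, each particle would encode the three dose levels (and, in the flexible scenario, the observation allocation), and f would be the D-criterion computed from the QRMC-SG or QRMC-AGQ approximation of the FIM.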

We next compare results when SG is retained but PSO is replaced by three of its strong competitors: GA, DE, and a version of the quantum-inspired PSO (QPSO). We used these algorithms coupled with SG to find efficient designs for the same four-parameter Poisson NLMEM under the "Flexible" scenario. The parameters of GA were set as follows: population size 50, number of generations 100, elitism 2, crossover probability 0.8, and mutation probability 0.1. The parameters of DE were: population size 80, number of generations 100, step size 0.8, and crossover probability 0.5. The parameters of QPSO were: alpha linearly decreasing from 1.4 to 0.4, population size 40, and maximum number of iterations 100. These are generally the default parameters for the algorithms. Table 2 displays the designs found when SG is hybridized with PSO and the three other metaheuristic algorithms: QPSO, GA, and DE. It is clear that SG-PSO finds the design with the highest D-criterion value.
Note that these results are clearly dependent on the algorithm parameters and should be seen as a guide rather than a full comparison.
TABLE 2

A comparison of algorithms when SG, with accuracy level four, is hybridized with other metaheuristic algorithms for finding efficient designs for the four‐parameter Poisson‐type model

Algorithm   Dose level   No. of observations   D-criterion (relative efficiency)
PSO         0            19                    76.70 (100%)
            0.21         29
            0.78         42
QPSO        0            46                    65.08 (84.9%)
            0.26         29
            0.97         15
GA          0.02         29                    69.89 (91.1%)
            0.31         30
            0.88         31
DE          0.01         7                     70.24 (91.6%)
            0.22         47
            0.97         36

Each algorithm finds the best three dose levels and the optimal number of observations at each dose, subject to the constraint that they sum to 90.

Abbreviations: DE, differential evolution; GA, genetic algorithm; PSO, particle swarm optimization; QPSO, quantum‐inspired particle swarm optimization; SG, sparse grid.
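For comparison with the PSO update, the DE/rand/1/bin scheme with the settings reported in the text (population 80, 100 generations, step size F = 0.8, crossover probability CR = 0.5) can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the same one-dimensional toy objective, maximized at x = 0.3, stands in for the D-criterion.

```python
import random

def de_maximize(f, lo, hi, pop_size=80, n_gen=100, F=0.8, CR=0.5, seed=0):
    """Minimal DE/rand/1/bin sketch (1-D) with greedy selection."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    fit = [f(p) for p in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            # Pick three distinct population members other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = pop[a] + F * (pop[b] - pop[c])          # differential mutation
            trial = mutant if rng.random() < CR else pop[i]  # binomial crossover (1-D)
            trial = min(max(trial, lo), hi)                  # stay in bounds
            ft = f(trial)
            if ft >= fit[i]:                                 # greedy selection
                pop[i], fit[i] = trial, ft
    best = max(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

best_x, best_f = de_maximize(lambda x: -(x - 0.3) ** 2, 0.0, 1.0)
```

In the design search, each population member would encode the dose levels and observation allocation, with the D-criterion from the QRMC-SG approximation as the objective.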


DISCUSSION

We have demonstrated the utility of nature-inspired metaheuristic algorithms for tackling several optimization problems in pharmacometrics. For example, finding optimal designs for NLMEMs is a notoriously difficult problem; the FIM has to be numerically approximated before the design is found by optimizing a function of the FIM. There is no theory to confirm optimality for such designs, and a common practice is to implement the design with the best criterion value among a few designs recommended by medical experts in the area. A metaheuristic algorithm, or a properly hybridized version of one, is likely able to find an efficient design for such models. We demonstrated an application by combining SG and PSO and showed that the hybridized version can find a D-efficient design for a Poisson model with multiple parameters. We also combined SG with a few other metaheuristic algorithms, namely QPSO, GA, and DE, and found that the SG-PSO hybridization appears to outperform these alternatives. Our examples show that the SG-PSO-generated designs outperformed the reference designs in all instances in terms of D-efficiency. These results also match previous findings for different models with a continuous outcome. We note that designs found by PSO or SG-PSO should be referred to as PSO-generated or SG-PSO-generated designs, and not D-optimal designs, because there is no guarantee that a metaheuristic algorithm will find an optimal design. Under such circumstances, when the true optimum is unknown, we recommend running several types of metaheuristic algorithms in the hope that they all generate similar designs. Otherwise, we can hybridize a metaheuristic algorithm with another optimization algorithm so that the hybridized version works better than either one alone. This optimization process is implemented in software for population optimal designs (PopED, https://andrewhooker.github.io/PopED/).
We conclude by noting that metaheuristics should be used as a last resort, when traditional optimization methods fail. In complex situations, the optimization problem may not be amenable to a formulation that meets all the assumptions required by traditional methods, and it is in these situations that metaheuristics should be used. However, because metaheuristic algorithms are usually fast, flexible, and easy to implement and use, many researchers apply them for convenience. We also note that not all metaheuristics perform equally well for all problems, and some are harder to use, in part because they are motivated differently or have different numbers of tuning parameters. Tuning the parameters of a metaheuristic algorithm for convergence can be both difficult and very time consuming, and it is a perennial problem for metaheuristics. Similarly, rigorously proving that a nature-inspired metaheuristic algorithm converges to the global optimum has been a stubbornly difficult problem. Both issues are active research areas in metaheuristics, and incremental advances are constantly being made. In practice, however, one can frequently and convincingly argue that only an approximate optimal solution is needed. Other open problems in metaheuristics with particular relevance for health care studies are described in Tsai.

CONFLICT OF INTEREST

The authors declared no competing interests for this work.