Literature DB >> 36254121

Control in Probability for SDE Models of Growth Population.

Pedro Pérez-Aros1, Cristóbal Quiñinao1, Mauricio Tejo2.   

Abstract

In this paper, we consider a (control) optimization problem involving stochastic dynamics. The model selects the best control function that keeps a stochastic process bounded over a time interval with a high probability level. Here, the stochastic process is governed by a stochastic differential equation driven by a stochastic process. This setting becomes a chance-constrained control optimization problem, where the constraint is given by the probability level of infinitely many random inequalities. Since such a model is challenging, we discretize the dynamics and restrict the space of control functions to piecewise-constant mappings. On the one hand, this transforms the infinite-dimensional optimization problem into a finite-dimensional one. On the other hand, it allows us to establish the well-posedness of the problem and its approximation. Finally, the results are illustrated with numerical experiments, where classical models for population growth are considered.
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


Keywords:  Chance constrained optimization; Control in Probability; Growth population models

Year:  2022        PMID: 36254121      PMCID: PMC9558074          DOI: 10.1007/s00245-022-09915-7

Source DB:  PubMed          Journal:  Appl Math Optim        ISSN: 0095-4616            Impact factor:   2.194


Introduction

Models for the growth of a population have played an essential role in multiple fields long before the seminal work of T. Malthus (1798) in dynamical systems. From the simple Fibonacci sequence to Lotka-Volterra systems, epidemic SIR models, and more modern game-theoretic work, we have advanced in our understanding of the mechanisms of biological replication, the stability properties of ecological networks, and many other biological applications [3, 4]. For instance, SIR and all its extensions have been a critical element in predicting the behavior of the COVID-19 pandemic and have helped policymakers to understand the importance of data and modeling in the modern world [18]. The use of classical dynamical systems and control theory has been a very prolific field, not only in theory but also in applications (see e.g. [1, 12]). However, biological systems are intrinsically noisy, depending enormously on environmental conditions that cannot always be precisely determined. Even in controlled environments, models have to take into consideration small fluctuations in variables such as temperature or pressure [8], or directly adopt an open-system approach. Stochastic programming emerges as a suitable tool for making decisions under noisy or random phenomena, which can affect the data and the solution of the optimization problem. Consequently, the optimal point must be chosen before the randomness is observed. It is well recognized as an important area in mathematical programming and operations research, with plenty of real-world applications (see, e.g., [2, 13, 19]). The use of expectation and risk measures as objective functions or constraints provides machinery to model and handle the uncertainty in the decision-making process.
Notably, in recent years, optimization problems under probabilistic constraints, or chance-constrained optimization problems, have captured the attention of several researchers, with the intention of modeling problems where the uncertainty affects the constraints of the problem. Probabilistic constraints offer a middle ground between robustness and feasibility of the constraint (nonemptiness of the feasible set), because the requirements must be satisfied neither for every realization nor merely on average, but on an event of high probability. Here, it is necessary to mention that in recent years many authors have made an effort to incorporate probabilistic constraints into the control of systems of differential equations (see, e.g., [5-7]). This has motivated the study of new classes of probability functions in finite- and infinite-dimensional settings (see, e.g., [10, 23, 27] and the references therein). In this paper, we are interested in studying a probabilistic approach to the control of a (stochastic) dynamical system. Intuitively, we wish to find the "minimal cost" control u that keeps the dynamics below a prescribed quantity for all times. For instance, the dynamics might represent the biomass of a grasshopper living in a crop field, or of some parasite in a salmon-farm context. Our desire is then to keep the level of infection below an upper regulatory safety bound. Since the intention is not to eliminate the host population if the threshold is attained, we only focus on having a good control (e.g., use of pesticides) that ensures, with high probability, that we do not cross the upper safety bound. The classical approach would require the bound to hold always (independently of the possible values of the stochastic process), which turns out to be a very restrictive scenario for real-world applications. In our setting we relax this condition as follows.
Let the state process be the solution of a (random) dynamical system (1), driven by an "unknown" stochastic process and a control function u. As a risk-averse formulation, our desired control problem corresponds to a chance-constrained control problem of the form (2), whose data are an objective function, the set C of admissible controls, and a probability function built from the "unique" solution of System (1) together with user-defined lower and upper safety levels. Nevertheless, the optimization problem (2) is far from a straightforward mathematical programming problem. For example, the first step in the well-posedness of (2) is the correct choice of the space of controls, which should be large enough to guarantee the existence of a solution to (2); at the same time, the controls should have enough regularity (continuity, differentiability, etc.) with respect to the data to preserve the existence and uniqueness of solutions of (1). In order to tackle Problem (2) we propose a semi-discretization of the dynamical system (1). Formally, we consider a partition of the time interval, take as the space of controls the piecewise-constant functions on that partition, and identify this space of functions with a finite-dimensional vector of control values, one per subinterval. Therefore, we end up with a sequence of nested dynamical systems (DS), whose data are approximations of the original coefficients, the control, and the noise over each subinterval of the partition. Our aim is then to study the chance-constrained control problem (4), whose data are an objective function, a nonempty closed set of admissible controls, and the probability function (5), defined through the piecewise trajectory (6) given by the "unique" solution of the dynamical system (DS) on each subinterval, with the initial condition chained from the previous one. For simplicity of the calculations, we assume that we know the initial condition exactly, meaning that it is fixed and taken independently of the control and the noise. The rest of the paper is organized as follows: In Sect. 2 we review some definitions and notations. In Sect.
3 we start by reviewing some preliminary concepts on the well-posedness of the dynamical system under consideration, along with the conditions on the parameters needed for consistency between (DS) and (1). We close that section with a review of the most classical first-order ODE models of population growth. In Sect. 4 we study the properties of the semi-probabilistic optimization problem, including geometrical properties, regularity, and optimality conditions. Finally, in Sect. 5 we present a logistic growth model and, using the empirical mean over several simulations, approximate the probability function, showing how nontrivial control functions improve upon the fully controlled system of the classical approach.

Preliminaries and Notation

In this section, we recall some notation and concepts from variational analysis and optimization. Throughout the rest of the paper, $\mathbb{R}^n$ denotes the n-dimensional Euclidean space equipped with the Euclidean norm $\|\cdot\|_n$; we omit the subindex n when there is no possible confusion. The zero vector in $\mathbb{R}^n$ is denoted by $0_n$. Given a set $C \subseteq \mathbb{R}^n$, we say that C is convex if $\lambda u + (1-\lambda)v \in C$ for all $u, v \in C$ and $\lambda \in [0,1]$. Let us consider $f$ defined on a convex set C. We say that f is convex on C (only convex if there is no confusion) provided that $f(\lambda u + (1-\lambda)v) \le \lambda f(u) + (1-\lambda)f(v)$ for every $u, v \in C$ and all $\lambda \in [0,1]$. In addition, if the last inequality holds strictly for all $u \neq v$ and $\lambda \in (0,1)$, then the function f is said to be strictly convex on C. The function f is said to be quasiconvex if for every $\alpha \in \mathbb{R}$ the sublevel set $\{u \in C : f(u) \le \alpha\}$ is convex, and the function f is log-convex if $\log f$ is a convex function. Finally, the function f is said to be concave (quasiconcave) if $-f$ is convex (quasiconvex), and log-concave if $\log f$ is concave. Given a point u, we say that f is lower semicontinuous at u if for any sequence $u_k \to u$ we have $f(u) \le \liminf_k f(u_k)$. The function f is upper semicontinuous at u if $\limsup_k f(u_k) \le f(u)$ for any sequence $u_k \to u$. For a point u and $r > 0$, the closed ball centered at u with radius r is denoted by $\mathbb{B}(u, r)$. Now, we recall some notions of normal vectors to nonconvex sets. Given a closed set C and a point $u \in C$, we define the regular normal cone by $\widehat{N}(u; C) := \{ v : \limsup_{u' \xrightarrow{C} u} \langle v, u' - u \rangle / \|u' - u\| \le 0 \}$, where $u' \xrightarrow{C} u$ means $u' \to u$ with $u' \in C$. For convenience, we set $\widehat{N}(u; C) = \emptyset$ for a point $u \notin C$. A more robust object is the basic normal cone, which is given as the Painlevé-Kuratowski upper limit of the regular normal vectors. Formally, given a point $u \in C$, we define the basic normal cone as $N(u; C) := \limsup_{u' \xrightarrow{C} u} \widehat{N}(u'; C)$. If $u \notin C$, we set $N(u; C) = \emptyset$. It is important to mention that when the set C is closed and convex, both notions, the regular and the basic normal cone, coincide with the classical normal cone of convex analysis, that is, $N(u; C) = \{ v : \langle v, u' - u \rangle \le 0 \text{ for all } u' \in C \}$.

Well-Posedness of the Problem

In this section, we focus our attention on the study of the probability functions defined in (4), given by the piecewise solution (6) of the dynamical system (DS). To simplify the notation, we relabel and omit the index notation of the control parameter u when no confusion is possible. We start by showing the well-posedness of Eqs. (1) and (DS) and the consistency between the simplification and the general process as the mesh of the partition tends to zero.

Existence for the General Equation (1)

For a compact set, let us introduce the space of real-valued stochastic processes X(t) satisfying the integrability condition below.

Lemma 1

Let Z(t) be a separable measurable stochastic process with values in a compact set. Define G as in (7) and assume the following:

(H1) There exists a stochastic process such that, for all t, G(t, ·) is Lipschitz in the state variable with that process as Lipschitz constant;

(H2) for any positive constant R, there exists a stochastic process bounding G(t, y) whenever the state y lies in the ball of radius R.

Then, equation (8) has a unique solution, which is almost surely absolutely continuous on the time interval.

Remark 1

This is a simple extension of [14, Theorem 1.5], and the proof is built on it.

Proof

Without loss of generality, assume that the state is bounded by some positive constant R. Applying Picard's method of successive approximations to (8), and using (H1) and (H2) to estimate the iterates, we conclude that the limit of the iterates exists and satisfies (8). Uniqueness follows from the usual argument based on Grönwall's inequality; the solution of (8) is clearly absolutely continuous, with derivative at t given by the right-hand side. Thus, the proof is finished.
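
The successive-approximation scheme used in this proof can be illustrated numerically. The following minimal Python sketch (not from the paper; a deterministic linear test case is assumed) iterates the Picard map y_{k+1}(t) = y0 + ∫_0^t G(s, y_k(s)) ds with the trapezoidal rule and checks convergence to the exact exponential solution:

```python
import math

def picard_iterates(G, y0, T, n_grid=2000, n_iter=12):
    """Picard's successive approximations for y'(t) = G(t, y(t)), y(0) = y0:
    y_{k+1}(t) = y0 + integral_0^t G(s, y_k(s)) ds, with the integral
    evaluated by the trapezoidal rule on a uniform grid."""
    ts = [i * T / n_grid for i in range(n_grid + 1)]
    y = [y0] * (n_grid + 1)              # y_0(t) := y0
    for _ in range(n_iter):
        g = [G(t, yt) for t, yt in zip(ts, y)]
        new, acc = [y0], 0.0
        for i in range(1, n_grid + 1):
            acc += 0.5 * (g[i - 1] + g[i]) * (ts[i] - ts[i - 1])
            new.append(y0 + acc)
        y = new
    return ts, y

# Linear test case y' = a*y: the Picard iterates are the partial sums of the
# exponential series, converging to y0 * exp(a*t).
a, y0, T = 0.8, 1.0, 1.0
_, y = picard_iterates(lambda t, yt: a * yt, y0, T)
print(abs(y[-1] - y0 * math.exp(a * T)))  # small approximation error
```

For the linear case the k-th iterate is exactly the degree-k Taylor polynomial of the exponential, so the convergence predicted by the contraction argument is visible after a handful of iterations.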

Remark 2

In practice, a more realistic assumption than the one established by (H1) is the locally Lipschitz condition, that is, (H1) satisfied locally: whenever the states are bounded by some R > 0, there exists a stochastic process (depending on R) for which the Lipschitz estimate holds. However, this relaxation of the assumption does not guarantee the existence and uniqueness of a solution for all times, since the solution can explode in finite time. To avoid this, [14, Sect. 1.3] gives additional conditions under which explosion in finite time is ruled out. The idea is to control the growth of any possible solution of (8) by a stochastic bound that diverges almost surely as time increases. For this, it is enough to assume that, on any fixed time interval, there exists a positive bounded stochastic process dominating the growth of the right-hand side (see also [15, Chapter 2, Theorem 3.5]). This condition fits the models that we use in our context, where, with high probability, the solution process stays within some compact set.
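
The finite-time explosion phenomenon mentioned above is easy to reproduce. A small Python sketch (a hypothetical example, not from the paper) integrates the locally Lipschitz equation y' = y², whose exact solution y(t) = y0/(1 − y0·t) blows up at t* = 1/y0:

```python
# Finite-time explosion for the locally Lipschitz right-hand side y' = y^2:
# the exact solution y(t) = y0 / (1 - y0*t) blows up at t* = 1/y0, so no
# global solution exists past t*.  (Illustrative values.)
y0 = 2.0
t_star = 1.0 / y0  # explosion time

def explicit_euler(f, y0, T, n):
    """Crude explicit Euler; returns the final value at time T."""
    h, y = T / n, y0
    for _ in range(n):
        y = y + h * f(y)
    return y

# Just before the explosion time the solution is finite but already huge;
# explicit Euler lags slightly below the exact value.
before = explicit_euler(lambda y: y * y, y0, 0.95 * t_star, 20000)
exact_before = y0 / (1.0 - y0 * 0.95 * t_star)
print(before, exact_before)
```

The right-hand side y² is Lipschitz on every bounded set but not globally, which is exactly the gap between the local condition and (H1) that the growth bounds above are designed to close.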

Existence for the Discretization (DS)

The following lemma establishes the preliminary result for the well-posedness of the ordinary differential equation (DS) and some continuity properties with respect to its parameters. Notice that in this subsection we work with a given realization z and not with the stochastic representation; therefore our results are modifications of classical tools in bifurcation theory.

Lemma 2

For a fixed subinterval, consider the differential equation (12), with (u, z) a vector of parameters and a given initial condition. Assume that, for each value of u and z, the right-hand side is Lipschitz continuous in y with a Lipschitz constant L locally independent of (u, z). Then, for each fixed (u, z), this problem has a unique solution on the interval. If, furthermore, the right-hand side is continuous and bounded, and f(y, u) is Lipschitz continuous with a Lipschitz constant independent of (u, z), then the solution is jointly continuous in (u, z). As a consequence, the associated constraint set is closed. The existence and uniqueness of the solution is a direct consequence of the Picard global existence theorem for a Lipschitz right-hand side. For the continuity, take two vectors of parameters and consider the respective solutions of equation (12). Subtracting the two equations and integrating from the initial time, a Grönwall-type estimate bounds the difference of the solutions by the difference of the parameters; the right-hand side of this bound goes to 0 as one parameter vector approaches the other, and therefore the continuity in (u, z) follows. Moreover, the upper bound does not depend on t, so the continuity is actually uniform in time t. Take now a point and a sequence of values in the set converging to it. From the continuity of the solution map we know that, for any ε > 0, the corresponding solutions are eventually uniformly ε-close. If there were some t at which the limit trajectory violated the defining inequality, then for some k the approximating trajectory would violate it as well, which contradicts the definition of the set. In consequence, the set is closed.

Corollary 1

Under the same hypotheses of Lemma 2, for parameters as in the lemma there exists a unique solution of the concatenated system with a given initial condition. Moreover, the associated constraint set is closed. To define the solution to the problem we use an iterative strategy. Consider the initial subinterval and the corresponding equation. Thanks to Lemma 2, the equation has a unique solution, continuous in the parameters; moreover, the integral form of the equation holds and the associated set is closed. Now, define the initial condition of the second subinterval as the terminal value on the first one. Arguing as for the first subinterval, the equation on the second subinterval has a unique solution, continuous in the parameters; again the integral form holds, and the associated set is also closed. Iterating the above construction, we conclude that the concatenated process is well-defined and absolutely continuous on the whole interval. Finally, thanks to this construction, the full constraint set is the intersection of the closed sets obtained at each step, hence closed.
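
The iterative construction of Corollary 1 can be sketched directly in code. The following Python fragment (an illustrative sketch; the linear decay dynamics and all parameter values are assumptions, not the paper's) chains explicit Euler sub-solutions across the subintervals of a partition, each started from the previous endpoint:

```python
import math

def solve_piecewise(f, y0, t_knots, u_pieces, n_sub=1000):
    """Chain explicit Euler sub-solutions of y' = f(y, u_k) over the
    subintervals [t_{k-1}, t_k], each started from the previous endpoint
    (the construction of Corollary 1).  Returns the values at the knots."""
    endpoints = [y0]
    y = y0
    for k, u in enumerate(u_pieces):
        h = (t_knots[k + 1] - t_knots[k]) / n_sub
        for _ in range(n_sub):
            y = y + h * f(y, u)
        endpoints.append(y)
    return endpoints

# Linear decay y' = -u*y: the chained solution is y0 * exp(-sum_k u_k * dt_k).
knots = [0.0, 1.0, 2.0, 3.0]
controls = [0.5, 1.0, 0.2]        # piecewise-constant control values
ends = solve_piecewise(lambda y, u: -u * y, 1.0, knots, controls)
exact = math.exp(-(0.5 + 1.0 + 0.2))
print(ends[-1], exact)
```

The chaining of terminal values into initial conditions is exactly what makes the concatenated trajectory absolutely continuous across the knots.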

Consistency Between the Approximate System (DS) and the General Formulation (1)

Next, we construct a (DS) approximation of the general equation (1) that converges in probability to (1) on any closed time interval as the size of the approximation subintervals converges to zero. Consider a partition of the time interval whose mesh tends to zero, and define the piecewise-frozen coefficients and noise accordingly. Consider G(t, u, y, z) defined in (7), and additionally assume the following: (H3) for all (u, y, z) and all times, there exists a constant such that G satisfies Lipschitz estimates in its arguments, and for any radius R there exists a positive constant K bounding the corresponding increments; (H4) the increments of the noise process have second moments controlled by a continuous function of the time step that vanishes at zero.

Proposition 1

Under hypotheses (H1), (H2), (H3) and (H4), let the approximating process be defined as the unique solution of the frozen-coefficient equation, and let Y be the solution of (8). Then the approximating process converges to Y, in probability, as the mesh of the partition tends to zero.

Remark 3

Notice that an approximation system (DS) arises by defining the frozen mappings on each subinterval, so that the resulting solution is the corresponding solution of (DS). For the proof of Proposition 1, one estimates the difference between the approximating process and Y on each subinterval; taking expectations in this estimate and applying Grönwall's inequality yields the result. Thus, the proof is finished.
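
The consistency statement can be probed numerically. In the following Python sketch (a toy random ODE chosen for illustration; the coefficients are assumptions, not the paper's), the noise is frozen on a coarse grid, as in (DS), and the Monte Carlo estimate of the expected sup-distance to a fine-grid reference solution shrinks as the coarse mesh is refined:

```python
import math, random

def brownian_path(n, T, rng):
    """Standard Brownian motion sampled on a uniform grid of n+1 points."""
    h = T / n
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(h)))
    return w

def sup_gap(n_coarse, n_fine=2048, T=1.0, n_mc=100, seed=0):
    """Monte Carlo estimate of E[sup_t |Y_h(t) - Y(t)|] for the toy random
    ODE y' = (0.5 + 0.3*W(t)) * y: a scheme that freezes the noise on a
    coarse grid versus a fine-grid reference solution."""
    rng = random.Random(seed)
    ratio = n_fine // n_coarse
    h = T / n_fine
    acc = 0.0
    for _ in range(n_mc):
        w = brownian_path(n_fine, T, rng)
        y_ref = y_frozen = 1.0
        gap = 0.0
        for i in range(n_fine):
            y_ref += h * (0.5 + 0.3 * w[i]) * y_ref
            # frozen scheme: noise held at the last coarse grid point
            y_frozen += h * (0.5 + 0.3 * w[(i // ratio) * ratio]) * y_frozen
            gap = max(gap, abs(y_ref - y_frozen))
        acc += gap
    return acc / n_mc

g_coarse = sup_gap(8)   # coarse mesh with 8 subintervals
g_fine = sup_gap(64)    # refined mesh: the expected sup-gap shrinks
print(g_coarse, g_fine)
```

Refining the coarse grid by a factor of 8 visibly reduces the frozen-noise error, in line with the Grönwall-based estimate of the proof.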

Remark 4

Similarly to Remark 2, it is enough that the Lipschitz conditions in (H3) be satisfied locally, as long as the existence and uniqueness of a solution of the original equation is guaranteed for all times, since the approximation takes place on balls of finite radius. Typical stochastic noises that satisfy (H4) are the standard Brownian motion and the m-dimensional compensated Poisson process, built from an m-dimensional vector of independent and identically distributed Poisson processes with a given instantaneous rate and compensated by the rate times the m-vector of 1's. Both are square-integrable martingales whose associated increasing processes are proportional to t, so (H4) holds. Let us also notice that, thanks to Lemma 2, the probability function (5) is well-defined. Therefore, the feasible set of Problem (4) is given by a so-called probabilistic/robust (probust) constraint, which has been studied in recent articles (see, e.g., [9, 23, 27]). Such a representation and its subsequent study is quite complex: on the one hand, because of the possibly infinite number of inequalities; on the other hand, because it is not clear how the control parameter u and the random inflow behave variationally and geometrically in the resulting dynamics. In the following result, we show that the optimization problem (4) has a suitable representation as a joint chance-constrained optimization problem.
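
The martingale property invoked for (H4) can be checked by simulation. The sketch below (illustrative, with assumed rate and horizon) samples the compensated Poisson process M(t) = N(t) − λt and verifies empirically that its mean is near 0 and its variance near λt:

```python
import random, statistics

def compensated_poisson_at(t, lam, rng):
    """Sample M(t) = N(t) - lam*t, with N a rate-lam Poisson process
    simulated by summing exponential inter-arrival times."""
    n, s = 0, rng.expovariate(lam)
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    return n - lam * t

rng = random.Random(42)
t, lam = 2.0, 3.0
samples = [compensated_poisson_at(t, lam, rng) for _ in range(20000)]
mean = statistics.fmean(samples)
var = statistics.pvariance(samples)
print(mean, var)  # empirically close to 0 and to lam*t = 6
```

The variance growing linearly in t is precisely the "increasing process proportional to t" property that makes (H4) hold for this noise.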

Theorem 1

Consider the unique solution of System (6) with given initial condition and parameter u, and suppose that there exist bounds, one for each subinterval, under which the monotonicity conditions stated below hold. Then, the probability function (5) associated with the solution of (6) can be rewritten as (13), where the function in (14) is defined through the values of the dynamics at the partition points. For the proof, consider, for each index, the set of realizations satisfying the constraints up to that index, and let us show by induction that the two families of sets coincide. Since the right-hand side is strictly positive on the relevant region, the associated time-change map is well-defined and non-decreasing. Now consider the system of inequalities (15) on a given subinterval. On the one hand, if the growth rate is nonpositive, the solution is non-increasing there, and the system of inequalities (15) reduces to the constraint at the left endpoint, which holds by the inductive hypothesis; consequently, in this case membership in the two sets is equivalent. On the other hand, if the growth rate is positive, the solution is non-decreasing over the subinterval, so the system of inequalities (15) reduces to the constraint at the right endpoint. Applying the function G to this inequality and using separation of variables in (DS), the system (15) reduces to a single inequality at the partition point, which shows that in this case too membership in the two sets is equivalent, and that proves the equality of the sets. Finally, (13) follows from the equality of the corresponding events, which ends the proof.
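
The reduction from infinitely many inequalities to finitely many can be seen empirically in a monotone case. The following Python sketch (using the exponential model with nonnegative piecewise rates; all parameter values are assumptions) compares checking the safety bound at the knots only against a brute-force check on a fine grid inside every subinterval:

```python
import math, random

# Exponential model with nonnegative piecewise rates: trajectories are
# nondecreasing, so the safety bound Y(t) <= y_max holds for all t iff it
# holds at the knots t_k.  All numerical values are placeholders.
rng = random.Random(7)
y0, y_max, dt, n_knots = 1.0, 3.0, 0.5, 4

hits_knots = hits_full = 0
for _ in range(5000):
    rates = [rng.uniform(0.0, 1.0) for _ in range(n_knots)]
    y = y0
    ok_knots = ok_full = True
    for r in rates:
        for j in range(1, 50):          # interior points of the subinterval
            if y * math.exp(r * dt * j / 50) > y_max:
                ok_full = False
        y *= math.exp(r * dt)           # knot value Y(t_k)
        if y > y_max:
            ok_knots = ok_full = False
    hits_knots += ok_knots
    hits_full += ok_full
print(hits_knots, hits_full)  # identical counts: the two events coincide
```

Over 5000 sampled realizations the two counts agree exactly, as the theorem predicts for monotone-between-knots trajectories.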

Remark 5

It is important to notice that the above result shows that the computation of the probability function defined in (5), which is given by a system of infinitely many inequalities, reduces to a finite number of inequalities (a classical joint chance-constrained optimization problem). Therefore, the numerical computation of this probability only requires the estimation of the dynamics at the partition points instead of the complete trajectory. In the rest of this section we review classical growth models that fit the assumptions of Theorem 1. For simplicity of the explanation, we assume in the following examples that the control affects only the growth rate function.

Exponential Growth Model

One of the best-known models of population dynamics is the exponential growth model proposed by Robert Malthus. Assuming no limitation on population growth, the exponential model for a population reads as a linear equation whose rate parameter is a continuously differentiable function of our control and of an m-dimensional random vector. Assume that the growth rate function is nonnegative and bounded by some constant. The explicit solution of this model is a simple exponential function. Take any partition and consider the first subinterval: the solution is dominated there by the exponential envelope with maximal rate, and the bound holds for all times in the subinterval. On the second subinterval, since the solution starts from the previous endpoint, an analogous exponential bound holds. Iterating the argument over the next time intervals, we obtain the existence of the required family of upper bounds, and consequently the exponential model fits the hypotheses of Theorem 1. We remark that, in the exponential model, all bounds depend explicitly on the initial value (or an upper bound for that quantity) and on the size of the time subintervals.
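
The family of upper bounds claimed for the exponential model can be verified by simulation. In this Python sketch (all parameter values are placeholders), every realization with rates bounded by r̄ stays below the deterministic envelope B_k = y0·exp(r̄·(k+1)·Δ):

```python
import math, random

# Envelope check for the exponential model: with rates r_k in [0, r_bar],
# Y(t_k) = y0 * exp(sum_j r_j * dt) never exceeds B_k = y0*exp(r_bar*(k+1)*dt).
rng = random.Random(1)
y0, r_bar, dt, n_knots = 1.0, 0.8, 0.5, 6
envelope = [y0 * math.exp(r_bar * (k + 1) * dt) for k in range(n_knots)]

violations = 0
for _ in range(2000):
    rates = [rng.uniform(0.0, r_bar) for _ in range(n_knots)]
    y, ok = y0, True
    for r, bound in zip(rates, envelope):
        y *= math.exp(r * dt)
        if y > bound:
            ok = False
    violations += (not ok)
print(violations)  # 0: every realization stays below the envelope
```

The envelope is exactly the family of deterministic upper bounds required by Theorem 1, which is why no sampled trajectory ever crosses it.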

Logistic Model with Allee Effect

A more elaborate version of the logistic model describes situations in which the sparsity of individuals leads to reduced survival of the offspring. In biological terms, the Allee effect is related to the extinction of the population at critically low numbers of individuals. The new dynamics is given by an equation with an additional parameter quantifying the size of the Allee effect. If the initial condition lies above the Allee threshold, the respective solution is strictly increasing, going asymptotically to k. Similarly, if it lies below the threshold, the respective solution is strictly decreasing, going asymptotically to 0. In both cases Theorem 1 applies, by adapting the arguments presented for the exponential and logistic models.
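
The threshold behavior of the Allee dynamics can be reproduced with a short integration. The sketch below uses one standard parametrization of the Allee term, y' = r·y·(1 − y/k)·(y/θ − 1); the paper's exact form and all constants are assumptions:

```python
# Logistic dynamics with a (standard) Allee term:
#   y' = r * y * (1 - y/k) * (y/theta - 1),
# with theta the Allee threshold.  Parameter values are illustrative only.
r, k, theta = 1.0, 10.0, 2.0

def allee_trajectory(y0, T=20.0, n=20000):
    """Explicit Euler integration of the Allee dynamics from y0."""
    h, y, traj = T / n, y0, [y0]
    for _ in range(n):
        y = y + h * r * y * (1.0 - y / k) * (y / theta - 1.0)
        traj.append(y)
    return traj

up = allee_trajectory(3.0)    # y0 above the Allee threshold: growth toward k
down = allee_trajectory(1.0)  # y0 below the threshold: collapse toward 0
print(up[-1], down[-1])
```

The monotonicity on either side of the threshold is what allows the Theorem 1 argument to be adapted: on each subinterval the trajectory is monotone, so the safety bound only needs to be checked at the knots.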

Semi-probabilistic Optimization Problem

Thanks to Theorem 1, the probability function (5) can be computed using only a finite number of random inequalities. Consequently, our control problem (4) can be seen as a regular chance-constrained optimization problem. Nevertheless, the dependence of each function in (14) on the random parameter is not explicit. This makes the formulation problematic: it is not clear how to benefit from it, and therefore how to explore further properties of the probability function (5). For that reason, it is convenient to remove the random dependency inside the probability function, replacing the stochastic function with some deterministic version that depends only on the control parameter u, and at the same time to split the constraint into several constraints. Formally, we consider the optimization problem (19), where the admissible set is a nonempty closed set and the probability functions are given by (20). To fix ideas, the reader can think of the deterministic function obtained from (14) by replacing the dynamics with the average of the solutions defined in (6); another possible choice is the expected value of the stochastic function. Both can be computed numerically before the computation of each probability function. It is worth mentioning that the probability functions given in (20) can be seen as a (disjoint) semi-probabilistic version of (13), so (under suitable assumptions) the optimization problem (19) can be seen as a (disjoint) semi-probabilistic version of (5). In the rest of the section we assume that the random vector has a continuous density with respect to the Lebesgue measure. This will allow us to study variational and geometric properties of the optimization problem (19).

Topological and Geometric Properties of Semi-probabilistic Model

Proposition 2

Let an index be fixed, and suppose that the functions appearing in (20) are continuous and that the random vector possesses a density with respect to the m-dimensional Lebesgue measure. Then, the probability function defined in (20) is upper semicontinuous. Moreover, it is continuous at a point provided that the boundary set (21) has null measure (with respect to the m-dimensional Lebesgue measure). For the proof, consider a sequence converging to the point and define the sequence of indicator-type functions (22). Let us verify the upper-limit inequality (22). Indeed, fix a realization: if it does not satisfy the limiting inequality, then (22) holds trivially; otherwise, by continuity of the data functions at the limit point, the inequality must hold for large enough j, and hence (22) holds as an equality. Then, by Fatou's lemma and Theorem 1, we obtain the upper semicontinuity. Now, if the set in (21) has null measure, then using similar arguments we can show that (22) holds as an equality for almost every realization, and Lebesgue's dominated convergence theorem yields the continuity of the probability function at the point.

Corollary 2

Let C be a closed subset, and suppose that for each index the functions given in (20) are continuous and that the random vector possesses a density with respect to the m-dimensional Lebesgue measure. Then the feasible set is closed. Indeed, since the assumptions of Proposition 2 hold, each probability function is upper semicontinuous, which shows that the set is necessarily closed. Finally, let us give sufficient conditions ensuring the convexity of the feasible set of the optimization problem (19). We recall that a random vector has a quasiconcave density provided that its density with respect to the Lebesgue measure is quasiconcave.

Proposition 3

Let C be a closed and convex set, and suppose that the functions given in (20) are continuous. In addition, suppose that the lower-bound function is quasiconvex, the upper-bound function is quasiconcave, and that the random vector has a quasiconcave density with respect to the m-dimensional Lebesgue measure. Then, the set of controls satisfying the probability constraints is convex for every probability level. For the proof, consider the associated superlevel sets. By [19, Theorem 4.39], each probability function is quasiconcave, and consequently each of its superlevel sets is convex. Then, the feasible set is convex, being the intersection of convex sets.

Existence and Uniqueness of Solution

Now, we can provide conditions for the existence and uniqueness of the solution to the semi-probabilistic optimization problem (19). The first result of this subsection shows the existence of a solution in general, and the second one shows that, under convexity, it must be unique.

Theorem 2

Let C be a nonempty closed set, and suppose that for each index the functions given in (20) are continuous. In addition, assume that one of the following conditions holds:

(i) the objective function of optimization problem (19) is coercive, i.e., it tends to infinity as the norm of the control tends to infinity;

(ii) the set C is bounded;

(iii) there exists a level for which the corresponding sublevel set of the objective over the feasible set is nonempty and bounded.

Then, the optimization problem (19) has a solution provided that it is feasible. For the proof, it is easy to see that (iii) follows from either (i) or (ii). Moreover, by Corollary 2 and under (iii), the optimization problem (19) reduces to the minimization of a lower semicontinuous function over a compact set, which by classical arguments has at least one solution (see, e.g., [17, Theorem 1.9] for more details).

Corollary 3

Under the assumptions of Proposition 3, suppose that the objective function of optimization problem (19) is coercive and strictly convex on C. Then, optimization problem (19) is convex and has a unique solution provided that it is feasible. First, Theorem 2 shows that the optimization problem (19) has a solution. Moreover, under the assumptions of Proposition 3 the feasible set is convex, so the optimization problem is convex. Finally, the solution of this optimization problem is unique due to the fact that the objective is strictly convex over the feasible set (see, e.g., [17, Theorem 2.6]).

Differentiability of the Probability Functions

Now, we turn to the study of the differentiability of the probability functions (20). In this case, we establish differentiability results and formulae for the gradients of the probability functions using the so-called spherical radial decomposition described in [25]. It is worth mentioning that such a representation can also be used for computing the values of the probability functions; we refer to [11, 22, 24, 26-28] for more results along the same research line. It will be convenient to adopt the following notation: given a random vector, we denote its density with respect to the m-dimensional Lebesgue measure (when this exists). The following definition corresponds to a growth condition used to control the gradients of the data functions when computing the derivatives of probability functions.

Definition 1

(Growth condition) Consider a random vector and a function controlling its density as below. We say that the mapping satisfies the growth condition at a given point if the stated gradient bound holds for some constants.

Remark 6

(Remark on the growth condition) It is worth mentioning that, for the most commonly used densities, the choice of the function in the above definition simplifies. For instance, for the Gaussian distribution an exponential-type choice can be taken (see, e.g., [20]). More generally, for the class of elliptically symmetric distributions with a generator function (see [22, Definition 2.3] for its formal definition), the function can be chosen through any real-valued function satisfying the growth requirement of [22, Definition 4.14]. Now, given a nonsingular matrix L and a reference point, we define the density-like function (23), where the integration runs over the m-dimensional unit sphere, the determinant of L appears as a normalization, and the Gamma function enters the constant. To rewrite the probability function given in (5), let us consider the uniform probability measure on the sphere, defined on the Borel σ-algebra over the sphere. The following results give criteria for the differentiability of the probability functions defined in (20). The first one uses the geometric assumption that the data function is convex with respect to the second argument; similarly, the second result shows differentiability under the assumption of concavity in the second variable instead of convexity. Both results follow from the gradient formulae found in [25] (see also [20, 21, 23, 27, 28]).
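
The spherical radial decomposition behind these gradient formulae also yields a probability estimator. The Python sketch below (a Deák-type estimator in dimension m = 2, where the chi CDF is closed-form; the example constraint is an assumption, not the paper's) estimates a bivariate Gaussian rectangle probability by sampling directions on the unit circle:

```python
import math, random
from statistics import NormalDist

def spherical_radial(a, n_dirs=100000, seed=3):
    """Deak-type spherical radial estimate of P(xi_1 <= a, xi_2 <= a) for a
    standard bivariate Gaussian xi, using the closed-form chi_2 CDF
    F(r) = 1 - exp(-r^2 / 2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_dirs):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        v = (math.cos(phi), math.sin(phi))
        # distance from the origin to the boundary of {x : x_i <= a} along v
        rs = [a / vi for vi in v if vi > 0.0]
        if rs:
            r = min(rs)
            acc += 1.0 - math.exp(-0.5 * r * r)  # chi_2 CDF at r
        else:
            acc += 1.0  # the whole ray stays inside the feasible set
    return acc / n_dirs

a = 1.0
est = spherical_radial(a)
exact = NormalDist().cdf(a) ** 2  # independence gives the reference value
print(est, exact)
```

Averaging the radial chi mass along random directions is unbiased here because the constraint set is star-shaped around the origin, which is the same structural fact the gradient formulae of [25] exploit.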

Theorem 3

Let an index be fixed, and suppose that the function given in (20) is continuously differentiable and convex with respect to the second variable. In addition, assume that at our point of interest the following assumptions hold: there exists a neighborhood on which the function satisfies the growth condition of Definition 1 for some admissible choice of the controlling function, and the deterministic data function given in (20) is continuously differentiable there. Then, the probability function defined in (20) is continuously differentiable at that point. Furthermore, given any nonsingular matrix L, the gradient formula holds for all u in a suitable neighborhood, where the density-like function is defined in (23) and the radial function measures the distance to the boundary of the constraint set along each direction. For the proof, we notice that the probability functions defined in (20) can be rewritten as a spherical radial integral; then, applying [25, Corollary 2], we get the desired conclusion of the result.

Remark 7

It is important to mention that in many applications the choice of the reference point is given precisely by the mean. In particular, in the case of a Gaussian distribution, the violation of the corresponding inequality implies that the probability of the event is less than 1/2 (see [20, Proposition 3.11]), which for many applications means that the point is not feasible for a chance-constrained problem, where the requirement is always a probability level p close to 1, for instance p greater than 0.9. Furthermore, the nonsingular matrix L and the reference point have been used for simplifications and numerical computations of the aforementioned formulae. In particular, in the Gaussian case, the reference point is the mean and the nonsingular matrix L corresponds to the Cholesky decomposition of the covariance matrix. We refer to [11, 20-22] and the references therein for more precise results.

Theorem 4

In the setting of Theorem 3, let us suppose that the function is concave with respect to the second variable instead of convex. Then, the probability function defined in (20) is continuously differentiable at the point of interest. Furthermore, given any nonsingular matrix L, the corresponding gradient formula (26) holds for all u in a suitable neighborhood, where the density-like function is defined in (23) and the radial function is adapted to the concave case. For the proof, let us notice that the probability function can be written as the complement of a probability function of the convex type. Then, we can apply [25, Corollary 2] to that complementary probability function, and consequently we obtain the corresponding differentiability result for the original probability function together with the formula (26) (see also [25, Corollary 5]).

Optimality Conditions

In this section, we study optimality conditions for the control problem (19). The first result corresponds to necessary conditions for optimality, and the second one, under suitable convexity assumptions, corresponds to a sufficient condition for optimality. Given a feasible point, we consider the qualification condition (27), where conv A denotes the convex hull of a set A.

Theorem 5

Let us suppose that the point is a local solution of the optimization problem (19). Assume that the objective function and the probability functions are continuously differentiable there (for instance, under the assumptions of Theorem 3 or 4). Then, provided that the qualification condition (27) holds at the point, the following optimality condition is satisfied: there exist multipliers, one for each active probability constraint, such that the stationarity relation (28) holds. For the proof, let us notice that the optimization problem (19) can be written in standard nonlinear programming form. Therefore, by [16, Theorem 5.21], there are multipliers, not all zero, satisfying the sign and complementarity conditions and equation (29). Now, if the multiplier of the objective were zero, (29) would contradict (27); hence it is positive. Dividing (29) by this multiplier and redefining the remaining multipliers, we get that (28) holds. The final result of this section shows that the condition (28) is also sufficient for a point to be a minimum of (19) under suitable geometric assumptions.

Theorem 6

Under the assumptions of Theorem 5, let us suppose that the constraint sets and C are convex, the functions given in (5) are quasiconvex and quasiconcave, respectively, and that the random vector has a log-concave density with respect to the m-dimensional Lebesgue measure. Furthermore, assume the requirement on all the prescribed probability levels. Then, the fulfillment of (28) by a feasible point of problem (19) implies that this point is a global minimum of (19).

Let us consider the associated auxiliary functions. By [19, Theorem 4.39], each of them is convex over the feasible set of problem (19). Due to (28), there are multipliers such that the stationarity relation holds for all indices. Now, let us consider a feasible point u of problem (19). By convexity of the auxiliary functions, and by convexity of the constraint sets and C, the corresponding inequalities hold. Consequently, combining these inequalities shows the optimality of the given point.

Numerical Example

In this last section we approximate an optimal control in a particular population model. To that aim, we use a stochastic process driven by W(t), a one-dimensional standard Brownian motion. For the population dynamics, we consider the logistic model with fixed k and the following family of growth rate functions, with a minimal positive growth value and a maximal allowed growth rate. We take a and b positive constants and an increasing function. For simplicity of the calculations, we assume that the initial condition values are known exactly. To avoid confusion, we use distinct notation for a given discretization of the stochastic process.
Fig. 1

Numerical approximation of the logistic growth model without control (upper panel) and fully controlled (lower panel). The dashed bold line shows the empirical mean over simulations. The carrying capacity k is the maximal biomass supported by the environment, and the upper bound safety level, or limit biomass, is used for defining the probability function. The growth factor function, its corresponding regular cut-off function, and the diffusion parameter are as specified in the text.

In Fig. 1 we show the numerical approximation for the system with no control and with full control. In all simulations performed in the present section we use the same discretization, independently of the values of u and z. Regarding the costs and possible control functions, we consider a time-dependent expression in which we vary the values of c to capture the effect on the optimal control when future costs are more or less important than current ones. In this setting, we look for a solution of the chance-constrained control problem (4) with the prescribed probability level. Moreover, for numerical purposes, we first consider a finite set of control functions. In Table 1, we show the results for c = 1/5 and c = -1/5 (see the caption of Fig. 1 for parameter values), approximating the probability by the empirical rate positive/total. In this case, the total number of possible control functions is 16, so we show all the costs. We see that the uncontrolled system has a small probability of keeping the process below the limit biomass. When c = 1/5, activating a coordinate at later times contributes more to the cost than at initial times; in this scenario, we find that the optimal control is (1, 1, 1, 0). Naturally, if c = -1/5, the previous argument flips and the optimal control is now (0, 1, 1, 1).
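The Monte Carlo approximation of the probability function (the empirical rate positive/total) can be sketched as follows. This is a hypothetical stand-in: the SDE coefficients, parameter values, and cost below are illustrative placeholders, not the paper's exact specification of the controlled logistic dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for a controlled logistic SDE (assumed form):
#   dX = r(u) * X * (1 - X/k) dt + sigma * X dW
k, sigma, x0, x_max = 1.0, 0.2, 0.1, 0.9   # assumed parameter values
T, n_steps, n_paths = 4.0, 400, 5_000
dt = T / n_steps

def growth_rate(u_i, r_min=0.5, r_max=2.0):
    """Control u_i in [0,1] interpolates between max and min growth."""
    return r_max - u_i * (r_max - r_min)

def estimate_phi_psi(u, c=0.2):
    """Empirical probability that X stays below x_max, plus a toy cost."""
    # piecewise-constant control: one entry of u per quarter of [0, T]
    u_path = np.repeat(u, n_steps // len(u))
    X = np.full(n_paths, x0)
    ok = np.ones(n_paths, dtype=bool)
    for i in range(n_steps):                 # Euler-Maruyama scheme
        r = growth_rate(u_path[i])
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        X = X + r * X * (1 - X / k) * dt + sigma * X * dW
        ok &= X <= x_max                     # track paths still below bound
    phi = ok.mean()                          # empirical rate positive/total
    psi = sum(np.exp(c * j) * u_j for j, u_j in enumerate(u))
    return phi, psi

phi_on, _ = estimate_phi_psi([1, 1, 1, 1])   # fully controlled
phi_off, _ = estimate_phi_psi([0, 0, 0, 0])  # uncontrolled
print(f"phi(full control) = {phi_on:.3f}, phi(no control) = {phi_off:.3f}")
```

As in the paper's simulations, stronger control slows growth and keeps the process below the safety level with higher empirical probability.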
Table 1

Probability and cost function values obtained for the logistic model in the discrete set of control scenarios

Control u^N   Increasing (in time) control, c = 1/5     Decreasing (in time) control, c = -1/5
              Probability φ      Cost ψ                 Probability φ      Cost ψ
(1 1 1 1)     0.9748             36.1720                0.9753             0.8507
(1 1 1 0)     0.7754             16.0864                0.7754             0.8009
(1 1 0 1)     0.7818             26.6842                0.7821             0.7453
(1 1 0 0)     0.4563              6.5987                0.4567             0.6955
(1 0 1 1)     0.8083             31.6903                0.8074             0.6276
(1 0 1 0)     0.4485             11.6047                0.4462             0.5778
(1 0 0 1)     0.4478             22.2025                0.4481             0.5222
(1 0 0 0)     0.2924              2.1170                0.2925             0.4724
(0 1 1 1)     0.8383             34.0551                0.8383             0.3783
(0 1 1 0)     0.4402             13.9694                0.4399             0.3285
(0 1 0 1)     0.4345             24.5672                0.4321             0.2729
(0 1 0 0)     0.2612              4.4817                0.2623             0.2231
(0 0 1 1)     0.4256             29.5733                0.4252             0.1552
(0 0 1 0)     0.2436              9.4877                0.2441             0.1054
(0 0 0 1)     0.2427             20.0855                0.2429             0.0498
(0 0 0 0)     0.1655              0.0000                0.1651             0.0000

Parameters are shown in Fig. 1. The lower bound safety level p is 0.75. In the case c = 1/5, the best result is obtained for the control (1, 1, 1, 0), meaning that the system reduces the growth rate at times close to the initial time, which is in concordance with the fact that, as time increases, the weight of the control in the cost function also increases. In the opposite case c = -1/5, the best result is obtained for the control (0, 1, 1, 1), meaning that the system reduces the growth rate at times away from the initial time.
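The selection rule behind Table 1 (the cheapest control among those meeting the probability level p = 0.75) can be replayed directly from the tabulated values:

```python
# Values transcribed from Table 1: (phi, psi) under c = 1/5, then c = -1/5.
table = {
    (1, 1, 1, 1): (0.9748, 36.1720, 0.9753, 0.8507),
    (1, 1, 1, 0): (0.7754, 16.0864, 0.7754, 0.8009),
    (1, 1, 0, 1): (0.7818, 26.6842, 0.7821, 0.7453),
    (1, 1, 0, 0): (0.4563,  6.5987, 0.4567, 0.6955),
    (1, 0, 1, 1): (0.8083, 31.6903, 0.8074, 0.6276),
    (1, 0, 1, 0): (0.4485, 11.6047, 0.4462, 0.5778),
    (1, 0, 0, 1): (0.4478, 22.2025, 0.4481, 0.5222),
    (1, 0, 0, 0): (0.2924,  2.1170, 0.2925, 0.4724),
    (0, 1, 1, 1): (0.8383, 34.0551, 0.8383, 0.3783),
    (0, 1, 1, 0): (0.4402, 13.9694, 0.4399, 0.3285),
    (0, 1, 0, 1): (0.4345, 24.5672, 0.4321, 0.2729),
    (0, 1, 0, 0): (0.2612,  4.4817, 0.2623, 0.2231),
    (0, 0, 1, 1): (0.4256, 29.5733, 0.4252, 0.1552),
    (0, 0, 1, 0): (0.2436,  9.4877, 0.2441, 0.1054),
    (0, 0, 0, 1): (0.2427, 20.0855, 0.2429, 0.0498),
    (0, 0, 0, 0): (0.1655,  0.0000, 0.1651, 0.0000),
}

p = 0.75  # required probability level

def best_control(phi_idx, psi_idx):
    """Cheapest control whose empirical probability meets the level p."""
    feasible = {u: v for u, v in table.items() if v[phi_idx] >= p}
    return min(feasible, key=lambda u: feasible[u][psi_idx])

print(best_control(0, 1))  # c =  1/5: prints (1, 1, 1, 0)
print(best_control(2, 3))  # c = -1/5: prints (0, 1, 1, 1)
```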

In a second stage, we consider the whole hypercube as the set of possible control functions. We performed a simple gradient search with adaptive step increment, starting from different controls: an increasing initial control in one cost regime and a decreasing one in the other, each converging to a local minimum. We see that in both cases the result outperforms the discrete scenario.
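A gradient search of this kind can be sketched as follows. This is a minimal, hypothetical version: the objective below is a smooth stand-in (a cost plus a penalty on a toy surrogate probability), since the paper works with the true probability function φ and cost ψ; the finite-difference gradient reflects the fact that φ has no closed form in practice.

```python
import numpy as np

def project_box(u):
    """Projection onto the hypercube [0, 1]^N of admissible controls."""
    return np.clip(u, 0.0, 1.0)

def objective(u, p=0.75, rho=50.0):
    """Stand-in objective: time-dependent control cost plus a quadratic
    penalty when a toy surrogate probability falls below level p."""
    c = np.exp(0.2 * np.arange(len(u)))       # increasing cost weights
    psi = c @ u
    phi = 0.15 + 0.85 * np.mean(u)            # toy monotone surrogate for phi
    return psi + rho * max(0.0, p - phi) ** 2

def fd_gradient(f, u, h=1e-6):
    """Central finite-difference gradient."""
    g = np.zeros_like(u)
    for i in range(len(u)):
        e = np.zeros_like(u); e[i] = h
        g[i] = (f(u + e) - f(u - e)) / (2.0 * h)
    return g

def gradient_search(u, step=0.5, shrink=0.5, iters=200):
    """Projected gradient descent with adaptive (backtracking) step size."""
    for _ in range(iters):
        g = fd_gradient(objective, u)
        s, u_new = step, project_box(u - step * g)
        while objective(u_new) > objective(u) and s > 1e-10:
            s *= shrink                        # adapt the step increment
            u_new = project_box(u - s * g)
        if objective(u_new) >= objective(u):
            break                              # no descent step found
        u = u_new
    return u

u0 = np.array([0.25, 0.5, 0.75, 1.0])          # increasing initial control
u_opt = gradient_search(u0)
print(u_opt, objective(u_opt))
```

Projecting onto the box after each step keeps the iterate admissible, and backtracking plays the role of the adaptive step increment mentioned above.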
