
Noise Response Data Reveal Novel Controllability Gramian for Nonlinear Network Dynamics.

Kenji Kashima

Abstract

Control of nonlinear large-scale dynamical networks, e.g., collective behavior of agents interacting via a scale-free connection topology, is a central problem in many scientific and engineering fields. For the linear version of this problem, the so-called controllability Gramian has played an important role to quantify how effectively the dynamical states are reachable by a suitable driving input. In this paper, we first extend the notion of the controllability Gramian to nonlinear dynamics in terms of the Gibbs distribution. Next, we show that, when the networks are open to environmental noise, the newly defined Gramian is equal to the covariance matrix associated with randomly excited, but uncontrolled, dynamical state trajectories. This fact theoretically justifies a simple Monte Carlo simulation that can extract effectively controllable subdynamics in nonlinear complex networks. In addition, the result provides a novel insight into the relationship between controllability and statistical mechanics.


Year:  2016        PMID: 27264780      PMCID: PMC4893695          DOI: 10.1038/srep27300

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.379


Control, i.e., external forcing aimed at achieving desirable dynamical trajectories, of nonlinear large-scale dynamical networks is of major interest in many research fields, such as gene regulatory networks, infection spread, human brains, financial markets, and smart grids, to list just a few [1-4]. To investigate how difficult such networks are to control, controllability, originally defined in control theory [5,6], has attracted much attention, mainly in physics research [1,2,7-15]. Among these tools, Kalman's controllability matrix has played an important role in determining whether every dynamical state of a linear system is reachable [1]. Beyond this controllability determination, the so-called controllability Gramian, which is defined only for linear systems, provides much of the quantitative information concerning this problem. For example, every dynamical state is reachable if and only if the Gramian is nonsingular. Moreover, the minimum control energy required to drive the current state to a target one is represented as a quadratic form associated with the inverse of the Gramian, which is utilized to analyze the effect of connection topology [7]. In this context, the condition number of the Gramian is a meaningful index for characterizing the nonlocality of linear complex networks [8]. The controllability of complex networks with nonlinear dynamics is also being actively investigated [16,17]. The Lie bracket gives a natural extension of Kalman's controllability matrix rank condition for the controllability determination [16]. However, an analogous controllability Gramian for nonlinear dynamics has not yet been developed, even in control theory [18-23], although the controllability Gramian of a linearized system is useful in some applications. One of only a few existing approaches is the empirical Gramian [24], which appears in simulation-based model reduction methods [25,26] mainly developed in computational physics and numerical analysis.
The empirical Gramian is constructed using simulation data, in stark contrast to the controllability Gramian. Furthermore, it has been widely applied to nonlinear large-scale systems [24,27]. However, although it is equal to the controllability Gramian when the dynamics are linear, there is no theoretical underpinning for such an application to nonlinear cases. The goal of this paper is to introduce a novel matrix measure for the controllability quantification of nonlinear network dynamics, to reveal its specific feature under stochastic noise, and to provide a simulation-based method for dynamical network reduction, together with its theoretical justification. To this end, we first extend the notion of the controllability Gramian to nonlinear systems from a statistical-mechanics viewpoint, and show its validity through an application to controllability quantification. Then we show that, when the network is open to environmental noise, the newly proposed Gramian is equal to the covariance matrix of the uncontrolled dynamics. This equality brings new insights into the relationship between controllability, simulation data, and stochasticity. This work is largely inspired by the path integral approach proposed by Kappen [28]. Although this concept is not directly used as a numerical procedure to solve the optimal control problem below, it is a key building block for proving the main result.

Results

Controllability function and Gramian.

Consider the nonlinear controlled dynamics

dx(t)/dt = f(x(t)) + g(x(t)) u(t),  x(0) = x0,    (1)

where t represents time, x(t) = [x1(t), …, xn(t)] and u(t) are the state and input variables, and the smooth functions f and g describe the autonomous dynamics and the input effect, respectively. In (1), the initial state x0 is fixed, which affects both controllability determination and quantification. It can be chosen arbitrarily, although the initial state is typically fixed to a stable equilibrium in conventional controllability quantification results for nonlinear systems [22]. Moreover, all the results in this paper hold for any probabilistic initial state (i.e., x0 a random variable) and for multi-input cases, as far as x0 is independent of the input noise below. For a final time τ > 0, the minimum control effort

L(xf) := inf { (1/2) ∫0τ ‖u(t)‖² dt : x(τ) = xf }

to achieve x(τ) = xf is referred to as the controllability function. When the dynamics are linear, i.e., f(x) = Ax and g(x) = B with constant matrices A, B of compatible dimensions, the matrix

G := ∫0τ e^{At} B B⊤ e^{A⊤t} dt

is called the controllability Gramian. It is well known that L(xf) = (1/2) xf⊤ G⁻¹ xf provided G is nonsingular when x0 = 0 [6,8,18]. However, this definition of G cannot straightforwardly be extended to nonlinear systems. Here, the controllability function and Gramian were introduced independently, and then the simple quadratic relation above was shown. By changing our way of thinking, let us define G(L), which we call the Gibbs Gramian, in terms of the Gibbs distribution associated with the given controllability function:

G(L) := ∫ x x⊤ e^{−L(x)} dx / ∫ e^{−L(x)} dx.    (2)

When L(x) = (1/2) x⊤ G⁻¹ x, the Gaussian integral formula shows G(L) = G. Therefore, this definition is consistent with the conventional one for linear dynamics. It should be noted that the definition of the controllability function does not assume linearity. Thus, this definition can readily be employed also for nonlinear cases. Another important feature is that we do not need to care about the reachability of each state.
Even if some xf are not reachable by any finite-energy input (e.g., linear dynamics for which G is singular), L(xf) = ∞ causes no problem in (2) because it simply leads to e^{−L(xf)} = 0. This means we can handle network dynamics that evolve on a specific domain due to the dynamics' structure or physical constraints. For linear systems with the initial state at the origin, the eigenstructure of the controllability Gramian G is useful for identifying directions in the state space that require small control energy to be reached. The Gibbs Gramian enjoys a similar property. Specifically, principal component analysis of G(L) reveals all effectively reachable directions. For instance, with x0 = 0 we observe for the linear case that the principal eigenvector of G(L) = G minimizes L(xf)/‖xf‖², that is, the control effort divided by the squared distance takes its minimum value when the final state lies on the principal eigenvector. Another interpretation is that, of every possible direction, with a fixed control energy the state can be driven the furthest from the origin by driving it to a destination state that lies along the principal eigenvector. The interpretation for the nonlinear case is similar. Note that, for any unit vector e, a large value of (e⊤xf)² e^{−L(xf)} implies that a small-energy input can be used (i.e., small L(xf), and consequently large e^{−L(xf)}) to place the state far from the origin along the direction of e (i.e., large (e⊤xf)²) at the final time. Its spatial integral over all final states satisfies the following theorem, which readily follows from the equality ∫ (e⊤x)² e^{−L(x)} dx = e⊤ (∫ x x⊤ e^{−L(x)} dx) e: the integral is maximized when e is the principal eigenvector of G(L). In this sense, the principal eigenvector of the Gibbs Gramian captures the direction along which the states can be reached furthest from the origin by a control effort that is small on average. In addition, it is trivial to change the reference point. For example, one can modify the definition by replacing x x⊤ with (x − x0)(x − x0)⊤ in (2) to evaluate the travelling distance instead of the distance from the origin.
Similarly, the secondary, and further, eigenvectors enable us to characterize an effectively reachable subspace. See also the subsection entitled Dimensionality reduction of nonlinear network dynamics below for another interpretation in terms of the minimal projection error. The conclusion is that the Gibbs Gramian G(L) introduced in (2) is a proper extension of the conventional controllability Gramian G to nonlinear dynamics.
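The consistency G(L) = G for linear dynamics can be checked numerically. A minimal Python sketch, assuming an arbitrary illustrative Gramian G (not from the paper): since e^{−L} with L(x) = (1/2) x⊤G⁻¹x is the Gaussian density N(0, G), sampling it and taking the second moment in (2) should recover G, and the principal eigenvector should minimize L(x)/‖x‖².

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2x2 controllability Gramian (an assumption for this sketch)
G = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# For linear dynamics L(x) = 0.5 * x^T G^{-1} x, so the Gibbs distribution
# exp(-L(x))/Z is the Gaussian N(0, G); sample it directly.
samples = rng.multivariate_normal(np.zeros(2), G, size=200_000)

# Gibbs Gramian (2): second moment under the Gibbs distribution
G_L = samples.T @ samples / len(samples)

# Principal eigenvector = cheapest direction per unit squared distance
eigvals, eigvecs = np.linalg.eigh(G_L)      # ascending eigenvalues
e_max, e_min = eigvecs[:, -1], eigvecs[:, 0]
L = lambda x: 0.5 * x @ np.linalg.solve(G, x)

print(np.allclose(G_L, G, atol=0.05))       # G(L) recovers G
print(L(e_max) < L(e_min))                  # principal direction is cheaper
```

Both printed checks pass for this sample size, illustrating that the Gaussian-integral argument behind G(L) = G is also visible empirically.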

Stochasticity connects Gibbs Gramian and simulation data

For linear dynamics, G is given as the solution to a linear matrix equation. For nonlinear dynamics, on the other hand, the controllability function L is given as the solution to a nonlinear partial differential equation [22]. Therefore, it is not realistic to compute L, and consequently G(L), even for small-scale cases. However, the situation changes drastically when the input is disturbed by random noise:

dx(t)/dt = f(x(t)) + g(x(t)) (u(t) + √T ξ(t)),  x(0) = x0,    (3)

where T > 0 is a noise level or temperature, ξ(t) is white noise such that 〈ξ(t)ξ(s)〉 = δ(t − s), and the expectation 〈·〉 is taken over noise samples [29,30]. There are several theoretical results concerning the controllability determination of stochastic systems (e.g., approximate controllability [31,32]). In this paper, we define a stochastic controllability function LT for the controllability quantification as

LT(xf) := inf 〈 (1/(2T)) ∫0τ ‖u(t)‖² dt + Φ(x(τ)) 〉,    (4)

where the infimum is taken over all feedback control laws and we define Φ such that

e^{−Φ(x)} = δ(x − xf)    (5)

with the Dirac delta function δ. Then, the terminal cost Φ(x(τ)) is an alternative representation of the terminal boundary constraint x(τ) = xf, since Φ(x) = ∞ when x ≠ xf. Therefore, LT(xf) can be viewed as the minimum expected value of the control effort to regulate x(τ) to xf; see Fig. 1a. Similarly to the deterministic case, we do not require the boundedness of LT. It should be emphasized that the resulting Gibbs distribution e^{−LT} is not identical to e^{−L}, and depends on T. Next, we refer to the uncontrolled (u(t) = 0), but randomly excited dynamics

dx(t)/dt = f(x(t)) + √T g(x(t)) ξ(t),  x(0) = x0,    (6)

as the noise response,
Figure 1

Typical behavior of controlled and uncontrolled dynamics open to environmental noise.

(a) Sample paths of (3) controlled by a fixed feedback law that regulates x(τ) to xf. The corresponding control effort is measured by the average of (1/(2T)) ∫0τ ‖u(t)‖² dt over these sample paths. Then, the stochastic controllability function LT(xf) is the minimum of these average values over all such control laws. (b) Sample paths of the noise response in (6).

whose sample path is shown in Fig. 1b. The key finding in this paper is the following theorem, the proof of which is in the Methods section: the probability density function of x(τ) in (6) is given by e^{−LT(x)}, that is, the noise response obeys the Gibbs distribution associated with the stochastic controllability function LT. This result means that the noise response data completely characterize the minimum required input energy for each target state xf. An intuitive reason why this nontrivial relation holds is that the noise in (6) is added through the input channel. This type of noise is known to have the ability to search for the solution to a wide class of optimal control problems [28]. Through this connection, the noise response data inherently contain information about the control energy minimization problem, which bridges the gap to the controllability function defined via the minimum-energy control input. Note that the evaluation of LT over the whole state space based on density function estimation of x(τ) is still computationally intractable. However, in this paper we focus on the Gramian induced by the stochastic controllability function, which is given by the spatial integral in (2) and is much easier to determine than the pointwise evaluation of LT. Indeed, the theorem above yields the following equality for the stochastic Gibbs Gramian G(LT):

G(LT) = 〈 x(τ) x(τ)⊤ 〉,    (7)

where x(τ) is the noise response in (6). This equality tells us that the stochastic Gibbs Gramian can easily be calculated via Monte Carlo sampling of the uncontrolled dynamics open to environmental noise. Furthermore, both this computation and the principal component analysis of G(LT) are efficiently implementable, because various algorithms that achieve computational scalability exist both for Monte Carlo sampling (e.g., importance sampling) and for matrix eigenvalue analysis.
Thus, the novel equality (7) established in this paper leads to the first numerically tractable procedure for finding an effectively reachable subspace of large-scale nonlinear dynamics open to environmental noise; it is particularly useful for network dynamics for which only simulation algorithms, or time-series data collected in a noisy environment, are available.
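Equality (7) suggests a direct Monte Carlo procedure: simulate the uncontrolled noise response by Euler-Maruyama and take the sample covariance at the final time. A sketch with a hypothetical 2-state nonlinear system, where f, g, T, and the horizon are illustrative assumptions, not the paper's example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-state nonlinear system for checking equality (7)
def f(X):                                   # drift, vectorized over paths
    return np.stack([-X[:, 0] + X[:, 1],
                     -X[:, 1] - X[:, 0] ** 3], axis=1)

g = np.array([0.0, 1.0])                    # noise enters via the input channel
T_noise, tau, dt, n_paths = 0.1, 5.0, 2e-3, 2000
X = np.zeros((n_paths, 2))                  # x0 = 0 for every sample path

# Euler-Maruyama simulation of the noise response (6): u(t) = 0 throughout
for _ in range(int(tau / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    X += f(X) * dt + np.sqrt(T_noise) * np.outer(dW, g)

# Equality (7): stochastic Gibbs Gramian = covariance of x(tau)
G_LT = X.T @ X / n_paths
eigvals, eigvecs = np.linalg.eigh(G_LT)
print(eigvecs[:, -1])   # principal eigenvector: most cheaply reachable direction
```

Here the noise enters the second state directly, so the second diagonal entry of G(LT) dominates and the principal eigenvector leans toward that coordinate, matching the intuition that states along the input channel are cheapest to reach.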

Dimensionality reduction of nonlinear network dynamics

The controllability quantification enables us to characterize subspaces that require a large control energy to be reached. By eliminating such subspaces, we obtain a reduced-order model that is expected to approximate the state trajectories well as long as the input energy is not large. Indeed, the dimensionality reduction of (mainly linear [18,33]) dynamical systems, which helps in understanding hidden core mechanisms and in performing efficient numerical simulation, is an important application of the controllability quantification. In this section, we investigate two conceptually different nonlinear model reduction methods in light of the Gibbs Gramian. Let an integer k (< n) be given, and consider a rank-k orthogonal projection Π such that

Π x(t) ≈ x(t).    (8)

Then, the reduced state z(t) := Π x(t) approximately recovers the original one via x(t) ≈ z(t). Hence, we expect the Galerkin projection given as

dz(t)/dt = Π f(z(t)) + Π g(z(t)) u(t)

to be a good reduced-order model of dynamics (1). In what follows, we focus on the problem of finding such a Π. In computational physics, the Proper Orthogonal Decomposition (POD), or Karhunen-Loeve method, has a long history of intensive research [25,26]. It is a simulation-based model reduction method, widely used for the simulation of nonlinear large-scale dynamical systems as found in computational fluid dynamics and aerospace engineering. Suppose we replace the requirement (8) by the minimization of the projection errors x(τj) − Π x(τj), where the error is evaluated at multiple given time instances τ1, τ2, … (this optimization criterion is equivalent to minimizing the maximal singular value of (I − Π)[x(τ1), x(τ2), …]). This is the fundamental idea of the POD, and is referred to as the method of snapshots. If we need to approximate only the autonomous system dx(t)/dt = f(x(t)), this optimization is computationally tractable even for nonlinear large-scale dynamics, and the resulting Galerkin projection yields a satisfactory reduced model.
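The method of snapshots amounts to a truncated singular value decomposition of the snapshot matrix. A minimal sketch with placeholder snapshot data; any real application would use simulated states of the autonomous system:

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder snapshot matrix [x(tau_1), x(tau_2), ...]; in practice these
# columns are simulated states of the autonomous system.
n, k, m = 6, 2, 40
snapshots = rng.normal(size=(n, n)) @ rng.normal(size=(n, m))

# POD / method of snapshots: leading left singular vectors give the modes
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :k]                  # dominant POD modes
Pi = V @ V.T                  # rank-k orthogonal projection

# The worst-case snapshot error equals the (k+1)-th singular value
err = np.linalg.norm((np.eye(n) - Pi) @ snapshots, 2)
print(np.isclose(err, s[k]))  # True: optimality of the POD projection
```

The final check is the Eckart-Young property: no rank-k projection can achieve a smaller maximal singular value of (I − Π)[x(τ1), x(τ2), …] than the truncated SVD.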
However, when the controlled dynamics (1) are of interest, we need to determine which input signal u(t) should be injected when collecting snapshots, because we cannot simulate the trajectories corresponding to all possible input signals. Many practically useful techniques, as well as theoretical analysis tools, have been developed for this purpose; see [34] and the references therein. On the other hand, from a controllability quantification viewpoint, it is also reasonable to find Π such that Π x ≈ x whenever x is reachable with a small-energy input u(t). For this purpose, the Gramian-based model reduction for linear systems employs a Π that maximizes Trace(Π G Π). The Galerkin projection associated with this choice extracts effectively reachable subdynamics, in that the resulting projection eliminates a subspace on which L is large. However, as mentioned at the beginning of the previous section, for nonlinear dynamics it is unrealistic to compute the controllability function L, which is no longer a quadratic form. This is the main reason why there have been no practical methods for the control-theoretic model reduction of general nonlinear large-scale systems [20,21,22,34]. This limited applicability stands in clear contrast to the POD. There are many results that attempt to solve optimal control problems by the POD [35,36,37]. However, the relation between the simulation-based and Gramian-based model reductions has not yet been fully understood. The remainder of this section is devoted to forming a theoretical bridge between these two model reduction approaches, which were developed independently for similar purposes. Concerning the input selection for the POD, the impulse signals of the empirical Gramian [24], or the sinusoidal signals of the frequency-domain POD [21,25], may be suitable for linear systems. Indeed, the POD with these input signals is equivalent to the Gramian-based model reduction for linear systems [34, Chapter 5], [18, Section 9.1].
However, although it is technically easy to inject the same inputs into nonlinear systems, there is no solid justification for their use. An interesting solution is to choose white noise ξ(t) as the input signal and minimize the ensemble average of the squared projection error of the snapshots, that is, 〈‖(I − Π)x(τ)‖²〉. Note that (7) leads to

〈‖(I − Π)x(τ)‖²〉 = ∫ ‖(I − Π)x‖² e^{−LT(x)} dx / ∫ e^{−LT(x)} dx.    (9)

Therefore, the approximation error of the noise response data is equal to the projection error weighted by the Gibbs distribution associated with the stochastic controllability function LT. Consequently, the POD evaluates the projection errors on the trajectories that are realizable by a small control effort, without computing any minimum-energy input. In this sense, the POD with noise response data can be regarded as an easily implementable nonlinear model reduction method that explicitly takes the controllability into account. Note that (9) is equal to Trace(G(LT)) − Trace(Π G(LT) Π). Therefore, the stochastic Gibbs Gramian-based reduction (the maximization of Trace(Π G(LT) Π)) is equivalent to the best approximation of effectively reachable states (the minimization of the right-hand side of (9)). This is another justification for the conclusion that the Gibbs Gramian is a proper extension of the conventional controllability Gramian. Furthermore, equality (9) holds for any nonlinear projection in place of Π, although its optimization is nontrivial. In the case of linear systems, observability, the dual concept of controllability, is also investigated, often under the name of the balanced POD [34]. Extensions in this direction are currently under investigation.
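The equivalence stated around (9) rests on the identity 〈‖(I − Π)x‖²〉 = Trace(G) − Trace(Π G Π) for the sample second-moment matrix G, which holds for any data set and any orthogonal projection Π. A quick numerical check with placeholder samples:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder noise-response samples x(tau) (one per row); any distribution
# works for checking the trace identity behind (9).
X = rng.normal(size=(5000, 4)) @ rng.normal(size=(4, 4))
G = X.T @ X / len(X)          # empirical stochastic Gibbs Gramian

# Projection onto the top-2 eigenvectors maximizes Trace(Pi G Pi)
_, vecs = np.linalg.eigh(G)
Pi = vecs[:, -2:] @ vecs[:, -2:].T

# Mean squared projection error = Trace(G) - Trace(Pi G Pi)
mean_err = np.mean(np.sum((X - X @ Pi) ** 2, axis=1))
print(np.isclose(mean_err, np.trace(G) - np.trace(Pi @ G @ Pi)))  # True
```

Because the identity is exact, maximizing Trace(Π G Π) over rank-k projections and minimizing the mean squared projection error of the sampled states are literally the same optimization.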

Discussion

The dynamics' nonlinearity makes the controllability sensitive to T. This temperature dependency is discussed in this section. First, the input weight T⁻¹ in the stochastic controllability function LT appearing in (7) and (9) indicates that the input cost is inversely proportional to the noise level; that is, a less noisy (more accurate) control channel is more expensive. In particular, as T → 0, the criterion evaluates the error at snapshots located almost on the trajectory of the autonomous system dx(t)/dt = f(x(t)). Its interpretation from a controllability perspective is as follows: the input weight T⁻¹ becomes unboundedly large, and consequently the states reachable with small control energy are limited to a small neighborhood of the autonomous trajectory (along which the noise is negligible because T → 0). Next, we demonstrate the nontrivial effect of the noise level by means of a numerical example of p identical, coupled neuronal oscillators of the FitzHugh-Nagumo model. An individual neuron generates the stable limit cycle shown in Fig. 2a. The state variable of the i-th neuron is denoted by νi(t) = [vi(t) wi(t)], and the dimension of the entire system's state x(t) = [ν1(t) ν2(t) … νp(t)] is n = 2p. The dynamics of the i-th neuron are of the standard FitzHugh-Nagumo form, subject to diffusive coupling Σj ηij (vj(t) − vi(t)) with nonuniform intensity ηij and the external forcing bi(u(t) + √T ξ(t)) added to the voltage dynamics.
Figure 2

A phase portrait of the FitzHugh-Nagumo neuronal oscillator in the (v, w)-plane.

(a) The stable limit cycle of the noise-free individual dynamics. (b) A sample path of v1(t) for T = TL. (c) A sample path of v1(t) for T = TH.

By using (7), we computed G(LT) based on 1000 sample paths of the uncontrolled trajectories with u(t) = 0, together with the normalized eigenvector ei corresponding to the i-th largest eigenvalue λi. Low (TL = 0.052) and high (TH = 0.52) noise levels are considered. Let p = 4 and bi = 1 for all i, which means that only a common input is allowed. The symmetric coupling strengths ηij (= ηji) are given by η12 = η34 = 0.1, η23 = 0.005, and 0 for the other pairs. The initial states are ν1(0) = −ν3(0) = [−1 0], ν2(0) = −ν4(0) = [0 2]. For the uncontrolled trajectories with u(t) = 0, apart from the fluctuation shown in Fig. 2, we observed the following three (de)synchronization phenomena with high probability: (A) (ν1 − ν2) and (ν3 − ν4) quickly decayed due to their strong couplings; (B) (ν2 − ν3) decayed only slowly for T = TL because their coupling is weak; (C) (ν2 − ν3) quickly decayed for T = TH because noise-induced synchronization occurred [38,39]. See Fig. 3 for these phenomena observed in a sample path.
Figure 3

(De)Synchronization phenomena in a sample path.

For both noise levels, (w1(t) − w2(t)) and (w3(t) − w4(t)) quickly decay due to the synchronization caused by the strong couplings. Synchronization is not observed in (w2(t) − w3(t)) for T = TL because the coupling strength η23 is small. It shows a clear contrast to the quick noise induced synchronization for T = TH.
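The experiment can be reproduced in outline as follows. The FitzHugh-Nagumo parameters (a, b, eps, I_ext), horizon, and step size below are illustrative assumptions, not values reported in this excerpt; the couplings, noise level, and initial states follow the text.

```python
import numpy as np

rng = np.random.default_rng(4)

# p = 4 coupled FitzHugh-Nagumo neurons with a common noisy input channel.
# FHN parameters and the horizon are assumptions for this sketch.
p, a, b, eps, I_ext = 4, 0.7, 0.8, 0.08, 0.5
eta = np.zeros((p, p))
eta[0, 1] = eta[1, 0] = eta[2, 3] = eta[3, 2] = 0.1   # strong couplings
eta[1, 2] = eta[2, 1] = 0.005                          # weak coupling

def gibbs_gramian(T_noise, tau=50.0, dt=0.01, n_paths=1000):
    """Estimate G(L_T) from the covariance of the noise response, eq. (7)."""
    v = np.tile([-1.0, 0.0, 1.0, 0.0], (n_paths, 1))   # v_i(0)
    w = np.tile([0.0, 2.0, 0.0, -2.0], (n_paths, 1))   # w_i(0)
    for _ in range(int(tau / dt)):
        coupling = v @ eta - v * eta.sum(axis=1)       # diffusive coupling
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        v += (v - v**3 / 3 - w + I_ext + coupling) * dt \
             + np.sqrt(T_noise) * dW[:, None]          # common input noise
        w += eps * (v + a - b * w) * dt
    X = np.empty((n_paths, 2 * p))
    X[:, 0::2], X[:, 1::2] = v, w                      # x = [v1 w1 ... v4 w4]
    return X.T @ X / n_paths

G_low = gibbs_gramian(T_noise=0.052)                   # T = T_L
lam, e = np.linalg.eigh(G_low)
print(lam[::-1][:3])   # leading eigenvalues of G(L_T)
```

Repeating the call with T_noise = 0.52 gives the high-noise Gramian; comparing the leading eigenvectors of the two cases exposes the noise-induced synchronization effect described in the text.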

As explained in Table 1, e1 and e2 approximately span the subspace determined by the weights ρ1, …, ρ4 of the four neurons.
Table 1

The eigenvectors of   for the example.

(rows: ρ1, ρ2, ρ3, ρ4; columns: T = TL, T = TH)

For both noise levels, λi/λ1 < 0.15 for i = 3, 4, …, 8. The eigenvectors e1, e2 are expressed in terms of the values ρi listed above. Based on standard correlation analysis, we conclude that ρ1 ≈ ρ2, ρ3 ≈ ρ4, ρ2 ≉ ρ3 for T = TL, and ρ1 ≈ ρ2 ≈ ρ3 ≈ ρ4 for T = TH.

Recall that the projection Π onto span{e1, e2} minimizes (9) because Trace(Π G(LT) Π) is maximized; see the previous section. Thus, the Galerkin projection onto this subspace extracts core subdynamics in the following two senses. First, from a POD perspective, this subspace best approximates the noise response data; see the left-hand side of (9). This is confirmed by the fact that quick convergence to this subspace is nothing but the aforementioned (de)synchronization phenomena. Second, from a controllability perspective, this subspace best approximates the effectively reachable states; see the right-hand side of (9). In other words, even if we apply the optimal feedback control, it is expensive to avoid the (de)synchronization phenomena. This can also be understood from the structure of the dynamics. Concerning (A) and (C), since only a common input is allowed, the synchronization induced by the strong coupling and the noise is difficult to prevent, independently of T. On the other hand, concerning the desynchronization in (B), even though some well-designed entrainment signals exist [38], they are not effective enough when the input weight T⁻¹ is large. As observed in this example, the controllability of highly nonlinear phenomena can be suitably captured from noise-driven simulation data. Note that reduced-order models of controlled complex networks obtained by the proposed method do not always allow such a simple interpretation. In other words, this method can extract nontrivial core dynamics purely from time-series data. In summary, we have proven that the noise response of the uncontrolled dynamics reveals the temperature-dependent controllability of general nonlinear network dynamics. This contribution consists of two achievements: the first is a novel extension of the celebrated controllability Gramian for linear systems.
To the author's best knowledge, this is the first nonlinear extension of the controllability Gramian, which was introduced over half a century ago and has played a central role in the development of modern control theory [5]. The second achievement is equality (7), which mathematically proves that, when the system is open to environmental noise, the newly introduced Gramian is equal to the covariance matrix of the noise response data. This result forms a theoretical bridge connecting controllability quantification and time-series data analysis. An important outcome is that equality (9) yields, for the first time, an easily implementable method for the control-theoretic nonlinear model reduction problem. An extensive amount of noisy data is presently being gathered, and has been gathered to date, for a variety of uncontrolled systems. Equality (7) makes such data useful for gleaning insight into the modeling and controllability of controlled systems. We believe that this result can provide new methods and viewpoints in many research fields, in view of the fact that much controllability-related work is inspired by the pivotal contribution of Liu et al. [1]. For example, the condition number of the Gibbs Gramian should characterize the effect of nonlinearity on network nonlocality, analogously to the linear case [8]. Also, in view of the numerical simulation above, the relation between the noise effect and the connection topology of dynamical complex networks [40] can be analyzed. Furthermore, the fact that any uncontrolled nonlinear dynamics subject to environmental noise obey the Gibbs distribution associated with LT, which is the minimum input energy divided by the temperature, suggests a nontrivial link to the canonical distribution used in statistical mechanics.

Methods

For simplicity of exposition we let T = 1; the general case can be shown similarly. It is well known [28] that the optimal value of the stochastic control problem in (4) satisfies LT(xf) = J(0, x0), where the real scalar function J(t, x) is the solution to the Hamilton-Jacobi-Bellman equation

−∂J/∂t = min over u of { (1/2) u² + (f(x) + g(x) u)⊤ ∇x J } + (1/2) g(x)⊤ (∇x² J) g(x),  J(τ, x) = Φ(x).    (10)

Next, the logarithmic transformation ψ(t, x) := e^{−J(t, x)} yields the linear PDE

∂ψ/∂t = −f(x)⊤ ∇x ψ − (1/2) g(x)⊤ (∇x² ψ) g(x),  ψ(τ, x) = e^{−Φ(x)}.

This form allows us to apply the Feynman-Kac formula [29] to obtain

ψ(0, x0) = 〈 e^{−Φ(x(τ))} 〉,

where the expectation is taken over the sample paths of the noise response (6). Based on this and (5), we have

e^{−LT(xf)} = 〈 δ(x(τ) − xf) 〉,

and consequently, for an arbitrary smooth function w(x) defined on R^n,

∫ w(x) e^{−LT(x)} dx = 〈 w(x(τ)) 〉,    (11)

where we exchanged the order of expectation and spatial integral. The arbitrariness of w(x) in (11) means that the probability density function of x(τ) is given by e^{−LT(x)}. Finally, (11) with w(x) = x x⊤ yields (7).
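As a sanity check consistent with this derivation, in the linear case (with T = 1) the covariance of the noise response at time τ must coincide with the finite-horizon controllability Gramian G = ∫0τ e^{At} B B⊤ e^{A⊤t} dt. A sketch with an arbitrary stable pair (A, B), chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Arbitrary stable linear pair (A, B) for illustration
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
B = np.array([0.0, 1.0])
tau, dt = 3.0, 1e-3
steps = int(tau / dt)

# Classical Gramian via the differential Lyapunov equation
# dG/dt = A G + G A^T + B B^T, G(0) = 0 (forward Euler)
G = np.zeros((2, 2))
for _ in range(steps):
    G += (A @ G + G @ A.T + np.outer(B, B)) * dt

# Noise response (6) with T = 1: Euler-Maruyama from x0 = 0
n_paths = 20000
X = np.zeros((n_paths, 2))
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    X += X @ A.T * dt + np.outer(dW, B)

G_mc = X.T @ X / n_paths
print(np.allclose(G_mc, G, atol=0.05))   # equality (7) in the linear case
```

The agreement up to Monte Carlo and discretization error mirrors the Feynman-Kac argument above: the uncontrolled diffusion alone carries the same second-moment information as the minimum-energy control problem.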

Additional Information

How to cite this article: Kashima, K. Noise Response Data Reveal Novel Controllability Gramian for Nonlinear Network Dynamics. Sci. Rep. 6, 27300; doi: 10.1038/srep27300 (2016).
References (16 in total)

1.  Optimizing controllability of complex networks by minimum structural perturbations.

Authors:  Wen-Xu Wang; Xuan Ni; Ying-Cheng Lai; Celso Grebogi
Journal:  Phys Rev E Stat Nonlin Soft Matter Phys       Date:  2012-02-22

2.  Noise bridges dynamical correlation and topology in coupled oscillator networks.

Authors:  Jie Ren; Wen-Xu Wang; Baowen Li; Ying-Cheng Lai
Journal:  Phys Rev Lett       Date:  2010-02-04       Impact factor: 9.161

3.  Robustness of the noise-induced phase synchronization in a general class of limit cycle oscillators.

Authors:  Jun-Nosuke Teramae; Dan Tanaka
Journal:  Phys Rev Lett       Date:  2004-11-12       Impact factor: 9.161

4.  Controlling complex networks: how much energy is needed?

Authors:  Gang Yan; Jie Ren; Ying-Cheng Lai; Choy-Heng Lai; Baowen Li
Journal:  Phys Rev Lett       Date:  2012-05-23       Impact factor: 9.161

5.  Realistic control of network dynamics.

Authors:  Sean P Cornelius; William L Kath; Adilson E Motter
Journal:  Nat Commun       Date:  2013       Impact factor: 14.919

6.  Controllability transition and nonlocality in network control.

Authors:  Jie Sun; Adilson E Motter
Journal:  Phys Rev Lett       Date:  2013-05-14       Impact factor: 9.161

7.  Functional network organization of the human brain.

Authors:  Jonathan D Power; Alexander L Cohen; Steven M Nelson; Gagan S Wig; Kelly Anne Barnes; Jessica A Church; Alecia C Vogel; Timothy O Laumann; Fran M Miezin; Bradley L Schlaggar; Steven E Petersen
Journal:  Neuron       Date:  2011-11-17       Impact factor: 17.173

8.  Nodal dynamics, not degree distributions, determine the structural controllability of complex networks.

Authors:  Noah J Cowan; Erick J Chastain; Daril A Vilhena; James S Freudenberg; Carl T Bergstrom
Journal:  PLoS One       Date:  2012-06-22       Impact factor: 3.240

9.  Intrinsic dynamics induce global symmetry in network controllability.

Authors:  Chen Zhao; Wen-Xu Wang; Yang-Yu Liu; Jean-Jacques Slotine
Journal:  Sci Rep       Date:  2015-02-12       Impact factor: 4.379

10.  Exact controllability of complex networks.

Authors:  Zhengzhong Yuan; Chen Zhao; Zengru Di; Wen-Xu Wang; Ying-Cheng Lai
Journal:  Nat Commun       Date:  2013       Impact factor: 14.919

