
Non-fragile mixed H∞ and passive synchronization of Markov jump neural networks with mixed time-varying delays and randomly occurring controller gain fluctuation.

Chao Ma

Abstract

This paper studies the non-fragile mixed H∞ and passive synchronization problem for Markov jump neural networks. The phenomenon of randomly occurring controller gain fluctuation is investigated for the non-fragile strategy. Moreover, mixed time-varying delays composed of discrete and distributed delays are considered. By employing stochastic stability theory, synchronization criteria are developed for the Markov jump neural networks. On the basis of the derived criteria, the non-fragile synchronization controller is designed. Finally, an illustrative example is presented to demonstrate the validity of the control approach.


Year:  2017        PMID: 28410394      PMCID: PMC5391947          DOI: 10.1371/journal.pone.0175676

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

There has been significant attention to the dynamic behaviors of neural networks, since they have various current and potential applications, e.g., signal processing, optimization problems, pattern recognition and so forth [1-9]. In particular, the study of Markov jump neural networks has been a significant topic in recent years, since this model can better describe neural networks whose structure switches in real applications. Generally speaking, the mode jumps displayed in Markov jump neural networks are commonly considered to be governed by an ideal homogeneous Markov chain. With the help of analysis and synthesis techniques for Markov jump systems, some remarkable results on Markov jump neural networks have been reported in the literature and the references therein [10-19]. Along another research line, the synchronization problem has become a hot topic in the field of neural networks [9, 12]. When one neural network synchronizes with another, the pair can display complicated dynamic behaviors, which gives insight into the characteristics of the networks. As is well known, time delays exist in neural networks, so the synchronization problem must be studied in the presence of time delays [20, 21]. Moreover, another unavoidable factor affecting synchronization in neural networks is disturbance. As a result, several effective synchronization strategies for neural networks with disturbances have been proposed, especially for some finite-time cases [22-28]. It is worth mentioning that the theory of passivity has provided a powerful tool for the analysis and synthesis of complex dynamic systems [29, 30]. Initial research on mixed H∞ and passive filtering has shown that it provides a more flexible design than the common H∞ approach [31]. In addition, a non-fragile synchronization controller should be designed to attenuate controller gain fluctuation [32].
Furthermore, the controller gain fluctuation can occur in a stochastic way [33]. Consequently, a natural question arises: how can the synchronization problem for Markov jump neural networks be solved under such randomly occurring fluctuations? Unfortunately, up to now, this question has not been fully addressed and remains open. This paper deals with the above question. A stochastic variable is adopted to describe the controller gain fluctuation. Based on stochastic methods, synchronization criteria are first established such that the drive and response Markov jump neural networks can be synchronized in the presence of mixed time-varying delays and disturbance. Based on the derived results, a design procedure is given for the synchronization controller. The remainder of the paper is arranged as follows. The Markov jump neural network model is first introduced, and the non-fragile synchronization problem is formulated. The main results on the synchronization problem are then provided. Finally, simulation results are given and the paper is concluded. Notation: ℝⁿ denotes the n-dimensional Euclidean space and ℝ^{m×n} denotes the set of m × n real matrices. L₂[0, ∞) denotes the space of square-integrable vector functions over [0, ∞). (Ω, F, Pr) is a probability space, where Ω is the sample space, F is the σ-algebra of subsets of the sample space and Pr is the probability measure on F. Pr{α} means the occurrence probability of the event α, and Pr{α|β} means the occurrence probability of α conditional on β. E[x] means the expectation of the stochastic variable x and E[x|y] means the expectation of x conditional on the stochastic variable y. The symbol * denotes the symmetric term in symmetric block matrices and diag{⋅} denotes a block-diagonal matrix.

Methods

Consider the Markov jump neural network with mixed time-varying delays defined on the probability space (Ω, F, Pr):

ẋ(t) = −C(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))f(x(t − τ(t))) + D(r(t))∫_{t−d(t)}^{t} f(x(s))ds,   (1)

where x(t) = [x₁(t), x₂(t), …, xₙ(t)]ᵀ denotes the state of the neurons; f(x(t)) = [f₁(x₁(t)), f₂(x₂(t)), …, fₙ(xₙ(t))]ᵀ is the neuron activation function; C(r(t)) is a diagonal matrix with positive entries; the matrices A(r(t)), B(r(t)) and D(r(t)) represent the connection weight matrix, the discretely delayed connection weight matrix and the distributively delayed connection weight matrix, respectively; τ(t) and d(t) denote the discrete delay and the distributed delay, respectively, which satisfy 0 ≤ τ(t) ≤ τ̄, τ̇(t) ≤ μ and 0 ≤ d(t) ≤ d̄, where τ̄, μ and d̄ are known positive constants. The initial condition of Eq (1) is given by x(s) = ϕ(s), s ∈ [−max{τ̄, d̄}, 0]. {r(t), t ≥ 0} is a right-continuous continuous-time Markov process taking values in a finite set S = {1, 2, …, N} with transition probabilities

Pr{r(t + Δt) = j | r(t) = i} = π_ij Δt + o(Δt) for j ≠ i, and 1 + π_ii Δt + o(Δt) for j = i,

with Δt > 0 and lim_{Δt→0}(o(Δt)/Δt) = 0, where π_ij ≥ 0 (j ≠ i) is the transition rate from mode i at time t to mode j at time t + Δt, while π_ii = −Σ_{j≠i} π_ij.

Assumption 1. The activation function f(x(t)) in Eq (1) is continuous and bounded, and satisfies the sector condition

F_k⁻ ≤ (f_k(α) − f_k(β))/(α − β) ≤ F_k⁺,   k = 1, 2, …, n,

where f(0) = 0, α, β ∈ ℝ, α ≠ β, and F_k⁻ and F_k⁺ are known real constants.

Eq (1) is taken as the drive neural network. For the sake of simplicity, the Markov process r(t) is denoted by the mode index i. Moreover, it is assumed that the modes of the drive and response neural networks are identical at all times [34]. The response neural network is then given by

ẏ(t) = −C_i y(t) + A_i f(y(t)) + B_i f(y(t − τ(t))) + D_i ∫_{t−d(t)}^{t} f(y(s))ds + u(t) + ω(t),   (6)

where u(t) denotes the control input and ω(t) ∈ L₂[0, ∞) is the disturbance. Defining the synchronization error as e(t) = y(t) − x(t), subtracting Eq (1) from Eq (6) yields the open-loop error system (8). The following mode-dependent controller is developed:

u(t) = (K_i + α(t)ΔK(t))e(t),

where K_i is the mode-dependent controller gain and ΔK(t) is the controller gain fluctuation with

ΔK(t) = HF(t)E,   (9)

where H and E are known constant matrices and F(t) satisfies Fᵀ(t)F(t) ≤ I. The stochastic variable α(t) obeys a Bernoulli distribution defined by Pr{α(t) = 1} = δ and Pr{α(t) = 0} = 1 − δ, where δ ∈ [0, 1] is a known constant. Consequently, System (8) can be rewritten as the closed-loop error system (15). The following definitions and lemmas are introduced. Definition 1.
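The randomly occurring gain fluctuation above can be illustrated with a small Python sketch. All numerical values (δ, K, H, E and the choice of F(t)) are hypothetical placeholders, not the paper's parameters; the sketch only shows how the Bernoulli variable α(t) switches the perturbation ΔK(t) = HF(t)E on and off:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustrative values (not taken from the paper).
delta = 0.5                           # Pr{alpha(t) = 1}
K = np.array([[-4.0, 0.0],
              [0.0, -4.0]])           # nominal mode-dependent gain K_i
H = np.array([[0.1], [0.1]])          # known structure matrices of Eq (9)
E = np.array([[0.2, 0.2]])

def fluctuating_gain(t):
    """Return K_i + alpha(t) * Delta K(t), with Delta K(t) = H F(t) E."""
    F = np.array([[np.sin(t)]])       # any F(t) satisfying F(t)^T F(t) <= I
    alpha = rng.random() < delta      # Bernoulli-distributed switch alpha(t)
    return K + alpha * (H @ F @ E)

# Empirically, the perturbed gain differs from K with frequency ~ delta.
trials = 20000
hits = sum(np.any(fluctuating_gain(0.01 * k) != K) for k in range(1, trials + 1))
print(hits / trials)
```

Since |sin(t)| ≤ 1, the chosen F(t) respects the norm bound Fᵀ(t)F(t) ≤ I required by Eq (9).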
[31] System (15) is said to have mixed H∞ and passive performance γ if there exists a constant γ > 0 such that the mixed performance inequality holds for all t > 0 and any non-zero ω(t) ∈ L₂[0, ∞), where θ ∈ [0, 1] represents the parameter that defines the trade-off between H∞ performance and passivity performance. Definition 2. The mixed H∞ and passive synchronization of the Markov jump neural networks Eqs (1) and (6) is said to be achieved if System (15) achieves the mixed H∞ and passive performance with the prescribed γ. Lemma 1. [35] For any symmetric positive definite constant matrix M ∈ ℝ^{n×n}, scalars h₁ < h₂, and a vector function ω: [h₁, h₂] → ℝⁿ such that the integrations concerned are well defined,

(∫_{h₁}^{h₂} ω(s)ds)ᵀ M (∫_{h₁}^{h₂} ω(s)ds) ≤ (h₂ − h₁) ∫_{h₁}^{h₂} ωᵀ(s)Mω(s)ds.

Lemma 2. [36] For any matrix M > 0, scalars τ > 0 and τ(t) satisfying 0 ≤ τ(t) ≤ τ, and a vector function such that the integrations concerned are well defined, the stated integral inequality holds. Lemma 3. [37] Let L = Lᵀ, H and E be real matrices of appropriate dimensions, with F(t) satisfying Fᵀ(t)F(t) ≤ I. Then L + HF(t)E + EᵀFᵀ(t)Hᵀ < 0 if and only if there exists a scalar ε > 0 such that L + ε⁻¹HHᵀ + εEᵀE < 0, or, equivalently, the corresponding Schur-complement LMI is feasible.
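Lemma 1 is the classical Jensen integral inequality; it can be sanity-checked numerically on a discretized vector function. The matrix M and the samples below are arbitrary illustrative data, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

n, N = 3, 400                        # state dimension, grid points
h1, h2 = 0.0, 2.0
ds = (h2 - h1) / N
X = rng.standard_normal((n, n))
M = X @ X.T + n * np.eye(n)          # symmetric positive definite M
omega = rng.standard_normal((N, n))  # samples omega(s_k) on a uniform grid

integral = omega.sum(axis=0) * ds    # ~ integral of omega(s) over [h1, h2]
lhs = integral @ M @ integral        # (int omega)^T M (int omega)
rhs = (h2 - h1) * ds * sum(w @ M @ w for w in omega)  # (h2-h1) * int omega^T M omega
print(lhs <= rhs)
```

In this Riemann-sum form the inequality reduces to (Σωₖ)ᵀM(Σωₖ) ≤ N Σ ωₖᵀMωₖ, which holds for any positive definite M by the Cauchy-Schwarz inequality, so the check passes for every sample.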

Results

In this section, delay-dependent synchronization conditions are developed, based on which the non-fragile synchronization controller is designed. Theorem 1. For given upper bounds τ̄ and d̄ of the mixed time-varying delays and given scalars θ and γ, the mixed H∞ and passive synchronization of the Markov jump neural networks Eqs (1) and (6) can be achieved in the sense of Definitions 1 and 2 if there exist mode-dependent symmetric matrices P_i > 0, symmetric matrices Q > 0, R > 0 and a constant ε > 0 such that the stated matrix inequalities hold for all i ∈ S. Proof. Choose the Lyapunov-Krasovskii functional V(t) as in Eq (31), and define the infinitesimal operator of V(t) accordingly. Then, for each mode i, taking the derivative of Eq (31) along the solution of System (15) gives the stochastic differential of V(t). By Lemma 1 and Lemma 2, the integral terms can be bounded from above. It follows from Assumption 1 that the activation function satisfies a quadratic constraint, so the corresponding inequality holds. By the Schur complement [38], the resulting condition is equivalent to an LMI form. By performing a congruence transformation on Eq (58) and using the inequality −P_i R⁻¹P_i ≤ R − 2P_i, the condition can be further rewritten. It can then be derived by Lemma 3 that the condition holds if Π < 0. Therefore, under the zero initial condition, integrating both sides of Eq (48) yields J ≤ 0, which means that the mixed H∞ and passive synchronization of the Markov jump neural networks is achieved according to Definition 2. This completes the proof. Theorem 2. For given upper bounds τ̄ and d̄ of the mixed time-varying delays and given scalars θ and γ, the mixed H∞ and passive synchronization of the Markov jump neural networks Eqs (1) and (6) can be achieved in the sense of Definitions 1 and 2 if there exist mode-dependent symmetric matrices P_i > 0, mode-dependent matrices V_i, symmetric matrices Q > 0, R > 0 and a constant ε > 0 such that the stated matrix inequalities hold for all i ∈ S, where the remaining blocks are defined in Eq (22), and the controller gain can be obtained as K_i = P_i⁻¹V_i. Proof. Let V_i = P_i K_i. The rest of the proof follows directly from the proof of Theorem 1.
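The full LMI conditions of Theorems 1 and 2 are not reproduced above, but the mode-coupling structure they share can be illustrated with a much simpler certificate: for a delay-free two-mode Markov jump linear system, stochastic stability follows if mode-dependent matrices P_i > 0 satisfy A_iᵀP_i + P_iA_i + Σ_j π_ij P_j < 0. The matrices below are hypothetical, chosen only so the check passes:

```python
import numpy as np

# Hypothetical two-mode jump linear system (not the paper's matrices).
A = [np.array([[-2.0, 0.5], [0.0, -1.5]]),
     np.array([[-1.0, 0.0], [0.3, -2.0]])]
Pi = np.array([[-3.0, 3.0],
               [4.0, -4.0]])               # transition rate matrix, rows sum to 0
P = [np.eye(2), 1.2 * np.eye(2)]           # candidate Lyapunov matrices P_i > 0

def coupled_lyapunov(i):
    """A_i^T P_i + P_i A_i + sum_j pi_ij P_j; negative definite means stable."""
    return A[i].T @ P[i] + P[i] @ A[i] + sum(Pi[i, j] * P[j] for j in range(2))

max_eigs = [np.linalg.eigvalsh(coupled_lyapunov(i)).max() for i in range(2)]
print(max_eigs)  # both entries negative: the candidate P_i certify stability
```

In practice such conditions are posed as LMIs and solved with a semidefinite programming solver rather than verified with guessed P_i; the delay-dependent conditions of Theorem 1 add further blocks for Q, R and the delay bounds.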

Discussion

In order to verify the designed synchronization scheme, the following simulation example is presented. Consider the Markov jump neural networks with two modes, with given parameter matrices and a neuron activation function satisfying Assumption 1. In the simulation, the transition probability matrix is given, with the time step set as Δt = 0.01. The time-varying delays are assumed to be τ(t) = 0.25 + 0.05 sin t and d(t) = 0.15 + 0.05 cos t, such that τ̄ = 0.3 and d̄ = 0.2. The disturbance ω(t) is specified accordingly. The parameters δ, θ and γ are set as δ = 0.5, θ = 0.4 and γ = 0.2, and the controller gain fluctuation satisfies the condition of Eq (9). By solving Ψ_i < 0, i = 1, 2, in Theorem 2, the mode-dependent controller gains K₁ and K₂ can be obtained. The initial values are set as x(0) = [1, −1]ᵀ and y(0) = [−5, 5]ᵀ, respectively. Under the mode evolution shown in S1 Fig, it can be seen from S2 and S3 Figs that synchronization is achieved with the designed mode-dependent controllers, which demonstrates the validity of the control scheme.
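A minimal drive-response simulation in the spirit of this example can be sketched as follows. The two-mode parameters, stabilizing gains and transition probability matrix are hypothetical placeholders (the paper's designed gains and system matrices are not reproduced here), the disturbance and the distributed-delay term are dropped for brevity, and a plain controller u(t) = K_i e(t) without gain fluctuation is used:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-mode parameters; tanh satisfies the sector bound of Assumption 1.
C = [np.diag([1.0, 1.0]), np.diag([1.2, 0.8])]
A = [np.array([[0.2, -0.1], [0.1, 0.3]]), np.array([[0.1, 0.2], [-0.2, 0.1]])]
B = [np.array([[0.1, 0.0], [0.0, 0.1]]), np.array([[0.05, 0.1], [0.1, 0.05]])]
K = [np.diag([-6.0, -6.0]), np.diag([-6.0, -6.0])]   # stabilizing mode gains
TP = np.array([[0.97, 0.03], [0.04, 0.96]])          # mode transition probabilities

dt, steps, tau_steps = 0.01, 1500, 25                # Euler step, horizon, tau ~ 0.25
f = np.tanh
x = np.tile([1.0, -1.0], (steps + tau_steps, 1))     # drive state with constant history
y = np.tile([-5.0, 5.0], (steps + tau_steps, 1))     # response state with history
mode = 0

for k in range(tau_steps, tau_steps + steps - 1):
    i = mode
    u = K[i] @ (y[k] - x[k])                         # mode-dependent controller
    x[k + 1] = x[k] + dt * (-C[i] @ x[k] + A[i] @ f(x[k]) + B[i] @ f(x[k - tau_steps]))
    y[k + 1] = y[k] + dt * (-C[i] @ y[k] + A[i] @ f(y[k]) + B[i] @ f(y[k - tau_steps]) + u)
    mode = rng.choice(2, p=TP[mode])                 # sample the next jump mode

err0 = np.linalg.norm(y[tau_steps] - x[tau_steps])
errT = np.linalg.norm(y[-1] - x[-1])
print(errT < 1e-3 * err0)   # synchronization error contracts under the controller
```

With the large diagonal gains chosen here, the error dynamics are strongly contracting regardless of the mode sequence, so the discrete-delay coupling cannot prevent synchronization; the paper's LMI-designed gains achieve the same effect with the guaranteed mixed H∞ and passive performance level.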

Conclusion

The non-fragile mixed H∞ and passive synchronization problem of Markov jump neural networks with mixed time-varying delays is addressed. By utilizing the stochastic stability theory, delay-dependent criteria are derived for ensuring that the desired synchronization is achieved and the non-fragile synchronization controller is designed. An interesting further research direction is extending the derived results to the uncertainty cases.

S1 Fig. The jumping modes of the neural networks.


S2 Fig. The controlled synchronization error of the neural networks.


S3 Fig. The control input of the neural networks.

References (15 in total)

1.  Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach.

Authors:  Xiaofeng Liao; Guanrong Chen; Edgar N Sanchez
Journal:  Neural Netw       Date:  2002-09

2.  Exponential H(infinity) synchronization of general discrete-time chaotic neural networks with or without time delays.

Authors:  Donglian Qi; Meiqin Liu; Meikang Qiu; Senlin Zhang
Journal:  IEEE Trans Neural Netw       Date:  2010-07-01

3.  Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays.

Authors:  Yurong Liu; Zidong Wang; Jinling Liang; Xiaohui Liu
Journal:  IEEE Trans Neural Netw       Date:  2009-05-26

4.  Synchronization of discrete-time neural networks with delays and Markov jump topologies based on tracker information.

Authors:  Xinsong Yang; Zhiguo Feng; Jianwen Feng; Jinde Cao
Journal:  Neural Netw       Date:  2016-10-27

5.  Lévy noise induced switch in the gene transcriptional regulatory system.

Authors:  Yong Xu; Jing Feng; JuanJuan Li; Huiqing Zhang
Journal:  Chaos       Date:  2013-03       Impact factor: 3.642

6.  Synchronization control of memristor-based recurrent neural networks with perturbations.

Authors:  Weiping Wang; Lixiang Li; Haipeng Peng; Jinghua Xiao; Yixian Yang
Journal:  Neural Netw       Date:  2014-01-28

7.  Robust synchronization of an array of neural networks with hybrid coupling and mixed time delays.

Authors:  Yanke Du; Rui Xu
Journal:  ISA Trans       Date:  2014-04-05       Impact factor: 5.468

8.  Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks.

Authors:  A Arunkumar; R Sakthivel; K Mathiyalagan; Ju H Park
Journal:  ISA Trans       Date:  2014-06-03       Impact factor: 5.468

9.  Adaptive cluster synchronization of directed complex networks with time delays.

Authors:  Heng Liu; Xingyuan Wang; Guozhen Tan
Journal:  PLoS One       Date:  2014-04-24       Impact factor: 3.240

10.  Stability and synchronization for discrete-time complex-valued neural networks with time-varying delays.

Authors:  Hao Zhang; Xing-yuan Wang; Xiao-hui Lin; Chong-xin Liu
Journal:  PLoS One       Date:  2014-04-08       Impact factor: 3.240

