
Enhanced robust finite-time passivity for Markovian jumping discrete-time BAM neural networks with leakage delay.

C Sowmiya1, R Raja2, Jinde Cao3, G Rajchakit4, Ahmed Alsaedi5.   

Abstract

This paper is concerned with enhanced results on robust finite-time passivity for uncertain discrete-time Markovian jumping BAM delayed neural networks with leakage delay. By constructing a proper Lyapunov-Krasovskii functional candidate and employing the reciprocally convex combination method together with the linear matrix inequality (LMI) technique, several sufficient conditions are derived for guaranteeing the passivity of discrete-time BAM neural networks. An important feature of our paper is the use of the reciprocally convex combination lemma in the main section; the relevance of that lemma arises in the derivation of stability conditions via Jensen's inequality. Further, zero inequalities help to establish the sufficient conditions for finite-time boundedness and passivity under uncertainties. Finally, the enlargement of the feasible region of the proposed criteria is shown via numerical examples with simulation to illustrate the applicability and usefulness of the proposed method.

Keywords:  LMIs; Markovian jumping systems; bidirectional associative memory; discrete-time neural networks; leakage delay; passivity and stability analysis

Year:  2017        PMID: 29071005      PMCID: PMC5635139          DOI: 10.1186/s13662-017-1378-9

Source DB:  PubMed          Journal:  Adv Differ Equ        ISSN: 1687-1839


Introduction

Over the past decades, delayed neural networks have found successful applications in many areas such as signal processing, pattern recognition, associative memories and optimization solvers. In such applications, the quantitative behavior of the dynamical system is an important step in the practical design of neural networks [1]. Therefore, the dynamic characteristics of discrete-time neural networks have been extensively investigated; see, for example, [2-12]. The study of neural networks is mostly carried out in the continuous-time setting, but the models are often discretized for experimental or computational purposes. Neural networks with leakage delay are an important class of neural networks, since time delay in the leakage term has a great impact on their dynamics. Time delay in the stabilizing negative feedback term has a tendency to destabilize a neural network system [13-16]; in particular, the delay in the leakage term can destroy the stability of a neural network. Gopalsamy [17] initially investigated the dynamics of a bidirectional associative memory (BAM) network model with leakage delay. Based on this work, the authors of [15] considered the global stability of a class of nonlinear systems with leakage delay. Li and Cao discussed the stability of memristive neural networks with both a reaction-diffusion term and leakage delay, and some easily checked criteria were established by employing differential inclusion theory and Jensen's integral inequality [18]. The BAM neural network model, proposed by Kosko [19, 20], is a two-layered nonlinear feedback network model in which the neurons in one layer interconnect with the neurons in the other layer, while there are no interconnections among neurons in the same layer.
In the current scenario, due to its applications in many fields, the study of BAM neural networks has attracted the attention of many researchers, who have studied the stability properties of neural networks and presented various sufficient conditions for asymptotic or exponential stability of BAM neural networks [4, 15, 21-23]. On the one hand, time delay is one of the main sources of instability and is encountered in many engineering systems such as chemical processes, long transmission lines in pneumatic systems, networked control systems, etc. Over the past years, the study of time delay systems has received considerable attention, and a great number of research results on time delay systems exist in the literature. The stability of time delay systems is a fundamental problem because it is central to the synthesis and analysis of such neural network systems [3, 5, 24, 25]. The exponential stability of stochastic BAM networks with mixed delays was discussed via Lyapunov theory [25]. On the other hand, the theory of passivity was first applied in circuit analysis and has generated increasing interest among researchers. It is a useful tool for the stability analysis of both linear and nonlinear systems, especially high-order systems. It is evident that passivity properties can keep a system internally stable. Due to its importance and applicability, the problem of passivity analysis for delayed dynamic systems has been investigated, and many results have been reported in the literature [26-30]. For instance, in [28] Wu et al. derived passivity conditions for discrete-time switched neural networks with various activation functions and mixed time delays. Moreover, the passivity and synchronization of switched neural networks were investigated in [30], and some delay-dependent as well as delay-independent criteria were provided.
In [6, 7], the authors investigated passivity conditions and the stability properties they confer. It should be mentioned, however, that all these existing studies of passivity analysis rely on conventional Lyapunov asymptotic stability theory and are therefore posed over an infinite time interval. The concept of the finite-time (or short-time) analysis problem was first initiated by Dorato in 1961 [31]. Communication network systems, missile systems and robot control systems are examples of systems which operate over a short time interval. In recent years, many researchers have been focusing on the transient values of the actual network states. Many interesting results on finite-time stability for various types of systems can be found in [32-34]. Recently, an extended finite-time control problem for uncertain switched linear neutral systems with time-varying delays was investigated in [27], and the treatment of time-varying delays was further developed in [22, 35]. Results on the finite-time stabilization of neural networks with discontinuous activations were proposed in [36]. Motivated by the aforementioned discussions, in this paper we focus on the finite-time boundedness and passivity of uncertain discrete-time Markovian jumping BAM neural networks with leakage delay. We use a new type of LKF to handle the given range of the time delay interval, together with the free weighting matrix approach, to derive the main results. Our main contributions are highlighted as follows: the finite-time passivity result for discrete-time Markovian jumping uncertain BAM neural networks with leakage delay is proposed for the first time; the reciprocally convex combination approach is used to handle the triple summation terms, and a new type of zero inequalities is introduced; delay-dependent results for finite-time boundedness and finite-time passivity are derived by using the finite-time stability method and the Lyapunov-Krasovskii functional approach.
The rest of this paper is organized as follows. Problem formulation and mathematical preliminaries are presented in Section 2. Section 3 gives the main result of this paper and also contains the subsection on finite-time boundedness. Robust finite-time passivity is derived in Section 4. Numerical examples are given in Section 5 to illustrate the effectiveness of the proposed method. Finally, we conclude the paper in Section 6.

Notations

The notations in this paper are standard. Throughout this paper, ℝ^n and ℝ^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n×m real matrices. I denotes the identity matrix with appropriate dimensions and diag{·} denotes a diagonal matrix. A^T denotes the transpose of matrix A. ℤ^+ denotes the set of positive integers. For real symmetric matrices X and Y, the notation X ≥ Y (resp., X > Y) means that the matrix X − Y is positive semi-definite (resp., positive definite). ‖·‖ stands for the Euclidean norm in ℝ^n. λ_max(X) (resp., λ_min(X)) stands for the maximum (resp., minimum) eigenvalue of the matrix X. I and 0 represent the identity matrix and the zero matrix, respectively. l_2[0,∞) denotes the space of square summable infinite vector sequences. The symbol ∗ within a matrix represents the symmetric term of the matrix.

Problem formulation and mathematical preliminaries

Let be a complete probability space with filtration satisfying the usual condition (i.e., it is right continuous and contains all P-null sets); stands for the mathematical expectation operator with respect to the given probability measure P. Let , be a Markovian chain taking values in a finite space with probability transition matrix given by where () is a transition rate from i to j and , . Consider the following uncertain discrete-time Markovian jumping BAM neural network with time-varying delays and leakage delay, described by where is the neural state vector, , is the exogenous disturbance input vector belonging to and , is the output vector of the neural network, , is the neuron activation function, the positive integer , denotes the time-varying delay satisfying and for all , where and , and are constant positive scalars representing the minimum and maximum delays, respectively, in which , represent the state feedback coefficient matrices with , , , , , , respectively, the connection weights and the delayed connection weights; the initial function , is continuous and defined on , . Further, the uncertainty parameters are defined as follows: where , , , , , M are known constant matrices of appropriate dimensions and is an unknown time-varying matrix with Lebesgue measurable elements bounded by . The following assumptions are needed to establish the main results.
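Since the displayed model equations did not survive extraction, the following is only a sketch of the general form that discrete-time BAM neural networks with leakage delay and Markovian jumping commonly take (all symbol names here are illustrative assumptions, and the norm-bounded uncertainty and output equations are omitted):

```latex
\begin{aligned}
x(k+1) &= A(r_k)\,x(k-\sigma) + B(r_k)\,f\bigl(y(k-\tau(k))\bigr) + w_1(k),\\
y(k+1) &= C(r_k)\,y(k-\sigma) + D(r_k)\,g\bigl(x(k-\tau(k))\bigr) + w_2(k),
\end{aligned}
```

where $\sigma$ denotes the leakage delay, $\tau(k)\in[\tau_m,\tau_M]$ the time-varying transmission delay, $r_k$ the Markov chain governing the jumping modes, $f,g$ the activation functions, and $w_1,w_2$ the disturbance inputs.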

Assumption I

For any , there exist constraints , , , such that For presentation convenience, in the following we denote

Remark 2.1

In biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action potential firing in the cell. Non-monotonic functions can be more suitable than other activation functions. In many electronic circuits, the input-output functions of amplifiers may be neither monotonically increasing nor continuously differentiable. The constants in the above assumption may be positive, negative or zero. So the activation functions may be non-monotonic, and are more general than the usual sigmoid and Lipschitz functions. Such conditions are less restrictive in characterizing the lower and upper bounds of the activation functions. Therefore, by using the LMI-based technique, the generalized activation function is considered to reduce the possible conservatism.

Assumption II

The disturbance input vectors are time-varying and, for a given , satisfy , . Before deriving our main results, we state the following definitions and lemmas.
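The summability bound in Assumption II was stripped by extraction; the conventional form of this condition (with $d$ a given positive scalar and $N$ the finite horizon, notation assumed rather than taken from the source) is:

```latex
\sum_{k=0}^{N} w^{T}(k)\,w(k) \;\le\; d, \qquad d \ge 0 .
```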

Definition 2.2

[37] DNN (1), (3) is said to be robustly finite-time bounded with respect to , where and , if ∀ , , holds for any nonzero , satisfying Assumption II.
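The scalars in Definition 2.2 were lost in extraction; the standard form of finite-time boundedness with respect to $(c_1, c_2, R, N)$, with $0 < c_1 < c_2$ and a weight matrix $R > 0$ (standard statement, so the paper's exact notation may differ), reads:

```latex
x^{T}(0)\,R\,x(0) \le c_1
\;\Longrightarrow\;
\mathbb{E}\bigl[x^{T}(k)\,R\,x(k)\bigr] < c_2,
\qquad k \in \{1,2,\dots,N\},
```

for every nonzero disturbance satisfying Assumption II.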

Definition 2.3

[37] DNN (1), (3) with output (2), (4) is said to be robustly finite-time passive with respect to , where is a prescribed positive scalar and , , if and only if DNN (1), (3) with output (2), (4) is robustly finite-time bounded with respect to and, under the zero initial condition, the output satisfies for any nonzero satisfying Assumption II.
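The output inequality referred to in this definition is conventionally the finite-time passivity condition, stated here in its standard form with dissipation level $\gamma > 0$ (the exact indexing and expectation placement in the paper may differ):

```latex
2\,\mathbb{E}\Biggl[\sum_{k=0}^{N} y^{T}(k)\,w(k)\Biggr]
\;\ge\;
-\gamma \sum_{k=0}^{N} w^{T}(k)\,w(k),
```

under the zero initial condition.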

Remark 2.4

The concept of finite-time passivity is different from that of usual passivity. Usual passivity is defined over an infinite time horizon and places no prescribed bound on the system states, whereas finite-time passivity requires the states to remain within given bounds over a finite interval. In this paper, Assumption II and Definition 2.2 supply these bounds, which makes it possible to prove finite-time passivity in the main result.

Lemma 2.5

[12] For any symmetric constant matrix , , two scalars and satisfying , and a vector-valued function (), we have
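The inequality asserted by this lemma is the discrete Jensen (summation) inequality; in its standard form, for a symmetric matrix $M > 0$, integers $a \le b$, and a vector sequence $x(i)$:

```latex
(b-a+1)\sum_{i=a}^{b} x^{T}(i)\,M\,x(i)
\;\ge\;
\Bigl(\sum_{i=a}^{b} x(i)\Bigr)^{T} M \Bigl(\sum_{i=a}^{b} x(i)\Bigr).
```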

Lemma 2.6

[38] Let have positive values in an open subset of  . Then the reciprocally convex combination of over satisfies subject to
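For reference, the two-term matrix form of the reciprocally convex combination lemma of [38] most often used in stability proofs states that for $R_1, R_2 > 0$ and any $\alpha \in (0,1)$:

```latex
\begin{bmatrix} \frac{1}{\alpha}R_1 & 0 \\ 0 & \frac{1}{1-\alpha}R_2 \end{bmatrix}
\;\ge\;
\begin{bmatrix} R_1 & S \\ S^{T} & R_2 \end{bmatrix}
\qquad\text{whenever}\qquad
\begin{bmatrix} R_1 & S \\ S^{T} & R_2 \end{bmatrix} \ge 0 ,
```

so a single slack matrix $S$ replaces the delay-dependent weights $\alpha$, $1-\alpha$.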

Remark 2.7

There are two main methods to find such lower bounds. The first one is based on Moon's inequality. The second is the so-called reciprocally convex combination lemma, and this approach also helps to reduce the number of decision variables. The conservatism induced by these two inequalities is independent. However, in some cases, such as stability conditions resulting from the application of Jensen's inequality, the reciprocally convex combination lemma is in general more conservative than Moon's inequality. Also, note that the reciprocally convex combination approach can be successfully applied to double summation terms.

Robust finite-time boundedness

The main concern of this subsection is to derive sufficient conditions for the finite-time boundedness of DNN (1), (3); the LMI-based robust conditions will be obtained using the Lyapunov technique.

Theorem 3.1

Under Assumptions I and II, for given scalars , , , , , , DNN model (1), (3) is robustly finite-time bounded with respect to if there exist symmetric positive definite matrices , , W, , , R, , , , , , , , , matrices , , , , , , , , positive diagonal matrices , , , and positive scalars , , , , , , , , , , , , , , , , ϵ, , , , , such that the following LMIs hold for : where

Proof

To prove the required results, the following LKF for finite-time passivity BAM DNN model (1)-(4) is considered: where where and . Calculating the forward difference of V(k) by defining along the solution of (1) and (3), we obtain Using Lemma 2.5(i), the first summation term in can be written as Similarly, Further, using Lemma 2.5(ii), the second summation term in becomes By the reciprocally convex combination Lemma 2.6, if the LMIs in (8) hold, then the following inequalities hold: where , , , with , , , . Similarly, by the reciprocally convex combination Lemma 2.6, if the LMIs in (9) hold, then the following inequalities hold: where , , , with , , , . Then inequalities (19) and (20) can be rewritten as Similarly, It is noted that when or and or , we have or and or , respectively. So inequalities (20) and (21) still hold. For any matrices , , , , the following equalities hold: On the other hand, from Assumption I, we have which is equivalent to and where denotes the unit column vector having the element 1 on its rth row and zeros elsewhere. Let , , , . Then Similarly, Similarly, one can get and Then from (13), adding (25)-(30) gives where and Next, in view of the Schur complement [39], the RHS of (31) can be written as Similarly, for Then, by using uncertainty description (2), (4) and the procedure as in Lemma 2.6, we have where where Hence, if LMIs (6), (7), (8), (9) hold, it is easy to get where and . Simple computation gives Noticing and , it follows that Further, from (12), we can get Letting we obtain where , , , , , , , , , , , , , , , . On the other hand, from (12), we can obtain that Put . From (35) to (36), we get Therefore, from (11), we get that Then, using Definition 2.2, DNN (1), (3) is robustly finite-time bounded. □

Remark 3.2

Time delay in the stabilizing negative feedback term has a tendency to destabilize a system. The corresponding term , in systems (1) and (3) represents a stabilizing negative feedback that no longer acts instantaneously but with a time delay. This term is variously known as the leakage (or forgetting) term, and the associated delay is the leakage delay.

Robust finite-time passivity

In this subsection, we focus on the robust finite-time passivity of DNN (1), (3) with output (2), (4). To deal with this, we introduce and .

Theorem 4.1

Under Assumptions I and II, for given scalars , , , , , , DNN model (1), (3) is robustly finite-time passive with respect to , if there exist symmetric positive definite matrices , , W, , , R, , , , , , , , , matrices , , , , , , , , positive diagonal matrices , , , and positive scalars , , , , , , , , , , , , , , , , ϵ, , , , , , ω, such that the following LMIs (8), (9) hold for : where and the parameters are defined in the above theorem. The proof follows from Theorem 3.1 by choosing in I and in J. Following similar lines to (34), it follows that By simple computation, Under the zero initial condition and noticing for all , we have Noticing that , we have Let Therefore, from (40), it is easy to get the inequality in Definition 2.3. Hence it can be concluded that DNN model (1), (3) is robustly finite-time passive. This completes the proof. □

Remark 4.2

If the leakage terms and become zero, then neural network system (1)-(4) reduces to a conventional discrete-time Markovian jumping BAM neural network without leakage delay.

Numerical simulation

In this section, we present a numerical example with simulations to demonstrate the validity and effectiveness of our theoretical results.

Example 5.1

Consider the two-dimensional uncertain discrete-time Markovian jumping BAM neural network (1)-(4) with , ; denotes a right-continuous Markov chain taking values in with generators Leakage delay is defined as , and the scalars are as follows: ; ; ; ; ; ; ; ; ; ; ; . The lower and upper bounds for the finite-time passive BAM neural network system (1)-(4) are , , , and . Take the activation functions as follows: The feasible solutions are as follows: The state trajectory of the finite-time passive BAM neural network system (1)-(4) is shown in Figure 2.
Figure 2

State trajectories of the finite-time passive BAM neural networks (1)-(4).

According to Theorem 4.1, we can conclude that system (1)-(4) with the above given parameters is robustly finite-time passive. With the help of the Lyapunov functions and the state trajectories , , , , the above finite-time passive BAM neural networks are depicted in Figures 1, 2 and 3.
Figure 1

The state response , of (1)-(4) with leakage delay.

Figure 3

The Markovian jump of (1)-(4).

The performance of the Markovian jumping for system (1)-(4) is given in Figure 3.
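To make the dynamics concrete, the following toy simulation sketches a two-neuron discrete-time BAM pair with leakage and transmission delays in the spirit of system (1)-(4); all matrices, delays and initial data below are illustrative choices, not the parameters of Example 5.1 (and the Markovian jumping and uncertainty are omitted for brevity).

```python
import numpy as np

# Toy 2-neuron discrete-time BAM pair with leakage delay sigma and
# transmission delay tau.  All values are illustrative assumptions.
A = np.diag([0.3, 0.2])                    # leakage feedback, x-layer
B = np.array([[0.1, -0.2], [0.05, 0.1]])   # weights x <- f(y)
C = np.diag([0.25, 0.3])                   # leakage feedback, y-layer
D = np.array([[0.15, 0.1], [-0.1, 0.2]])   # weights y <- g(x)
sigma, tau = 2, 3                          # leakage / transmission delay
f = g = np.tanh                            # activation functions
T = 60                                     # simulation horizon

x = np.zeros((T, 2))
y = np.zeros((T, 2))
h = max(sigma, tau)                        # required history length
x[:h + 1] = 0.5                            # constant initial history
y[:h + 1] = -0.4

for k in range(h, T - 1):
    # leakage term uses the sigma-delayed own state; the cross-layer
    # activation uses the tau-delayed state of the other layer
    x[k + 1] = A @ x[k - sigma] + B @ f(y[k - tau])
    y[k + 1] = C @ y[k - sigma] + D @ g(x[k - tau])

print("final states:", np.round(x[-1], 4), np.round(y[-1], 4))
```

With these (small-gain) illustrative weights the trajectories decay toward the origin, mirroring the stable behavior shown in Figures 1 and 2.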

Conclusion

The passivity problem for uncertain discrete-time Markovian jumping BAM neural networks with leakage delay has been investigated. By using Lyapunov theory together with zero inequalities and the convex and reciprocally convex combination approaches, finite-time boundedness and passivity conditions have been derived in terms of LMIs, which can be easily verified via the LMI toolbox. The leakage delay has been treated as a time-varying delay. By utilizing the reciprocally convex technique, the conservatism of the proposed criteria has been reduced significantly. A numerical example has been provided to illustrate the effectiveness of the results and their improvement over existing results.
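As a sketch of how LMI-type conditions of this kind are checked numerically, the following example verifies the simplest delay-free discrete-time Lyapunov inequality A^T P A − P < 0 with P > 0, using an illustrative matrix A rather than the paper's data (the paper's full conditions would be checked the same way with an SDP solver or the LMI toolbox).

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative 2x2 Schur-stable system matrix (not from the paper).
A = np.array([[0.5, 0.1],
              [-0.2, 0.4]])

# solve_discrete_lyapunov(a, q) solves a @ X @ a.T - X + q = 0, so with
# a = A.T the solution P satisfies A.T @ P @ A - P = -Q.
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)

# The pair (P > 0, A^T P A - P < 0) is the simplest discrete-time
# Lyapunov LMI certificate of asymptotic stability.
decrease = A.T @ P @ A - P
assert np.all(np.linalg.eigvalsh(P) > 0)         # P is positive definite
assert np.all(np.linalg.eigvalsh(decrease) < 0)  # strict Lyapunov decrease
print("Lyapunov certificate P =\n", np.round(P, 4))
```

For the delay-dependent conditions of Theorems 3.1 and 4.1, the same feasibility check would be posed over the full block LMIs with an SDP solver.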
Table 1

Optimal values of χ for different τ_M and σ_M

τ_M : 12    14    16    18    20    22
σ_M : 10    12    14    16    18    20
χ   : 5.38  10.612  13.89  15.04

