
Improved stability criteria of static recurrent neural networks with a time-varying delay.

Lei Ding, Hong-Bing Zeng, Wei Wang, Fei Yu.

Abstract

This paper investigates the stability of static recurrent neural networks (SRNNs) with a time-varying delay. Based on the complete delay-decomposing approach and quadratic separation framework, a novel Lyapunov-Krasovskii functional is constructed. By employing a reciprocally convex technique to consider the relationship between the time-varying delay and its varying interval, some improved delay-dependent stability conditions are presented in terms of linear matrix inequalities (LMIs). Finally, a numerical example is provided to show the merits and the effectiveness of the proposed methods.


Year:  2014        PMID: 25143974      PMCID: PMC3988971          DOI: 10.1155/2014/391282

Source DB:  PubMed          Journal:  ScientificWorldJournal        ISSN: 1537-744X


1. Introduction

During the past decades, recurrent neural networks (RNNs) have been successfully applied in many fields, such as signal processing, pattern classification, associative memory design, and optimization. The study of RNNs has therefore attracted considerable attention, and various issues of neural networks have been investigated (see, e.g., [1-4] and the references therein). Since integration and communication delays are unavoidably encountered in the implementation of RNNs and are often a main source of instability and oscillation, much effort has been devoted to the stability of RNNs with time delays (see, e.g., [5-14]). Depending on the choice of basic variables (local field states or neuron states), RNNs can be classified as local field networks or static neural networks [15]. Recently, the stability of static recurrent neural networks (SRNNs) with a time-varying delay was investigated in [16], where sufficient conditions guaranteeing the global asymptotic stability of the neural network were obtained. Nevertheless, some negative semidefinite terms were ignored in [16], which makes the derived result conservative. By retaining these terms and taking the lower bound of the delay into account, improved stability conditions were derived for SRNNs with an interval time-varying delay in [17]. In [18], an input-output framework was proposed to investigate the stability of SRNNs with linear fractional uncertainties and delays. Based on the augmented Lyapunov-Krasovskii functional approach, some new conditions assuring the stability of SRNNs were derived in [19-22], but the results can be further improved.

In this paper, the stability of SRNNs with a time-varying delay is investigated based on the complete delay-decomposing approach [12]. By employing a reciprocally convex technique, sufficient conditions are derived in the form of linear matrix inequalities (LMIs). Their effectiveness and merit are illustrated by a numerical example.

Notations. Throughout this paper, Nᵀ and N⁻¹ stand for the transpose and the inverse of a matrix N, respectively; P > 0 (P ≥ 0) means that the matrix P is symmetric and positive definite (positive semidefinite); ℝⁿ denotes the n-dimensional Euclidean space; diag{⋯} denotes a block-diagonal matrix; ||z|| is the Euclidean norm of z; and the symbol ∗ within a matrix represents the symmetric terms of the matrix; for example, [X Y; ∗ Z] stands for [X Y; Yᵀ Z]. Matrices, if not explicitly stated, are assumed to have compatible dimensions.

2. System Description

Consider the following delayed neural network:

ẋ(t) = −Ax(t) + f(Wx(t − τ(t)) + J),  x(t) = ϕ(t),  t ∈ [−τ̄, 0],  (1)

where x(t) = [x₁(t), x₂(t),…, xₙ(t)]ᵀ ∈ ℝⁿ and J = [j₁, j₂,…, jₙ]ᵀ ∈ ℝⁿ denote the neuron state vector and the input vector, respectively; f(·) = [f₁(·), f₂(·),…, fₙ(·)]ᵀ ∈ ℝⁿ is the neuron activation function; ϕ(t) is the initial condition; A = diag(a₁, a₂,…, aₙ) > 0 and W are known interconnection weight matrices; and τ(t) is the time-varying delay, satisfying

0 ≤ τ(t) ≤ τ̄,  (2)

τ̇(t) ≤ μ.  (3)

Furthermore, the neuron activation functions satisfy the following assumption.
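To make the model concrete, the following is a minimal forward-Euler simulation sketch of system (1) using a history buffer for the delayed state. The matrices A, W, the input J, the delay law τ(t), and the tanh activation are all illustrative assumptions, not the benchmark parameters used in Section 4.

```python
import numpy as np

# Forward-Euler simulation of the delayed SRNN (1):
#     x'(t) = -A x(t) + f(W x(t - tau(t)) + J)
# All parameters below are illustrative placeholders.
n, dt, T_end = 3, 1e-3, 10.0
A = np.diag([2.0, 3.0, 2.5])                    # A = diag(a_i) > 0
W = np.array([[0.5, -0.3,  0.1],
              [0.2,  0.4, -0.6],
              [-0.1, 0.3,  0.5]])               # interconnection weights
J = np.array([0.1, -0.2, 0.05])                 # constant input vector
f = np.tanh                                     # bounded, slope-restricted activation
tau_bar = 0.8
tau = lambda t: 0.4 + 0.4 * np.sin(t) ** 2      # 0 <= tau(t) <= tau_bar

buf_len = int(tau_bar / dt) + 1                 # buffer covering [t - tau_bar, t]
hist = np.tile(np.array([0.5, -0.5, 0.2]), (buf_len, 1))  # constant history phi(t)
x = hist[-1].copy()
for k in range(int(T_end / dt)):
    t = k * dt
    d = min(int(round(tau(t) / dt)), buf_len - 1)
    x_delayed = hist[-1 - d]                    # approximates x(t - tau(t))
    x = x + dt * (-A @ x + f(W @ x_delayed + J))
    hist = np.roll(hist, -1, axis=0)
    hist[-1] = x

print("state at t =", T_end, ":", np.round(x, 4))
```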

Assumption 1

The neuron activation functions are bounded and satisfy

0 ≤ (fᵢ(α₁) − fᵢ(α₂))/(α₁ − α₂) ≤ lᵢ,  ∀α₁ ≠ α₂,  (4)

where lᵢ ≥ 0 for i = 1, 2,…, n. For simplicity, denote L = diag(l₁, l₂,…, lₙ). Under Assumption 1, there exists an equilibrium point x* of (1). Hence, by the transformation z(·) = x(·) − x*, system (1) can be transformed into

ż(t) = −Az(t) + g(Wz(t − τ(t))),  z(t) = ψ(t),  t ∈ [−τ̄, 0],  (5)

where z(t) = [z₁(t), z₂(t),…, zₙ(t)]ᵀ is the state vector; ψ(t) = ϕ(t) − x* is the initial condition; and the transformed neuron activation function g(Wz(·)) = f(Wz(·) + Wx* + J) − f(Wx* + J) satisfies

0 ≤ gᵢ(s)/s ≤ lᵢ,  gᵢ(0) = 0,  s ≠ 0.  (6)

Notice that z(t) ≡ 0 is an equilibrium point of neural network (5), corresponding to the initial condition ψ(t) ≡ 0. Based on the analysis above, the problem of analyzing the stability of system (1) at its equilibrium is converted into analyzing the stability of the zero solution of system (5). Before presenting our main results, we first introduce two lemmas, which are useful in the stability analysis of the considered neural network.
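The step from (1) to (5) is the standard equilibrium shift; a short derivation written out under the notation above (the display equations themselves are not reproduced in this record):

```latex
% Equilibrium shift: let z(t) = x(t) - x^*, where x^* satisfies
%   -A x^* + f(W x^* + J) = 0.
\begin{aligned}
\dot z(t) = \dot x(t)
 &= -A\bigl(z(t) + x^*\bigr) + f\bigl(W z(t-\tau(t)) + W x^* + J\bigr) \\
 &= -A z(t)
    + \underbrace{f\bigl(W z(t-\tau(t)) + W x^* + J\bigr)
                  - f\bigl(W x^* + J\bigr)}_{g(W z(t-\tau(t)))},
\end{aligned}
```

where the equilibrium condition cancels the constant terms, and Assumption 1 applied to the difference quotient of f gives 0 ≤ gᵢ(s)/s ≤ lᵢ with gᵢ(0) = 0, that is, condition (6).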

Lemma 2 (see [23])

Let M = Mᵀ > 0 be a constant real n × n matrix, and suppose that ẋ : [−h, 0] → ℝⁿ with h > 0 is such that the subsequent integration is well defined. Then

−h ∫ₜ₋ₕᵗ ẋᵀ(s) M ẋ(s) ds ≤ ζᵀ(t) [−M M; ∗ −M] ζ(t),

where ζ(t) = col{x(t), x(t − h)}.
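A quick numerical sanity check of Lemma 2 on an arbitrary smooth trajectory; the cubic polynomial trajectory and the random matrix M below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Numerical check of Lemma 2 (a Jensen-type integral inequality):
#   -h * int_{t-h}^{t} xdot(s)' M xdot(s) ds
#        <= -(x(t) - x(t-h))' M (x(t) - x(t-h)),
# evaluated at t = h on an arbitrary smooth polynomial trajectory.
rng = np.random.default_rng(0)
n, h = 3, 1.5
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)                     # M = M' > 0

C = rng.standard_normal((n, 4))                 # x(s) = C [1, s, s^2, s^3]'
xfun = lambda s: C @ np.array([1.0, s, s ** 2, s ** 3])
xdot = lambda s: C @ np.array([0.0, 1.0, 2 * s, 3 * s ** 2])

s = np.linspace(0.0, h, 20001)
vals = np.array([xdot(si) @ M @ xdot(si) for si in s])
integral = np.sum((vals[:-1] + vals[1:]) / 2) * (s[1] - s[0])  # trapezoid rule
lhs = -h * integral
e = xfun(h) - xfun(0.0)
rhs = -(e @ M @ e)
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f} : {lhs <= rhs + 1e-6}")
```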

Lemma 3 (see [24])

Let H₁, H₂,…, H_N : ℝᵐ ↦ ℝ be given functions that take positive values for arbitrary values of their argument in an open subset M of ℝᵐ. Then the reciprocally convex combination of the Hᵢ (i = 1, 2,…, N) over M satisfies

min_{αᵢ | αᵢ > 0, ∑ᵢ αᵢ = 1} ∑ᵢ (1/αᵢ) Hᵢ(t) = ∑ᵢ Hᵢ(t) + max_{Gᵢⱼ(t)} ∑_{i≠j} Gᵢⱼ(t)

subject to

Gᵢⱼ : ℝᵐ ↦ ℝ,  Gⱼᵢ(t) = Gᵢⱼ(t),  [Hᵢ(t) Gᵢⱼ(t); ∗ Hⱼ(t)] ≥ 0.
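For N = 2 the lemma reduces to the familiar bound (1/α)H₁ + (1/(1 − α))H₂ ≥ H₁ + H₂ + 2G whenever [H₁ G; ∗ H₂] ≥ 0. A scalar numerical check of this reduced form (all sampled values are illustrative):

```python
import numpy as np

# Scalar check of Lemma 3 with N = 2: for H1, H2 > 0 and any g with
#   [[H1, g], [g, H2]] >= 0   (i.e. g^2 <= H1*H2),
# the reciprocally convex combination satisfies, for all alpha in (0, 1):
#   (1/alpha) H1 + (1/(1-alpha)) H2 >= H1 + H2 + 2*g
rng = np.random.default_rng(1)
for _ in range(5):
    H1, H2 = rng.uniform(0.1, 5.0, size=2)
    g = rng.uniform(-1.0, 1.0) * np.sqrt(H1 * H2)   # enforce g^2 <= H1*H2
    alphas = np.linspace(0.01, 0.99, 99)
    lhs = H1 / alphas + H2 / (1.0 - alphas)
    rhs = H1 + H2 + 2.0 * g
    assert np.all(lhs >= rhs - 1e-9)
print("reciprocally convex bound verified on random samples")
```

The bound holds because the left-hand side is minimized at α = √H₁/(√H₁ + √H₂), where it equals H₁ + H₂ + 2√(H₁H₂) ≥ H₁ + H₂ + 2G.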

3. Main Results

In the sequel, following the method proposed in [13], we decompose the delay interval [0, τ̄] into m equidistant subintervals, where m is a given positive integer; that is, [0, τ̄] = ⋃ₖ₌₁ᵐ [(k − 1)δ, kδ] with δ = τ̄/m. Thus, for any t ≥ 0, there exists an integer k ∈ {1, 2,…, m} such that τ(t) ∈ [(k − 1)δ, kδ]. The Lyapunov-Krasovskii functional candidate is then chosen as

V(zₜ) = V₁(zₜ) + V₂(zₜ) + V₃(zₜ) + V₄(zₜ),  (10)

where the weighting matrices in V₁–V₄ are to be determined, zₜ denotes the state segment z(t + s), s ∈ [−τ̄, 0], and Wᵢ denotes the ith row of the matrix W.
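The bookkeeping behind the decomposition is simple; a short sketch (the values of m and τ̄ are illustrative):

```python
import math

# Delay decomposition: [0, tau_bar] is split into m equidistant subintervals
# of width delta = tau_bar / m, and each tau(t) falls in [(k-1)*delta, k*delta].
tau_bar, m = 0.8, 4
delta = tau_bar / m

def subinterval_index(tau_t: float) -> int:
    """Return k in {1, ..., m} with tau_t in [(k-1)*delta, k*delta]."""
    assert 0.0 <= tau_t <= tau_bar
    return min(max(math.ceil(tau_t / delta), 1), m)

for tau_t in (0.0, 0.15, 0.2, 0.55, 0.8):
    k = subinterval_index(tau_t)
    print(f"tau(t) = {tau_t:.2f} -> k = {k}, "
          f"interval [{(k - 1) * delta:.2f}, {k * delta:.2f}]")
```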

Remark 4

Notice that a novel term V₄(zₜ), which is continuous at the boundaries of the delay subintervals, is included in the Lyapunov-Krasovskii functional (10); it plays an important role in reducing the conservativeness of the derived result. Next, we develop new delay-dependent stability criteria for the delayed neural network described by (5) and (6) with τ(t) satisfying (2) and (3). By employing the Lyapunov-Krasovskii functional (10), the following theorem is obtained.

Theorem 5

For a given positive integer m and scalars τ̄ > 0 and μ, the origin of system (5) with the activation function satisfying (6) and a time-varying delay satisfying conditions (2) and (3) is globally asymptotically stable if there exist symmetric matrices P > 0, Qⱼ > 0, and Zⱼ > 0, diagonal matrices D = diag{d₁, d₂,…, dₙ} ≥ 0, T₁ = diag{t₁₁, t₁₂,…, t₁ₙ} ≥ 0, and T₂ = diag{t₂₁, t₂₂,…, t₂ₙ} ≥ 0, and matrices Gⱼ, j = 1, 2,…, m, with appropriate dimensions, such that the LMIs (14) and (15) hold for k = 1, 2,…, m.
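Conditions of this kind are checked numerically as semidefinite feasibility problems. Since the explicit blocks of (14)-(15) are not reproduced in this record, the CVXPY sketch below checks a much simpler classical criterion of the same LMI type for ż(t) = −Az(t) + g(Wz(t − τ(t))) with τ̇(t) ≤ μ < 1, built from the functional V = zᵀPz + ∫ₜ₋τ₍ₜ₎ᵗ zᵀ(s)Qz(s)ds and the sector condition (6); it is a sketch under placeholder system matrices, not the paper's condition.

```python
import cvxpy as cp
import numpy as np

# Simple delay-derivative-dependent stability LMI (not the paper's (14)-(15)):
# V' <= eta' Phi eta with eta = [z; z(t - tau(t)); g], using the sector bound
#   2 g' T (L W z_d - g) >= 0  for diagonal T >= 0.
# All system matrices below are illustrative placeholders.
n = 3
A = np.diag([2.0, 3.0, 2.5])
W = np.array([[0.5, -0.3,  0.1],
              [0.2,  0.4, -0.6],
              [-0.1, 0.3,  0.5]])
L = np.eye(n)                                   # slope bounds l_i = 1
mu = 0.5

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
t = cp.Variable(n, nonneg=True)
T = cp.diag(t)                                  # diagonal multiplier

Z3 = np.zeros((n, n))
Phi = cp.bmat([
    [-P @ A - A.T @ P + Q, Z3,            P          ],
    [Z3,                   -(1 - mu) * Q, W.T @ L @ T],
    [P,                    T @ L @ W,     -2 * T     ],
])
Phi = (Phi + Phi.T) / 2                         # enforce symmetry structurally
eps = 1e-6
constraints = [P >> eps * np.eye(n), Q >> eps * np.eye(n),
               Phi << -eps * np.eye(3 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("stability LMI feasible:", prob.status == cp.OPTIMAL)
```

A delay-dependent condition such as (14) would be handled the same way, with a bisection over τ̄ that re-solves the feasibility problem at each candidate bound.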

Proof

From Assumption 1, it can be deduced that, for any diagonal matrices Tᵢ ≥ 0, i = 1, 2, the sector conditions in (6) yield the quadratic inequality (18). Now, calculating the derivative of V(zₜ) along the solutions of neural network (5) yields the bound (19). By Lemmas 2 and 3, the integral terms arising in (19) can be bounded above by quadratic forms in ξ(t), where ξᵀ(t) = [zᵀ(t − τ(t))  zᵀ(t − (k − 1)δ)  zᵀ(t − kδ)]. Next, we introduce an augmented vector ζ(t) collecting the state, its delayed values, and the activation terms, and rewrite system (5) as (21). Adding the right-hand side of (18) to (19) and applying (21) yield

V̇(zₜ) ≤ ζᵀ(t)(Ωₖ + δ²Γᵀ(∑ⱼ Zⱼ)Γ)ζ(t).

For all k = 1,…, m, if Ωₖ + δ²Γᵀ(∑ⱼ Zⱼ)Γ < 0, which is equivalent to the LMIs (14) in the sense of the Schur complement [25], then V̇(zₜ) < 0 for any ζ(t) ≠ 0. Note that V(zₜ) is continuous at τ(t) = kδ, so system (5) is globally asymptotically stable. This completes the proof.
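The Schur-complement step at the end of the proof is the standard one; written with a single Z > 0 standing in for the sum ∑ⱼ Zⱼ, it reads:

```latex
% Schur complement [25]: for Z > 0,
\Omega_k + \delta^{2}\,\Gamma^{T} Z\, \Gamma < 0
\quad\Longleftrightarrow\quad
\begin{bmatrix} \Omega_k & \delta\,\Gamma^{T} Z \\ \delta\, Z\,\Gamma & -Z \end{bmatrix} < 0,
```

since Z > 0 makes the (2,2) block negative definite and Ωₖ − (δΓᵀZ)(−Z)⁻¹(δZΓ) = Ωₖ + δ²ΓᵀZΓ.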

Remark 6

In the proof of Theorem 5, τ(t) − (k − 1)δ and kδ − τ(t) are not simply enlarged to δ, as is done in [16]. By employing the reciprocally convex approach to exploit this information, Theorem 5 may be less conservative, which will be verified by the simulation results in the next section.

Remark 7

In previous works such as [16, 19], considerable attention has been paid to the case where the derivative of the time-varying delay satisfies (3). However, when only a larger derivative bound is available, the treatment in [16, 19] enlarges τ̇(t) in (27) to that bound, which inevitably leads to conservativeness. By contrast, this case can be taken fully into account by a suitable choice of μ in Theorem 5. For the case where the time-varying delay τ(t) is nondifferentiable or its derivative is unknown, setting Qⱼ = 0, j = 1, 2,…, m, in Theorem 5 readily yields the following delay-dependent, rate-independent criterion.

Corollary 8

For a given positive integer m and a scalar τ̄ > 0, the origin of system (5) with the activation function satisfying (6) and a time-varying delay satisfying condition (2) is globally asymptotically stable if there exist matrices with appropriate dimensions such that, for k = 1, 2,…, m, the LMIs in (15) and (29) hold, where Γ₂ and the remaining notations are defined as in Theorem 5.

4. Numerical Examples

In this section, we will provide a numerical example to show the effectiveness of the presented criteria.

Example 1

Consider neural network (1) with the same parameters as in [16-22], whose activation functions satisfy (6). This example has been discussed in [16-22]. By using Theorem 5 and Corollary 8 with m = 2, for various μ, the upper bounds τ̄ that guarantee the global asymptotic stability of neural network (1) are computed and listed in Table 1. The upper bounds obtained by our method improve on those reported in [16-22] for almost all values of μ, which shows that the conditions proposed in this paper improve over the existing ones.
Table 1

Allowable upper bounds of τ̄ for different μ.

μ                  0        0.1      0.5      0.9      Any μ
[16]               1.3323   0.8245   0.3733   0.2343   0.2313
[19]               1.3325   0.8404   0.4265   0.3217   0.3211
[20]               1.3324   0.8402   0.4266   0.3225   0.3218
[17]               1.3323   0.8402   0.4264   0.3214   0.3209
[18] (N = 1)       1.5157   0.9279   0.4267   0.3212   –
[18] (N = 2)       1.5330   0.9331   0.4268   0.3215   –
[21]               –        0.8411   0.4267   0.3227   0.3215
[22]               1.5575   0.9430   0.4417   0.3632   0.3632
Proposed (m = 2)   1.7685   1.0431   0.4382   0.3668   0.3644

5. Conclusions

This paper has studied the stability of SRNNs with a time-varying delay by constructing a complete delay-decomposing Lyapunov-Krasovskii functional. Some improved delay-dependent stability conditions have been derived by utilizing a reciprocally convex technique to exploit the relationship between the time-varying delay and its varying interval; these conditions are formulated as linear matrix inequalities (LMIs). Finally, a numerical example has been provided to show the effectiveness of the proposed methods.
References

1.  A comparative study of two modeling approaches in neural networks.

Authors:  Zong-Ben Xu; Hong Qiao; Jigen Peng; Bo Zhang
Journal:  Neural Netw       Date:  2004-01

2.  Delay-slope-dependent stability results of recurrent neural networks.

Authors:  Tao Li; Wei Xing Zheng; Chong Lin
Journal:  IEEE Trans Neural Netw       Date:  2011-10-06

3.  Global exponential stability of generalized recurrent neural networks with discrete and distributed delays.

Authors:  Yurong Liu; Zidong Wang; Xiaohui Liu
Journal:  Neural Netw       Date:  2005-07-20

4.  Diagonal recurrent neural networks for dynamic systems control.

Authors:  C C Ku; K Y Lee
Journal:  IEEE Trans Neural Netw       Date:  1995

5.  New Lyapunov-Krasovskii functionals for global asymptotic stability of delayed neural networks.

Authors:  Xian-Ming Zhang; Qing-Long Han
Journal:  IEEE Trans Neural Netw       Date:  2009-02-13

6.  Delay-dependent stability for recurrent neural networks with time-varying delays.

Authors:  Hanyong Shao
Journal:  IEEE Trans Neural Netw       Date:  2008-09

7.  Complete delay-decomposing approach to asymptotic stability for neural networks with time-varying delays.

Authors:  Hong-Bing Zeng; Yong He; Min Wu; Chang-Fan Zhang
Journal:  IEEE Trans Neural Netw       Date:  2011-03-17

8.  A unified approach to the stability of generalized static neural networks with linear fractional uncertainties and delays.

Authors:  Xianwei Li; Huijun Gao; Xinghuo Yu
Journal:  IEEE Trans Syst Man Cybern B Cybern       Date:  2011-10

9.  A new method for stability analysis of recurrent neural networks with interval time-varying delay.

Authors:  Zhiqiang Zuo; Cuili Yang; Yijing Wang
Journal:  IEEE Trans Neural Netw       Date:  2009-12-18
