Improved results on passivity analysis of neutral-type neural networks with mixed time-varying delays

Academic year: 2022

THONGCHAI BOTMART
Khon Kaen University, Department of Mathematics, Khon Kaen 40000, THAILAND
thongbo@kku.ac.th

NARONGSAK YOTHA
Rajamangala University of Technology Isan, Department of Applied Mathematics and Statistics, Nakhon Ratchasima 30000, THAILAND
narongsak.yo@rmuti.ac.th

KANIT MUKDASAI (corresponding author)
Khon Kaen University, Department of Mathematics, Khon Kaen 40000, THAILAND
kanit@kku.ac.th

WAJAREE WEERA
University of Pha Yao, Department of Mathematics, Pha Yao 56000, THAILAND
wajaree@up.ac.th

Abstract: This paper considers the problem of passivity analysis for neutral-type neural networks with mixed time-varying delays. By constructing an augmented Lyapunov-Krasovskii functional and using a double integral inequality approach to estimate the derivative of the Lyapunov-Krasovskii functional, sufficient conditions are established to ensure the passivity of the considered neutral-type neural networks, in which some useful information on the neuron activation functions, ignored in the existing literature, is taken into account. Finally, a numerical example is given to demonstrate the effectiveness of the proposed method.

Key–Words: Passivity, Neutral-type neural networks, Time delay, Integral inequality

1 Introduction

In the past few decades, delayed neural networks (NNs) have received considerable attention due to their applications in many areas such as signal processing, pattern recognition, associative memories, fixed-point computations, parallel computation, control theory and optimization solvers [1, 2, 3]. The state estimation problems for NNs with discrete interval and distributed time-varying delays have been extensively studied in [4, 5, 6]. On the other hand, time delays in the system state commonly arise in many settings, such as automatic control, chemical reactors, distributed networks, and heat exchangers. Systems that contain information on past state derivatives are called neutral-type neural networks (NTNNs). The existing works on the state estimation of NTNNs with mixed delays are only [7, 8]

at present. In [9], the authors considered the problem of global passivity analysis of interval neural networks with discrete and distributed delays of neutral type. Consequently, the passivity analysis of NTNNs has received considerable attention, and many works have been reported in recent years.

The problem of passivity performance analysis has also been extensively applied in many areas such as signal processing, sliding mode control, and networked control [10, 11, 12]. The main idea of passivity theory is that the passive properties of a system can keep the system internally stable. In [13, 14, 15, 16, 17], the authors investigated the passivity of neural networks with time-varying delay and gave some criteria for checking the passivity of such networks. Passivity analysis for neural networks of neutral type with Markovian jumping parameters and time delay in the leakage term has been presented in [18]. Robust exponential passive filtering for uncertain neutral-type neural networks with time-varying mixed delays via the Wirtinger-based integral inequality [20] has been presented in [19]. Recently, a new double integral inequality for time-delay systems was proposed in [21, 22], which is less conservative.

These observations motivate our research.

Motivated by the above discussion, this paper investigates the passivity analysis for NTNNs with discrete and continuous distributed time-varying delays. Based on the constructed Lyapunov-Krasovskii functional, the free-weighting matrix approach, and a double integral inequality for estimating the derivative of the Lyapunov-Krasovskii functional, delay-dependent


passivity conditions are derived in terms of LMIs, which can be easily checked with the MATLAB LMI Control Toolbox. A numerical example is provided to demonstrate the feasibility and effectiveness of the proposed criteria.

Notation: $\mathbb{R}^n$ is the n-dimensional Euclidean space; $\mathbb{R}^{m \times n}$ denotes the set of $m \times n$ real matrices; $I_n$ represents the $n$-dimensional identity matrix; $\lambda(A)$ denotes the set of all eigenvalues of $A$; $\lambda_{\max}(A) = \max\{\mathrm{Re}\,\lambda : \lambda \in \lambda(A)\}$; $C([0,t], \mathbb{R}^n)$ denotes the set of all $\mathbb{R}^n$-valued continuous functions on $[0,t]$; $L_2([0,t], \mathbb{R}^m)$ denotes the set of all $\mathbb{R}^m$-valued square integrable functions on $[0,t]$. The notation $X \ge 0$ (respectively, $X > 0$) means that $X$ is positive semidefinite (respectively, positive definite); $\mathrm{diag}(\cdots)$ denotes a block diagonal matrix. Matrix dimensions, if not explicitly stated, are assumed to be compatible for algebraic operations.

2 Problem formulation and preliminaries

Consider the following NTNNs with time-varying discrete and distributed delays:

$$\begin{aligned}
\dot{x}(t) &= -Ax(t) + W g(x(t)) + W_1 g(x(t-\tau(t))) + W_2 \int_{t-k(t)}^{t} g(x(s))\,ds + W_3 \dot{x}(t-h(t)) + u(t),\\
y(t) &= g(x(t)),\\
x(t) &= \phi(t), \quad t \in [-\tau_{\max}, 0], \quad \tau_{\max} = \max\{\tau_2, k_2, h_2\},
\end{aligned} \tag{1}$$

where $x(t) = [x_1(t), x_2(t), \dots, x_n(t)]^T \in \mathbb{R}^n$ is the state of the neural network, $A = \mathrm{diag}(a_1, a_2, \dots, a_n) > 0$ represents the self-feedback term, $W$, $W_1$, $W_2$ and $W_3$ represent the connection weight matrices, $g(\cdot) = (g_1(\cdot), g_2(\cdot), \dots, g_n(\cdot))^T$ represents the activation functions, $u(t)$ and $y(t)$ represent the input and output vectors, respectively, and $\phi(t)$ is an initial condition. The delays $\tau(t)$, $k(t)$ and $h(t)$ are time-varying and satisfy the following conditions:

$$0 < \tau_1 \le \tau(t) \le \tau_2, \quad \dot{\tau}(t) \le \tau_3, \quad 0 \le h(t) \le h_2, \quad \dot{h}(t) \le h_3, \quad 0 \le k(t) \le k_2, \quad \forall t \ge 0, \tag{2}$$

where $\tau_1$, $\tau_2$, $\tau_3$, $h_2$, $h_3$ and $k_2$ are known scalars.

The neuron activation functions $g_k(\cdot)$, $k = 1, 2, \dots, n$, satisfy $g_k(0) = 0$ and, for all $s_1, s_2 \in \mathbb{R}$ with $s_1 \ne s_2$,

$$l_k^- \le \frac{g_k(s_1) - g_k(s_2)}{s_1 - s_2} \le l_k^+, \tag{3}$$

where $l_k^-$ and $l_k^+$ are known real scalars.
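As a quick numerical illustration (not part of the original analysis), the sketch below checks the sector condition (3) for the common choice $g(s) = \tanh(s)$, which is a hypothetical example activation satisfying (3) with $l_k^- = 0$ and $l_k^+ = 1$:

```python
import numpy as np

# Sample random pairs (s1, s2) and verify that every difference
# quotient (g(s1) - g(s2)) / (s1 - s2) lies in the sector [0, 1].
rng = np.random.default_rng(0)
s1 = rng.uniform(-5, 5, 10_000)
s2 = rng.uniform(-5, 5, 10_000)
mask = np.abs(s1 - s2) > 1e-3          # avoid ill-conditioned quotients
q = (np.tanh(s1[mask]) - np.tanh(s2[mask])) / (s1[mask] - s2[mask])
# Sector bounds l^- = 0, l^+ = 1 (small tolerance for rounding):
assert q.min() >= -1e-6 and q.max() <= 1.0 + 1e-6
```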

Definition 1 [13]: The system (1) is said to be passive if there exists a scalar $\gamma$ such that, for all $t_f \ge 0$,

$$2\int_0^{t_f} y^T(s) u(s)\,ds \ge -\gamma \int_0^{t_f} u^T(s) u(s)\,ds,$$

for all solutions of (1) with $x(0) = 0$.
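The inequality in Definition 1 can be illustrated numerically. The sketch below uses a simple hypothetical scalar system $\dot{x} = -x + u$, $y = x$ (chosen for illustration only; it is not the NTNN (1), but the storage function $V = x^2/2$ makes it passive, so the inequality holds for any $\gamma \ge 0$):

```python
import numpy as np

dt, tf = 1e-3, 10.0
t = np.arange(0.0, tf, dt)
u = np.sin(t)                  # an arbitrary square-integrable input
x = 0.0                        # zero initial condition, as Definition 1 requires
lhs = 0.0                      # accumulates 2 * int_0^tf y(s) u(s) ds
for uk in u:
    y = x
    lhs += 2.0 * y * uk * dt
    x += dt * (-x + uk)        # forward Euler step of xdot = -x + u
gamma = 0.0
rhs = -gamma * float(np.sum(u**2)) * dt
assert lhs >= rhs              # passivity inequality of Definition 1
```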

Lemma 2 [21]: For a positive definite matrix $S > 0$ and any continuously differentiable function $x : [a, b] \to \mathbb{R}^n$, the following inequality holds:

$$\int_a^b \dot{x}^T(s) S \dot{x}(s)\,ds \ge \frac{1}{b-a}\,\Pi_1^T S \Pi_1 + \frac{3}{b-a}\,\Pi_2^T S \Pi_2 + \frac{5}{b-a}\,\Pi_3^T S \Pi_3,$$

where

$$\begin{aligned}
\Pi_1 &= x(b) - x(a),\\
\Pi_2 &= x(b) + x(a) - \frac{2}{b-a}\int_a^b x(s)\,ds,\\
\Pi_3 &= x(b) - x(a) + \frac{6}{b-a}\int_a^b x(s)\,ds - \frac{12}{(b-a)^2}\int_a^b\!\int_\theta^b x(s)\,ds\,d\theta.
\end{aligned}$$
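Lemma 2 can be sanity-checked by quadrature. The sketch below evaluates both sides in the scalar case $n = 1$, $S = 1$ for the sample trajectory $x(s) = \sin(3s)$ on $[a, b] = [0, 2]$ (an arbitrary choice), using the identity $\int_a^b \int_\theta^b f(s)\,ds\,d\theta = \int_a^b (s-a) f(s)\,ds$:

```python
import numpy as np

a, b = 0.0, 2.0
s = np.linspace(a, b, 100_001)
ds = s[1] - s[0]
x = np.sin(3 * s)
xd = 3 * np.cos(3 * s)

def integral(f):
    # trapezoidal rule on the uniform grid
    return float((f.sum() - 0.5 * (f[0] + f[-1])) * ds)

lhs = integral(xd**2)                    # int_a^b xdot(s)^2 ds
ix = integral(x)                         # int_a^b x(s) ds
ixx = integral((s - a) * x)              # int_a^b int_theta^b x(s) ds dtheta

p1 = x[-1] - x[0]
p2 = x[-1] + x[0] - 2.0 / (b - a) * ix
p3 = x[-1] - x[0] + 6.0 / (b - a) * ix - 12.0 / (b - a)**2 * ixx
rhs = (p1**2 + 3 * p2**2 + 5 * p3**2) / (b - a)
assert lhs >= rhs                        # the inequality of Lemma 2
```

For this trajectory the left side is about 8.6 and the right side about 8.0, so the bound is tight but strict, as expected for a non-polynomial trajectory.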

Lemma 3 [22]: For a positive definite matrix $S > 0$ and any continuously differentiable function $x : [a, b] \to \mathbb{R}^n$, the following inequality holds:

$$\int_a^b\!\int_\theta^b \dot{x}^T(s) S \dot{x}(s)\,ds\,d\theta \ge 2\,\Pi_4^T S \Pi_4 + 4\,\Pi_5^T S \Pi_5 + 6\,\Pi_6^T S \Pi_6,$$

where

$$\begin{aligned}
\Pi_4 &= x(b) - \frac{1}{b-a}\int_a^b x(s)\,ds,\\
\Pi_5 &= x(b) + \frac{2}{b-a}\int_a^b x(s)\,ds - \frac{6}{(b-a)^2}\int_a^b\!\int_\theta^b x(s)\,ds\,d\theta,\\
\Pi_6 &= x(b) - \frac{3}{b-a}\int_a^b x(s)\,ds + \frac{24}{(b-a)^2}\int_a^b\!\int_\theta^b x(s)\,ds\,d\theta - \frac{60}{(b-a)^3}\int_a^b\!\int_\theta^b\!\int_s^b x(\lambda)\,d\lambda\,ds\,d\theta.
\end{aligned}$$
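Lemma 3 admits the same kind of quadrature check, with the additional identities $\int_a^b\int_\theta^b f(s)\,ds\,d\theta = \int_a^b (s-a)f(s)\,ds$ and $\int_a^b\int_\theta^b\int_s^b f(\lambda)\,d\lambda\,ds\,d\theta = \int_a^b \tfrac{(\lambda-a)^2}{2} f(\lambda)\,d\lambda$. The same sample trajectory $x(s) = \sin(3s)$ on $[0, 2]$ is reused:

```python
import numpy as np

a, b = 0.0, 2.0
s = np.linspace(a, b, 100_001)
ds = s[1] - s[0]
x = np.sin(3 * s)
xd = 3 * np.cos(3 * s)

def integral(f):
    # trapezoidal rule on the uniform grid
    return float((f.sum() - 0.5 * (f[0] + f[-1])) * ds)

lhs = integral((s - a) * xd**2)          # int int xdot(s)^2 ds dtheta
ix = integral(x)                         # single integral of x
ixx = integral((s - a) * x)              # double integral of x
ixxx = integral(0.5 * (s - a)**2 * x)    # triple integral of x

p4 = x[-1] - ix / (b - a)
p5 = x[-1] + 2 * ix / (b - a) - 6 * ixx / (b - a)**2
p6 = x[-1] - 3 * ix / (b - a) + 24 * ixx / (b - a)**2 - 60 * ixxx / (b - a)**3
rhs = 2 * p4**2 + 4 * p5**2 + 6 * p6**2
assert lhs >= rhs                        # the inequality of Lemma 3
```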

3 Main results

For presentation convenience, in the following we denote

$$L^+ = \mathrm{diag}(l_1^+, l_2^+, \dots, l_n^+), \quad L^- = \mathrm{diag}(l_1^-, l_2^-, \dots, l_n^-),$$

$$\begin{aligned}
\zeta(t) = \big[\, & x^T(t),\ x^T(t-\tau(t)),\ x^T(t-\tau_1),\ x^T(t-\tau_2),\ g^T(x(t)),\ g^T(x(t-\tau(t))),\\
& g^T(x(t-\tau_1)),\ g^T(x(t-\tau_2)),\ \textstyle\int_{t-\tau_1}^{t} x^T(s)\,ds,\ \int_{t-\tau_2}^{t} x^T(s)\,ds,\\
& \textstyle\int_{t-\tau_1}^{t}\!\int_{\theta}^{t} x^T(s)\,ds\,d\theta,\ \int_{t-\tau_2}^{t}\!\int_{\theta}^{t} x^T(s)\,ds\,d\theta,\\
& \textstyle\int_{t-\tau_1}^{t}\!\int_{\theta}^{t}\!\int_{s}^{t} x^T(\lambda)\,d\lambda\,ds\,d\theta,\ \int_{t-\tau_2}^{t}\!\int_{\theta}^{t}\!\int_{s}^{t} x^T(\lambda)\,d\lambda\,ds\,d\theta,\\
& \textstyle\int_{t-k(t)}^{t} g^T(x(s))\,ds,\ \dot{x}^T(t-h(t)),\ u^T(t) \,\big]^T,
\end{aligned}$$

and $e_i \in \mathbb{R}^{n \times 17n}$ is defined as $e_i = [0_{n \times (i-1)n},\ I_n,\ 0_{n \times (17-i)n}]$ for $i = 1, 2, \dots, 17$.
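The selector matrices $e_i$ are easy to realize numerically; a minimal sketch for a toy dimension $n = 2$ confirms that $e_i \zeta$ extracts the $i$-th $n$-dimensional block of the augmented vector $\zeta(t)$:

```python
import numpy as np

n = 2

def e(i):
    # e_i = [0_{n x (i-1)n}, I_n, 0_{n x (17-i)n}] in R^{n x 17n}
    out = np.zeros((n, 17 * n))
    out[:, (i - 1) * n : i * n] = np.eye(n)
    return out

zeta = np.arange(17 * n, dtype=float)    # stand-in for zeta(t)
assert np.array_equal(e(1) @ zeta, zeta[0:n])          # x(t) block
assert np.array_equal(e(5) @ zeta, zeta[4*n:5*n])      # g(x(t)) block
assert np.array_equal(e(17) @ zeta, zeta[16*n:17*n])   # u(t) block
```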

Theorem 4: For given scalars $\tau_1$, $\tau_2$, $\tau_3$, $h_2$, $h_3$ and $k_2$, the system (1) with (3) is passive for any delays satisfying (2) if there exist real positive definite matrices $P \in \mathbb{R}^{7n \times 7n}$, $Q_i, S_i \in \mathbb{R}^{n \times n}$ $(i = 1, 2, 3)$, real positive diagonal matrices $U_1$, $U_2$, $T_s$, $T_{ab}$ $(s = 1, 2, 3, 4;\ a = 1, 2, 3;\ b = 2, 3, 4;\ a < b)$ with appropriate dimensions, and a scalar $\gamma > 0$ such that the following LMI holds:

$$\Omega = \Omega_1 + \Omega_2 + \Omega_3 + \Omega_4 + \Omega_5 + \Omega_6 + \Omega_7 < 0, \tag{4}$$

where

$$\begin{aligned}
\Omega_1 &= \Pi_1^T P \Pi_2 + \Pi_2^T P \Pi_1 + (\Pi_3 + \Pi_4) + (\Pi_3 + \Pi_4)^T,\\
\Omega_2 &= 3e_1^T Q_1 e_1 - e_3^T Q_1 e_3 - e_4^T Q_1 e_4 + 2e_5^T Q_2 e_5 - e_7^T Q_2 e_7 - e_8^T Q_2 e_8\\
&\quad + C_0^T Q_3 C_0 - (1-\tau_3)\, e_2^T Q_1 e_2 - (1-h_3)\, e_{16}^T Q_3 e_{16},\\
\Omega_3 &= k_2^2\, e_5^T S_1 e_5 - e_{15}^T S_1 e_{15},\\
\Omega_4 &= C_0^T(\tau_1^2 S_2 + \tau_2^2 S_2) C_0 - \Pi_5^T S_2 \Pi_5 - 3\Pi_6^T S_2 \Pi_6 - 5\Pi_7^T S_2 \Pi_7\\
&\quad - \Pi_8^T S_2 \Pi_8 - 3\Pi_9^T S_2 \Pi_9 - 5\Pi_{10}^T S_2 \Pi_{10},\\
\Omega_5 &= C_0^T(0.5\tau_1^2 S_3 + 0.5\tau_2^2 S_3) C_0 - 2\Pi_{11}^T S_3 \Pi_{11} - 4\Pi_{12}^T S_3 \Pi_{12} - 6\Pi_{13}^T S_3 \Pi_{13}\\
&\quad - 2\Pi_{14}^T S_3 \Pi_{14} - 4\Pi_{15}^T S_3 \Pi_{15} - 6\Pi_{16}^T S_3 \Pi_{16},\\
\Omega_6 &= \sum_{s=1}^{4} \big(\Pi_{17}^T T_s \Pi_{18} + \Pi_{18}^T T_s \Pi_{17}\big) + \sum_{a=1}^{3} \sum_{b=2,\, b>a}^{4} \big(\Pi_{19}^T T_{ab} \Pi_{20} + \Pi_{20}^T T_{ab} \Pi_{19}\big),\\
\Omega_7 &= -\gamma\, e_{17}^T e_{17} - e_5^T e_{17} - e_{17}^T e_5,
\end{aligned} \tag{5}$$

with

$$\begin{aligned}
\Pi_1 &= [e_1^T,\ e_9^T,\ e_{10}^T,\ e_{11}^T,\ e_{12}^T,\ e_{13}^T,\ e_{14}^T]^T,\\
\Pi_2 &= [C_0^T,\ e_1^T - e_3^T,\ e_1^T - e_4^T,\ \tau_1 e_1^T - e_9^T,\ \tau_2 e_1^T - e_{10}^T,\ 0.5\tau_1^2 e_1^T - e_{11}^T,\ 0.5\tau_2^2 e_1^T - e_{12}^T]^T,\\
\Pi_3 &= e_5^T (U_1 - U_2) C_0, \quad \Pi_4 = e_1^T (L^+ U_2 - L^- U_1) C_0,\\
\Pi_5 &= e_1 - e_3, \quad \Pi_6 = e_1 + e_3 - \tfrac{2}{\tau_1} e_9, \quad \Pi_7 = e_1 - e_3 + \tfrac{6}{\tau_1} e_9 - \tfrac{12}{\tau_1^2} e_{11},\\
\Pi_8 &= e_1 - e_4, \quad \Pi_9 = e_1 + e_4 - \tfrac{2}{\tau_2} e_{10}, \quad \Pi_{10} = e_1 - e_4 + \tfrac{6}{\tau_2} e_{10} - \tfrac{12}{\tau_2^2} e_{12},\\
\Pi_{11} &= e_1 - \tfrac{1}{\tau_1} e_9, \quad \Pi_{12} = e_1 + \tfrac{2}{\tau_1} e_9 - \tfrac{6}{\tau_1^2} e_{11}, \quad \Pi_{13} = e_1 - \tfrac{3}{\tau_1} e_9 + \tfrac{24}{\tau_1^2} e_{11} - \tfrac{60}{\tau_1^3} e_{13},\\
\Pi_{14} &= e_1 - \tfrac{1}{\tau_2} e_{10}, \quad \Pi_{15} = e_1 + \tfrac{2}{\tau_2} e_{10} - \tfrac{6}{\tau_2^2} e_{12}, \quad \Pi_{16} = e_1 - \tfrac{3}{\tau_2} e_{10} + \tfrac{24}{\tau_2^2} e_{12} - \tfrac{60}{\tau_2^3} e_{14},\\
\Pi_{17} &= e_{s+4} - L^- e_s, \quad \Pi_{18} = L^+ e_s - e_{s+4},\\
\Pi_{19} &= (e_{a+4} - e_{b+4}) - L^-(e_a - e_b), \quad \Pi_{20} = L^+(e_a - e_b) - (e_{a+4} - e_{b+4}),\\
C_0 &= -A e_1 + W e_5 + W_1 e_6 + W_2 e_{15} + W_3 e_{16} + e_{17}.
\end{aligned}$$

Proof: Consider a Lyapunov-Krasovskii functional candidate:

$$V(x(t)) = \sum_{i=1}^{5} V_i(x(t)), \tag{6}$$

where

$$\begin{aligned}
V_1(x(t)) &= \eta^T(t) P \eta(t) + 2\sum_{k=1}^{n} \rho_k \int_0^{x_k(t)} \big[g_k(s) - l_k^- s\big]\,ds + 2\sum_{k=1}^{n} \sigma_k \int_0^{x_k(t)} \big[l_k^+ s - g_k(s)\big]\,ds,\\
V_2(x(t)) &= \sum_{i=1}^{2} \int_{t-\tau_i}^{t} \big[x^T(s) Q_1 x(s) + g^T(x(s)) Q_2 g(x(s))\big]\,ds + \int_{t-\tau(t)}^{t} x^T(s) Q_1 x(s)\,ds + \int_{t-h(t)}^{t} \dot{x}^T(s) Q_3 \dot{x}(s)\,ds,\\
V_3(x(t)) &= k_2 \int_{t-k_2}^{t}\!\int_{\theta}^{t} g^T(x(s)) S_1 g(x(s))\,ds\,d\theta,\\
V_4(x(t)) &= \sum_{i=1}^{2} \tau_i \int_{t-\tau_i}^{t}\!\int_{\theta}^{t} \dot{x}^T(s) S_2 \dot{x}(s)\,ds\,d\theta,\\
V_5(x(t)) &= \sum_{i=1}^{2} \int_{t-\tau_i}^{t}\!\int_{\theta}^{t}\!\int_{s}^{t} \dot{x}^T(\lambda) S_3 \dot{x}(\lambda)\,d\lambda\,ds\,d\theta,
\end{aligned}$$

where

$$\eta(t) = \Big[x^T(t),\ \textstyle\int_{t-\tau_1}^{t} x^T(s)\,ds,\ \int_{t-\tau_2}^{t} x^T(s)\,ds,\ \int_{t-\tau_1}^{t}\!\int_{\theta}^{t} x^T(s)\,ds\,d\theta,\ \int_{t-\tau_2}^{t}\!\int_{\theta}^{t} x^T(s)\,ds\,d\theta,\ \int_{t-\tau_1}^{t}\!\int_{\theta}^{t}\!\int_{s}^{t} x^T(\lambda)\,d\lambda\,ds\,d\theta,\ \int_{t-\tau_2}^{t}\!\int_{\theta}^{t}\!\int_{s}^{t} x^T(\lambda)\,d\lambda\,ds\,d\theta\Big]^T,$$

and $U_1 = \mathrm{diag}\{\rho_1, \rho_2, \dots, \rho_n\} \ge 0$ and $U_2 = \mathrm{diag}\{\sigma_1, \sigma_2, \dots, \sigma_n\} \ge 0$ are to be determined. The time derivative of $V(x(t))$ can be computed as follows:

$$\begin{aligned}
\dot{V}_1(x(t)) &= 2\dot{\eta}^T(t) P \eta(t) + 2\sum_{k=1}^{n} \Big\{ \rho_k \dot{x}_k(t)\big[g_k(x_k(t)) - l_k^- x_k(t)\big] + \sigma_k \dot{x}_k(t)\big[l_k^+ x_k(t) - g_k(x_k(t))\big] \Big\}\\
&\le \zeta^T(t)\, \Omega_1\, \zeta(t),\\[2pt]
\dot{V}_2(x(t)) &\le 3x^T(t) Q_1 x(t) + \dot{x}^T(t) Q_3 \dot{x}(t) - x^T(t-\tau_1) Q_1 x(t-\tau_1) - x^T(t-\tau_2) Q_1 x(t-\tau_2)\\
&\quad + 2g^T(x(t)) Q_2 g(x(t)) - g^T(x(t-\tau_1)) Q_2 g(x(t-\tau_1)) - g^T(x(t-\tau_2)) Q_2 g(x(t-\tau_2))\\
&\quad - (1-\tau_3)\, x^T(t-\tau(t)) Q_1 x(t-\tau(t)) - (1-h_3)\, \dot{x}^T(t-h(t)) Q_3 \dot{x}(t-h(t))\\
&= \zeta^T(t)\, \Omega_2\, \zeta(t),\\[2pt]
\dot{V}_3(x(t)) &= k_2^2\, g^T(x(t)) S_1 g(x(t)) - k_2 \int_{t-k_2}^{t} g^T(x(s)) S_1 g(x(s))\,ds\\
&\le k_2^2\, g^T(x(t)) S_1 g(x(t)) - k_2 \int_{t-k(t)}^{t} g^T(x(s)) S_1 g(x(s))\,ds\\
&\le k_2^2\, g^T(x(t)) S_1 g(x(t)) - \Big(\int_{t-k(t)}^{t} g^T(x(s))\,ds\Big) S_1 \Big(\int_{t-k(t)}^{t} g(x(s))\,ds\Big)\\
&= \zeta^T(t)\, \Omega_3\, \zeta(t),\\[2pt]
\dot{V}_4(x(t)) &= \dot{x}^T(t)(\tau_1^2 S_2 + \tau_2^2 S_2)\dot{x}(t) - \tau_1 \int_{t-\tau_1}^{t} \dot{x}^T(s) S_2 \dot{x}(s)\,ds - \tau_2 \int_{t-\tau_2}^{t} \dot{x}^T(s) S_2 \dot{x}(s)\,ds\\
&\le \zeta^T(t)\, \Omega_4\, \zeta(t),\\[2pt]
\dot{V}_5(x(t)) &= 0.5\,\dot{x}^T(t)(\tau_1^2 S_3 + \tau_2^2 S_3)\dot{x}(t) - \int_{t-\tau_1}^{t}\!\int_{\theta}^{t} \dot{x}^T(s) S_3 \dot{x}(s)\,ds\,d\theta - \int_{t-\tau_2}^{t}\!\int_{\theta}^{t} \dot{x}^T(s) S_3 \dot{x}(s)\,ds\,d\theta\\
&\le \zeta^T(t)\, \Omega_5\, \zeta(t),
\end{aligned}$$

where $\Omega_i$ $(i = 1, 2, 3, 4, 5)$ are defined in (5).

From (3), the nonlinear function $g_k(x_k)$ satisfies

$$l_k^- \le \frac{g_k(x_k)}{x_k} \le l_k^+, \quad k = 1, 2, \dots, n, \quad x_k \ne 0.$$

Thus, for any $t_k > 0$ $(k = 1, 2, \dots, n)$, we have

$$2t_k\big[g_k(x_k(\theta)) - l_k^- x_k(\theta)\big]\big[l_k^+ x_k(\theta) - g_k(x_k(\theta))\big] \ge 0,$$

which gives

$$2\big[g(x(\theta)) - L^- x(\theta)\big]^T T \big[L^+ x(\theta) - g(x(\theta))\big] \ge 0,$$

where $T = \mathrm{diag}\{t_1, t_2, \dots, t_n\}$.

Letting $\theta$ be $t$, $t-\tau(t)$, $t-\tau_1$ and $t-\tau_2$, and replacing $T$ with $T_s$ $(s = 1, 2, 3, 4)$, we obtain, for $s = 1, 2, 3, 4$,

$$\zeta^T(t)\, \Pi_{17}^T T_s \Pi_{18}\, \zeta(t) \ge 0. \tag{7}$$

As another consequence of (3), we have

$$l_k^- \le \frac{g_k(x(\theta_1)) - g_k(x(\theta_2))}{x(\theta_1) - x(\theta_2)} \le l_k^+, \quad k = 1, 2, \dots, n.$$

Thus, for any $t_k > 0$ $(k = 1, 2, \dots, n)$ and $\Lambda_k = g_k(x(\theta_1)) - g_k(x(\theta_2))$, we have

$$2t_k\big[\Lambda_k - l_k^-(x_k(\theta_1) - x_k(\theta_2))\big]\big[l_k^+(x_k(\theta_1) - x_k(\theta_2)) - \Lambda_k\big] \ge 0,$$

which gives

$$2\big[\Lambda - L^-(x(\theta_1) - x(\theta_2))\big]^T T \big[L^+(x(\theta_1) - x(\theta_2)) - \Lambda\big] \ge 0,$$

where $\Lambda = \mathrm{col}\{\Lambda_1, \Lambda_2, \dots, \Lambda_n\}$.

Letting $\theta_1$ and $\theta_2$ take values in $t$, $t-\tau(t)$, $t-\tau_1$ and $t-\tau_2$, and replacing $T$ with $T_{ab}$ $(a = 1, 2, 3;\ b = 2, 3, 4;\ b > a)$, we obtain

$$\zeta^T(t)\, \Pi_{19}^T T_{ab} \Pi_{20}\, \zeta(t) \ge 0, \tag{8}$$

where $a = 1, 2, 3$, $b = 2, 3, 4$, $b > a$.

From (7) and (8), it can be shown that

$$\zeta^T(t)\, \Omega_6\, \zeta(t) \ge 0, \tag{9}$$

where $\Omega_6$ is defined in (5).

Therefore, we conclude that

$$\dot{V}(x(t)) - \gamma u^T(t) u(t) - 2y^T(t) u(t) \le \sum_{i=1}^{5} \dot{V}_i(x(t)) + \zeta^T(t)\, \Omega_6\, \zeta(t) - \gamma u^T(t) u(t) - 2y^T(t) u(t) = \zeta^T(t)\, \Omega\, \zeta(t), \tag{10}$$

where $\Omega$ is defined in (4). If $\Omega < 0$, then

$$\dot{V}(x(t)) - \gamma u^T(t) u(t) - 2y^T(t) u(t) \le 0 \tag{11}$$

for any $\zeta(t) \ne 0$. Under the zero initial condition $x(t) = 0$ for $t \in [-\tau_{\max}, 0]$, we have $V(x(0)) = 0$. Integrating (11) with respect to $t$ from $0$ to $t_f$, we get

$$2\int_0^{t_f} y^T(s) u(s)\,ds \ge V(x(t_f)) - V(x(0)) - \gamma \int_0^{t_f} u^T(s) u(s)\,ds \ge -\gamma \int_0^{t_f} u^T(s) u(s)\,ds.$$

Thus, the NTNN (1) with (3) is passive in the sense of Definition 1. This completes the proof.


4 Numerical example

In this section, we present an example to illustrate the effectiveness and the reduced conservatism of our result. Consider the NTNNs (1) with the following parameters:

$$A = \begin{bmatrix} 8.4 & 0 \\ 0 & 9 \end{bmatrix}, \quad W = \begin{bmatrix} -0.21 & -0.19 \\ -0.24 & 0.1 \end{bmatrix}, \quad W_1 = \begin{bmatrix} -0.09 & -0.2 \\ 0.2 & 0.1 \end{bmatrix},$$

$$W_2 = \begin{bmatrix} -0.52 & 0 \\ 0.2 & -0.09 \end{bmatrix}, \quad W_3 = \begin{bmatrix} -0.5 & 0 \\ 0 & -0.5 \end{bmatrix}.$$

The activation functions are assumed to be $g_i(x_i(t)) = 0.5(|x_i + 1| - |x_i - 1|)$, $i = 1, 2$. It is easy to see that

$$L^- = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad L^+ = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$

For $0.2 \le \tau(t) \le 3$, $h(t) \le 0.5$, $k(t) \le 2$, $h_3 = 0.5$, $\tau_3 = 0.5$ and $\gamma = 0.34$, solving the LMI in Theorem 4 with the MATLAB LMI Control Toolbox yields the feasible solutions:
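The claimed sector bounds for this example can be confirmed numerically. The sketch below samples difference quotients of $g_i(x) = 0.5(|x+1| - |x-1|)$ and checks that they all lie in $[0, 1]$, which corresponds to $L^- = 0$ and $L^+ = I$:

```python
import numpy as np

def g(x):
    # the example activation: saturation to [-1, 1], slope 1 on [-1, 1]
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

rng = np.random.default_rng(1)
s1 = rng.uniform(-3, 3, 10_000)
s2 = rng.uniform(-3, 3, 10_000)
mask = np.abs(s1 - s2) > 1e-3            # avoid ill-conditioned quotients
q = (g(s1[mask]) - g(s2[mask])) / (s1[mask] - s2[mask])
# Difference quotients stay in the sector [0, 1]:
assert q.min() >= -1e-9 and q.max() <= 1.0 + 1e-9
```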

$$P \in \mathbb{R}^{14 \times 14} \ \text{(a positive definite matrix; its entries are omitted here)},$$

$$Q_1 = \begin{bmatrix} 0.2383 & 0.3599 \\ 0.3599 & 0.7875 \end{bmatrix}, \quad Q_2 = \begin{bmatrix} 0.0019 & 0.0023 \\ 0.0023 & 0.0093 \end{bmatrix}, \quad Q_3 = \begin{bmatrix} 0.3563 & 0.0107 \\ 0.0107 & 0.2513 \end{bmatrix},$$

$$S_1 = \begin{bmatrix} 2.2291 & 0.0099 \\ 0.0099 & 0.2674 \end{bmatrix}, \quad S_2 = 10^5 \times \begin{bmatrix} 2.6041 & 1.1983 \\ 1.1983 & 5.4488 \end{bmatrix}, \quad S_3 = 10^8 \times \begin{bmatrix} 5.4115 & -0.0914 \\ -0.0914 & 5.2782 \end{bmatrix},$$

$$U_1 = \begin{bmatrix} 2.0621 & 0 \\ 0 & 1.5664 \end{bmatrix}, \quad U_2 = \begin{bmatrix} 3.9452 & 0 \\ 0 & 3.2423 \end{bmatrix}, \quad T_1 = \begin{bmatrix} 23.2649 & 0 \\ 0 & 16.9973 \end{bmatrix},$$

$$T_2 = \begin{bmatrix} 0.0792 & 0 \\ 0 & 0.1857 \end{bmatrix}, \quad T_3 = \begin{bmatrix} 0.0562 & 0 \\ 0 & 0.1528 \end{bmatrix}, \quad T_4 = \begin{bmatrix} 0.0025 & 0 \\ 0 & 0.0141 \end{bmatrix},$$

$$T_5 = \begin{bmatrix} 1.2748 & 0 \\ 0 & 1.3177 \end{bmatrix}, \quad T_6 = \begin{bmatrix} 1.2244 & 0 \\ 0 & 1.3813 \end{bmatrix}, \quad T_7 = \begin{bmatrix} 1.2681 & 0 \\ 0 & 1.2956 \end{bmatrix},$$

$$T_8 = \begin{bmatrix} 1.3421 & 0 \\ 0 & 1.2576 \end{bmatrix}, \quad T_9 = \begin{bmatrix} 1.2748 & 0 \\ 0 & 1.3177 \end{bmatrix}, \quad T_{10} = \begin{bmatrix} 1.0461 & 0 \\ 0 & 1.0442 \end{bmatrix}.$$

5 Conclusion

In this paper, the passivity analysis for NTNNs with discrete and distributed time-varying delays has been studied. By employing the Lyapunov-Krasovskii functional method, a double integral inequality was used to guarantee the passivity performance of NTNNs. A new passivity analysis criterion has been given in terms of LMIs, which is dependent on the time-varying delays. Finally, a numerical example has been presented which illustrates the effectiveness and usefulness of the proposed method.

Acknowledgements: The first author was supported by the Faculty of Science, Khon Kaen University. The second author was supported by the National Research Council of Thailand 2018 and the Faculty of Science and Liberal Arts, Rajamangala University of Technology Isan (RMUTI), Thailand. This study is also part of the project "Robust passivity analysis for neutral-type neural networks with mixed time-varying delays and applications".

References:

[1] L. Chua, L. Yang, Cellular Neural Networks: Theory, IEEE Trans. Circuits Syst., Vol. 35, No. 10, 1988, pp. 1257-1272.

[2] B. Kosko, Bidirectional Associative Memories, IEEE Trans. Syst. Man, Cybern., Vol. 18, No. 1, 1988, pp. 49-60.

[3] J. Wang, Z. Xu, New study on neural networks: the essential order of approximation, Neural Netw., Vol. 23, 2010, pp. 618-624.

[4] T. Wang, Y. Ding, L. Zhang, K. Hao, Delay dependent exponential state estimators for stochastic neural networks of neutral type with both discrete and distributed delays, Int. J. Syst. Sci., Vol. 46, No. 4, 2015, pp. 670-680.

[5] X. M. Zhang, Q. L. Han, Global asymptotic stability analysis for delayed neural networks using a matrix-based quadratic convex approach, Neural Netw., Vol. 54, 2014, pp. 57-69.

[6] N. Yotha, T. Botmart, K. Mukdasai, W. Weera, Improved delay-dependent approach to passivity analysis for uncertain neural networks with discrete interval and distributed time-varying delays, Vietnam J. Math., Vol. 45, 2017, pp. 721-736.

[7] X. Zhang, X. Lin, Y. Wang, Robust fault detection filter design for a class of neutral-type neural networks with time-varying discrete and unbounded distributed delays, Optim. Control Appl. Methods, Vol. 34, No. 5, 2013, pp. 590-607.

[8] X. Liao, Y. Liu, H. Wang, T. Huang, Exponential estimates and exponential stability for neutral-type neural networks with multiple delays, Neurocomputing, Vol. 149, No. 5, 2015, pp. 868-883.

[9] P. Balasubramaniam, G. Nagamani, R. Rakkiyappan, Global passivity analysis of interval neural networks with discrete and distributed delays of neutral type, Neural Process. Lett., Vol. 32, 2010, pp. 109-130.

[10] L. H. Xie, M. Y. Fu, H. Z. Li, Passivity analysis and passification for uncertain signal processing systems, IEEE Trans. Signal Process., Vol. 46, No. 2, 1998, pp. 394-403.

[11] L. O. Chua, Passivity and complexity, IEEE Trans. Circuits Syst. I, Vol. 46, 1999, pp. 71-82.

[12] V. B. Kolmanovski, A. D. Myshkis, Applied Theory of Functional Differential Equations, Springer, Dordrecht, Boston, London, Vol. 85, 1992.

[13] C. Li, X. Liao, Passivity analysis of neural networks with time delay, IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., Vol. 52, No. 8, 2005, pp. 471-475.

[14] Q. Song, J. Liang, Z. Wang, Passivity analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing, Vol. 72, No. 7-9, 2009, pp. 1782-1788.

[15] Q. Song, J. Cao, Passivity of uncertain neural networks with both leakage delay and time-varying delay, Nonlinear Dyn., Vol. 67, 2012, pp. 1695-1707.

[16] C. D. Zheng, C. K. Gong, Z. Wang, New passivity conditions with fewer slack variables for uncertain neural networks with mixed delays, Neurocomputing, Vol. 118, 2013, pp. 237-244.

[17] R. Samidurai, S. Rajavel, Q. Zhu, R. Raja, H. Zhou, Robust passivity analysis for neutral-type neural networks with mixed and leakage delays, Neurocomputing, Vol. 175, 2016, pp. 635-643.

[18] P. Balasubramaniam, G. Nagamani, R. Rakkiyappan, Passivity analysis for neural networks of neutral type with Markovian jumping parameters and time delay in the leakage term, Commun. Nonlinear Sci. Numer. Simul., Vol. 16, 2011, pp. 4422-4437.

[19] X. Zhang, X. F. Fan, Y. Xue, Y. T. Wang, W. Cai, Robust exponential passive filtering for uncertain neutral-type neural networks with time-varying mixed delays via Wirtinger-based integral inequality, Internat. J. Control, Autom. Syst., Vol. 15, No. 2, 2017, pp. 585-594.

[20] A. Seuret, F. Gouaisbaut, Wirtinger-based integral inequality: application to time-delay systems, Automatica, Vol. 49, No. 9, 2013, pp. 2860-2866.

[21] P. G. Park, W. I. Lee, S. Y. Lee, Auxiliary function-based integral inequalities for quadratic functions and their applications to time-delay systems, J. Frankl. Inst., Vol. 352, 2015, pp. 1378-1396.

[22] N. Zhao, C. Lin, B. Chen, Q. G. Wang, A new double integral inequality and application to stability test for time-delay systems, Appl. Math. Lett., Vol. 65, 2017, pp. 26-31.
