Quasi-Lagrangian neural network for convex quadratic optimization



Abstract—A new neural network for convex quadratic optimization is presented in this brief. The proposed network can handle both equality and inequality constraints, as well as bound constraints on the optimization variables. It is based on the Lagrangian approach, but exploits a partial dual method in order to keep the number of variables at a minimum. The dynamic evolution is globally convergent, and the steady-state solutions satisfy the necessary and sufficient conditions of optimality. The circuit implementation is simpler than existing solutions for the same class of problems. The validity of the proposed approach is verified through some simulation examples.

Index Terms—Analog circuits, Lagrangian networks, mathematical programming, quadratic optimization, recurrent neural networks.

I. INTRODUCTION

The idea of using analog circuits to solve mathematical programming problems can be traced back to the works of Pyne [1] and Dennis [2]. A canonical nonlinear programming circuit was proposed by Chua and Lin [3] and later extended by Wilson [4]. Kennedy and Chua [5] recast the canonical circuit in a neural network framework and proved its stability. All the networks in [3]–[5] are based on the penalty function method, which gives exact solutions only if the penalty parameter tends to infinity, a condition impossible to meet in practice. To avoid penalty functions, Zhang and Constantinides [6] proposed a Lagrangian approach to solve quadratic programming (QP) problems with equality constraints. The method can be extended to problems including both equality and inequality constraints by converting the inequalities into equalities through slack variables. Bound constraints on the variables, which often arise in practical problems, can be treated in the same way, but at the expense of a large number of additional variables. To avoid the penalty method while handling both equality and inequality constraints as well as bounds on the variables, a primal-dual approach [7] and a gradient-based neural network [8] have been proposed. Recently, Xia and Feng [9] proposed a one-layer neural network for convex QP, based on the projection method. To the best of our knowledge, the scheme in [9] represents the simplest available neural network for real-time quadratic optimization.

In this brief, which extends a previous work on the solution of the optimization problem arising in the context of support vector machine learning [10], we propose a new neural network solver for general QP problems. It is inspired by the Lagrangian approach of Zhang and Constantinides but exploits a partial dual method to keep the number of variables at a minimum, avoiding the transformation of inequality constraints into equality constraints.

Manuscript received January 8, 2008; accepted May 28, 2008. First published September 26, 2008; current version published October 8, 2008.

R. Perfetti is with the Department of Electronic and Information Engineering, University of Perugia, I-06125 Perugia, Italy (e-mail: perfetti@diei.unipg.it).

G. Costantini and M. Todisco are with the Department of Electronic Engineering, University of Rome “Tor Vergata,” Rome I-00100, Italy (e-mail: costantini@uniroma2.it; massimiliano.todisco@uniroma2.it).

Digital Object Identifier 10.1109/TNN.2008.2001183

The circuit implementation is simpler than existing solutions; in particular, it requires the same number of connections but fewer components than the network in [9].

The rest of this brief is organized as follows. In Section II, the optimization problem is formulated. In Section III, the proposed neural network is introduced and illustrated. In Section IV, convergence and stability are proved. Section V illustrates the circuit implementation. Section VI presents some simulation examples. Finally, some comments conclude this brief.

II. OPTIMIZATION PROBLEM

We consider the following QP problem:

$$\min_{x} \; J(x) = \frac{1}{2} x^T Q x + p^T x \tag{1a}$$

subject to

$$Ax - d \le 0 \tag{1b}$$
$$Bx - c = 0 \tag{1c}$$
$$l \le x \le h \tag{1d}$$

where $Q \in \Re^{n \times n}$ is a symmetric and positive-definite matrix, $A \in \Re^{r \times n}$, $B \in \Re^{m \times n}$, $x, p, h, l \in \Re^{n}$, $c \in \Re^{m}$, and $d \in \Re^{r}$. Constraints (1b)–(1d) can be recast as follows:

$$g_i(x) \le 0, \quad i = 1, \ldots, r \tag{2}$$
$$f_i(x) = 0, \quad i = 1, \ldots, m \tag{3}$$
$$l_i \le x_i \le h_i, \quad i = 1, \ldots, n \tag{4}$$

where

$$g_i(x) = \sum_{j=1}^{n} a_{ij} x_j - d_i, \quad i = 1, \ldots, r \tag{5}$$
$$f_i(x) = \sum_{j=1}^{n} b_{ij} x_j - c_i, \quad i = 1, \ldots, m. \tag{6}$$
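As a concrete reading of (5) and (6), here is a minimal NumPy sketch that evaluates the two constraint maps for given problem data; the helper names g and f are ours, not the authors':

```python
import numpy as np

def g(x, A, d):
    # Inequality constraints (5): g_i(x) = sum_j a_ij x_j - d_i, required <= 0.
    return A @ x - d

def f(x, B, c):
    # Equality constraints (6): f_i(x) = sum_j b_ij x_j - c_i, required = 0.
    return B @ x - c
```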

III. PROPOSED NEURAL NETWORK

The basic idea in Lagrangian duality is to take the constraints in (1) into account by augmenting the objective function with a weighted sum of the constraint functions [11]. To limit the complexity of the resulting neural network, we follow a partial dual approach, defining the following Lagrangian function:

$$L(x, \lambda, \mu) = J(x) + \sum_{j=1}^{r} \lambda_j g_j(x) + \sum_{j=1}^{m} \mu_j f_j(x) \tag{7}$$

Fig. 1. Piecewise linear functions $P_1$ and $P_2$.

where $\lambda_i$ is the Lagrange multiplier associated with the $i$th inequality constraint, satisfying

$$\lambda_i \ge 0 \quad \forall i \tag{8}$$

and $\mu_i$ is the Lagrange multiplier associated with the $i$th equality constraint. Note that constraints (4) are not included in (7), avoiding $2n$ additional dual variables. The solution of problem (1) corresponds to the saddle point of the Lagrangian function (7), which must be maximized with respect to $\lambda$, $\mu$ and minimized with respect to $x$. Following [6], a gradient dynamical system can be used such that, along a trajectory, function (7) is increasing with each $\lambda_i$ and $\mu_i$ and decreasing with each $x_i$. To ensure that constraints (4) and (8) are satisfied, avoiding the drawbacks of the penalty approach, we extend the method suggested in [7] to take into account bound constraints. To this end, we introduce the state variables $z_i \in \Re$, $i = 1, \ldots, n$, and $w_i \in \Re$, $i = 1, \ldots, r$, with

$$x_i = P_1(z_i) = \begin{cases} l_i, & z_i \le l_i \\ z_i, & l_i < z_i < h_i \\ h_i, & z_i \ge h_i \end{cases} \tag{9}$$

$$\lambda_i = P_2(w_i) = \begin{cases} 0, & w_i < 0 \\ w_i, & w_i \ge 0. \end{cases} \tag{10}$$

The piecewise linear functions $P_1$ and $P_2$, depicted in Fig. 1, guarantee the fulfillment of constraints (4) and (8).
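In NumPy terms, the two projections (9) and (10) are one-liners; a small sketch with our own function names:

```python
import numpy as np

def P1(z, l, h):
    # Box projection (9): returns l_i, z_i, or h_i depending on where z_i falls.
    return np.clip(z, l, h)

def P2(w):
    # One-sided projection (10): returns 0 for w_i < 0, w_i otherwise.
    return np.maximum(w, 0.0)
```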

The proposed dynamical system is described by the following state equations:

$$\tau \dot{z}_i = -\frac{\partial L}{\partial x_i} + \alpha_i (x_i - z_i) = -\sum_{j=1}^{n} q_{ij} x_j - p_i - \sum_{j=1}^{r} \lambda_j a_{ji} - \sum_{j=1}^{m} \mu_j b_{ji} + \alpha_i (x_i - z_i) \tag{11a}$$

$$\tau \dot{w}_i = \frac{\partial L}{\partial \lambda_i} + \beta_i (\lambda_i - w_i) = g_i(x) + \beta_i (\lambda_i - w_i) \tag{11b}$$

$$\tau \dot{\mu}_i = \frac{\partial L}{\partial \mu_i} = f_i(x) \tag{11c}$$

where $\alpha_i, \beta_i > 0$, and $\tau > 0$ is a time scaling factor. Equations (11) represent a gradient dynamical system seeking the saddle point of the Lagrangian, with the additional terms $\alpha_i(x_i - z_i)$ and $\beta_i(\lambda_i - w_i)$, which guarantee the existence of a finite equilibrium point, as will be shown in Section IV.

Equation (11) can be realized by the recurrent neural network shown in Fig. 2. It is composed of $n + m + r$ integrators, $n$ limiters realizing function (9), and $r$ limiters with transfer function (10).
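Section VI states that the network was simulated by numerical integration of (11) using Euler's method. As a reading aid, here is a minimal NumPy sketch of that integration; the function name simulate, the step sizes, and the stopping rule are our assumptions, while the gains follow (24) and (25) of Section V and the initial state is zero as in Example 1:

```python
import numpy as np

def simulate(Q, p, A, d, B, c, l, h, tau=1e-3, dt=1e-6, steps=20000):
    # Forward-Euler integration of the state equations (11).
    # Hypothetical sketch: dt, steps, and the zero start are not from the brief.
    n, r, m = Q.shape[0], A.shape[0], B.shape[0]
    # Gains alpha_i and beta_i as in (24) and (25).
    alpha = 1.0 + np.abs(Q).sum(axis=1) + np.abs(A).sum(axis=0) + np.abs(B).sum(axis=0)
    beta = 1.0 + np.abs(A).sum(axis=1)
    z, w, mu = np.zeros(n), np.zeros(r), np.zeros(m)   # zero initial conditions
    for _ in range(steps):
        x = np.clip(z, l, h)         # x_i = P1(z_i), eq. (9)
        lam = np.maximum(w, 0.0)     # lambda_i = P2(w_i), eq. (10)
        dz = -(Q @ x + p + A.T @ lam + B.T @ mu) + alpha * (x - z)  # (11a)
        dw = (A @ x - d) + beta * (lam - w)                          # (11b)
        dmu = B @ x - c                                              # (11c)
        z = z + (dt / tau) * dz
        w = w + (dt / tau) * dw
        mu = mu + (dt / tau) * dmu
    return np.clip(z, l, h), np.maximum(w, 0.0), mu
```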

The equilibrium conditions of system (11) are

$$\frac{\partial L}{\partial x_i} = \alpha_i (x_i - z_i), \quad i = 1, \ldots, n \tag{12}$$
$$g_i(x) = -\beta_i (\lambda_i - w_i), \quad i = 1, \ldots, r \tag{13}$$
$$f_i(x) = 0, \quad i = 1, \ldots, m. \tag{14}$$

Fig. 2. Scheme of the proposed neural network.

Concerning (12), we have three cases:

$$l_i < x_i < h_i \;\Rightarrow\; \frac{\partial L}{\partial x_i} = 0 \tag{15a}$$
$$x_i = l_i \;\Rightarrow\; \frac{\partial L}{\partial x_i} \ge 0 \tag{15b}$$
$$x_i = h_i \;\Rightarrow\; \frac{\partial L}{\partial x_i} \le 0. \tag{15c}$$

Moreover, taking into account the form of $P_2(\cdot)$, (13) corresponds to the following conditions:

$$g_i(x) \le 0, \quad i = 1, \ldots, r \tag{16a}$$
$$\lambda_i \ge 0, \quad i = 1, \ldots, r \tag{16b}$$
$$\lambda_i g_i(x) = 0, \quad i = 1, \ldots, r. \tag{16c}$$

It is easy to verify that (14)–(16) represent the Karush–Kuhn–Tucker (KKT) conditions of problem (1) [7], [11]. Hence, because the optimization problem is convex, each equilibrium point of system (11) corresponds to an optimal (global) solution of (1).
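Since (14)–(16) are exactly the KKT conditions, a steady state of the network can be validated directly. A hedged sketch (the helper name and the residual layout are ours, assuming NumPy arrays shaped as in Section II):

```python
import numpy as np

def kkt_residuals(x, lam, mu, Q, p, A, d, B, c, l, h, tol=1e-6):
    # Residuals of the KKT conditions (14)-(16); all entries ~0 at an optimum.
    grad = Q @ x + p + A.T @ lam + B.T @ mu   # gradient of the Lagrangian (7) in x
    # Stationarity in the sense of (15a)-(15c): sign constraints at active bounds.
    stat = np.where(np.isclose(x, l, atol=tol), np.minimum(grad, 0.0),
           np.where(np.isclose(x, h, atol=tol), np.maximum(grad, 0.0), grad))
    return {
        "stationarity":    float(np.max(np.abs(stat), initial=0.0)),             # (15)
        "primal_ineq":     float(np.max(A @ x - d, initial=0.0)),                # (16a)
        "dual_feas":       float(np.max(-lam, initial=0.0)),                     # (16b)
        "complementarity": float(np.max(np.abs(lam * (A @ x - d)), initial=0.0)),# (16c)
        "primal_eq":       float(np.max(np.abs(B @ x - c), initial=0.0)),        # (14)
    }
```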

IV. CONVERGENCE ANALYSIS

A dynamical system is said to be globally convergent if every trajectory converges to the unique equilibrium point. We will prove that the proposed neural network is globally convergent under the hypothesis that the Hessian matrix $Q$ is positive definite.

Theorem: If $Q$ is positive definite, the dynamical system described by (11) is globally convergent.

Proof: By hypothesis, the necessary and sufficient KKT conditions (14)–(16) have a unique solution $(x^*, \lambda^*, \mu^*)$. The equilibrium points satisfy the KKT conditions; hence, each equilibrium point has the same components $(x^*, \lambda^*, \mu^*)$ and, by (12) and (13), the equilibrium points share also the components $z_i$ and $w_i$ in the linear range. Hence, the equilibrium point of system (11) is unique.

Now, we prove that every trajectory converges to the equilibrium point $(z^*, w^*, \mu^*)$. Let

$$\tilde{x}_i = x_i - x_i^* = P_1(z_i) - P_1(z_i^*) \tag{17a}$$
$$\tilde{\lambda}_i = \lambda_i - \lambda_i^* = P_2(w_i) - P_2(w_i^*) \tag{17b}$$
$$\tilde{\mu}_i = \mu_i - \mu_i^* \tag{17c}$$
$$\tilde{z}_i = z_i - z_i^* \tag{17d}$$
$$\tilde{w}_i = w_i - w_i^*. \tag{17e}$$

From the definitions of $P_1(\cdot)$ and $P_2(\cdot)$, it follows that

$$0 \le \frac{x_i - x_i^*}{z_i - z_i^*} \le 1, \quad i = 1, \ldots, n \tag{18a}$$
$$0 \le \frac{\lambda_i - \lambda_i^*}{w_i - w_i^*} \le 1, \quad i = 1, \ldots, r. \tag{18b}$$

Substitution of (17) in (11) gives (we assume $\tau = 1$ for simplicity)

$$\dot{\tilde{z}}_i = -\sum_{j=1}^{n} q_{ij} \tilde{x}_j - \sum_{j=1}^{r} \tilde{\lambda}_j a_{ji} - \sum_{j=1}^{m} \tilde{\mu}_j b_{ji} + \alpha_i \left(\tilde{x}_i - \tilde{z}_i\right) \tag{19a}$$
$$\dot{\tilde{w}}_i = \sum_{j=1}^{n} a_{ij} \tilde{x}_j + \beta_i \left(\tilde{\lambda}_i - \tilde{w}_i\right) \tag{19b}$$
$$\dot{\tilde{\mu}}_i = \sum_{j=1}^{n} b_{ij} \tilde{x}_j. \tag{19c}$$

Consider the following candidate Liapunov function:

$$V(z, w, \mu) = \sum_{i=1}^{n} \int_0^{\tilde{z}_i} \tilde{x}_i(\sigma)\, d\sigma + \sum_{i=1}^{r} \int_0^{\tilde{w}_i} \tilde{\lambda}_i(\sigma)\, d\sigma + \frac{1}{2} \sum_{i=1}^{m} \tilde{\mu}_i^2. \tag{20}$$

This kind of function has already been used in the stability analysis of some neural network models for optimization [12]. $V$ is nonnegative as a consequence of (18). Taking the time derivative and using (19), the cross terms involving $a_{ji}$ and $b_{ji}$ cancel, and we have

$$\dot{V}(z, w, \mu) = \sum_{i=1}^{n} \frac{\partial V}{\partial \tilde{z}_i} \dot{\tilde{z}}_i + \sum_{i=1}^{r} \frac{\partial V}{\partial \tilde{w}_i} \dot{\tilde{w}}_i + \sum_{i=1}^{m} \frac{\partial V}{\partial \tilde{\mu}_i} \dot{\tilde{\mu}}_i = -\sum_{i=1}^{n} \sum_{j=1}^{n} \tilde{x}_i \tilde{x}_j q_{ij} + \sum_{i=1}^{n} \alpha_i \tilde{x}_i \left(\tilde{x}_i - \tilde{z}_i\right) + \sum_{i=1}^{r} \beta_i \tilde{\lambda}_i \left(\tilde{\lambda}_i - \tilde{w}_i\right). \tag{21}$$

By (18), $\tilde{x}_i(\tilde{x}_i - \tilde{z}_i) \le 0$ and $\tilde{\lambda}_i(\tilde{\lambda}_i - \tilde{w}_i) \le 0$; since $Q$ is positive definite, it follows that $\dot{V} \le 0$ along every trajectory. Moreover, $x(t)$, $\lambda(t)$, and $\mu(t)$ are bounded, so there exists $M > 0$ such that $|\partial L / \partial x_i| \le M$ for every $t$. From (11a) and the positivity of $\alpha_i$, we have $\dot{z}_i < 0$ for $z_i > h_i + M/\alpha_i$ and $\dot{z}_i > 0$ for $z_i < l_i - M/\alpha_i$. Hence, $z(t)$ is bounded, and in the same way, we can prove that $w(t)$ is bounded.

Because the trajectory is bounded and $V$ is a global Liapunov function, from LaSalle's invariance principle [13] it follows that each trajectory converges to the largest invariant subset of the set where $\dot{V} = 0$. From (21), we see that $\dot{V} = 0$ if and only if, for all $i$, $\tilde{x}_i = 0$, and $\tilde{\lambda}_i = 0$ or $\tilde{\lambda}_i = \tilde{w}_i$. Assuming $\tilde{x}_i = 0$ for all $i$, we have $\dot{\tilde{\mu}}_i = 0$ from (19c) and $\dot{\tilde{w}}_i = \beta_i(\tilde{\lambda}_i - \tilde{w}_i)$ from (19b). When $\tilde{\lambda}_i = \tilde{w}_i$, it is $\dot{\tilde{w}}_i = \dot{\tilde{\mu}}_i = 0$. Hence, $x_i(t)$, $\lambda_i(t)$, $\mu_i(t) \to$ constant as $t \to \infty$, and (11a) and (11b) can be rewritten as follows:

$$\dot{z}_i = -\alpha_i z_i + \varphi_i(t)$$
$$\dot{w}_i = -\beta_i w_i + \psi_i(t)$$

where the functions $\varphi_i(t)$ and $\psi_i(t)$ have finite limits as $t \to \infty$. Because $\alpha_i, \beta_i > 0$, using well-known properties of differential equations, we have

$$\lim_{t \to \infty} z_i(t) = \varphi_i(\infty)/\alpha_i, \qquad \lim_{t \to \infty} w_i(t) = \psi_i(\infty)/\beta_i.$$

All the limits being finite, they must correspond to the unique equilibrium point of system (11). Hence, we conclude that every trajectory converges to $(z^*, w^*, \mu^*)$.

Remark: The solution is guaranteed to be unique if the Hessian $Q$ is positive definite. If $Q$ is positive semidefinite (p.s.d.), multiple solutions exist and the theorem is not applicable. However, computer simulations showed a convergent behavior even in this case: the network converges to one of the, possibly infinite, solutions, depending on the initial conditions (see Example 2 in Section VI). Computer simulations also showed that the property of global convergence is structurally stable: it was verified in the presence of random errors on the connection matrices, with relative weight errors up to 5% (the results are not included due to lack of space).
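The sign argument behind (21) can be spot-checked numerically: the $\tilde{\mu}$ cross terms cancel, and for random data and random state pairs the right-hand side of (21) never becomes positive. A self-contained sketch with invented test dimensions and gains:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 3
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)                 # random positive-definite Hessian
A = rng.standard_normal((r, n))
l, h = -np.ones(n), np.ones(n)
alpha, beta = rng.uniform(1, 2, n), rng.uniform(1, 2, r)

for _ in range(10000):
    z, zs = rng.uniform(-3, 3, n), rng.uniform(-3, 3, n)   # state z and reference z*
    w, ws = rng.uniform(-3, 3, r), rng.uniform(-3, 3, r)
    xt = np.clip(z, l, h) - np.clip(zs, l, h)               # x - x*, eq. (17a)
    lt = np.maximum(w, 0.0) - np.maximum(ws, 0.0)           # lambda - lambda*, (17b)
    # Right-hand side of (21): never positive, by (18) and Q > 0.
    Vdot = -xt @ Q @ xt + alpha @ (xt * (xt - (z - zs))) + beta @ (lt * (lt - (w - ws)))
    assert Vdot <= 1e-12
```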

V. CIRCUIT IMPLEMENTATION

The network in Fig. 2 is similar to the scheme proposed in [9], with two main differences: 1) the saturation nonlinearities are moved from the input to the output of the integrators; 2) there is a different self-feedback loop around the integrators, corresponding to the terms $\alpha_i(x_i - z_i)$ and $\beta_i(\lambda_i - w_i)$. These modifications allow a simpler circuit realization, because both the saturation and the self-feedback can be implicitly obtained from the operational amplifier used to implement the integrator.

For the sake of clarity, let us consider the $i$th equation (11a), corresponding to the scheme in Fig. 3(a); it can be realized by the inverting integrator shown in Fig. 3(b). The projection operator $P_1(\cdot)$ is implicitly obtained by the saturation of the op-amp, assuming a bipolar voltage supply. The saturation values $V_H$ and $V_L$ can differ, respectively, from $h_i$ and $l_i$; in this case, the voltages $v_i^x$ are scaled versions of the QP variables $x_i$.

Fig. 3. (a) Scheme corresponding to the $i$th state equation (11a); (b) its circuit implementation.

The connections are realized with $v_j^x$ ($-v_j^x$) if $q_{ij} > 0$ ($q_{ij} < 0$), through a resistance $R_o/|q_{ij}|$, where $R_o$ is a normalization resistance. An inverting amplifier provides the voltage $-v_i^x$. The remaining connections are obtained in the same way. The voltages $v_j^\lambda$ and $v_j^\mu$ correspond, respectively, to the variables $\lambda_j$ and $\mu_j$; the voltage $v_i^p$ corresponds to the constant $p_i$.

Applying Kirchhoff's current law at the inverting input of the op-amp, and taking into account that this input is not a virtual ground at saturation, we obtain the following equation:

$$R_o C_o \dot{u}_i = -\sum_{j=1}^{n} q_{ij} v_j^x - v_i^p - \sum_{j=1}^{r} v_j^\lambda a_{ji} - \sum_{j=1}^{m} v_j^\mu b_{ji} + \alpha_i (v_i^x - u_i) \tag{23}$$

where

$$\alpha_i = 1 + \sum_{j=1}^{n} |q_{ij}| + \sum_{j=1}^{r} |a_{ji}| + \sum_{j=1}^{m} |b_{ji}|. \tag{24}$$

It is $v_i^x = u_i$ if $V_L < u_i < V_H$, $v_i^x = V_L$ if $u_i \le V_L$, and $v_i^x = V_H$ if $u_i \ge V_H$. Equation (23) is equivalent to (11a) with $\tau = R_o C_o$. It is worth noting that the last term in (23), due to the nonzero differential input voltage of the op-amp at saturation, provides the corrective term $\alpha_i(x_i - z_i)$ in (11a) without extra circuitry. Equation (11b) can be realized by a similar circuit, with the difference that the lower saturation voltage of the op-amp is zero and positive saturation must be avoided; in this case, it is

$$\beta_i = 1 + \sum_{j=1}^{n} |a_{ij}|. \tag{25}$$

Equation (11c) can be realized by a simple linear integrator. With respect to the network in [9], the proposed circuit requires the same number of connections and the same number of integrators, but avoids the $n + r$ piecewise-linear activation functions and the $n + r$ summers.

Fig. 4. Transient behavior of the network from zero initial conditions for Example 1.

VI. SIMULATION EXAMPLES

Example 1: Consider the following QP problem [9]:

$$\text{Minimize } J(x) = x_1^2 + x_2^2 + 0.5 x_3^2 + x_1(x_2 + x_3 - 4) - 3x_2 - 2x_3$$

subject to

$$x_1 - x_2 + x_3 \le 1$$
$$-x_1 + x_2 + x_3 \le -1/9$$
$$2x_1 - 2x_2 + 2x_3 \le 2$$
$$3x_1 - 9x_2 + 9x_3 = 1$$
$$x_1 + x_2 + 2x_3 = 3$$
$$0 \le x_i \le 4/3, \quad i = 1, 2, 3. \tag{26}$$

It can be shown that problem (26) has a unique optimal solution $x^* = [4/3 \;\; 7/9 \;\; 4/9]^T$.

Problem (26) can be recast in matrix form as follows:

$$l = [0 \;\; 0 \;\; 0]^T, \quad h = [4/3 \;\; 4/3 \;\; 4/3]^T$$
$$Q = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 1 & 0 & 1 \end{bmatrix}, \quad p = \begin{bmatrix} -4 \\ -3 \\ -2 \end{bmatrix}$$
$$A = \begin{bmatrix} 1 & -1 & 1 \\ -1 & 1 & 1 \\ 2 & -2 & 2 \end{bmatrix}, \quad d = \begin{bmatrix} 1 \\ -1/9 \\ 2 \end{bmatrix}$$
$$B = \begin{bmatrix} 3 & -9 & 9 \\ 1 & 1 & 2 \end{bmatrix}, \quad c = \begin{bmatrix} 1 \\ 3 \end{bmatrix}. \tag{27}$$
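For reference, the data (27) can be fed to the hypothetical simulate sketch from Section III; the call and the printed rounding are our illustration, not the authors' code:

```python
import numpy as np

# Data (27) for Example 1; the known optimum is x* = [4/3, 7/9, 4/9].
Q = np.array([[2., 1., 1.], [1., 2., 0.], [1., 0., 1.]])
p = np.array([-4., -3., -2.])
A = np.array([[1., -1., 1.], [-1., 1., 1.], [2., -2., 2.]])
d = np.array([1., -1/9, 2.])
B = np.array([[3., -9., 9.], [1., 1., 2.]])
c = np.array([1., 3.])
l, h = np.zeros(3), np.full(3, 4/3)

x, lam, mu = simulate(Q, p, A, d, B, c, l, h)   # sketch from Section III
print(np.round(x, 4))   # expected approximately [1.3333, 0.7778, 0.4444]
```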

The corresponding neural network has been simulated by numerical integration of (11), using Euler's method. Expressions (24) and (25) have been used for $\alpha_i$ and $\beta_i$, even if these values do not influence the steady-state solution, as proved in Section V. In Fig. 4, the transient behavior of the network is shown, starting from zero initial conditions (all the state variables are zero at $t = 0$). The time constant is $\tau = 1$ ms. Convergence to the correct solution is observed within 5 ms.

To verify the global convergence, the network was simulated starting from different initial conditions. The network always converged to the correct solution, regardless of whether the initial point was inside or outside the feasible region. In particular, in Fig. 5, the transient behavior is shown for the starting points $z(0) = [2 \;\; 2 \;\; 2]^T$ and $z(0) = [-2 \;\; -2 \;\; -2]^T$. The remaining state variables were initially zero.

Fig. 5. Transient behavior for Example 1 with starting points (a) $z(0) = [2 \;\; 2 \;\; 2]^T$ and (b) $z(0) = [-2 \;\; -2 \;\; -2]^T$.

Finally, to investigate the effect of finite precision, a Monte Carlo simulation was performed by adding to the connection weights a random error with uniform distribution. Assuming a maximum relative error of 1%, we obtained the following mean values of the variables after 1000 runs (standard deviations in parentheses): $x_1 = 1.3313$ (0.0055), $x_2 = 0.7775$ (0.0049), $x_3 = 0.4448$ (0.0040).

Example 2 [8]:

$$\text{Minimize } J(x) = x_2^2 + x_3^2 \quad \text{subject to } x_1 \ge 0.$$

This problem has infinitely many optimal solutions $x^* = [x_1 \;\; 0 \;\; 0]^T$ with $x_1 \ge 0$, corresponding to the optimal value of the objective function $J(x^*) = 0$, so the theorem does not apply. Notwithstanding, the network converges to an optimal solution for arbitrary initial conditions. For example, from $z(0) = [1 \;\; 1 \;\; 1]^T$, the network converges to $[1 \;\; 0 \;\; 0]^T$, and from $z(0) = [-1 \;\; -1 \;\; -1]^T$, the evolution converges to $[0 \;\; 0 \;\; 0]^T$. The corresponding transient behavior is shown in Fig. 6.

Example 3: In this example, we verified the behavior of the circuit implementation described in Section V through a simulation with PSpice. To this end, we considered the following problem:

$$\text{Minimize } J(x) = \frac{1}{2}\left(x_1^2 + x_2^2\right)$$
$$\text{subject to } x_1 + x_2 - 4 \ge 0$$
$$0 \le x_1 \le 1, \quad 0 \le x_2 \le 4.$$

Fig. 6. Transient behavior for Example 2.

In matrix notation

$$l = [0 \;\; 0]^T, \quad h = [1 \;\; 4]^T$$
$$Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad p = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
$$A = [-1 \;\; -1], \quad d = [-4]$$
$$B = 0, \quad c = 0. \tag{28}$$

The optimal solution is $x_1 = 1$ and $x_2 = 3$. Even if it is trivial, this example shows the effects of physical devices and of the finite accuracy of the connection resistors. Moreover, it exhibits the two different steady-state behaviors, since $x_1$ saturates while $x_2$ does not. The network was realized using 1% tolerance resistors and TL082 op-amps; the normalization resistance and capacitance were $R_o = 1\ \mathrm{k\Omega}$ and $C_o = 1\ \mu\mathrm{F}$. The circuit requires four op-amps: two integrators for the variables, one integrator for the inequality constraint, and one inverting amplifier to realize the $\lambda_1$ term in (11a). Saturation at zero is obtained by using a voltage supply $V_L = -0.678$ V. The op-amp integrators for the optimization variables have positive saturation levels of 3 and 12 V, respectively (voltage supplies $V_H = 3.678$ V and $V_H = 12.678$ V). As a consequence, the output voltages $v_1^x$ and $v_2^x$ correspond, respectively, to $3x_1$ and $3x_2$. Hence, the constant input $-4$, corresponding to $d_1$, must be scaled accordingly ($-12$ V). The capacitors were initially uncharged. An example of transient behavior is depicted in Fig. 7. The final values of the variables are $v_1^x = 3$ V and $v_2^x = 9$ V, corresponding to the exact solution.
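As a further illustration, the data (28) can be run through the same hypothetical simulate sketch from Section III; the empty B and c encode the absence of equality constraints:

```python
import numpy as np

# Data (28) for Example 3; the optimal solution is x1 = 1, x2 = 3.
Q = np.eye(2)
p = np.zeros(2)
A = np.array([[-1., -1.]])        # -x1 - x2 + 4 <= 0, i.e., x1 + x2 >= 4
d = np.array([-4.])
B, c = np.zeros((0, 2)), np.zeros(0)   # no equality constraints
l, h = np.zeros(2), np.array([1., 4.])

x, lam, mu = simulate(Q, p, A, d, B, c, l, h)   # sketch from Section III
print(np.round(x, 3))   # expected approximately [1.0, 3.0]
```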

VII. CONCLUSION

In this brief, a novel neural model for the real-time solution of convex quadratic optimization problems has been presented. It is obtained by exploiting a partial dual Lagrangian approach to avoid slack variables.


Fig. 7. PSpice simulation concerning Example 3.

Moreover, some operations (limitation and self-feedback) can be embedded in the op-amp integrators. The resulting circuit implementation requires fewer components than existing neural networks for the same class of problems. The global convergence to a unique equilibrium point has been proven using a Liapunov function approach. The simulation results confirm the robust behavior of the proposed network and the accuracy of the steady-state solutions, also in physical realizations with real-life devices.

REFERENCES

[1] I. B. Pyne, “Linear programming on an electronic analogue computer,” Trans. Amer. Inst. Electr. Eng., vol. 75, pp. 139–143, 1956.

[2] J. B. Dennis, Mathematical Programming and Electrical Networks. London, U.K.: Chapman & Hall, 1959.

[3] L. O. Chua and G.-N. Lin, “Nonlinear programming without computation,” IEEE Trans. Circuits Syst., vol. CAS-31, no. 2, pp. 182–188, Feb. 1984.

[4] G. Wilson, “Quadratic programming analogs,” IEEE Trans. Circuits Syst., vol. CAS-33, no. 9, pp. 907–911, Sep. 1986.

[5] M. P. Kennedy and L. O. Chua, “Neural networks for nonlinear programming,” IEEE Trans. Circuits Syst., vol. CAS-35, no. 5, pp. 554–562, May 1988.

[6] S. Zhang and A. G. Constantinides, “Lagrange programming neural networks,” IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol. 39, no. 7, pp. 441–452, Jul. 1992.

[7] Y. Xia, “A new neural network for solving linear and quadratic programming problems,” IEEE Trans. Neural Netw., vol. 7, no. 6, pp. 1544–1547, Nov. 1996.

[8] Y. Leung, K.-Z. Chen, Y.-C. Jiao, X.-B. Gao, and K. S. Leung, “A new gradient-based neural network for solving linear and quadratic programming problems,” IEEE Trans. Neural Netw., vol. 12, no. 5, pp. 1074–1083, Sep. 2001.

[9] Y. Xia and G. Feng, “An improved neural network for convex quadratic optimization with application to real-time beamforming,” Neurocomputing, vol. 64, pp. 359–374, 2005.

[10] R. Perfetti and E. Ricci, “Analog neural network for support vector machine learning,” IEEE Trans. Neural Netw., vol. 17, no. 4, pp. 1085–1091, Jul. 2006.

[11] A. Bouzerdoum and T. R. Pattison, “Neural network for quadratic optimization with bound constraints,” IEEE Trans. Neural Netw., vol. 4, no. 2, pp. 293–304, Mar. 1993.

[12] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, ser. Wiley-Interscience Series in Discrete Mathematics and Optimization. New York: Wiley, 1992.

[13] M. Forti and A. Tesi, “New conditions for global stability of neural networks with applications to linear and quadratic programming problems,” IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 42, no. 7, pp. 354–366, Jul. 1995.

[14] J. La Salle, The Stability of Dynamical Systems. Philadelphia, PA: SIAM, 1976.

[15] J. K. Hale, Ordinary Differential Equations. New York: Wiley, 1969.

