
Engineering School
School of Industrial and Information Engineering
Dipartimento di Elettronica, Informazione e Bioingegneria

Master of Science in Automation and Control Engineering

Algorithms for the Design of Stabilizing Switching Graphs for Discrete-Time Switched Linear Systems

Supervisor: Prof. Fabio Dercole

Master Graduation Thesis by: Luiz Alfredo Cavalcante Lourenco

Student Id n. 875934


I am thankful for Professor Dercole's guidance during the course of this work, and also for his patience with regard to my time constraints as a double-degree student.

Big thanks also to Professor Della Rossa for providing the source code of the stability-checking algorithms, which saved a lot of time, and for teaching me how to run my algorithms remotely on a server.


Abstract xiii

1 An Introduction to Discrete-Time Switched Linear Systems 1

1.1 Definition and General Notation 1

1.2 Constrained Switching 1

1.3 Stability 4

1.4 A simple example 6

2 Tree-Based Algorithms for Stability Analysis 9

2.1 Arbitrary Switching 9

2.2 Arbitrary Switching Example 10

2.3 Constrained Switching 10

2.4 Constrained Switching Example 13

2.5 Choice and computation of the matrix norm 14

3 Algorithm Design 15
3.1 Stabilizing Algorithm 1 15
3.2 Stabilizing Algorithm 2 16
3.3 Stabilizing Algorithm 3 18
3.4 Stabilizing Algorithm 4 21
4 Implementation 25

4.1 Directed Graph Functions from Matlab 25

4.1.1 digraph 25
4.1.2 addnode 25
4.1.3 rmnode 25
4.1.4 addedge 25
4.1.5 rmedge 26
4.1.6 findedge 26
4.1.7 numnodes 26
4.1.8 successors 26
4.1.9 predecessors 26
4.1.10 plot 27

4.2 Other Matlab Functions 27

4.2.1 tic and toc 27

4.2.2 struct 28

4.2.3 DataType1 2 DataType2 28

4.3 User Defined Functions 28

4.3.1 ComputePd (Appendix A.1) 28

4.3.2 ComputeKd (Appendix A.2) 28

4.3.3 ComputeNorm (Appendix A.3) 28

4.3.4 Constrained_Stability (Appendix A.4) 29

4.3.5 Numerical Precision 29

4.3.6 Arbitrary_Graph (Appendix A.5) 29

4.3.7 Graph_to_Edges (Appendix A.6) 30

4.3.8 Remove_Isolated (Appendix A.7) 30

4.4 Main Algorithms 30


4.4.4 Implementation of Stabilizing Algorithm 4 32

5 t e s t c a s e s 35

5.1 Test Case 1 35

5.2 Test Case 2 38

5.3 Test Case 3 58

5.3.1 Different Edge Choice for Algorithm 2 65

5.4 Hints toward improvement 68

5.5 Test Case 4 70

6 c o n c l u s i o n a n d f u t u r e w o r k 73

Bibliography 77

A Appendix: Code Listings - User Defined Functions 81


Figure 1.1 First Example: A switching graph with three modes 3

Figure 1.2 Arbitrary switching graph with three modes 3

Figure 1.3 Second Example: Another switching graph with three modes 5

Figure 1.4 All possible state transitions for the example 7

Figure 1.5 Switching Graph for the example 7

Figure 1.6 Limited state transitions for the example 8

Figure 2.1 Alg.A’s Tree 10

Figure 2.2 Example tree for Alg.A 11

Figure 2.3 Trees for Alg.B 11

Figure 2.4 Unstable Cycle’s State Transition 13

Figure 2.5 Stable Switching Graph 13

Figure 3.1 Example 1-Algorithm 1’s idea. 16

Figure 3.2 Example 2-Algorithm 1’s idea. 16

Figure 3.3 Example 1-Algorithm 2’s idea 18

Figure 3.4 Example 2-Algorithm 2’s idea 19

Figure 3.5 Example 3-Algorithm 2’s idea 20

Figure 3.6 Algorithm 3’s idea 21

Figure 3.7 Algorithm 4’s idea 23

Figure 4.1 Plot function representation 27

Figure 4.2 Arbitrary_Graph Function Example 31

Figure 4.3 Remove_Isolated Example 32

Figure 4.4 Difference between allowing up to nrep or almost nrep + 1 repetitions 33

Figure 5.1 Test1-Algorithm 1-First Iteration 36

Figure 5.2 Test1-Algorithm 1-Result 36

Figure 5.3 Test1-Algorithm 1, 2 and 3 (nrep= 1)-Result 37

Figure 5.4 Test1-Algorithm 1, 2 and 3 (nrep= 2)-Result 38

Figure 5.5 Test2-Algorithm 1-Second Iteration 39

Figure 5.6 Test2-Algorithm 1-Before Third Iteration 39

Figure 5.7 Test2-Algorithms 1 and 2-Result 40

Figure 5.8 Test2-Algorithms 2, 3 and 4-Second Iteration 40

Figure 5.9 Test2-Algorithm 2-Second Iteration-Remove_Isolated 41

Figure 5.10 Test2-Algorithm 2-Third Iteration 41

Figure 5.11 Test2-Algorithm 2-Third Iteration-Remove_Isolated 42

Figure 5.12 Test2-Algorithm 3-Second Iteration-Remove_Isolated 43

Figure 5.13 Test2-Algorithm 3-Third Iteration 43

Figure 5.14 Test2-Algorithm 3-Third Iteration-Remove_Isolated 44

Figure 5.15 Test2-Algorithm 3-Fourth Iteration 44

Figure 5.16 Test2-Algorithm 3-Fourth Iteration-Remove_Isolated 45

Figure 5.17 Test2-Algorithm 3-Result 45

Figure 5.18 Test2-Algorithm 3 (nrep = 2)-Second Iteration

Figure 5.20 Test2-Algorithm 3 (nrep = 2)-Third Iteration 47

Figure 5.21 Test2-Algorithm 3 (nrep = 2)-Third Iteration-Remove_Isolated 47

Figure 5.22 Test2-Algorithm 3 (nrep= 2)-Fourth Iteration 48

Figure 5.23 Test2-Algorithm 3 (nrep= 2)-Fourth Iteration-Remove_Isolated 48

Figure 5.24 Test2-Algorithm 3 (nrep= 2)-Result 49

Figure 5.25 Test2-Algorithm 4-Second Iteration-Remove_Isolated 49

Figure 5.26 Test2-Algorithm 4-Third Iteration 50

Figure 5.27 Test2-Algorithm 4-Third Iteration-Remove_Isolated 50

Figure 5.28 Test2-Algorithm 4-Fourth Iteration 51

Figure 5.29 Test2-Algorithm 4-Fourth Iteration-Remove_Isolated 51

Figure 5.30 Test2-Algorithm 4-Iteration 5 52

Figure 5.31 Test2-Algorithm 4-Iteration 5-Remove_Isolated 52

Figure 5.32 Test2-Algorithm 4-Iteration 6 53

Figure 5.33 Test2-Algorithm 4-Iteration 6-Remove_Isolated 53

Figure 5.34 Test2-Algorithm 4-Iteration 7 54

Figure 5.35 Test2-Algorithm 4-Iteration 7-Remove_Isolated 54

Figure 5.36 Test2-Algorithm 4-Iteration 8 55

Figure 5.37 Test2-Algorithm 4-Iteration 8-Remove_Isolated 55

Figure 5.38 Test2-Algorithm 4-Iteration 9 56

Figure 5.39 Test2-Algorithm 4-Iteration 9-Remove_Isolated 56

Figure 5.40 Test2-Algorithm 4-Iteration 10 57

Figure 5.41 Test2-Algorithm 4-Iteration 10-Remove_Isolated 57

Figure 5.42 Test2-Algorithm 4-Result 58

Figure 5.43 Test3-Proposed Graph from [28] 58

Figure 5.44 Test3-Algorithm 1-Final Graph 59

Figure 5.45 Test3-Algorithm 1-Time per iteration 60

Figure 5.46 Test3-Algorithm 2-Final Graph 61

Figure 5.47 Test3-Algorithm 2-Amount of nodes 61

Figure 5.48 Test3-Algorithm 2-Time per iteration 62

Figure 5.49 Test3-Algorithm 3-Amount of nodes 62

Figure 5.50 Test3-Algorithm 3-Time per iteration 63

Figure 5.51 Test3-Algorithm 4-Amount of nodes 63

Figure 5.52 Test3-Algorithm 4-Time per iteration 64

Figure 5.53 Test3-Algorithm 2 with different edge choice (nrep = 1)-Result 65

Figure 5.54 Test3-Algorithm 2 with different edge choice (nrep = 3)-Result 66

Figure 5.55 Test3-Algorithm 2 with different edge choice (nrep = 1)-Amount of nodes 66

Figure 5.56 Test3-Algorithm 2 with different edge choice (nrep = 3)-Amount of nodes 67

Figure 5.57 Test3-Algorithm 2 with different edge choice (nrep = 1)-Time per iteration 67

Figure 5.58 Test3-Algorithm 2 with different edge choice (nrep = 3)-Time per iteration 68


Figure 5.61 Test3-New idea-Result 2
Figure 5.62 Test4-Alg. 1 Result 71

Figure 6.1 Future Work Idea 1 74

Figure 6.2 Future Work Idea 2 75

LIST OF TABLES

Table 1.1 Notation Summary for Switched Linear Systems 2

Table 1.2 Notation Summary for Constrained Switched Linear Systems 4

Table 1.3 Minimal Switching Sequences 4

LISTINGS

Listing A.1 ComputePd 82

Listing A.2 ComputeKd 84

Listing A.3 ComputeNorm 84

Listing A.4 Constrained_Stability 85

Listing A.5 Arbitrary_Graph 88

Listing A.6 Graph_to_Edges 88

Listing A.7 Remove_Isolated 89

Listing B.1 Stabilizing Algorithm 1 in Matlab (Alg.1) 92

Listing B.2 Stabilizing Algorithm 2 in Matlab (Alg.2) 94

Listing B.3 Stabilizing Algorithm 3 in Matlab (Alg.3) 100


We developed four algorithms, implementing different heuristics, for the automatic generation of switching graphs that stabilize a given discrete-time switched linear system. The algorithms iteratively check (by means of tree-based algorithms recently developed by Professors Dercole and Della Rossa) the stability of the system constrained by the switching graph, which is iteratively modified until a stabilizing solution is reached. The procedure starts either from the graph allowing arbitrary switching or from an application-specific graph under which the system is unstable.

In order to modify the graph, we exploit an important theoretical result (due to Professor Dai) that tightly lower bounds the system’s constrained joint spectral radius—bounding the long-term average growth rate of the system’s trajectories—by looking at the stability of matrix products computed along cycles of the switching graph. This result allows us to modify the switching graphs by iteratively cutting unstable cycles (cycles with unstable matrix product), by preventing in different ways their indefinite repetition. The procedure eventually terminates with a graph free of unstable cycles, thus resulting in a stable system.

The final graphs from all algorithms are either stabilizing or empty (0 nodes). There are, however, important trade-offs between their speed and restrictiveness. The first two algorithms are faster and produce switching graphs that are more restrictive, while Algorithms 3 and 4 are slower and produce less-restrictive switching graphs; termination of the latter algorithms has not been proven yet. Accordingly, the choice of algorithm should be application driven.

In some cases, the resulting switching graph could be improved in different ways to fit a given application better, even by adding back some of the freedom lost during the heuristic. This can be done manually, by someone with knowledge about the system, using the result from our algorithms as the starting point, or automatically, e.g., by applying Algorithm 1 or 2 (granting fast termination) starting from a (possibly very large) stabilizing graph, or even from a partial non-stabilizing result, produced by Algorithm 3 or 4.


In this thesis, four algorithms were developed for the automatic generation of switching graphs that stabilize a given discrete-time switched linear system. The algorithms iteratively modify the switching graph, checking at each step the stability of the system by means of direct methods (based on trees of matrix products) recently developed by Professor Dercole's group ([11] and [12]).

The procedure starts from an assigned graph, specific to the considered application, or from the graph allowing arbitrary switching, under which the system is unstable. To modify the graph, we exploit an important theoretical result (due to Professor Dai [10]) showing that a tight lower bound on the constrained joint spectral radius of the switched system, i.e., on the long-term average growth rate of its trajectories, can be obtained by looking at the stability of the matrix products along the cycles of the switching graph. This result allows us to modify the graph by iteratively cutting its unstable cycles (cycles with unstable matrix product).

The four proposed algorithms differ in the heuristic used for this purpose, essentially in how they prevent the indefinite repetition of the detected unstable cycle, with the goal of terminating with a graph free of unstable cycles, hence resulting in a stable switched system.

The final graphs of all algorithms are either stabilizing or empty (0 nodes). There are, however, important trade-offs between their speed and the restrictions they impose on the switching. The first two proposed algorithms are fast but produce severely restrictive switching graphs, whereas the latter two impose fewer constraints, allowing the unstable cycles to be traversed a limited number of times; they require longer computation times and, for the moment, we have not proven their guaranteed termination.

The choice of the most suitable algorithm therefore depends on the specific application and on the available time and computational power. In some cases, the resulting switching graph could be improved in different ways to better fit the specific case considered, even by reintroducing part of the switching freedom lost with the algorithm's heuristic. This can be done manually, following guidelines suggested by the application itself, or automatically, for example by using the proposed algorithms in a mixed fashion: first Algorithm 3 or 4 to reach a stabilizing graph, possibly very large, or even stopping at a partial non-stabilizing result; then Algorithm 1 or 2 to slim the graph down and guarantee termination.

The termination of Algorithms 3 and 4, as well as complementary approaches, are briefly discussed in the conclusion of this work.

1  AN INTRODUCTION TO DISCRETE-TIME SWITCHED LINEAR SYSTEMS

In this chapter the basic notions on discrete-time switched linear systems are described. The description focuses on the parts that are important for the next sections; a more detailed treatment can be found in [9], [21] and [22]. For basics on linear algebra, see [18] and [30].

1.1 Definition and General Notation

A switched linear system is a system that can change its behavior according to a finite set of modes, where each mode is described by a linear operation (left-multiplication by a matrix) on the system's state. Such a system is clearly not linear, since different modes are active at different times; it is better regarded as linear time-varying over a finite set of matrices. We will describe our notation by considering the following discrete-time switched linear system

x(t + 1) = A_{σ_t} x(t),   x(1) = x_1   (1.1)

where x(t) ∈ R^n and x_1 are the current and initial state respectively, and σ_t ∈ M = {1, 2, . . . , m} and σ_1 are the current and initial mode of the system respectively. The matrix A_{σ_t} ∈ A = {A_1, . . . , A_m} is n×n and describes the dynamics of the system at time t; therefore, the state at time t + 1 comes from left-multiplying the state x(t) by the mode matrix A_{σ_t}.

Another way to see Equation (1.1) is the following:

x(t + 1) = A_σ x_1,   A_σ = A_{σ_t} A_{σ_{t−1}} · · · A_{σ_1}.   (1.2)

In Equation (1.2), A_σ is the (left-)product of the system's matrices applied when following the switching sequence σ = σ_1, σ_2, . . . , σ_t.

The switching sequence can be (right-)infinite. We denote by σ_{[l,k]}, k > l, the finite subsequence σ_{[l,k]} = σ_l, . . . , σ_k and by σ_{[l,∞]} the infinite subsequence σ_{[l,∞]} = σ_l, σ_{l+1}, . . .; we also call σ^1 σ^2 the concatenation of switching sequences σ^1 and σ^2.

Table 1.1 is a summary of the notation described above.

1.2 Constrained Switching

As said in Section 1.1, the system follows the switching sequence σ. Such a sequence can be arbitrary (these systems received a lot of attention in the past, as in [1], [2], [3] and [15]), or it might follow some logical rules (constrained switching).

Symbol | Description | Dimensions
x(t) | State at time t | n
x_1 | Initial state | n
σ_t | Mode at time t | 1
σ_1 | Initial mode | 1
m | Number of modes | 1
M | Set of modes | set with m elements
A_{σ_t} | System's matrix at time t | n×n
A_{σ_1} | System's initial matrix | n×n
A_σ | System's matrix product | n×n
A | Set of matrices | set with m elements
σ | Switching sequence | a sequence of σ_t's
σ^1 σ^2 | Concatenation | a sequence of σ_t's

Table 1.1: Notation Summary for Switched Linear Systems.

In this section we will describe the notation for constrained switching, i.e., when the switching cannot freely change from one mode to another (such as in [1], [5], [10], [13], [16], [17], [19], [20], [23], [27] and [32]). We will denote by Σ_k and Σ the sets of the corresponding length-k and infinite switching sequences.

The constraints are represented through directed graphs, where either the edges or the nodes can be chosen to represent the system's modes. We opted to use the edge labels to represent the modes, which means the graph's nodes act just as a memory of the past modes.

Basically, we have a finite automaton G = (V, E), where G is the directed graph, V = {1, . . . , V} is the set of nodes and E ⊂ V × V × M is the set of edges. Therefore an edge can be seen as the tuple e_{[v′,v″]} = (v′, v″, w_{[v′,v″]}), where v′ and v″ are nodes and w_{[v′,v″]} is the respective edge's weight, representing the system's mode used to go from node v′ to v″. Sometimes a set S ⊂ V of starting nodes is also considered ([31], [12]); however, in this work we always consider S = V.

Table 1.2 is a summary of the notation described above.

In Figure 1.1 we can see a graph that constrains a system with three modes; a graph that represents arbitrary switching can be seen in Figure 1.2, and another type of switching graph for the same system is shown in Figure 1.3. In the graph of Figure 1.2 all edges have the destination node as the label (even the edges of selfloops) and it does not constrain the system whatsoever, which means the system can switch between modes freely. Meanwhile, both graphs in Figure 1.1 and Figure 1.3 remove possible switching sequences.

For the constraining graphs of Figure 1.1 through Figure 1.3, it is possible to identify a finite set of finite sequences (each corresponding to a specific path in G) that can be arbitrarily concatenated (as well as the corresponding paths in G) to generate all possible switching sequences.

Figure 1.1: A switching graph with three modes.

Symbol | Description
Σ_k | Length-k σ set
Σ | Infinite-length σ set
V | Automaton's node set
v | Automaton's node
E | Automaton's edge set
e_{[v′,v″]} | Automaton's edge from node v′ to v″
w_{[v′,v″]} | Weight of e_{[v′,v″]} (system's mode)
G | Directed graph
S | Automaton's initial node set

Table 1.2: Notation Summary for Constrained Switched Linear Systems.

The sequences are listed in Table 1.3 and are easily identifiable through visual inspection of the corresponding graph (the sequences are trivially composed of the single system's modes for the graph in Figure 1.2 allowing arbitrary switching). This property is, however, not general of switching graphs.

It is clear from looking at Table 1.3 that in Figure 1.3 the system has more freedom than in Figure 1.1, since the concatenations open up many more possibilities, such as using mode 2 for more than one step and using mode 3 after mode 2.

Graph | σ(s)
Figure 1.1 | σ^1 = {1}, σ^2 = {3}, σ^3 = {21}
Figure 1.2 | σ^1 = {1}, σ^2 = {2}, σ^3 = {3}
Figure 1.3 | σ^1 = {1}, σ^2 = {3}, σ^3 = {21}, σ^4 = {23}, σ^5 = {221}, σ^6 = {223}, σ^7 = {2221}, σ^8 = {2223}

Table 1.3: Minimal Switching Sequences.

1.3 Stability

A given finite switching sequence σ is stable or unstable depending on the spectral radius ρ(A_σ) of the matrix product, i.e., the largest modulus among its eigenvalues: stable if ρ(A_σ) < 1 and unstable if ρ(A_σ) > 1; it is a contraction if ||A_σ|| < 1, where || · || is a matrix norm.

The system described by Equation (1.1) is asymptotically stable if x(t) converges to zero starting from any initial state x_1 and following any switching sequence σ; this can also be seen as the matrix product converging to the zero matrix (A_σ → 0). The system is unstable when the sequence of states x(t), t ≥ 1, is unbounded for some x_1 and σ; the same can be said if we consider it exponentially unstable, which happens when the states are exponentially unbounded.

Figure 1.3: Another switching graph with three modes.

In the case of non-asymptotic stability, the state is still bounded for all switching sequences and initial conditions, but it does not converge to zero along some sequence starting from some initial condition. In the case of non-exponential instability, no sequence and initial condition give an exponentially unbounded state, but some give an unbounded state.

First, we define a parameter that will be used to determine asymptotic stability or exponential instability for arbitrary switching systems, the Joint Spectral Radius (JSR) of A (introduced in [29]):

JSR(A) = lim_{k→∞} ρ_k(A),   ρ_k(A) = max_{σ∈M^k} ||A_σ||^{1/k}.   (1.3)

A system is asymptotically stable (exponentially unstable) iff JSR(A) < 1 (> 1) [15]. When JSR(A) = 1 the system can be Lyapunov stable or not, and this makes the stability problem undecidable [7].

Another important parameter is the Generalized Spectral Radius (GSR)

GSR(A) = lim sup_{k→∞} ρ̂_k(A),   ρ̂_k(A) = max_{σ∈M^k} ρ(A_σ)^{1/k}.   (1.4)

The JSR and GSR coincide [14] and

ρ̂_k(A) ≤ GSR(A) = JSR(A) ≤ ρ_k(A)   (1.5)

is valid for any k. When we have constrained switching, we have similar parameters, called Constrained Joint Spectral Radius (CJSR) and Constrained Generalized Spectral Radius (CGSR). The CJSR still works as a way to determine stability or exponential instability of the system:

CJSR(A, Σ) = lim_{k→∞} ρ_k(A, Σ),   ρ_k(A, Σ) = max_{σ∈Σ_k} ||A_σ||^{1/k},   (1.6)

CGSR(A, Σ) = lim sup_{k→∞} ρ̂_k(A, Σ),   ρ̂_k(A, Σ) = max_{σ∈Σ_k} ρ(A_σ)^{1/k};   (1.7)

however, the CGSR does not provide a CJSR lower bound anymore (e.g., stable systems can have unstable modes). According to [10], the CJSR can instead be bounded by looking at periodic switching sequences

ρ̂^{(per)}_k(A, Σ) = max_{σ∈Σ^{(per)}_k} ρ(A_σ)^{1/k},   (1.8)

where Σ^{(per)}_k is the set of length-k admissible periods. We then have Theorem 1.4 from [10], which states that

max_{l≤k} ρ̂^{(per)}_l(A, Σ) ≤ CJSR(A, Σ) = sup_{k≥1} ρ̂^{(per)}_k(A, Σ).   (1.9)

1.4 A Simple Example

As an example, we have a system with m = 2, which means the system switches between two different matrices

A_1 = [ 0.4  0.3 ; 0.2  0.1 ],   A_2 = [ 1  0.5 ; 0.5  1 ]   (1.10)

with eigenvalues λ_11 = 0.54, λ_12 = −0.04 for A_1 and λ_21 = 0.5, λ_22 = 1.5 for A_2. This system is unconstrained, therefore it can alternate modes freely (arbitrary switching); for this example we will first consider all possibilities over a span of three steps (2^3 possibilities). Considering the initial state to be x_1 = [1 1]^T, we get the states shown in Figure 1.4.

However, if instead we have a constraint such as not allowing A_2 to be used more than three times in a row, which can be seen in the graph of Figure 1.5, this constraint only removes one of the possibilities in the three-step scenario, the one where A_2 is used all the time. This time the possibilities are shown in Figure 1.6.

This hints that by constraining the system it is possible to change its behaviour in ways that may guarantee stability. This is the type of strategy that will be put to use in order to force stability into the system.
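As an illustration, the three-step enumeration above can be reproduced with the following Matlab sketch (the matrices are those of Equation (1.10); the loop structure and printout are ours):

A  = {[0.4 0.3; 0.2 0.1], [1 0.5; 0.5 1]};   % the two modes
x1 = [1; 1];                                  % initial state
for s1 = 1:2
    for s2 = 1:2
        for s3 = 1:2
            x = A{s3} * A{s2} * A{s1} * x1;   % state after following modes [s1 s2 s3]
            fprintf('modes [%d %d %d] -> x = [%.3f %.3f]\n', s1, s2, s3, x(1), x(2));
        end
    end
end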


Figure 1.4: All possible state transitions for the example.


2  TREE-BASED ALGORITHMS FOR STABILITY ANALYSIS

In our work, we use the tree-based algorithms proposed in [11] and [12] to check the stability of discrete-time switched linear systems. First, we describe the algorithm for arbitrary switching and give a simple example, and then we do the same for the case of constrained switching. We will focus primarily on the algorithms themselves; see [12] for more details and proofs.

An important property of Alg. B that will be exploited in Chapter 3 is the retrieval of an unstable cycle whenever the system is exponentially unstable. (See more details in Section 2.3.)

For simplicity, we will refer to a switching graph that stabilizes the system as a stable graph, and to a switching graph under which the constrained system is unstable as an unstable graph. (This holds for the following chapters as well.)

2.1 Arbitrary Switching

The algorithm is the following, seen in Alg. A.

Algorithm A  Arbitrary Switching
    set the root of T to I
    set JSRlow = JSR'up = 0, JSRup = +inf, k = 0
 1: while Tk is not empty do
        JSR''up = 0, k' = k + 1
 2:     for each A ∈ Tk and Ai ∈ A do
 3:         if ||Ai A|| < 1 then
 4:             set JSR'up = max{||Ai A||^{1/k'}, JSR'up}
 5:         else
 6:             set JSR''up = max{||Ai A||^{1/k'}, JSR''up}
 7:             add Ai A as child of A to Tk'
 8:         end if
 9:         set JSRlow = max{ρ(Ai A)^{1/k'}, JSRlow}
10:         if JSRlow > 1 then
11:             set exit = true, break
12:         end if
13:     end for
14:     if exit then
15:         set k = k', return ρ(A) > 1
16:     end if
17:     set JSRup = min{JSR''up, JSRup}, k = k'
18: end while
19: set JSRup = JSR'up and return ρ(A) < 1

The algorithm creates a tree that keeps expanding (see more detail below) until Tk is empty (stable) or the lower bound of the JSR is found to be greater than one (unstable). In Figure 2.1 we can see how this expansion works for a general case with 5 modes: the tree starts with the identity matrix and then creates new leaves.

Figure 2.1: Alg. A's Tree

After computing the JSR bounds for every leaf, it then creates a new level, in which each element is a mode matrix times the parent. When the norm ||Ai A|| of a leaf is less than one, that leaf does not expand anymore. The algorithm terminates either when the k-th level Tk of the tree is empty (stable) or when one leaf yields a lower bound greater than one (unstable).

Alg.A terminates if and only if the system is either asymptotically stable or exponentially unstable. (For definitions, check Section 1.3)

2.2 Arbitrary Switching Example

Considering the case of arbitrary switching treated in Section 1.4 and using sum-of-squares polynomial matrix norms (as seen in Section 2.5) with d = 1 (other norms could be used), we get the tree seen in Figure 2.2. It is worth highlighting the termination of the algorithm when the lower bound for the JSR was computed to be greater than one in the bottom-right leaf (matrix A_2). Therefore the system is (exponentially) unstable.

2.3 Constrained Switching

When the system is under constrained switching, we can rely on the theoretical result from Dai ([10]). By the equality in Equation (1.9), there are repeatable matrix products A_σ with σ ∈ Σ^{(per)}_k and ρ(A_σ) > 1 iff ρ(A, Σ) > 1. Alg. B uses this result to classify a system as exponentially unstable whenever there is an unstable cycle (a cycle which updates CJSRlow > 1 and therefore makes the algorithm terminate).

Figure 2.2: Example tree for Alg. A

Figure 2.3: Trees for Alg. B

The tree structure is similar to the one from Alg. A. The main differences are that Alg. B creates a tree for every node v ∈ V (denoted Tv), and that the new leaves have to follow the constraints from the switching graph. Therefore, besides having a matrix product A associated with every leaf, it also has an associated node A.s of the switching graph. A general case can be seen in Figure 2.3, where CJSRlow is only updated whenever a cycle is found. In this case, one of the trees starts from node v and computes CJSRlow = max{ρ(A_1)^{1/k'}, CJSRlow} once it returns to v, while the other tree starts from node y and computes CJSRlow = max{ρ(A_3 A_1)^{1/k'}, CJSRlow} once it returns.

Alg. B terminates if and only if the system is either asymptotically stable or exponentially unstable. (For definitions, check Section 1.3)

Algorithm B  Constrained Switching
    for each v ∈ V set the root of Tv to I and I.s = v
    set CJSRlow = CJSR'up = 0, CJSRup = +inf, k = 0
 1: while Tv,k is not empty for all v ∈ V do
        CJSR''up = 0, k' = k + 1
 2:     for each v ∈ V, A ∈ Tv,k and (A.s, w, i) ∈ E do
 3:         if ||Ai A|| < 1 then
 4:             set CJSR'up = max{||Ai A||^{1/k'}, CJSR'up}
 5:         else
 6:             set CJSR''up = max{||Ai A||^{1/k'}, CJSR''up}
 7:             add Ai A as child of A to Tv,k'
 8:             Ai A.s = w
 9:         end if
10:         if w = v (the tree returns to the initial node) then
11:             set CJSRlow = max{ρ(Ai A)^{1/k'}, CJSRlow}
12:         end if
13:         if CJSRlow > 1 then
14:             set exit = true, break
15:         end if
16:     end for
17:     if exit then
18:         set k = k', return ρ(A, Σ) > 1
19:     end if
20:     set CJSRup = min{CJSR''up, CJSRup}, k = k'
21: end while
22: set CJSRup = CJSR'up and return ρ(A, Σ) < 1


Figure 2.4: Unstable Cycle’s State Transition

Figure 2.5: Stable Switching Graph

2.4 Constrained Switching Example

We go back to the example from Section 1.4; this time we consider the constraints seen in Figure 1.5. This results in instability, detected on cycle [1, 2, 3, 1], which corresponds to following modes [2, 2, 1]. This is indeed true: if we allow the system to undergo such a cycle for some time this becomes clear, as seen in Figure 2.4, where the values of states x1 and x2 keep increasing exponentially.

If we do not allow mode 2 to repeat, by constraining the system with the switching graph of Figure 2.5, we stabilize the system.
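The instability of this cycle can also be checked directly on the matrix product along the modes [2, 2, 1]; a quick sanity check in Matlab, using the matrices of Equation (1.10):

A1 = [0.4 0.3; 0.2 0.1];
A2 = [1 0.5; 0.5 1];
Acycle = A1 * A2 * A2;        % product along the cycle: mode 2, mode 2, then mode 1
rho = max(abs(eig(Acycle)))   % spectral radius of the cycle, slightly above 1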


2.5 Choice and Computation of the Matrix Norm

We use sum-of-squares polynomial norms optimized to minimize the largest norm among the system's matrices in A ([4], [26]). With polynomial degree 2d, d ≥ 1, a sum-of-squares norm of x ∈ R^n is ||x||_d = ((x^{(d)})^T P_d x^{(d)})^{1/(2d)} (e.g., the ellipsoid norm ||x||_1 = (x^T P_1 x)^{1/2} for d = 1 [6]), where P_d is an n_d × n_d symmetric positive-definite matrix, n_d = (n + d − 1)!/((n − 1)! d!) is the number of monomials of degree d in the components of x, and x^{(d)} ∈ R^{n_d} lists the monomials in a preassigned order defined by an n_d × n_d full-rank binary matrix K_d such that K_d x^{(d)} = x^{⊗d} = x ⊗ · · · ⊗ x (d times), ⊗ being the Kronecker product.

The induced matrix norm ||A||_d = max_x ||Ax||_d / ||x||_d is the smallest α solving the Linear Matrix Inequality (LMI)

(A^{(d)})^T P_d A^{(d)} − α^{2d} P_d ⪯ 0,   (2.1)

where A^{(d)} = (K_d^T K_d)^{−1} K_d^T A^{⊗d} K_d [8]. The norm is then computed by bisection on α.

Optimizing the matrix P_d to minimize max_i ||A_i||_d (to be done only once, before applying our algorithms) also reduces to α-bisection, testing for each considered α the feasibility w.r.t. P_d [25] of the set of m LMIs (Equation (2.1)), one for each system mode. The feasibility test is very efficient (interior-point methods) and returns, when positive, a feasible P_d. The obtained minimum α is a JSR (or CJSR) upper bound that improves for increasing d (though it becomes tight for large d only under arbitrary switching). Our heuristic is to increase d when our algorithms abnormally terminate without a decision on stability.
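For d = 1, the LMI (2.1) can also be solved in closed form as a generalized eigenvalue problem; the sketch below is ours and is not the thesis's ComputeNorm (which uses bisection on the LMI for general d):

function nrm = EllipsoidNorm(A, P)
    % Induced ellipsoid norm of A for a symmetric positive-definite weight P:
    % the smallest alpha with A'*P*A <= alpha^2*P is the square root of the
    % largest generalized eigenvalue of the pair (A'*P*A, P).
    nrm = sqrt(max(eig(A' * P * A, P)));
end

% With P = eye(n) this reduces to the spectral (2-)norm, e.g.
% EllipsoidNorm([1 0.5; 0.5 1], eye(2)) equals norm([1 0.5; 0.5 1], 2).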


3  ALGORITHM DESIGN

In this chapter, we describe four algorithms for the design of stabilizing switching graphs, starting from a given graph (e.g., allowing arbitrary switching) under which the system is unstable. Each algorithm implements its own heuristic, all based on breaking unstable cycles obtained from Alg. B (see more details in Section 2.3).

By iteratively breaking such cycles, we expect to eventually reach a stabilizing switching graph, or an empty switching graph.

These algorithms may over-constrain the system to guarantee stability. Some of the stable cycles of the switching graph, those involving unstable sub-cycles, are removed during the procedures.

3.1 Stabilizing Algorithm 1

The inequality in Equation (1.9) says that if the system is unstable, then there are unstable periods (i.e., cycles of the switching graph G with unstable associated matrix products). These cycles are detected with the stability analysis algorithms from Chapter 2, and our idea is to "break" them to stabilize the system.

For Alg. 1 we constrain the system with an unstable graph and then iteratively remove an edge belonging to an unstable cycle until the graph becomes stable. The algorithm terminates (provided the stability analysis always terminates), since the only operation performed is edge removal. Therefore, as we start with a finite number of edges, it either converges to a stable switching graph or to a switching graph with 0 nodes (G{}).

Algorithm 1  Stabilizing Algorithm 1
    set G = arbitrary switching graph
    {A.c, ρ(A, Σ)} = run Alg. B
    while ρ(A, Σ) > 1 do
        remove one edge of A.c from G (e.g., the first edge)
        remove all isolated nodes
        update G
        {A.c, ρ(A, Σ)} = run Alg. B
    end while
    return G

Here A.c is the unstable cycle obtained from Alg. B, and an isolated node is a node lacking either outgoing or incoming edges. When using Alg. 1 we will mainly cut the first edge of unstable cycles, unless a different choice is mentioned beforehand. As a simple example, consider a system with 2 modes of operation, with A1 = 0.5 and A2 = 1.3. At first, an arbitrary switching graph with 2 nodes is created, but instability is detected for node 2's self-cycle.


Figure 3.1: Algorithm 1's idea, where A1 = 0.5 and A2 = 1.3.

Figure 3.2: Algorithm 1's idea, where A1 = 0.5 and A2 = 3.0.

The algorithm removes that edge; the resulting graph stabilizes the system and is therefore the final result, as can be seen in Figure 3.1.

However, if A1 = 0.5 and A2 = 3.0 we would end up with a single node, as seen in Figure 3.2. This result is trivial, and the next algorithms try to overcome this.
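As an illustration, a minimal Matlab sketch of Alg. 1's main loop is given below. The call to Constrained_Stability is hypothetical in its signature (the real function, Appendix A.4, also takes d and returns the CJSR bounds); the helper names match those of Chapter 4.

A = {0.5, 3.0};                         % mode matrices (1-dim example)
G = digraph([1 2; 1 2]);                % arbitrary switching graph, edge weight = mode
[stable, cycle] = Constrained_Stability(A, table2array(G.Edges));   % assumed interface
while ~stable
    G = rmedge(G, cycle(1), cycle(2));  % cut the first edge of the unstable cycle
    G = Remove_Isolated(G);             % drop nodes missing in- or out-edges (Appendix A.7)
    [stable, cycle] = Constrained_Stability(A, table2array(G.Edges));
end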

3.2 Stabilizing Algorithm 2

Alg. 1 is really simple, but it ends up constraining the system too much: as seen in Figure 3.1, it never allows the 2nd mode of operation to happen twice or more in a row, and in Figure 3.2 the algorithm even terminated on a single-node graph. Alg. 2 is another simple algorithm, but one that tries to give more freedom to the system by allowing a finite number nrep ≥ 1 of repetitions of any detected unstable cycle.


Algorithm 2  Stabilizing Algorithm 2
    set G = arbitrary switching graph (with original nodes)
    {A.c, ρ(A, Σ)} = run Alg. B
    while ρ(A, Σ) > 1 do
        if A.c contains only original nodes then
            remove one edge of A.c from G (e.g., the first edge)
            allow at most nrep repetitions of the cycle, by creating a chain of nrep·T new nodes (replica nodes), where T is the period of the cycle (its number of edges), and connect it to the node from which the edge has been removed
            for each edge (v, w, e) not belonging to the cycle and leaving a node v of the cycle, add (z, w, e) for each replica node z representing node v in the chain
        else
            either cut the last edge going into a replica node of A.c, or cut the following edge, which goes to an original node (the type of cut has to be chosen beforehand)
        end if
        remove all isolated nodes
        update original nodes (they can also be removed)
        update G
        {A.c, ρ(A, Σ)} = run Alg. B
    end while
    return G

When using Alg. 2 we will mainly cut the last edge going into a replica node of A.c. The only cases where we cut the following edge are when this is explicitly stated in the description (Section 5.3). In general, cutting the edge going into a replica node isolates that node and hence also removes all of its outgoing edges, which means the first choice tends to be more restrictive.

Let us follow the 2-mode example with A1 = 0.5 and A2 = 1.3, with nrep = 2. At first, an arbitrary switching graph with 2 nodes is created, but instability is detected for node 2's self-cycle. The algorithm removes that edge, creating replica nodes 2_1 and 2_2, and then copies the edges going from 2 to 1. The resulting graph does not stabilize the system: instability is detected in A.c = {1, 2, 2_1, 2_2, 1}. Since this cycle contains replica nodes, the edge going from 2_1 to 2_2 is cut, resulting in a stable switching graph. This can be seen in Figure 3.3.

When A1 = 0.5, A2 = 3.0 and nrep = 2, we get Figure 3.4. The first and second switching graphs are the same as in the last example; however, when the stability is checked on the second graph, a cycle A.c = {1, 2, 1} is detected instead. Since this cycle does not contain replica nodes, Alg. 2 creates a chain of 4 new nodes and removes the edge (1, 2, 2), which isolates 3 nodes (2, 2_1 and 2_2). After removing all isolated nodes, the third graph is obtained, from which instability is detected for cycle A.c = {1, 2_3, 1_1, 2_4, 1_2, 1}. This cycle has replica nodes, therefore edge (2_4, 1_2, 1) is removed, leaving nodes 2_4 and 1_2 isolated. The isolated nodes are removed and we end with the final switching graph.

Figure 3.3: Algorithm 2's idea with nrep = 2, A1 = 0.5 and A2 = 1.3.

However, when we have A1 = 0.5, A2 = 5.0 and nrep = 1 (to keep the number of nodes low), we also end with a trivial solution, as seen in Figure 3.5.

Alg. 2 also always terminates: even though new nodes are created, they are always replica nodes. Whenever we create replica nodes we also remove an original edge, and whenever we do not create replica nodes we cut edges between replica nodes. Therefore, the number of replica nodes created is limited by the number of original edges (assuming that the system can only be stable or unstable and that Alg. B terminates), and the heuristic either converges to a stable switching graph or to G{}.

3.3 Stabilizing Algorithm 3

Another algorithm, similar to Alg. 2, is Alg. 3. The main difference is that this time, even if A.c does not contain only original nodes, the cycle is still allowed to repeat a given number nrep of times. Another big difference is that, instead of cutting one edge and starting the chain from there (as in Alg. 2), it cuts all edges of the cycle and builds one chain starting from every node in A.c; this introduces fewer switching constraints.

Consider the system with A1 = 0.5, A2 = 5.0 and nrep = 1 again. By running Alg. 3, a better result is obtained compared to the single-node graph from Figure 3.5. As seen in Figure 3.6, the algorithm creates 2 chains, instead of a single chain, when detecting a cycle of length 2 in the second switching graph. It also keeps creating chains even when an unstable cycle containing replica nodes (1_1 and 2_2) is detected in the third switching graph. However, the termination of Alg. 3 has yet to be proven (or disproven), unlike for the previous algorithms.


Figure 3.5: Algorithm 2’s idea with nrep= 1, A1= 0.5 and A2= 5.0

Algorithm 3  Stabilizing Algorithm 3
    set G = arbitrary switching graph (with original nodes)
    {A.c, ρ(A, Σ)} = run Alg. B
    while ρ(A, Σ) > 1 do
        remove all edges of A.c from G
        allow at most nrep repetitions of the cycle, by creating T chains of nrep·T new nodes (replica nodes), where T is the period of the cycle (its number of edges), and connect each chain to the node from which the corresponding edge has been removed
        for each edge (v, w, e) not belonging to the cycle, with w an original node and v a node of the cycle, add (z, w, e) for each replica node z representing node v in a chain
        remove all isolated nodes
        update original nodes (they can also be removed)
        update G
        {A.c, ρ(A, Σ)} = run Alg. B
    end while
    return G


Figure 3.6: Algorithm 3's idea, with A1 = 0.5, A2 = 5.0 and nrep = 1.

3.4 Stabilizing Algorithm 4

This last algorithm (Alg. 4) is basically Alg. 3, but every node is treated as an original node; this change was made to give more freedom to the system. The idea is to have even more freedom than in the previous scenario, allowing for even more switching sequences. This also means the algorithm may take even longer to terminate. In other words, we have:


Algorithm 4  Stabilizing Algorithm 4
    set G = arbitrary switching graph
    {A.c, ρ(A, Σ)} = run Alg. B
    while ρ(A, Σ) > 1 do
        remove all edges of A.c from G
        allow at most nrep repetitions of the cycle, by creating T chains of nrep·T new nodes (replica nodes), where T is the period of the cycle (its number of edges), and connect each chain to the node from which the corresponding edge has been removed
        for each edge (v, w, e) not belonging to the cycle, with w an original node and v a node of the cycle, add (z, w, e) for each replica node z representing node v in a chain
        remove all isolated nodes
        all nodes become original nodes (main difference from Alg. 3)
        {A.c, ρ(A, Σ)} = run Alg. B
    end while
    return G

Consider the same example of Section 3.3, starting from just before the removal of isolated nodes in the third switching graph, since the second switching graph is always the same for Algorithms 3 and 4. By running Alg. 4 we get the procedure seen in Figure 3.7; the steps showing isolated nodes were hidden (besides the one before the third switching graph), since the number of nodes becomes really large (for a full similar example, check Section 5.2). In this algorithm, the edges going into old replica nodes are copied (the algorithm considers all nodes from the previous step, including replicas, as original), as seen (before the removal of isolated nodes in the third switching graph) in the edge (2_2, 2_1, 2), which does not exist in Figure 3.6. This procedure constrains the system much less, resulting in a graph that allows mode 2 to happen twice in a row during a switching sequence, instead of only once.


4  IMPLEMENTATION

The implementation was done in Matlab; the code listings are in the appendix.

4.1 Directed Graph Functions from Matlab

Here the digraph-related functions used in the implementation are described [24].

4.1.1 digraph

The function G = digraph(A) creates a directed graph (G = (V, E)) based on a square adjacency matrix A. The location of each nonzero entry in A specifies an edge of the graph, and the weight of the edge is equal to the value of the entry. For example, A = [1 2; 1 2] results in an arbitrary switching graph with 2 modes of operation.

4.1.2 addnode

The function G = addnode(G, k) adds k new nodes to the directed graph G. For example, G1 = addnode(G, 2) applied to the 2-mode arbitrary switching graph from the previous subsection results in 2 extra nodes without any edges (nodes 3 and 4).

4.1.3 rmnode

The function G = rmnode(G, v) removes node v, together with all its incoming and outgoing edges, from the directed graph G. For example, G2 = rmnode(G1, 4) applied to the previous example results in only 1 node without edges (node 3).

4.1.4 addedge

The function G = addedge(G, v, w, e) adds an edge to the directed graph G going from v to w, with weight e. For example, G3 = addedge(G2, 3, 3, 3) applied to the previous example results in node 3 having a self-loop with weight 3.


4.1.5 rmedge

The function G = rmedge(G, v, w) removes the edge of G going from v to w. For example, G4 = rmedge(G3, 3, 3) applied to the previous example results in node 3 having no edges again.

4.1.6 findedge

Every edge and node has an identification number (ID). The function EID = findedge(G, v, w) finds the ID of the edge in G that goes from v to w. For example, EID = findedge(G4, 2, 2) applied to the previous example results in EID = 4. The ID is used to look the weight up in the edge matrix (which contains data for all edges) and then place the same weight into the copied edges of replica nodes.

4.1.7 numnodes

The function Num = numnodes(G) returns the total number of nodes in G. Num = numnodes(G4) applied to the previous example results in Num = 3.

4.1.8 successors

A node v is a successor of a node w if there exists an edge (w, v, •) in the graph. The function SucIDs = successors(G, v) returns a list containing all nodes that succeed v. SucIDs = successors(G, 1) on the previous example results in SucIDs = [1, 2], whereas SucIDs = successors(G, 3) returns an empty list.

4.1.9 predecessors

A node v is a predecessor of a node w if there exists an edge (v, w, •) in the graph. The function PreIDs = predecessors(G, v) returns a list containing all nodes that directly precede v. PreIDs = predecessors(G, 1) on the previous example results in PreIDs = [1, 2], whereas PreIDs = predecessors(G, 3) returns an empty list.

Figure 4.1: Plot function representation

4.1.10 plot

It is possible to directly plot a graph by using the plot function; there are many parameters that can be changed, such as the color and the size of the nodes. In Figure 4.1 we can see the result of plotting G, G1, G2, G3 and G4 by using plot(Gk, 'EdgeLabel', Gk.Edges.Weight, 'LineWidth', 1.5, 'NodeColor', 'y', 'MarkerSize', 15, 'ArrowSize', 10, 'EdgeColor', 'k', 'Layout', 'circle'). This function makes it easier to visualize what the graph looks like.
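For concreteness, the whole sequence of graphs G through G4 described in Sections 4.1.1-4.1.5 can be reproduced with the short script below (the plotting options are the ones listed above):

A  = [1 2; 1 2];            % adjacency/weight matrix of the 2-mode arbitrary graph
G  = digraph(A);            % arbitrary switching graph with 2 modes
G1 = addnode(G, 2);         % add isolated nodes 3 and 4
G2 = rmnode(G1, 4);         % remove node 4
G3 = addedge(G2, 3, 3, 3);  % self-loop on node 3 with weight 3
G4 = rmedge(G3, 3, 3);      % remove that self-loop again
plot(G4, 'EdgeLabel', G4.Edges.Weight, 'LineWidth', 1.5, ...
     'NodeColor', 'y', 'MarkerSize', 15, 'ArrowSize', 10, ...
     'EdgeColor', 'k', 'Layout', 'circle')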

4.2 Other Matlab Functions

4.2.1 tic and toc

tic is used to start a stopwatch timer (wall-clock time), and toc reads the elapsed time from the stopwatch timer started by the tic function. The function reads the internal time at the execution of the toc command and displays the elapsed time since the most recent call to the tic function, in seconds. Therefore, in a given script where a tic command is executed after 5 seconds and a toc command after 25 seconds, 'Elapsed time is 20 seconds.' will be displayed. This is useful to quickly figure out the script's bottlenecks.

4.2.2 struct

struct creates a structure array data type that groups related data using data containers called fields. The data in each field can be of different types, and one can access the stored data using dot notation (structName.fieldName). For instance, s(i) = struct('mat', matrices{i}, 'v', nodeout(i), 'w', nodein(i)) could be used as a way to represent weightless directed graphs and associate the edges with their respective matrices, where nodeout and nodein are lists containing the origin and destination, respectively, of every edge.

One important structure used in our work is the tree, which has 4 fields: 'mat', which contains a matrix; 'init', which contains the initial node; 'state', which contains the current node; and 'memory', which contains all the nodes on the path from the node in 'init' to the node in 'state'. This structure was used for the implementation of Alg. B.
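A minimal sketch of how one such tree leaf could be created and extended (field names as described above; the mode matrices are assumed to be stored in a cell array A, and the rest of the bookkeeping of Alg. B is omitted):

% one leaf of the tree used by our implementation of Alg. B
leaf = struct('mat',    eye(2), ...   % matrix product accumulated along the path
              'init',   1, ...        % node of the switching graph where the tree is rooted
              'state',  1, ...        % node reached by the path so far
              'memory', 1);           % nodes visited from 'init' to 'state'

% extending the leaf with mode i along an edge (leaf.state, w, i):
% child = struct('mat', A{i}*leaf.mat, 'init', leaf.init, ...
%                'state', w, 'memory', [leaf.memory w]);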

4.2.3 DataType1 2 DataType2

Data type conversions were often needed, such as table2array, which was used to convert G.Edges (which has the table data type) to an array, in order to feed the edge information into the tree structure and test G's stability.

4.3 User Defined Functions

4.3.1 ComputePd (Appendix A.1)

ComputePd is used to compute P_d. The method for obtaining it is described in Section 2.5, and it is needed to solve Equation (2.1).

4.3.2 ComputeKd (Appendix A.2)

ComputeKd is used to compute the binary matrix K_d, which is necessary for the computation of A^{(d)}, in turn needed to solve Equation (2.1).

4.3.3 ComputeNorm (Appendix A.3)

ComputeNorm computes the norm of the matrix products in the tree nodes. It solves the LMI of Equation (2.1) for d ≥ 1. There is also the option of simply using the largest (absolute) eigenvalue as the norm; this is done by setting the parameter d = 0 instead.


4.3.4 Constrained_Stability (Appendix A.4)

This is a reproduction of Alg. B: using the tree structure, it decides whether the constrained system is stable or not. It receives as input d, the matrices of the modes, and an n_edges × 3 matrix containing all edges of the system in the form [v, w, e], where n_edges is the total number of edges in the graph.

The function returns a flag Stable, which describes whether the system is unstable (0) or stable (1), the upper and lower CJSR bounds, and a list called Cycle, which contains the cycle where instability was detected; if the system is stable, Cycle is an empty list.

Cycle always has the same first and last elements. For example, if an unstable cycle is detected between nodes v and w, Cycle = [v w v] or Cycle = [w v w], which are different ways to represent the same cycle.

4.3.5 Numerical Precision

For the stability test (ComputePd, ComputeKd, ComputeNorm and Constrained_Stability), we used floating-point precision in order to speed up the calculations. This means that the stability test is not guaranteed to be correct in every case, since it approximates the calculations instead of computing them symbolically. However, this was not an issue, since for most tests we used first-order systems (n = 1), where the stability of the final results could be checked manually.

One option is to use "arbitrary precision arithmetic", also known as "variable precision arithmetic", implemented in Matlab with the function vpa(x, dig). The idea is to keep the computation of x symbolic and then use vpa to evaluate each element of x to at least dig correct significant digits. This would obviously increase the time and (especially) memory requirements of our algorithms. Note, however, that one correct significant digit in the evaluation of the bounds on the system's Constrained Joint Spectral Radius (CJSR) is enough for our stability test.

Another option would be "interval arithmetic", where computations are kept in floating-point precision, but each real number is represented by the interval into which it is guaranteed to lie, according to the worst possible under- and over-estimation rounding errors. Interval arithmetic is not internally supported in Matlab, but there are external packages like b4m (www.ti3.tu-harburg.de/zemke/b4m/). The implementation and test of both options were left for future developments.
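As a pointer for that future development, a minimal illustration of variable precision arithmetic in Matlab (Symbolic Math Toolbox; not used in our tests):

x = sym(1)/3 + sym(1)/7;   % keep the computation symbolic (exact)
vpa(x, 32)                 % evaluate to 32 significant digits
double(x)                  % ordinary floating point, for comparison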

4.3.6 Arbitrary_Graph (AppendixA.5)

This function creates arbitrary switching graphs, given the number m of system modes. This is done by building a square adjacency matrix whose entries are equal to their column index, which means the general form is:

A_m = [ 1 2 3 . . . m
        1 2 3 . . . m
        . . .
        1 2 3 . . . m ]   (4.1)

By plotting the results of Arbitrary_Graph(3) and Arbitrary_Graph(10) we get the graphs seen in Figure 4.2, which are clearly arbitrary switching graphs with 3 modes and 10 modes respectively.
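A minimal sketch of how such a function could look (the actual Arbitrary_Graph is listed in Appendix A.5; this version only builds the adjacency matrix of Equation (4.1) and the corresponding digraph):

function G = Arbitrary_Graph_sketch(m)
    % Every node gets one outgoing edge per mode, labeled by the destination node,
    % so the weight matrix simply repeats the row [1 2 ... m].
    Am = repmat(1:m, m, 1);   % adjacency/weight matrix of Equation (4.1)
    G  = digraph(Am);
end

% e.g., plot(Arbitrary_Graph_sketch(3), 'EdgeLabel', ...) should reproduce the
% 3-mode graph of Figure 4.2.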

4.3.7 Graph_to_Edges (AppendixA.6)

This function takes the edges of a directed graph and converts them to an array data type. This is needed because the main algorithms were built using Matlab's digraph functions, in which the edges have the table data type, while the Constrained_Stability function needs the edges as an array.

4.3.8 Remove_Isolated (AppendixA.7)

Removes all isolated nodes and updates the original nodes of a graph. The function follows the logic of repeatedly removing the nodes that are isolated until there are no more isolated nodes. If one of the removed nodes is an original node, it then updates the nodes that are considered original.

The reason why this function removes nodes repeatedly is that once a node is removed, other nodes may become isolated. An example is in Figure 4.3: in the first iteration, the algorithm detects that node 2 is isolated and removes it. In the second iteration, it detects that node 3 is isolated and removes it. Lastly, the algorithm detects no isolated nodes and terminates, returning the last graph.
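A minimal sketch of this logic using Matlab's digraph utilities (the thesis version, Appendix A.7, additionally keeps track of which original nodes survive):

function G = Remove_Isolated_sketch(G)
    % Repeatedly drop nodes lacking incoming or outgoing edges, since removing
    % a node can in turn isolate some of its former neighbours.
    isolated = find(indegree(G) == 0 | outdegree(G) == 0);
    while ~isempty(isolated)
        G = rmnode(G, isolated);
        isolated = find(indegree(G) == 0 | outdegree(G) == 0);
    end
end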

4.4 Main Algorithms

The algorithms also contain steps to label the nodes; this only affects the visualization.

4.4.1 Implementation of Stabilizing Algorithm 1

The implementation of Alg. 1 was pretty straightforward, as can be seen in Appendix B.1.

4.4.2 Implementation of Stabilizing Algorithm 2

The implementation of Alg. 2 (Appendix B.2) was also pretty


Figure 4.3: Remove_Isolated Example

copying the successors from the original nodes. After this is done, it removes the isolated nodes, checks the stability, and repeats. If there are replica nodes in the unstable cycle, it simply cuts the last edge before an original node. This is done by detecting the edge that goes from a replica node to an original node within the cycle and then cutting it.

4.4.3 Implementation of Stabilizing Algorithm 3

An important detail that distinguishes the implementation of Alg. 3 (Appendix B.3) from that of Alg. 2 is in the reproduction of the removed unstable cycles, which is done starting from every node of the removed cycle.

Since the Cycle obtained from the Constrained_Stability function has the same node as first and last element, what is done is the removal of the last element, rotation, and appending of a new last element, for all possible cycle representations. For example, if Cycle = [1 2 3 4 1] is detected, it creates Cycle_aux = [1 2 3 4] and then the rotations [1 2 3 4], [2 3 4 1], [3 4 1 2], [4 1 2 3]; finally, by copying the first element to the end we get the cycles [1 2 3 4 1], [2 3 4 1 2], [3 4 1 2 3], [4 1 2 3 4]. An option was also added to decide whether the cycle will be repeated nrep times or almost nrep + 1 times, where nrep repetitions are allowed and the following T − 1 edges are also allowed (T being the number of edges in the cycle); the difference is shown in Figure 4.4.
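For example, the rotation step can be sketched in Matlab as follows (Cycle as returned by Constrained_Stability, with equal first and last elements):

Cycle = [1 2 3 4 1];
Cycle_aux = Cycle(1:end-1);            % drop the repeated last element
T = numel(Cycle_aux);                  % period of the cycle
rotations = cell(T, 1);
for k = 1:T
    r = circshift(Cycle_aux, -(k-1));  % start the cycle once from every node
    rotations{k} = [r r(1)];           % close the cycle again
end
% rotations = { [1 2 3 4 1], [2 3 4 1 2], [3 4 1 2 3], [4 1 2 3 4] }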

4.4.4 Implementation of Stabilizing Algorithm 4

The implementation of Alg. 4 is similar to that of Alg. 3, the only difference being that all the nodes from the previous step are considered original nodes. It also has the option of allowing up to nrep repetitions of a cycle, or almost nrep + 1.


Figure 4.4: Difference between allowing up to nrep or almost nrep + 1 repetitions. The left graph contains 2 full repetitions of the cycle, while the right graph contains almost 2 repetitions, by not adding the last node. All edges and nodes that do not belong to the reproduced cycle are hidden.


5  TEST CASES

This chapter consists of four test cases. The first (Section 5.1) and second (Section 5.2) test cases consist of 1-dimensional (n = 1) systems with 2 modes (m = 2); such systems are much more intuitive, because one can grasp their (in)stability by just multiplying the A1 and A2 values. The purposes of these tests are the following:

• Reinforce the differences between the stabilizing algorithms.
• Highlight the differences for different choices of nrep.
• Show that the scripts' procedures conform to what was expected in Chapter 3.
• Show the scripts' result interface (how all graphs appear to the user).

The third test case (Section 5.3) consists of a 2-dimensional (n = 2) system with 4 modes (m = 4) and was taken from the switched-systems literature [28]. In Section 5.4, we try to highlight potential new procedures that can improve the results obtained for the system of Section 5.3.

Lastly, in the fourth test case (Section 5.5), we will show that termination on an empty graph does not imply that no suitable stabilizing graph exists for the system. This will be done by running Alg. 1 and Alg. 2 on a simple system.

Edge labels will not always be shown, especially when the graph has a large number of nodes, so that the figures are not overloaded. This is possible because we always start from the graph allowing arbitrary switching, where all edges are of the form (v, w, w), even when v = w (selfloop); therefore we can read the label off the first digit of the destination node's label (i.e., all edges going into node 2 . . . have weight 2). There are exceptions in the final graphs, since they usually contain fewer nodes, and in Section 5.4, where we also show the edge labels.

5.1 Test Case 1

The first test case consists of a system with 2 modes of operation, A1 = 0.5 and A2 = 1.2, and d = 0. First, we apply Alg. 1 and, as can be seen in Figure 5.1, an unstable cycle ([2 2]) is detected; thereafter, the edge of this cycle is removed (Figure 5.2), and this graph is stable.


Figure 5.1: Test1-Algorithm 1-First iteration of Alg. 1-Creating the arbitrary switching graph and detecting instability in the selfloop of node 2.

Figure 5.2: Test1-Algorithm 1-The selfloop edge is removed, resulting in this graph when running Alg. 1 on the given system.


Figure 5.3: Test1-Algorithms 1, 2 and 3 (nrep = 1)-The resulting graph from running Alg. 2, Alg. 3 or Alg. 4 with nrep = 1 on the given system; the selfloop cycle is reproduced once.

On the other hand, we apply Alg. 2, allowing nrep = 1 and nrep = 2 repetitions of the encountered unstable cycles. Starting from Figure 5.1, we get Figure 5.3 (nrep = 1) and Figure 5.4 (nrep = 2). Since this algorithm tries to replicate detected unstable cycles nrep times, it gives more freedom to the graph by allowing more combinations.

When Alg. 3 and Alg. 4 are run, they result in the same switching graphs as Alg. 2. This is the case since the procedure is exactly the same when only single-edge unstable cycles are detected. It is clear that the final graph gives more freedom to the system, especially for nrep = 2, in which we can have mode 2 three times in a row, compared with two times in a row for nrep = 1 and only once with Alg. 1.


Figure 5.4: Test1-Algorithms 1, 2 and 3 (nrep = 2)-The resulting graph from running Alg. 2, Alg. 3 or Alg. 4 with nrep = 2 on the given system; the selfloop cycle is repeated twice.

5.2 Test Case 2

The second test case consists of a similar system, the only difference being that this time we have A2 = 10 (really unstable considering A1 = 0.5). We also used d = 0 for this case.

Using Alg. 1, we start by building the arbitrary switching graph seen in Figure 5.1; an unstable cycle [2 2] is detected and its first edge (the only edge in this case) is removed, resulting in the graph of Figure 5.5. On this new graph, the unstable cycle [1 2 1] is detected and the first edge is removed; as seen in Figure 5.6, node 2 becomes isolated and is removed. The final graph is in Figure 5.7.

If instead we use Alg. 2 with nrep = 1, we start with the same arbitrary graph (Figure 5.1). In the second iteration (Figure 5.8) the unstable cycle [1 2 1] is detected and the cycle is reproduced (Figure 5.9), producing some isolated nodes, which are removed. In the third iteration (Figure 5.10) the unstable cycle [1 2_2 1_1 1] is detected; since it contains replica nodes, the last edge between replica nodes is cut (Figure 5.11) and the isolated nodes are removed, resulting in a single-node graph (Figure 5.7).


Figure 5.5: Test2-Algorithm 1-Second iteration: the selfloop edge is removed, then the unstable cycle [1 2 1] is detected.

Figure 5.6: Test2-Algorithm 1-The first edge ([1 2]) is removed and node 2 becomes isolated; this is the Remove_Isolated part of the algorithm.


Figure 5.7: Test2-Algorithms 1 and 2-After removing the isolated nodes, only node 1 with a selfloop remains.

Figure 5.8: Test2-Algorithms 2 (nrep = 1) and 3 (nrep = 1)-Second iteration: the selfloop edge is removed, then the unstable cycle [1 2 1] is detected.


Figure 5.9: Test2-Algorithm 2 (nrep = 1)-Second iteration: isolated nodes 2 and 2₁ will be removed.

Figure 5.10: Test2-Algorithm 2 (nrep = 1)-Third iteration detects the unstable cycle [1 2₂ 1₁ 1], which contains replica nodes.


Figure 5.11: Test2-Algorithm 2 (nrep = 1)-Since the cycle had replica nodes, the edge between 1₁ and 2₂ is removed; nodes 1₁ and 2₂ become isolated.

Next, with Alg. 3 and nrep = 1, we get the graphs of Figure 5.1, Figure 5.8 and Figure 5.12 through Figure 5.17. The result is definitely better than the one from the previous algorithms, which just produce the trivial result of a node with a selfloop.

With Alg. 3 and nrep = 2, we end up with a slightly better result, where a few more possibilities are allowed (Figure 5.18-Figure 5.24).

Lastly, with Alg. 4 and nrep = 1, we get the graphs of Figure 5.1, Figure 5.8 and Figure 5.25 through Figure 5.42. The result (Figure 5.42) ends up being the best so far (as expected), allowing mode 2 to happen once or twice in a row.


Figure 5.12: Test2-Algorithm 3 (nrep = 1)-Second iteration: isolated nodes 2, 2₁, 1₂ and 2₃ will be removed.

Figure 5.13: Test2-Algorithm 3 (nrep = 1)-Third iteration detects the unstable cycle [1 2₂ 1₁ 1].


Figure 5.14: Test2-Algorithm 3 (nrep = 1)-The original edges are replicated; some nodes become isolated.

Figure 5.15: Test2-Algorithm 3 (nrep = 1)-On the fourth iteration, after removing the isolated nodes, the algorithm detects the unstable cycle [1 2₄ 1₃ 1₄ 1].


Figure 5.16: Test2-Algorithm 3 (nrep = 1)-The cycle is replicated and some nodes become isolated.

Figure 5.17: Test2-Algorithm 3 (nrep = 1)-The result is a graph with 5 nodes that forces the system to undergo mode 1 at least four times in a row whenever mode 2 happens.


Figure 5.18: Test2-Algorithm 3 (nrep = 2)-Second iteration: the selfloop edge is removed, then the unstable cycle [1 2 1] is detected.

Figure 5.19: Test2-Algorithm 3 (nrep = 2)-Second iteration: isolated nodes 2, 2₁, 1₂ and 2₃ will be removed.


Figure 5.20: Test2-Algorithm 3 (nrep = 2)-Third iteration detects the unstable cycle [1 2₃ 1₁ 1].

Figure 5.21: Test2-Algorithm 3 (nrep = 2)-The original edges are replicated; some nodes become isolated.


Figure 5.22: Test2-Algorithm 3 (nrep = 2)-On the fourth iteration, after removing the isolated nodes, the algorithm detects the unstable cycle [1 2₇ 1₅ 1₆ 1].

Figure 5.23: Test2-Algorithm 3 (nrep = 2)-The cycle is replicated and some nodes become isolated.


Figure 5.24: Test2-Algorithm 3 (nrep = 2)-The result is a graph with 9 nodes that forces the system to undergo mode 1 at least three or four times in a row, depending on which node the system is at; this is slightly better than the result for nrep = 1.

Figure 5.25: Test2-Algorithm 4-Iteration 2-The original edges are replicated, some nodes become isolated.


Figure 5.26: Test2-Algorithm 4-Third iteration, after removing the isolated nodes, the algorithm detects an unstable cycle.

Figure 5.27: Test2-Algorithm 4-Iteration 3-The original edges are replicated, some nodes become isolated.


Figure 5.28: Test2-Algorithm 4-On the fourth iteration, after removing the isolated nodes, the algorithm detects an unstable cycle.

Figure 5.29: Test2-Algorithm 4-Iteration 4-The original edges are replicated, some nodes become isolated.


Figure 5.30: Test2-Algorithm 4-On iteration 5, after removing the isolated nodes, the algorithm detects an unstable cycle.

Figure 5.31: Test2-Algorithm 4-Iteration 5-The original edges are replicated, some nodes become isolated.


Figure 5.32: Test2-Algorithm 4-On iteration 6, after removing the isolated nodes, the algorithm detects an unstable cycle.

Figure 5.33: Test2-Algorithm 4-Iteration 6-The original edges are replicated, some nodes become isolated.


Figure 5.34: Test2-Algorithm 4-On iteration 7, after removing the isolated nodes, the algorithm detects an unstable cycle.

Figure 5.35: Test2-Algorithm 4-Iteration 7-The original edges are replicated, some nodes become isolated.


Figure 5.36: Test2-Algorithm 4-On iteration 8, after removing the isolated nodes, the algorithm detects an unstable cycle.

Figure 5.37: Test2-Algorithm 4-Iteration 8-The original edges are replicated, some nodes become isolated.


Figure 5.38: Test2-Algorithm 4-On iteration 9, after removing the isolated nodes, the algorithm detects an unstable cycle.

Figure 5.39: Test2-Algorithm 4-Iteration 9-The original edges are replicated, some nodes become isolated.


Figure 5.40: Test2-Algorithm 4-On iteration 10, after removing the isolated nodes, the algorithm detects an unstable cycle.

Figure 5.41: Test2-Algorithm 4-Iteration 10-The original edges are replicated, some nodes become isolated.


Figure 5.42: Test2-Algorithm 4-The result is a graph with 12 nodes that forces the system to undergo mode 1 at least four times in a row whenever mode 2 happens, or at least seven times when mode 2 happens twice.

Figure 5.43: Test3-Proposed Graph from [28].

5.3 t e s t c a s e 3

We consider a 2-dimensional case based on [28], where the dynamics of a plant that may experience controller failures are $x_{t+1} = (A + BK_i)\,x_t$, with
$$A = \begin{bmatrix} 0.94 & 0.56 \\ 0.14 & 0.46 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix},$$
and $K_i = (k_{1,i}, k_{2,i})$. The control gains switch to represent 4 failure modes. When everything works as expected, i = 1 and K1 = (k1, k2) = (−0.49, 0.27). The second and third modes correspond respectively to a failure of the first and of the second part of the controller, with K2 = (0, k2) and K3 = (k1, 0). The last mode is the total-failure case, with K4 = (0, 0). The graph proposed for this problem in [28] can be seen in Figure 5.43.
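For reference, the four closed-loop matrices Ai = A + BKi can be assembled in a few lines of Matlab; this is only a sketch with illustrative variable names, not the thesis scripts, but it makes it easy to check which modes are stable in isolation.

    A = [0.94 0.56; 0.14 0.46];
    B = [0; 1];
    K = {[-0.49 0.27], [0 0.27], [-0.49 0], [0 0]};   % nominal, k1 fails, k2 fails, total failure
    for i = 1:4
        Ai = A + B*K{i};
        fprintf('mode %d: spectral radius = %.3f\n', i, max(abs(eig(Ai))));
    end

Running this reproduces the observation made below that A1 and A3 are stable while A2 and A4 are not.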

Starting from the arbitrary switching graph for the system (with 4 modes), all four algorithms have been tested; Algs. 1 and 2 (nrep = 1)


Figure 5.44: Test3-Algorithm 1-Resulting graph for the system described in Section 5.3.

terminated quickly, while Algs. 3 and 4 (nrep = 1) did not terminate for a long amount of time; all algorithms used d = 6 for the norm. In Subsection 5.3.1, we also use Alg. 2; however, we will cut the following edge instead of the standard one (described in Section 3.2) and will use nrep = 1 and nrep = 3.

The final graph obtained by running Alg. 1 is shown in Figure 5.44. Since this algorithm only cuts edges, the number of nodes can only decrease; in this case, it keeps the initial amount (4 nodes) during all iterations.

Note that A1 and A3 are stable, whereas A2 and A4 are not. The selfloops of the unstable modes are (obviously) removed, and after mode 2 the graph forces the use of a stable mode (1 or 3), whereas after mode 4 the graph also allows mode 2. This graph constrains the system significantly less than the one from Figure 5.43, in which two failures in a row were assumed not possible for both controllers. However, as seen in Figure 5.44, no matter how many times mode 3 (a failure of the second controller gain k2) happens in a row, the system remains stable. This is also true for many other paths, such as [4 2] and [4 3], which represent fixing only one of the controllers when both have failed.

The times taken per iteration are reported in Figure 5.45, which compares the time taken to run the stability test (Alg. B), called Stability-Check in the figures, with the total time spent during the iteration, called Total; it is clear that running the stability test takes most of the time in all iterations.
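The timings are collected with Matlab's tic/toc timers around the relevant calls, roughly as in the sketch below; this is a simplification, and the real scripts interleave further bookkeeping.

    t_total = tic;                % start of the iteration
    t_check = tic;
    % ... call to the stability test (Alg. B) on the current graph goes here ...
    time_check = toc(t_check);    % reported as "Stability-Check" in Figure 5.45
    % ... edge cutting or cycle replication, followed by Remove_Isolated ...
    time_total = toc(t_total);    % reported as "Total" in Figure 5.45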

The final graph obtained by running Alg. 2 also results in a stable solution, which is shown in Figure 5.46; this graph allows node 4 to
