
Università degli Studi di Pisa

Facoltà di Ingegneria

Corso di Laurea Magistrale in Ingegneria Robotica & Automazione

Tesi di Laurea Magistrale (Master’s Thesis)

Pattern Formation in Networks of

Kuramoto Oscillators:

A Geometric Approach for Analysis and Control.

Keywords: Nonlinear systems, Geometric & algebraic methods, Structured perturbation, Clusterization, Oscillators.

Candidato:

Lorenzo Tiberi Matricola 512678

Relatori:

Prof. Mario Innocenti
Prof. Fabio Pasqualetti

Correlatore:

Prof.ssa Lucia Pallottino

Anno Accademico 2015–2016


Impossible is just a big word thrown around by small men who find it easier to live in the world they've been given than to explore the power they have to change it. Impossible is not a fact. It's an opinion. Impossible is not a declaration. It's a dare. Impossible is potential. Impossible is temporary. Impossible is nothing.

— Muhammad Ali

A Riccardo e Loriana, genitori unici che hanno sempre saputo indicarmi la via. Fino a qui, con passione ed orgoglio, grazie al loro supporto.


CONTENTS

1 introduction
 1.1 Contribution
 1.2 Outline
2 preliminary & problem setup
 2.1 Preliminary on graph theory
 2.2 Notes on geometric approach
 2.3 Kuramoto as a generalized consensus
 2.4 Problem setup
3 analysis of clusterization
 3.1 Assumption on the analysis
 3.2 Clusterization theorem
 3.3 Side results & insights
 3.4 Generalized external equitable partitions
4 control of cluster synchronization
 4.1 Constrained optimization control mechanism
 4.2 Control without Hadamard constraints
5 simulations and examples
 5.1 Simulations with previously introduced network
 5.2 Simulation: control without constraints
 5.3 Simulation: control with constraints
 5.4 Further modelization with H
6 conclusions
a appendix
bibliography

LIST OF FIGURES

Figure 1: A graph G = (V, E), with V = {1, 2, 3, 4, 5, 6}. Notice the path {1, 4, 6, 5} highlighted with blue edges. By definition, each node is adjacent to the previous and/or the following.

Figure 2: In (a) an undirected graph with three nodes is depicted. Notice how the edges relate the nodes in both directions. In this model, communications are bidirectional. In (b) the graph is directed. For instance, notice how node 2 receives an input (a message) from node 3, without transmitting anything to the sender node.

Figure 3: Example graph for Definition 2.1.6. Notice that d(v_3) = a_31 + a_32, where a_ij is the weight of the connection from j to i.

Figure 4: In (a) the graph is strongly connected. Notice that for every pair of nodes there exists a path that links them. In (b) this property does not hold. Node 3 is a basin for the graph: once there, there is no path to escape.

Figure 5: Graph for Example 1. Notice that this graph is not connected, nor is the left part (nodes 1 to 5) strongly connected since, for instance, there is no path from node 3 to any other node of the left part. The isolated right graph G_2 = {6, 7, 8} is strongly connected. In fact, from each node of G_2 it is possible to reach every other node of G_2.

Figure 6: State evolution of (4) within Example 2. (a) The system is stable with a proper feedback choice of u. The evolution is in the whole space of dimension 2. (b) The friend F makes the closed-loop system (A + BF) V-invariant. Notice that the subspace V with basis v = [1, 1]^T is a locus of the trajectories for the initial conditions selected.

Figure 7: Phase evolution for system (7) with 100 oscillators interconnected through A. Notice how in (a) no synchronization pattern can be seen for the whole population of oscillators, whereas in (b) all the oscillators have the same phase at each time instant, thus evolving in full synchronization. The most interesting case is (c), where a pattern emerges for some of the phases, while others keep evolving in incoherence.

Figure 8: Figure (a) shows a generic network of oscillators with a partition P = {P_1, P_2, P_3} that is phase synchronized. For each P_i, i = 1, 2, 3, the phases are evolving cohesively and just for graphical purposes are depicted at different points. In figure (b) frequency synchronized nodes are shown. Notice how frequency synchronization does not imply phase synchronization since two nodes may have the same velocity (frequency), but different positions (phases).

Figure 9: Directed network of oscillators. Nodes are partitioned as P = {P_1, P_2}, with P_1 = {1, 2, 3} and P_2 = {4, 5, 6}. Notice that, for each node i of P_1 (resp. P_2), the sum of the weights of all edges (i, j), with j ∈ P_2 (resp. j ∈ P_1), is equal. We will show in Chapter 3 that this is a necessary condition for phase synchronization of the partition P.

Figure 10: For the network in Fig. 9 with natural frequencies ω = [30, 30, 30, 10, 10, 10]^T, figure (a) shows the frequencies of the oscillators in the clusters P_1 = {1, 2, 3} and P_2 = {4, 5, 6}. Notice that Assumption A1 is satisfied over the entire time interval. In figure (b) we let ω = [19, 19, 19, 10, 10, 10]^T. Notice that Assumption A1 is satisfied within bounded time intervals, such as [t_1, t_2].

Figure 11: Network of Remark 1. For each choice of the arc weights a_ij, the partition P = {P_1, P_2} with P_1 = {θ_1, θ_2} and P_2 = {θ_3, θ_4} is phase synchronizable. Hence, condition (i) of Theorem 3.2.1 is not necessary when Assumption A1 is not satisfied. In the simulation proposed in (b), θ = [0.3, 0.3, 0.3 + π, 0.3 + π]^T and ω = [40, 40, 40, 40]^T.

Figure 12: Graphs and simulations for Example 4. Notice that (a) is an external equitable partition since the output degree of each node with respect to different cells is the same. The graph is not an EP due to the missing edges (2, 1) and (3, 4). In (b) the graph is a GEEP. For instance, node 3 has different output degrees for nodes 1 and 2, but the system still synchronizes since the sum of the output degrees of 3 and 4 is the same with respect to nodes of the cell C_1 = {1, 2}.

Figure 13: Connection nature. Some connections among oscillators may be altered during control, others are fixed and cannot be perturbed. In this thesis, solid lines represent adjustable edges while dashed lines represent fixed (hence non-alterable) connections.

Figure 14: Network of four oscillators interconnected with each other, P = {P_1, P_2}. The cyan and magenta connections do not affect the synchronization properties of the network. Only the gray edges, which link nodes between different partitions P_i, play an active role in pattern formation and retention.

Figure 15: Linear system with input u(t). In (a) the input is arbitrary, while in (b) the state is fed back through the input matrix B with gain K. Notice that in the studied framework, and when the actual nonlinear dynamics is considered, ∆ = BK is the structured perturbation for Ā.

Figure 16: Directed network of oscillators. Nodes are partitioned as P = {P_1, P_2}, with P_1 = {1, 2, 3} and P_2 = {4, 5, 6}. Notice that, for each node i of P_1 (resp. P_2), the sum of the weights of all edges (i, j), with j ∈ P_2 (resp. j ∈ P_1), is equal. Hence, the generalized equitable partition condition is satisfied.

Figure 17: Phase and frequency simulations for the network as in Fig. 16. Notice that when the initial conditions for the phases are not in the image of V_P there is no convergence to the pattern of synchronization. Instead, when the initial conditions are chosen properly, the nodes in each partition evolve cohesively as specified by V_P. This is visible both on phases and frequencies.

Figure 18: Modified version of the network presented in Example 3. Notice that the input weight from 6 to 1 has been replaced and the new value is 12.

Figure 19: Simulation of the network in Fig. 18. Even starting with θ(0) ∈ Im(V_P), notice how the de-synchronization takes effect immediately due to the input weight from 6 to 1.

Figure 20: Modified version of the structure in Fig. 18. Here, the perturbed adjacency matrix allows synchronization. Moreover, the perturbation matrix ∆ has the minimum possible cost, measured by the Frobenius norm ||∆||_F, to enable pattern formation.

Figure 21: Phase and frequency simulations for the network as in Fig. 20 (adjusted network without constraints on edges). Notice that when the initial conditions for the phases are not in the image of V_P there is no convergence to the pattern of synchronization. Instead, when the initial conditions are chosen properly, the nodes in each partition evolve cohesively as specified by V_P. This is enabled thanks to the structural perturbation ∆.

Figure 22: Constrained network presented in Section 5.3. The dashed edges cannot be changed, whereas the arcs depicted with solid lines may be perturbed with ∆.

Figure 23: Adjusted network with structural control through ∆*. Notice how the dashed lines are not modified (and also no forbidden connection is plugged in). The only adjustments made are feasible, hence the cost results in a finite, real number.

Figure 24: Phase and frequency simulations for the adjusted network of Fig. 23. Notice that when the θ(0) are chosen inside the image of V_P the synchronization pattern is formed and retained. This is possible thanks to the adjacency perturbation that has been implemented via structural control ∆.

Figure 25: Adjusted network with structural control through ∆* with H_1. Notice that h_36 is a preferred direction of adjustment since the cost of modification for this arc is less with respect to the (unconstrained) others.

Figure 26: Adjusted network with structural control through ∆* with H_2. Notice that h_35 is a preferred direction of adjustment since the cost of modification for this arc is less with respect to the (unconstrained) others.

Figure 27: A detail of the CDC 2017 flyer.


ABSTRACT

Synchronization is crucial for the correct functionality of many natural and man-made complex systems. In this work we characterize the formation of synchronization patterns in networks of Kuramoto oscillators. Specifically, we reveal conditions on the network weights and structure, and on the oscillators' natural frequencies, that allow the phases of a group of oscillators to evolve cohesively, yet independently from the phases of oscillators in different clusters. Our conditions are applicable to general directed and weighted networks of heterogeneous oscillators. Surprisingly, although the oscillators exhibit nonlinear dynamics, our approach relies entirely on tools from linear algebra and graph theory. Further, we develop a control mechanism to determine the smallest (as measured by the Frobenius norm) network perturbation that ensures the formation of a desired synchronization pattern. Our procedure allows us to constrain the set of edges that can be modified, thus enforcing a desired sparsity structure on the network perturbation. The results are validated through a set of numerical examples.

SOMMARIO

Il fenomeno della sincronizzazione è cruciale per il corretto funzionamento di molti sistemi naturali ed artificiali. In questo lavoro verrà caratterizzata la formazione di specifici pattern di sincronizzazione in reti di oscillatori di Kuramoto. Nel dettaglio, verranno fornite condizioni sulla struttura delle connessioni tra nodi e sulle frequenze naturali degli oscillatori per garantire una evoluzione coesa delle fasi di gruppi di oscillatori. I risultati proposti sono applicabili a reti modellate tramite grafi diretti e pesati che evidenziano una dinamica nonlineare con accoppiamento diffusivo. Nonostante gli oscillatori siano caratterizzati da dinamica nonlineare, l'approccio proposto sfrutta tecniche di algebra lineare e teoria dei grafi. Oltre all'analisi, presentiamo un nuovo meccanismo di controllo per determinare la più piccola perturbazione strutturale (misurata dalla norma di Frobenius) in grado di ottenere la formazione di un desiderato pattern di sincronizzazione. La procedura permette di vincolare una serie di archi, così da forzare un grado di sparsità variabile nella struttura della perturbazione di rete. I risultati sono validati e presentati tramite una serie di esempi.



ACKNOWLEDGEMENTS (ITALIAN)

Those who dream by day are cognizant of many things which escape those who dream only by night

— Edgar Allan Poe

Grazie ai miei genitori, Riccardo e Loriana. Tutto questo non sarebbe stato possibile senza il loro supporto continuo, nonostante tutte le difficoltà di questo periodo. Grazie a Matteo, mio fratello minore, con cui sono e sarò sempre legato da un legame indissolubile.

Grazie a Simona che, da quattro anni a questa parte, riesce ad ascoltarmi, sopportarmi, starmi vicino e darmi tutto il suo amore.

Un grandissimo ringraziamento al Professor Mario Innocenti, mio relatore di tesi presso l'Università di Pisa, per avermi guidato nella scelta del progetto, durante il percorso, e nella stesura di questa relazione finale.

Un ringraziamento speciale al Professor Fabio Pasqualetti, relatore esterno presso l’Università della California a Riverside. Ho avuto il grande piacere di lavorare con una persona estremamente talentuosa ed amante della ricerca accademica. Una figura d’esempio da cui spero di aver imparato qualcosa.

Un ringraziamento alla Professoressa Lucia Pallottino, controrelatrice di tesi presso l’Università di Pisa, per gli aiuti nella stesura delle slides ed il continuo ed appassionato feedback fornitomi.

Un ringraziamento ai Prof. Gianni Bianchini ed Andrea Garulli di Unisi.

Per primi hanno instillato in me la passione per i controlli automatici con il loro entusiasmo e abilità di insegnamento fuori dal comune.

Grazie a Chiara, amica-collega con cui ho passato intere giornate in lab in cerca di nuovi risultati, tra disperazione e gioia, con una unica grande costante: la ciaccolata di metà pomeriggio/prima di uscire. Grazie per l’aiuto che mi hai offerto, e per tutti i momenti divertenti passati insieme.

Grazie a Tommaso. Dude, ci siamo proprio divertiti. Durante i weekend in giro qua e là. Durante la settimana lavorativa, anche se restavamo in lab fino a tardi o eravamo concentrati nei nostri rispettivi progetti, i momenti di nonsense, ilarità generale, o semplici chiacchierate non sono mai mancati.

Abbiamo sviluppato un grande legame, nato tra i banchi dell’Università a Pisa e rinforzato durante questi miei mesi di permanenza a Riverside.

Grazie a Gian. UCR mi ha dato la possibilità di conoscere un nuovo grande amico che spero continuerà ad essere tale nonostante la distanza.

E’ stato un piacere condividere questi mesi con te, tra risate, scherzi e chiac- chierate continue.

Thanks to all my friends and lab mates. Yin-Chen, Rajasekhar, Akila, Siddharth, Christian, Vahid, Alireza, Vaibhav, Sofia & Alessandra. We spent awesome days together. We worked a lot, but we also enjoyed everything that SoCal has to offer. Beautiful trips, parties, beaches and, most importantly, shared company.

Grazie a Gianluca. Cosa dire? Ci conosciamo da una vita, e fin da quando ho memoria, sei sempre stato un pilastro per me. Ci siamo divertiti da matti a partire dalle scuole materne, e non abbiamo mai smesso nel corso degli anni. Sei un amico unico e ti ringrazio per tutti i consigli, i momenti di serietà e le tante giornate di svago passate insieme.

Grazie a Davide. Non è facile avere la fortuna di trovare amici come te.

Il nostro è un rapporto in continua evoluzione, nato tra i banchi delle elementari e rinforzato nel corso degli anni. Abbiamo scelto lo stesso percorso scolastico e poi accademico che ci ha portati fino a qua. Ed anche se le cose, inevitabilmente, cambieranno, so che potrò sempre contare su di te.

Grazie a Francesco. La triennale mi ha permesso di conoscerti bene ed ora, guardandomi alle spalle, vedo quante giornate abbiamo passato insieme a studiare e divertirci. Prima ad Arezzo, poi nella avventura a Pisa dove siamo stati coinquilini, oltre che compagni di corso. In pochi anni sei diventato uno dei miei migliori amici, e per questo ti sono grato.

Grazie ai miei amici storici, quelli con cui ho condiviso i momenti più belli della mia vita. Se ho raggiunto questo traguardo, in parte è anche merito vostro. Sarebbe impossibile scrivere un pensiero personalizzato uno per uno (finirei per avere più pagine di ringraziamenti che di tesi!), ma sappiate che in cuore mio avete molto più spazio di un semplice aforisma. Grazie a Gabri, Barna, Leva, Parro, Tommy, Asmando, Ventu, Niko, Fede, Guido, Terra, Acu, Peter, Giorgio.

Grazie a tutti quelli che mi sono stati vicini e mi hanno supportato direttamente ed indirettamente. Grazie a tutti quelli che non ho citato, a tutti quelli di cui (mi scuso) mi sono dimenticato. Spero che ognuno porti dentro di sé un bel ricordo di me e che, con gli anni e tutte le esperienze future che affronteremo, il legame che ci lega diventi sempre più saldo.

Arezzo, maggio 2017


1 INTRODUCTION

It is theory that decides what can be observed.

— Albert Einstein

Synchronization of coupled oscillators is everywhere in nature (Ferrari et al. 2015; Lewis et al. 2014; Strogatz 2000) and in several man-made systems, including power grids (Dörfler and Bullo 2014) and computer networks (Nishikawa and Motter 2015). While some systems require complete synchronization among all the parts to function properly (Danino et al. 2010; Kim et al. 2010), others rely on cluster or partial synchronization, where subsets of nodes exhibit coherent behaviors that remain independent from the evolution of other oscillators in the network. For example, while partial synchronization patterns have been observed in healthy individuals (Schnitzler and Gross 2005), complete synchronization in neural systems is associated with degenerative diseases including Parkinson's and Huntington's diseases (Banaie et al. 2009; Hammond et al. 2007; Rubchinsky et al. 2012), and epilepsy (Lehnertz et al. 2009). Cluster synchronization has received attention only recently, and several fundamental questions remain unanswered, including the characterization of the network features enabling the formation of a desired pattern, and the development of control mechanisms to enforce the emergence of clusters.

In this thesis we focus on networks of Kuramoto oscillators (Kuramoto 1975), and we characterize intrinsic and topological conditions that ensure the formation of desired clusters of oscillators. Our network model is motivated by a large body of literature showing broad applicability of the Kuramoto model to virtually all systems that exhibit synchronization properties.

Although Kuramoto networks exhibit nonlinear dynamics, we adopt tools from linear algebra and graph theory to characterize network conditions enabling the formation of a given synchronization pattern. Further, we design a control mechanism to perturb (a subset of) the network weights so as to enforce or prevent desired synchronization patterns.

Complete synchronization in networks of Kuramoto oscillators has been extensively studied, e.g., see Cenedese and Favaretto 2015; Dörfler and Bullo 2014; Gómez-Gardeñes et al. 2007; Mehta et al. 2015; Zhang et al. 2014. It has been shown that synchronization of all nodes emerges when the coupling strength among the agents is sufficiently larger than the heterogeneity of the oscillators' natural frequencies. Partial synchronization and pattern formation have received considerably less attention, with the literature being composed of only a few recent works. In Louis M. Pecora et al. 2014 it is shown how symmetry of the interconnections may lead to partial synchronization. Methods based on graph symmetry have also been used to find all possible clusters in networks of Laplacian-coupled oscillators (Sorrentino et al. 2016). The relationship between clusterization and network topology has been studied in Lu et al. 2010 for unweighted interconnections. In Dahms et al. 2012, the emergence and the stability of groups of synchronized agents within a network has been studied for different classes of dynamics, like delay-coupled laser models and neuronal spiking models. Here, the approach of the master stability function has been used to characterize the results. In Favaretto, Bassett, et al. 2017 and Favaretto, Cenedese, et al. 2017 the idea is put forth to study an approximate notion of cluster synchronization in Kuramoto networks via tools from linear systems theory. It is quantitatively shown how cluster synchronization depends on strong intra-cluster and weak inter-cluster connections, similarity with respect to the natural frequencies of the oscillators within each cluster, and heterogeneity of the natural frequencies of coupled oscillators belonging to different subnetworks. With respect to this work, we focus on an exact notion of cluster synchronization, identify necessary and sufficient conditions on the network weights and oscillators' natural frequencies for the emergence of a desired synchronization pattern, and exploit our analysis to design a structural control algorithm for the formation of a desired synchronization pattern.

The work that is closest to this thesis is Schaub et al. 2016, where the authors relate cluster synchronization to the notion of an external equitable partition in a graph. In fact, the notion of an external equitable partition can be interpreted in terms of invariant subspaces of the network adjacency matrix, a notion that we exploit in our development. However, the analysis in Schaub et al. 2016 is carried out with unweighted and undirected networks and, as we show in this thesis, the conditions in Schaub et al. 2016 may not be necessary when dealing with directed and weighted networks. Further, our approach relies on simple notions from linear algebra, and leads to the development of our control algorithm for the formation of desired patterns.

1.1 contribution

The contributions of this thesis are twofold. First, we consider a notion of exact cluster synchronization, where the phases of the oscillators within each cluster remain equal to each other over time, and different from the phases of the oscillators in different clusters. We derive necessary and sufficient conditions for the formation of a given synchronization pattern in directed and weighted networks of Kuramoto oscillators. In particular, we show that cluster synchronization is possible if and only if (i) the natural frequencies are equal within each cluster, and (ii) for each cluster, the sum of the weights of the edges from every separate group is the same for all nodes in the cluster. Second, we leverage our characterization of cluster synchronization to develop a control mechanism that modifies the network weights so as to ensure the formation of a desired synchronization pattern. Our control method is optimal, in the sense that it determines the smallest (measured by the Frobenius norm) network perturbation for a given synchronization pattern, and it guarantees the modification of only a desired subset of the edge weights.

1.2 outline

The rest of this thesis is organized as follows.

chapter 2 contains some preliminary definitions on graph theory, concepts from the geometric approach to control theory, and the problem definition. The Kuramoto dynamics is introduced and shown via simulations. Relevant formal structures used throughout the thesis are made clear by means of a network example.

chapter 3 explains the derived conditions for cluster synchronization on networks of oscillators. In this chapter the analysis is carried out analytically, providing one main theorem. This result is the cornerstone of the research, relating the concepts of subspace invariance with nonlinear dynamics. Two corollaries follow. These are side results regarding the mathematical structure of one of the clusterization conditions. Finally, one necessary remark addressing one assumption made is given.

chapter 4 describes the structured control implemented to force a particular pattern of clusterization. The main theorem is proven taking into account the sparsity constraints of the optimal perturbation. This chapter contains the closed-form solution for such a perturbation matrix. A corollary about the structure of the controller when no arc constraints are present is given.

chapter 5 contains one network example where the conditions of the previous chapters are implemented numerically through Matlab simulations. Plots show the correctness of the proposed approach. Further developments regarding the sparsity constraint exploitation are presented.

chapter 6 finally concludes the thesis with a summary of the produced results.


2 PRELIMINARY & PROBLEM SETUP

If a man writes a book, let him set down only what he knows.

I have guesses enough of my own

— Johann Wolfgang Goethe

In this chapter we recall useful concepts regarding graph theory, linear algebra and the geometric approach for linear systems. Most of the definitions and results presented here constitute the theoretical foundation of the thesis. The material developed in this chapter is a personal elaboration based on Mesbahi and Egerstedt 2010, Lang 1987 and Basile and Marro 1991.

2.1 preliminary on graph theory

Graph-based abstractions of networked systems contain virtually no information about what is shared by the agents, the nature of the transmitted signal, or the subsequent elaborations after transmission. Nevertheless, graph models are extremely important due to their broad applicability to different classes of systems, from robotic agents moving within the plane, to computers connected via the internet, to brain areas in neuroscience.

Even if the agents and, consequently, the information received and sent are completely different for such classes of systems, the high-level abstraction given by graphs helps to cope with heterogeneity within the graph-based paradigm. We formalize the discussion on graph theory via the following definitions.

Definition 2.1.1. (Graph) A graph G = (V, E) is a structure made of a finite set V of nodes (also called vertices) and a set of edges E ⊆ V × V that link pairs of nodes. Two nodes connected by an edge are called adjacent nodes.

Definition 2.1.2. (Path on a graph) A path in G is a list of distinct nodes such that each node is adjacent to the previous and/or to the following vertex.

Definition 2.1.3. (Connectedness of a graph) A graph G = (V, E) is connected if for every pair of nodes in V there exists a path that has those nodes as starting and ending nodes.

Definitions 2.1.1, 2.1.2 and 2.1.3 are given without specifying the nature of the edge connecting two different nodes. If all the edges are equal and have no direction, the graph is referred to as an unweighted and undirected graph. Loosely speaking, this means that the flow of information between two nodes A and B is the same in both directions. The absence of weights on the links results in the same effect, or the same input, among different nodes.

Figure 1: A graph G = (V, E), with V = {1, 2, 3, 4, 5, 6}. Notice the path {1, 4, 6, 5} highlighted with blue edges. By definition, each node is adjacent to the previous and/or the following.

Clearly, few phenomena can be represented with this degree of approximation. More detailed and complex models are obtained when we deal with weighted and directed graphs.

Definition 2.1.4. (Weighted graphs) Given V and E, let ω : E → R be a function that associates a scalar value with each edge in E. The graph G = (V, E, ω) is a weighted graph.

Definition 2.1.5. (Directed graphs) A graph G = (V, E) is a directed graph, often called a digraph, if E is made of oriented pairs of nodes. The link (i, j) ∈ E is an arc from j to i, where the nodes j and i are, respectively, the tail and the head of the arrow connecting the two vertices.

Figure 2: (a) Undirected graph; (b) Directed graph. In (a) an undirected graph with three nodes is depicted. Notice how the edges relate the nodes in both directions. In this model, communications are bidirectional. In (b) the graph is directed. For instance, notice how node 2 receives an input (a message) from node 3, without transmitting anything to the sender node.

Given a directed graph G = (V, E), the input set of node v_i ∈ V is defined as the set {v_j ∈ V | (v_i, v_j) ∈ E}. In a similar fashion, the output set of node v_i ∈ V is defined as the set {v_j ∈ V | (v_j, v_i) ∈ E}. In order to quantify the impact of the connections for the node v_i ∈ V we rely on a measure of input degree.

Definition 2.1.6. (Input degree for digraphs) Given the input set of node v_i ∈ V as I_i = {v_j ∈ V | (v_i, v_j) ∈ E}, the input degree of v_i is defined as

d(v_i) = \sum_{j \in I_i} \omega(v_i, v_j).


Figure 3: Example graph for Definition 2.1.6. Notice that d(v_3) = a_31 + a_32, where a_ij is the weight of the connection from j to i.

Definition 2.1.7. (Direct paths) A direct path in G is a collection of nodes such that each node belongs to the input set of the following and/or to the output set of the previous.

Definition 2.1.8. (Strong connection) A directed graph G is strongly connected if for every pair of nodes there exists a direct path that links them together.

Figure 4: (a) Strongly connected graph; (b) Non strongly connected graph. In (a) the graph is strongly connected. Notice that for every pair of nodes there exists a path that links them. In (b) this property does not hold. Node 3 is a basin for the graph: once there, there is no path to escape.

Related graph matrices

Thanks to the definitions presented, the concepts of degree and adjacency matrix for a digraph will be introduced. Finally, the Laplacian matrix for a digraph will be defined.

Definition 2.1.9. (Degree matrix) The degree matrix of a graph G is a diagonal positive semidefinite matrix of dimensions n × n, where n is the number of nodes in G. The (i, i)-entry is defined as d(v_i). Specifically,

\Delta(G) = \begin{bmatrix} d(v_1) & 0 & \cdots & 0 \\ 0 & d(v_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d(v_n) \end{bmatrix}.


Definition 2.1.10. (Adjacency matrix) The adjacency matrix of a digraph G is an n × n matrix that describes the weighted and directed connections among nodes, i.e.,

[A(G)]_{ij} = \begin{cases} \omega_{ij} & \text{if } (v_i, v_j) \in E, \\ 0 & \text{otherwise}, \end{cases}

where ω_ij is the weight related to the edge (v_i, v_j), also written ω(v_i, v_j).

Definition 2.1.11. (Graph Laplacian) The graph Laplacian is an n × n matrix related to the graph G, defined as

L(G) = \Delta(G) - A(G).

Notice that the graph Laplacian sums to 0 by rows (by construction). Hence the matrix always has a 0 eigenvalue with eigenvector 1, i.e., L(G)1 = 0, where 1 is the vector with all entries equal to 1.

We now propose an example graph G where the defined matrices are shown.

Figure 5: Graph for Example 1. Notice that this graph is not connected, nor is the left part (nodes 1 to 5) strongly connected since, for instance, there is no path from node 3 to any other node of the left part. The isolated right graph G_2 = {6, 7, 8} is strongly connected. In fact, from each node of G_2 it is possible to reach every other node of G_2.

Example 1. (Defined matrices) Consider the network depicted in Fig. 5, with graph G = (V, E), V = {1, 2, 3, 4, 5, 6, 7, 8}. The graph is described by the following matrices:

D = \begin{bmatrix} 1&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0 \\ 0&0&3&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0 \\ 0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&1&0 \\ 0&0&0&0&0&0&0&1 \end{bmatrix}, \quad
A = \begin{bmatrix} 0&1&0&0&0&0&0&0 \\ 1&0&0&0&0&0&0&0 \\ 1&1&0&1&0&0&0&0 \\ 0&0&0&0&1&0&0&0 \\ 0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&0&0&1 \\ 0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&1&0 \end{bmatrix}.
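For readers who want to reproduce these matrices, the following is a minimal NumPy sketch (the thesis itself uses Matlab for its numerical work, so this is only an illustrative stand-in). It assembles A for the graph of Fig. 5 and derives the degree and Laplacian matrices of Definitions 2.1.9 and 2.1.11.

```python
import numpy as np

# Adjacency matrix of the graph in Fig. 5: entry A[i, j] is the weight of the arc from j to i.
A = np.array([
    [0, 1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))     # input degrees d(v_i) on the diagonal (Definition 2.1.9)
L = D - A                      # graph Laplacian (Definition 2.1.11)

# The Laplacian sums to zero by rows, hence L @ 1 = 0.
assert np.allclose(L @ np.ones(8), 0)
print(np.diag(D))              # [1. 1. 3. 1. 1. 1. 1. 1.]
```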

2.2 notes on geometric approach

The essence of the geometric approach to linear control consists of developing most of the mathematical support in coordinate-free form, to take advantage of simpler and more elegant results, which facilitate insight into the actual meaning of statements and procedures; the computational aspects are considered independently of the theory and handled by means of the standard methods of matrix algebra, once a suitable coordinate system is defined.

The cornerstone of the approach is the concept of invariance of a subspace with respect to a linear transformation (Pasqualetti 2007). In this section the properties and geometric meaning of invariants are reported. Notice that the geometric approach applies to linear time-invariant systems. However, in Chapter 3 we show how this theory can be applied also to nonlinear phase oscillators to derive conditions for synchronization.

This introductory section is a personal elaboration of Basile and Marro 1991 and Pasqualetti 2007. Note that the concepts of vector spaces over fields, subspaces and other building blocks from linear algebra are taken for granted. For a quick review, the reader may skim through Lang 1987 or Strang et al. 1993.

Consider a linear transformation A : R^n → R^n. Recall that an A-invariant is a subspace L such that

A L ⊆ L.    (1)

Proposition 2.2.1. (Invariants with coordinates) A subspace L with basis matrix V is an A-invariant if and only if there exists a matrix X such that

AV = VX.    (2)

Proof. Let v_i, i = 1, . . . , r, be the columns of V: L is an A-invariant if and only if each transformed column is a linear combination of all columns, i.e., if and only if there exist vectors x_i such that Av_i = Vx_i for i = 1, . . . , r. Equation (2) expresses this relation in matrix form.
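Proposition 2.2.1 also yields a direct numerical test for invariance: solve AV = VX in the least-squares sense and check the residual. A small sketch under the assumption that NumPy is available (the helper name is ours, not from the thesis):

```python
import numpy as np

def is_invariant(A, V, tol=1e-9):
    """Return True if span(V) is A-invariant, i.e. if AV = VX for some X (Proposition 2.2.1)."""
    X, *_ = np.linalg.lstsq(V, A @ V, rcond=None)   # best X in the least-squares sense
    return np.linalg.norm(A @ V - V @ X) < tol

A = np.array([[0., 1.], [2., 1.]])
V = np.array([[1.], [1.]])          # basis of the subspace x1 = x2 (used again in Example 2)
print(is_invariant(A, V))           # False: without feedback, A does not leave this subspace invariant
```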

Proposition 2.2.2. (Invariants with change of coordinates) If a subspace L with basis matrix V is A-invariant, then it is A-invariant for every choice of the basis matrix.

Proof. Since the subspace L has basis matrix V and is A-invariant, it holds that AV = VX for a proper choice of X. Consider a change of coordinates expressed by the matrix T. We require T to be non-singular so that the change of coordinates is invertible. It follows that A' = T^{-1} A T, where A' is A expressed in the new coordinates. From equation (2) we get

A' T^{-1} V = T^{-1} V X.

Defining W := T^{-1} V it is easy to notice how L with basis W is A'-invariant.

Proposition 2.2.2 states that the invariance of a subspace is a coordinate-free concept that does not depend on the chosen coordinate system. The following theorem characterizes one particular similarity transformation of A which will be useful in the next chapters.


Theorem 2.2.3. (Useful structure for A') Let A : R^n → R^n be a linear map and L ⊆ R^n be an A-invariant subspace. There exists a similarity transformation T such that

A' = T^{-1} A T = \begin{bmatrix} A'_{11} & A'_{12} \\ 0 & A'_{22} \end{bmatrix},    (3)

where A'_{11} is an h × h matrix with h = dim(L).

Proof. Assume T = [T_1  T_2] with Im(T_1) = L. Clearly W = T^{-1} T_1 = \begin{bmatrix} I_h \\ 0 \end{bmatrix}, and together with A' W = W X this implies the structure given in equation (3).

For the following, consider the linear time-invariant continuous system described by the equations

\dot{x}(t) = A x(t) + B u(t),
y(t) = C x(t).    (4)

We are now ready to state one major result that will be used in Chapter 3.

Lemma 2.2.4. (The fundamental lemma of the geometric approach) Any state trajectory x|_{[t_0, t_1]} of (4) belongs to a subspace L ⊆ R^n if and only if x(t_0) ∈ L and \dot{x}(t) ∈ L almost everywhere in [t_0, t_1].

Proof. Recall that a Lebesgue measurable and integrable function is zero almost everywhere in [t_0, t_1] if and only if its integral over any subinterval of [t_0, t_1] is zero. Apply this property to the function Y^T \dot{x}(t), where Y denotes a basis matrix of L^⊥: the function is zero almost everywhere in [t_0, t_1] if and only if

Y^T \int_{t_0}^{t} \dot{x}(\tau) \, d\tau = Y^T \big( x(t) - x(t_0) \big) = 0 \quad \forall t \in [t_0, t_1].

Since x(t_0) ∈ L, this clearly implies x(t) ∈ L for all t ∈ [t_0, t_1].

It may be proved that, in the absence of control action, a subspace of the state space R^n is a locus of trajectories if and only if it is an A-invariant. When the control u(t) is present we can steer the state along a convenient trajectory contained in the subspace L ⊆ R^n. Intuitively, this leads to the definition of a new type of invariant, the (A, B)-controlled invariant.

Definition 2.2.5. (Controlled invariant) Consider a pair (A, B) describing the system (4). A subspace V ⊆ R^n is said to be an (A, B)-controlled invariant if there exists a feedback control law u(t) = Fx(t) such that V is an invariant subspace for the dynamic matrix of

\dot{x}(t) = (A + BF) x(t).

Definition 2.2.5 can also be given in a coordinate-free fashion, as done with invariants in equation (1).

Definition 2.2.6. (Controlled invariant #2) Consider a pair (A, B) describing the system (4). A subspace V ⊆ R^n is said to be an (A, B)-controlled invariant if there exists a matrix F such that

(A + BF) V ⊆ V.

Such an F is called a friend of V.

Theorem 2.2.7. (Controlled invariant condition) Consider a pair (A, B) describing the system (4). A subspace V ⊆ R^n is an (A, B)-controlled invariant if and only if

A V ⊆ V + B,  with  B := Im(B).    (5)

Proof. (Only if) Suppose that F is a friend of V; then (A + BF)V ⊆ V, or

A V ⊆ V − B(F V).

Since B(F V) ⊆ Im(B), equation (5) holds.

(If) Let v_1, v_2, . . . , v_r be a basis for V. Since V satisfies AV ⊆ V + Im(B), for each i = 1, . . . , r there exist w_i ∈ V and u_i ∈ R^m such that Av_i = w_i + Bu_i. Let F be an m × n matrix such that Fv_i = −u_i for i = 1, . . . , r. Then Av_i = w_i − BFv_i, i.e., (A + BF)v_i = w_i ∈ V, and therefore (A + BF)V ⊆ V.

The sufficiency part of the proof of Theorem 2.2.7 implicitly gives a way to test whether a subspace V ⊆ R^n is an (A, B)-controlled invariant and to find a friend F. In the following, we clarify this concept by means of a simple 2-dimensional example.

Example 2. (Controlled invariance) Consider a linear system described by (4) with matrices

A = \begin{bmatrix} 0 & 1 \\ 2 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.

Verify that the subspace V = {x ∈ R^2 : x_1 = x_2} is an (A, B)-controlled invariant and find a friend F.

Notice that V is spanned by v = [1, 1]^T. Since

A v = \begin{bmatrix} 0 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + 2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} \in V + Im(B),

where we let u = 2, the (A, B)-controlled invariance is verified. In order to find a friend F we solve the linear system of equations Fv = −u, i.e.,

\begin{bmatrix} f_1 & f_2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = −2.

Hence, the set of solutions is F = {(λ − 2, −λ); λ ∈ R}. Among the infinitely many solutions, choose, for example, F = [−2, 0]. As shown in Fig. 6, the closed-loop system with this feedback gain matrix F is unstable, but for initial conditions in V its trajectories evolve within the one-dimensional subspace V rather than spanning the full two-dimensional state space.

In the following chapters we will use the concepts of invariant and controlled invariant in order to give necessary and sufficient conditions for a network of nonlinear oscillators to cluster-synchronize.
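A friend F for Example 2 can also be computed numerically, mirroring the sufficiency part of the proof of Theorem 2.2.7: decompose Av = w + Bu with w ∈ V and then impose Fv = −u. The following NumPy sketch is tailored to the single-input example above and is not meant as a general routine:

```python
import numpy as np

A = np.array([[0., 1.], [2., 1.]])
B = np.array([[0.], [1.]])
v = np.array([[1.], [1.]])                      # basis of V = {x : x1 = x2}

# Decompose A v = alpha*v + u*B by solving the square system [v  B] [alpha; u] = A v.
alpha, u = np.linalg.solve(np.hstack([v, B]), A @ v).ravel()

F = np.array([[-u, 0.]])                        # one solution of F v = -u (here F = [-2, 0])
Acl = A + B @ F                                 # closed-loop matrix

# V is (A + BF)-invariant: (A + BF) v stays aligned with v.
print(np.allclose(Acl @ v, (Acl @ v)[0, 0] * v))   # True
```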


Figure 6: State evolution of (4) within Example 2. (a) Evolution in R^2: the system is stable with a proper feedback choice of u, and the evolution spans the whole two-dimensional space. (b) Evolution in V, dim(V) = 1: the friend F makes the closed-loop system (A + BF) V-invariant. Notice that the subspace V with basis v = [1, 1]^T is a locus of the trajectories for the initial conditions selected.

2.3 kuramoto as a generalized consensus

Agreement, or consensus, is one of the fundamental problems in networks of agents, where the nodes are to agree on a state value (Mesbahi and Egerstedt 2010). In one of the most common versions of the consensus protocol, the agents are characterized by single-integrator dynamics expressed as

\dot{x}_i = u_i, \qquad u_i = \sum_{j=1}^{n} a_{ij} \left[ x_j(t) - x_i(t) \right],    (6)

where i = 1, . . . , n.

It is easy to see how the input of each agent is shaped through the interaction among the agents. The proposed feedback control law for equation (6) can be proven to asymptotically stabilize the system and solve the consensus problem (all the states x_i converge to an identical value). Notice how the information on the structure of the network is included in equation (6) through the adjacency matrix A. If the hypothetical node i interacts with nodes c, d and e, then these connections will have an effect on the input u_i. Whereas conditions on convergence for linear consensus are well known and available in the broad literature (Mesbahi and Egerstedt 2010), results where nonlinear dynamics is taken into account are still elusive and much research remains to be done. One relevant form of such nonlinear dynamics is given by the Kuramoto oscillators, joined in a network through an adjacency matrix. In the following, we point out some similarities between these equations and the linear consensus. In Chapter 3 we will derive new conditions, previously unreported in the literature (Tiberi et al. 2017), to quantitatively characterize a multi-consensus problem in our nonlinear framework.
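As a point of reference before moving to the nonlinear case, the consensus protocol (6) can be written compactly as ẋ = −L(G)x and simulated with a forward-Euler loop. A minimal sketch (assuming NumPy; the three-node path graph is chosen only for illustration):

```python
import numpy as np

A = np.array([[0., 1., 0.],      # three agents on an undirected path graph
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A   # graph Laplacian

x = np.array([1.0, 0.0, -1.0])   # initial states
dt = 0.01
for _ in range(2000):
    x = x + dt * (-L @ x)        # protocol (6): x_i' = sum_j a_ij (x_j - x_i)

print(x)                         # all states close to the average of the initial conditions, 0.0
```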

Formally, consider now a network of heterogeneous Kuramoto oscillators described by the digraph G = (V, E), where V = {1, . . . , n} denotes the set of oscillators and E ⊆ V × V their interconnections. Let A = [a_ij] be the weighted adjacency matrix of G, where a_ij ∈ R if (i, j) ∈ E and a_ij = 0 otherwise. We assume that G is strongly connected (Godsil and Royle 2001). Let θ_i ∈ R denote the phase of the i-th oscillator, whose dynamics reads as

\dot{\theta}_i = \omega_i + \sum_{j=1}^{n} a_{ij} \sin(\theta_j - \theta_i), \qquad y_i = \mathrm{mod}(\theta_i, 2\pi),    (7)

where ω_i is the natural frequency of the i-th oscillator and y_i is the phase brought into [0, 2π]. The dynamics is a generalized version of the classic Kuramoto model (Kuramoto 1975).

Notice that the dynamics expressed in equation (7) is structurally similar to that in equation (6). However, the oscillatory dynamics is much more involved due to the diffusive coupling expressed via a sinusoidal, and hence nonlinear, function. Moreover, the natural frequencies ω_i act as a constant term on the phase derivatives \dot{\theta}_i. The class of possible behaviors depending on ω_i and a_ij is depicted in Fig. 7 for a large population of oscillators.
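The oscillator dynamics (7) can be integrated in the same way; the following is a minimal Python sketch (the plots in Fig. 7 are produced with Matlab, so the random network and parameters below are illustrative assumptions only):

```python
import numpy as np

def kuramoto_step(theta, omega, A, dt):
    """One forward-Euler step of dtheta_i/dt = omega_i + sum_j a_ij sin(theta_j - theta_i)."""
    diff = theta[None, :] - theta[:, None]            # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + (A * np.sin(diff)).sum(axis=1))

rng = np.random.default_rng(0)
n = 10
A = rng.uniform(0.0, 2.0, size=(n, n)); np.fill_diagonal(A, 0.0)
omega = rng.uniform(5.0, 15.0, size=n)                # heterogeneous natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, size=n)           # random initial phases

for _ in range(5000):
    theta = kuramoto_step(theta, omega, A, dt=1e-3)

print(np.mod(theta, 2 * np.pi))                       # outputs y_i = mod(theta_i, 2*pi)
```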

remark The Kuramoto model is a particular instance of a wider class of dynamical systems, also known as phase oscillators. These systems are characterized by the same dynamics in isolation, and the amount of phase induced by another node depends only on the phase difference between the two nodes. A general phase oscillator can be expressed as

\dot{\theta}_i = f(\theta_i) + \sum_{j \neq i} g_{ij}(\theta_i - \theta_j),

where f(\theta_i) = \omega_i and g_{ij}(\theta_i - \theta_j) = \sin(\theta_i - \theta_j) for the class of Kuramoto systems.

2.4 problem setup

Having introduced the tools that we will use, in this section the studied problem is formally defined, and a clarifying example is presented at the end of the chapter.

remark Starting from this section, the material presented is original work that has been submitted for publication at the 56th IEEE Conference on Decision and Control (CDC) 2017. The first paragraph of the paper Tiberi et al. 2017 is reported here in extended form.

Depending on the interconnection graph G, the adjacency matrix A, and the oscillators' natural frequencies, different oscillatory patterns for equation (7) are possible, corresponding to (partially) synchronized or chaotic states (Koskin et al. 2015; Mirchev et al. 2014).

In this work we are particularly interested in the case where the phases of groups of oscillators evolve cohesively within each group, yet independently from the phases of oscillators in different groups. We refer to this behavior as pattern formation or cluster synchronization. Notice how this evolution resembles a multi-consensus problem, where partitions of agents have to agree on the value of a state variable (the phase).

Figure 7: Phase evolution for system (7) with 100 oscillators interconnected through A. (a) Incoherence; (b) Full synchronization; (c) Cluster synchronization. Notice how in (a) no synchronization pattern can be seen for the whole population of oscillators, whereas in (b) all the oscillators have the same phase at each time instant, thus evolving in full synchronization. The most interesting case is (c), where a pattern emerges for some of the phases, while others keep evolving in incoherence.

To formalize this discussion, let P = {P_1, . . . , P_m} be a partition of V, that is, V = ∪_{i=1}^{m} P_i and P_i ∩ P_j = ∅ for all i, j ∈ {1, . . . , m} with i ≠ j. We restrict our attention to the case m > 1. This means that we study networks that exhibit clusterization that is local, i.e., we rule out the global synchronization case already widely studied in the broad literature.

Throughout the thesis we will assume without loss of generality that, given P = {P_1, . . . , P_m}, the oscillators are labeled so that

P_i = \left\{ \sum_{j=1}^{i-1} |P_j| + 1, \; \ldots, \; \sum_{j=1}^{i} |P_j| \right\},

where |P_j| denotes the cardinality of the set P_j. While different notions of synchronization exist, we will use the following definitions.


Figure 8: (a) Phase synchronization; (b) Frequency synchronization. Figure (a) shows a generic network of oscillators with a partition P = {P_1, P_2, P_3} that is phase synchronized. For each P_i, i = 1, 2, 3, the phases are evolving cohesively and just for graphical purposes are depicted at different points. In figure (b) frequency synchronized nodes are shown. Notice how frequency synchronization does not imply phase synchronization since two nodes may have the same velocity (frequency), but different positions (phases).

Definition 2.4.1. (Phase synchronization) For the network of oscillators G = (V, E), the partition P = {P_1, . . . , P_m} is phase synchronizable if, for some initial phases θ_1(0), . . . , θ_n(0), it holds

θ_i(t) = θ_j(t),

for all times t ∈ R_{>0} and i, j ∈ P_k, with k ∈ {1, . . . , m}.

Definition 2.4.2. (Frequency synchronization) For the network of oscillators G = (V, E), the partition P = {P_1, . . . , P_m} is frequency synchronizable if, for some initial phases θ_1(0), . . . , θ_n(0), it holds

\dot{θ}_i(t) = \dot{θ}_j(t),

for all times t ∈ R_{>0} and i, j ∈ P_k, with k ∈ {1, . . . , m}.

Clearly, phase synchronization implies frequency synchronization, while the converse statement typically fails to hold.

We further introduce the following notation. Notice that

\ddot{\theta}_i = \sum_{j=1}^{n} a_{ij} \cos(\theta_j - \theta_i)(\dot{\theta}_j - \dot{\theta}_i),

and that the network dynamics can be written in vector form as the linear time-varying system

\ddot{\theta} = L(t) \dot{\theta},


Figure 9: Directed network of oscillators. Nodes are partitioned as P = {P_1, P_2}, with P_1 = {1, 2, 3} and P_2 = {4, 5, 6}. Notice that, for each node i of P_1 (resp. P_2), the sum of the weights of all edges (i, j), with j ∈ P_2 (resp. j ∈ P_1), is equal. We will show in Chapter 3 that this is a necessary condition for phase synchronization of the partition P.

where L is a time-varying matrix satisfying

L_{ij} = \begin{cases} a_{ij} \cos(\theta_j - \theta_i), & \text{if } i \neq j, \\ - \sum_{k=1, k \neq i}^{n} a_{ik} \cos(\theta_k - \theta_i), & \text{if } i = j. \end{cases}

Finally, we define the characteristic matrix associated with a partition P of the network nodes, which will be used to derive our synchronization conditions in Chapter 3 .

Definition 2.4.3. (Characteristic matrix) For the network of oscillators G = (V, E) and the partition P = {P_1, . . . , P_m}, the characteristic matrix of P is V_P ∈ R^{n×m}, where

V_P = \begin{bmatrix} v_1 & v_2 & \cdots & v_m \end{bmatrix},

and

v_i^T = \Big[ \underbrace{0 \; 0 \; \cdots \; 0}_{\sum_{j=1}^{i-1} |P_j|} \;\; \underbrace{1 \; 1 \; \cdots \; 1}_{|P_i|} \;\; \underbrace{0 \; 0 \; \cdots \; 0}_{\sum_{j=i+1}^{m} |P_j|} \Big].

The characteristic matrix V_P is a design matrix that formally encodes the concept of network partition. Loosely speaking, V_P specifies which nodes belong to each cluster. Clearly, this concept is unrelated to the clusterization itself: two nodes may belong to the same P_i without evolving cohesively. Nevertheless, as explained in Chapter 3, this formalism comes in handy when structural conditions for pattern formation are derived.

To conclude this section, we illustrate our setup and the definitions given above with an example.

Example 3. (Setup and definitions) Consider the network of Kuramoto oscillators in Fig. 9, with graph G = (V, E), V = {1, 2, 3, 4, 5, 6} and partition P = {P_1, P_2}.

The graph G and the partition P are described by A and V_P as follows:

A = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 10 \\ 0 & 0 & 0 & 5 & 0 & 5 \\ 0 & 0 & 0 & 0 & 10 & 0 \\ 9 & 0 & 0 & 0 & 0 & 0 \\ 0 & 9 & 0 & 0 & 0 & 0 \\ 0 & 7 & 2 & 2 & 0 & 0 \end{bmatrix}, \quad
V_P = \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \\ 0 & 1 \end{bmatrix}.
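The characteristic matrix V_P and the inter-cluster weight condition anticipated in the caption of Fig. 9 can be checked mechanically. A minimal NumPy sketch for the network of Example 3 (the helper function is ours, and 0-based node indices are used):

```python
import numpy as np

def characteristic_matrix(partition, n):
    """Build V_P as in Definition 2.4.3 from a list of clusters (0-based node indices)."""
    V = np.zeros((n, len(partition)))
    for i, cluster in enumerate(partition):
        V[list(cluster), i] = 1.0
    return V

A = np.array([[0, 0, 0, 0, 0, 10],
              [0, 0, 0, 5, 0,  5],
              [0, 0, 0, 0, 10, 0],
              [9, 0, 0, 0, 0,  0],
              [0, 9, 0, 0, 0,  0],
              [0, 7, 2, 2, 0,  0]], dtype=float)

partition = [(0, 1, 2), (3, 4, 5)]          # P1 = {1, 2, 3}, P2 = {4, 5, 6}
V_P = characteristic_matrix(partition, 6)

# Row sums of A towards each cluster: S[i, l] = sum of a_ik over k in P_l.
# The condition of Fig. 9 holds if these sums are equal within every cluster.
S = A @ V_P
for z, cluster in enumerate(partition):
    for l in range(len(partition)):
        if l != z:
            print(f"P{z+1} -> P{l+1}:", S[list(cluster), l])   # identical entries in each printout
```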


3 ANALYSIS OF CLUSTERIZATION

I do believe in an everyday sort of magic – the inexplicable connectedness we sometimes experience with places, people, works of art and the like; the eerie appropriateness of moments of synchronicity; the whispered voice, the hidden presence, when we think we’re alone.

— Charles de Lint

In this chapter we derive conditions ensuring phase (hence frequency) synchronization of a partition of oscillators. In particular, we show how synchronization of a partition depends both on the interconnection structure and weights, as well as on the oscillators' natural frequencies. The conditions are presented in the main result of this chapter, Theorem 3.2.1. The aforementioned theorem is relevant because the derived result for cluster synchronization involves conditions that are necessary and sufficient, broadly applicable within the considered framework.

3.1 assumption on the analysis

We make the following technical assumption.

(A1) For the partition P = {P_1, . . . , P_m} there exists an ordering of the clusters P_i and an interval of time [t_1, t_2], with t_2 > t_1, such that for all times t ∈ [t_1, t_2]:

\max_{i \in P_1} \dot{\theta}_i > \max_{i \in P_2} \dot{\theta}_i > \cdots > \max_{i \in P_m} \dot{\theta}_i.

Assumption (A1) requires the phases of the oscillators in different clusters to evolve with different frequencies, at least in some interval of time. This assumption is in fact not restrictive, as this is typically the case when the oscillators in different clusters have different natural frequencies. Two cases where this assumption is satisfied are presented in Fig. 10 . A special case where Assumption (A1) is not satisfied is discussed at the end of the chapter.

3.2 clusterization theorem

We are now ready to state the main result of this chapter.

Figure 10: For the network in Fig. 9 with natural frequencies ω = [30, 30, 30, 10, 10, 10]^T, figure (a) shows the frequencies of the oscillators in the clusters P_1 = {1, 2, 3} and P_2 = {4, 5, 6}. Notice that Assumption A1 is satisfied over the entire time interval. In figure (b) we let ω = [19, 19, 19, 10, 10, 10]^T. Notice that Assumption A1 is satisfied within bounded time intervals, such as [t_1, t_2].

Theorem 3.2.1. (Cluster synchronization) For the network of oscillators G = (V, E), the partition P = {P_1, . . . , P_m} is phase synchronizable if and only if the following conditions are simultaneously satisfied:

(i) the network weights satisfy \sum_{k \in P_\ell} (a_{ik} - a_{jk}) = 0 for every i, j ∈ P_z and z, \ell ∈ {1, . . . , m}, with z ≠ \ell;

(ii) the natural frequencies satisfy ω_i = ω_j for every k ∈ {1, . . . , m} and i, j ∈ P_k.

Proof. (If) Let θ_i = θ_j for all i, j ∈ P_k, k = 1, . . . , m. Let i, j ∈ P_\ell, and notice that

\dot{\theta}_i - \dot{\theta}_j = \sum_{z \neq \ell} \sum_{k \in P_z} \big( a_{ik} \sin(\theta_k - \theta_i) - a_{jk} \sin(\theta_k - \theta_j) \big) = \sum_{z \neq \ell} s_{z\ell} \sum_{k \in P_z} (a_{ik} - a_{jk}) = 0,

where we have used conditions (i) and (ii), and where s_{z\ell} is a parameter that depends on the clusters z and \ell, but not on the indices i, j, k. We conclude that, when conditions (i) and (ii) are satisfied, θ ∈ Im(V_P) implies \dot{θ} ∈ Im(V_P), so that the subspace Im(V_P) is invariant thanks to Lemma 2.2.4, which is applicable even with nonlinear dynamics. Thus, the network is phase synchronizable: simply select θ(0) ∈ Im(V_P), and then θ(t) ∈ Im(V_P) for every t.

(Only if) We first show that condition (i) is necessary for phase synchronization. Assume that the network is phase synchronized, and notice that it is then also frequency synchronized. Let i, j ∈ P_\ell. Because the network is phase synchronized, we must have at all times that

0 = \ddot{\theta}_i - \ddot{\theta}_j = \sum_{z \neq \ell} \sum_{k \in P_z} a_{ik} \cos(\theta_k - \theta_i)(\dot{\theta}_k - \dot{\theta}_i) - \sum_{z \neq \ell} \sum_{k \in P_z} a_{jk} \cos(\theta_k - \theta_j)(\dot{\theta}_k - \dot{\theta}_j) = \sum_{z \neq \ell} c_{z\ell} v_{z\ell} \underbrace{\sum_{k \in P_z} (a_{ik} - a_{jk})}_{d_z},    (8)

where c_{z\ell} and v_{z\ell} depend on the clusters z and \ell, but not on the indices i, j, k. From Assumption (A1), possibly after reordering the clusters, in some nontrivial interval we have

\max_{i \in P_1} \dot{\theta}_i > \max_{i \in P_2} \dot{\theta}_i > \cdots > \max_{i \in P_m} \dot{\theta}_i.

Thus, (8) implies that either d_z = 0 for all z (thus implying condition (i)), or the functions c_{z\ell} v_{z\ell} must be linearly dependent at all times in the interval. Assume by contradiction that the functions c_{z\ell} v_{z\ell} are linearly dependent at all times in the above interval. Then it must hold that

\sum_{z \neq \ell} d_z \frac{d^n}{dt^n} \big( c_{z\ell} v_{z\ell} \big) = 0,

for every nonnegative integer n, where \frac{d^n}{dt^n} denotes n-times differentiation. In other words, not only must the functions c_{z\ell} v_{z\ell} be linearly dependent, but so must all their derivatives, at some times in the above interval. Let d_1 ≠ 0 (if d_1 = 0, simply select the first nonzero coefficient), and i, j ∉ P_1. Notice that, because of Assumption (A1), there exists an integer n such that d_1 \frac{d^n}{dt^n}(c_{1\ell} v_{1\ell}) ≫ d_z \frac{d^n}{dt^n}(c_{z\ell} v_{z\ell}) for all z ≠ 1. Thus, the functions c_{z\ell} v_{z\ell} cannot be linearly dependent. We conclude that statement (i) is necessary for phase synchronization.

We now prove that, when the network is phase synchronized, statement (i) implies statement (ii). This shows that statement (ii) is necessary for phase synchronization. Let the network be phase synchronized, and let i, j ∈ P_\ell. We have

0 = \dot{\theta}_i - \dot{\theta}_j = \omega_i - \omega_j + \sum_{z \neq \ell} s_{z\ell} \underbrace{\sum_{k \in P_z} (a_{ik} - a_{jk})}_{=0},

where s_{z\ell} is a parameter that depends on the clusters z and \ell, but not on the indices i, j, k, and where we have used the fact that statement (i) is necessary for phase synchronization. Thus, ω_i = ω_j, and statement (ii) is also necessary for phase synchronization. This concludes the proof.
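Theorem 3.2.1 can be illustrated numerically on the network of Example 3: take natural frequencies that are equal within each cluster (condition (ii)), recall that condition (i) already holds for this A, and pick θ(0) ∈ Im(V_P). A hedged forward-Euler sketch (frequencies as in Fig. 10(a); all other values are illustrative):

```python
import numpy as np

A = np.array([[0, 0, 0, 0, 0, 10],
              [0, 0, 0, 5, 0,  5],
              [0, 0, 0, 0, 10, 0],
              [9, 0, 0, 0, 0,  0],
              [0, 9, 0, 0, 0,  0],
              [0, 7, 2, 2, 0,  0]], dtype=float)
V_P = np.kron(np.eye(2), np.ones((3, 1)))            # clusters P1 = {1,2,3}, P2 = {4,5,6}

omega = np.array([30., 30., 30., 10., 10., 10.])     # condition (ii): equal within each cluster
theta = V_P @ np.array([0.3, 1.5])                   # theta(0) in Im(V_P)

dt = 1e-4
for _ in range(50_000):
    diff = theta[None, :] - theta[:, None]
    theta = theta + dt * (omega + (A * np.sin(diff)).sum(axis=1))

# The phases remain equal within each cluster for all times; checked here at the final instant.
print(theta[:3] - theta[0])                          # ~[0, 0, 0]
print(theta[3:] - theta[3])                          # ~[0, 0, 0]
```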

Using Theorem 3.2.1, if structural modifications are allowed within the adjacency matrix and natural frequencies, it is possible to arbitrarily synchronize given partitions P_i of nodes. When structural modifications are not permitted, the result still tells whether a given partition P = {P_1, . . . , P_k} with k < n is synchronizable or not.

Figure 11: (a) Graph of Remark 1; (b) Time evolution of the oscillators' phases. For each choice of the arc weights a_ij, the partition P = {P_1, P_2} with P_1 = {θ_1, θ_2} and P_2 = {θ_3, θ_4} is phase synchronizable. Hence, condition (i) of Theorem 3.2.1 is not necessary when Assumption A1 is not satisfied. In the simulation proposed in (b), θ = [0.3, 0.3, 0.3 + π, 0.3 + π]^T and ω = [40, 40, 40, 40]^T.

3.3 side results & insights

In this section we provide one remark and two corollaries that complement the analysis. The remark is given through an example and deals with Assumption A1. The corollaries shed some light on Theorem 3.2.1, providing equivalent results for clusterization in matrix form and relating condition (i) to subspace invariance.

Remark 1. (Necessity of Assumption A1) Consider a network of oscillators with adjacency matrix

A = \begin{bmatrix} 0 & a_{12} & 0 & 0 \\ a_{21} & 0 & a_{23} & 0 \\ 0 & a_{32} & 0 & a_{34} \\ 0 & 0 & a_{43} & 0 \end{bmatrix},

and natural frequencies ω_i = \bar{ω} for all i ∈ {1, . . . , 4}. Notice that condition (i) in Theorem 3.2.1 is not satisfied. Let θ_1(0) = θ_2(0) and θ_3(0) = θ_4(0) = θ_1(0) + π, and notice that \dot{θ}_i = \bar{ω} at all times and for all i ∈ {1, . . . , 4} (Assumption A1 is not satisfied). In other words, the partition P = {P_1, P_2}, with P_1 = {1, 2} and P_2 = {3, 4}, is phase synchronized, independently of the interconnection weights among the oscillators. Thus, condition (i) in Theorem 3.2.1 may not be necessary when Assumption A1 is not satisfied. Refer to Fig. 11 for a graphical interpretation.
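This special case is easy to verify numerically: with equal natural frequencies and the antiphase initialization above, every coupling term vanishes, so the two clusters stay phase locked regardless of the weights a_ij. A small sketch with arbitrarily chosen weights and the values of Fig. 11(b):

```python
import numpy as np

a12, a21, a23, a32, a34, a43 = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0   # arbitrary weights
A = np.array([[0.0, a12, 0.0, 0.0],
              [a21, 0.0, a23, 0.0],
              [0.0, a32, 0.0, a34],
              [0.0, 0.0, a43, 0.0]])
omega = np.full(4, 40.0)                                      # equal natural frequencies
theta = np.array([0.3, 0.3, 0.3 + np.pi, 0.3 + np.pi])        # initialization of Fig. 11(b)

dt = 1e-4
for _ in range(10_000):
    diff = theta[None, :] - theta[:, None]
    theta = theta + dt * (omega + (A * np.sin(diff)).sum(axis=1))

print(theta[1] - theta[0], theta[3] - theta[2])   # both ~0: P1 and P2 stay internally synchronized
print(theta[2] - theta[0])                        # ~pi: the two clusters remain in antiphase
```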

Let A ∘ B denote the Hadamard product between A and B (Horn and Johnson 1985), and Im(V_P)^⊥ the subspace orthogonal to Im(V_P).
