Often satellites are not equipped with gyroscopes, so they cannot measure their angular velocity. For this reason, the control law of Section 3.1 can be modified [1] as follows:
$$\begin{cases} u_i(t) = -z_i(t) - g\sum_{j=1}^{N} p_{ij}\,R\,(\theta_i(t) - \theta_j(t)) \\ \dot z_i(t) = -g z_i(t) - g\sum_{j=1}^{N} (g\,p_{ij} - q_{ij})\,R\,(\theta_i(t) - \theta_j(t)) \end{cases}$$
Also in this case we can use block matrices to write the equations compactly for all $i$:
$$\begin{cases} u(t) = -z(t) - (g L_v \otimes R)\,\theta(t) \\ \dot z(t) = -g z(t) - g\big((g L_v - L_d) \otimes R\big)\,\theta(t) \end{cases}$$
Here $z(t) = \begin{bmatrix} z_1(t)^T & z_2(t)^T & \dots & z_N(t)^T \end{bmatrix}^T$ is the controller state, which evolves dynamically according to the second equation of the set. In this case the parameter $g$ has a lower bound, and in the literature [1] it is shown that there always exists a sufficiently large $g$ such that the system reaches consensus.
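A minimal numerical sketch of this block form is reported below, assuming Python with numpy; the graph, the matrix $R$, the Laplacians and the gain are illustrative placeholders, not the values used in the simulations.

```python
# Minimal sketch of the velocity-free control law in block form.
# Assumptions: numpy is available; N, the Laplacians Lv, Ld, the matrix R and
# the gain g are illustrative placeholders, not the simulation values.
import numpy as np

N = 4
# Laplacian of an arbitrary undirected path graph with 4 nodes
A_adj = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
L = np.diag(A_adj.sum(axis=1)) - A_adj
Lv, Ld = L, L                      # velocity/position Laplacians (assumption)
R = np.eye(3)                      # placeholder for the 3x3 matrix R
g = 10.0                           # "sufficiently large" gain (assumption)

def controller(theta, z):
    """u = -z - (g Lv kron R) theta,  z_dot = -g z - g((g Lv - Ld) kron R) theta."""
    u     = -z - (g * np.kron(Lv, R)) @ theta
    z_dot = -g * z - g * np.kron(g * Lv - Ld, R) @ theta
    return u, z_dot

theta0 = np.random.randn(3 * N)    # stacked attitudes [theta_1; ...; theta_N]
z0 = np.zeros(3 * N)               # controller state starts from zero
u, z_dot = controller(theta0, z0)
```

The controller state $z$ would then be integrated together with the satellites' dynamics.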
Simulation 2 of Section 3.1 is repeated with this modified control law; the initial velocities are set to 0, $g = 100$ and $L_v = 4L_d = 4L$.
Figure 25: y-axis rotations transient, Simulation 2
Also in this case consensus is reached; the parameters can be tuned to trade off settling time against control effort, or to avoid chattering. For the sake of completeness, the transient of the modal coordinates is shown in Figure 26:
Figure 26: modal coordinates transient
A last case is now analyzed by introducing the concept of a leader. In undirected topologies the leader is an agent, which can be a real physical object like the others or, equivalently, just a signal, that acts as a reference for the others. In this case consensus means reaching the same state as the leader:
$$\lim_{t\to\infty} \|\theta_i(t) - \theta_{leader}(t)\| = 0 \quad \forall\, i$$
$$\lim_{t\to\infty} \|\dot\theta_i(t) - \dot\theta_{leader}(t)\| = 0 \quad \forall\, i$$
As found in [29], the control laws are able to guarantee consensus also in the presence of a leader. To make the leader a constant reference, its dynamics are set to:
$$J_{leader}\,\ddot\theta_{leader}(t) = 0; \qquad \eta_{leader}(t) = 0$$
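A possible way to picture this, sketched below under the assumption that the leader is modelled as an extra node with no incoming edges, is that the leader's row of the Laplacian is zero, so the consensus coupling never affects its state.

```python
# Sketch: the leader as an extra node with no incoming edges (zero Laplacian
# row). The graph and the values are illustrative placeholders.
import numpy as np

adj = np.array([[0, 1, 1],    # follower 0 listens to follower 1 and the leader
                [1, 0, 0],    # follower 1 listens to follower 0
                [0, 0, 0]])   # the leader (last node) listens to nobody
L = np.diag(adj.sum(axis=1)) - adj

theta = np.array([0.3, -0.2, 0.7684])   # leader initialized at 0.7684
print(-L @ theta)                        # last entry is 0: the leader never moves
```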
A simulation is now presented with the topology shown in Figure 27.
Figure 27: Directed topology with the presence of a leader
The leader is set, by convention, in the last position of the enumeration (Agent 7, circled in red). Since the leader is not moving, its state is constant and equal to its initial conditions, and the other agents should converge to it. In this example the leader's initial conditions are:
$$\theta_{leader}(0) = \begin{bmatrix} 0.7684 & 0 & 0.224 \end{bmatrix}^T; \qquad \dot\theta_{leader}(0) = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T$$
The following figure shows the transient of the rotations around the x-axis:
Figure 28: x-axis rotations transient with the presence of a leader
4 Directed graph cases
In this section directed topology cases are introduced, and the main results of this work, together with the related proofs, are presented.
4.1 Eigenprojections in dynamic systems
To introduce the main results, eigenprojections and their application to dynamic systems are first presented. In general, the eigenprojection of a matrix $A$ is an idempotent matrix $A^+$ that satisfies:
$$\mathrm{Range}(A^+) = \mathrm{Null}(A^\nu) \qquad \mathrm{Range}(A^\nu) = \mathrm{Null}(A^+)$$
where $\nu$ is the smallest natural number such that $\mathrm{rank}(A^\nu) = \mathrm{rank}(A^{\nu+1})$.
Since $A^+$ is idempotent, it is completely determined by its range and null space. Let's take an example:
$$A = \begin{bmatrix} 2 & -1 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix} \qquad A^+ = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}$$
In this case $\mathrm{rank}(A^0) \neq \mathrm{rank}(A^1)$ and $\mathrm{rank}(A^1) = \mathrm{rank}(A^2)$, so $\nu = 1$. A basis of the range of $A$ is $\left\{ \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \right\}$ and a basis of the null space is $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$.
So the dimension of $\mathrm{null}(A)$ is 1, which corresponds to the dimension of $\mathrm{range}(A^+)$, while the dimension of $\mathrm{range}(A)$ is 2, which corresponds to the dimension of $\mathrm{null}(A^+)$. This means that $\mathrm{rank}(A^+) = 1$. It is a property of idempotent matrices that the trace equals the rank [6], so $\mathrm{trace}(A^+) = \mathrm{rank}(A^+) = 1$. Combining this with $\mathrm{range}(A^+) = \mathrm{span}\left( \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right)$ we get:
$$A^+ = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}$$
Notice that the non-zero column must be the third one, otherwise $\mathrm{null}(A^+) \neq \mathrm{range}(A)$. From this example we can see a useful property of eigenprojections: if a square matrix $A$ with $\mathrm{rank}(A) = \mathrm{rank}(A^2)$ has exactly one row set to 0 and the vector of all ones in its null space, its eigenprojection is the matrix with the corresponding column set to 1 and all other entries set to 0.
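This example can be checked numerically. The sketch below assumes numpy/scipy and uses the standard construction of the spectral projector for a simple zero eigenvalue ($\nu = 1$), $A^+ = r\,l^T/(l^T r)$ with $Ar = 0$ and $l^TA = 0$, which is not stated above but reproduces the all-ones third column found here.

```python
# Numerical check of the example eigenprojection (assumes scipy is available).
import numpy as np
from scipy.linalg import null_space

A = np.array([[2., -1., -1.],
              [0.,  1., -1.],
              [0.,  0.,  0.]])

r = null_space(A)        # right null vector, spans null(A)  -> proportional to (1,1,1)
l = null_space(A.T)      # left null vector, spans null(A^T) -> proportional to (0,0,1)
A_plus = (r @ l.T) / (l.T @ r)

print(np.round(A_plus, 6))                    # [[0,0,1],[0,0,1],[0,0,1]]
print(np.trace(A_plus))                       # 1 = rank(A+)
print(np.allclose(A_plus @ A_plus, A_plus))   # idempotent: True
```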
It is a property of eigenprojections that:
$$A^+ = \lim_{|\tau| \to \infty} (I + \tau A^\nu)^{-1}$$
which can be used to link eigenprojections with dynamic systems. To this end, let's consider the following differential equation:
$$\dot x + Ax = 0$$
and its Laplace transform:
$$s\,x(s) - x_0 + A\,x(s) = 0 \;\rightarrow\; x(s) = (sI + A)^{-1}x_0$$
We know that the solution in the time domain is $x(t) = e^{-At}x_0$, so by the final value theorem, if $\lim_{t\to\infty} x(t)$ exists and is finite, we can write:
$$\lim_{s\to 0} s\,(sI + A)^{-1}x_0 = \lim_{t\to\infty} e^{-At}x_0$$
Since in general $(CB)^{-1} = B^{-1}C^{-1}$, taking $C = sI + A$ and $B = \frac{1}{s}I$:
$$\lim_{s\to 0} s\,(sI + A)^{-1} = \lim_{s\to 0}\left[(sI + A)\tfrac{1}{s}I\right]^{-1} = \lim_{s\to 0}\left(I + \tfrac{1}{s}A\right)^{-1}$$
Now, calling $\tau = \frac{1}{s}$ and if $\nu = 1$:
$$\lim_{t\to\infty} e^{-At}x_0 = \lim_{|\tau|\to\infty}(I + \tau A)^{-1}x_0 = A^+x_0$$
So we can conclude that if the solution of a first order differential equation like $\dot x + Ax = 0$ has a finite limit as $t \to \infty$, that limit is equal to $A^+x_0$, provided $\mathrm{rank}(A) = \mathrm{rank}(A^2)$. We can now apply this to some simple cases involving Laplacians.
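A quick numerical check of this conclusion on the same example matrix is sketched below (assuming scipy): $e^{-At}x_0$ for a large $t$, $(I + \tau A)^{-1}x_0$ for a large $\tau$, and $A^+x_0$ all give the same vector.

```python
# Check that lim e^{-At} x0 = lim (I + tau*A)^{-1} x0 = A+ x0 for the example.
import numpy as np
from scipy.linalg import expm

A = np.array([[2., -1., -1.],
              [0.,  1., -1.],
              [0.,  0.,  0.]])
A_plus = np.array([[0., 0., 1.],
                   [0., 0., 1.],
                   [0., 0., 1.]])
x0 = np.array([3., -1., 0.5])

t, tau = 50.0, 1e6
print(expm(-A * t) @ x0)                          # ~ [0.5, 0.5, 0.5]
print(np.linalg.solve(np.eye(3) + tau * A, x0))   # ~ [0.5, 0.5, 0.5]
print(A_plus @ x0)                                #   [0.5, 0.5, 0.5]
```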
Let's consider
$$\dot x + Lx = 0$$
where $L = D_{in} - A^T$ is the generalized Laplacian associated with a directed graph having one source (the leader) and a directed spanning tree starting from it, for example:
Figure 29: Graph example
$$L = \begin{bmatrix} 2 & -1 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix} \qquad L^+ = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}$$
Since the graph has a source, the corresponding third row of $L$ is 0. This means that, since $\mathrm{rank}(L) = \mathrm{rank}(L^2)$, the eigenprojection of $L$ has the third column set to 1 and all other entries set to 0, as already shown. Moreover, the 0 eigenvalue of $L$ is simple and the other eigenvalues are all positive, so the solution converges. This means that:
$$\lim_{t\to\infty} x(t) = L^+x_0 = \begin{bmatrix} x_0(3) \\ x_0(3) \\ x_0(3) \end{bmatrix}$$
meaning that all the agents converge to the source's (leader's) initial condition (consensus).
Figure 30: First order transient example
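A transient like the one in Figure 30 can be reproduced in a few lines, assuming scipy and arbitrary initial conditions: integrating $\dot x = -Lx$ for the 3-node graph above, every state converges to the source's initial condition $x_0(3)$.

```python
# First order consensus on the 3-node leader graph: all states converge to x0(3).
import numpy as np
from scipy.integrate import solve_ivp

L = np.array([[2., -1., -1.],
              [0.,  1., -1.],
              [0.,  0.,  0.]])
x0 = np.array([1.0, -2.0, 0.7])          # arbitrary initial conditions, leader at 0.7

sol = solve_ivp(lambda t, x: -L @ x, (0.0, 10.0), x0)
print(sol.y[:, -1])                       # all entries ~ 0.7
```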
This latter example can be generalized to all directed graphs with a source, since $\mathrm{rank}(L) = \mathrm{rank}(L^2)$ holds in general. To show it, let's take the Jordan form of such a Laplacian:
$$L = TJT^{-1}; \qquad J = \begin{bmatrix} J_0 & 0 & \dots & 0 \\ 0 & J_1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & J_n \end{bmatrix}$$
where $J_i$ is the Jordan block associated with the $i$-th eigenvalue of $L$. Now take the square of $L$: $L^2 = LL = TJT^{-1}TJT^{-1} = TJ^2T^{-1}$. Since $T$ is full-rank, the rank of $L^2$ is the same as that of $J^2$.
$$J^2 = \begin{bmatrix} J_0^2 & 0 & \dots & 0 \\ 0 & J_1^2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & J_n^2 \end{bmatrix}$$
Consider now the slightly more general case with $d$ sources in the graph: then $\mathrm{rank}(L) = n - d$, and $d$ is also the number of rows of the Laplacian set to 0 (if $d = 0$ then $\mathrm{rank}(L) = n - 1$, since the vector $\mathbf{1}$ is a basis of $\mathrm{null}(L)$, but the reasoning is the same). This means that the algebraic multiplicity of the 0 eigenvalue is always equal to its geometric multiplicity; indeed, the geometric multiplicity $g$ can be written as $g = \dim\{\mathrm{null}(L - 0I)\} = \dim\{\mathrm{null}(L)\}$, which equals the number of zero rows of $L$. Hence the 0 eigenvalue is semisimple and the corresponding Jordan blocks have dimension 1. All the other Jordan blocks are full-rank, so $\mathrm{rank}(J^2) = \mathrm{rank}(J)$. For example:
$$L = \begin{bmatrix} 2 & -1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \qquad J = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 \end{bmatrix}$$
$$\mathrm{rank}(L^2) = \mathrm{rank}(L) = \mathrm{rank}(J) = \mathrm{rank}(J^2)$$
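This can be verified directly with numpy's matrix_rank, as a quick sanity check rather than a proof:

```python
# rank(L) = rank(L^2) = rank(J) = rank(J^2) for the 4-node example above.
import numpy as np

L = np.array([[2., -1., -1., 0.],
              [0.,  0.,  0., 0.],
              [0.,  0.,  0., 0.],
              [0.,  0.,  0., 0.]])
J = np.diag([0., 0., 0., 2.])

for M in (L, L @ L, J, J @ J):
    print(np.linalg.matrix_rank(M))       # 1, 1, 1, 1
```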
Let's now analyze the second order case, starting from a general form like:
$$\ddot x_c + gL\dot x_c + Lx_c = 0$$
where $g \geq 0$ is a real number. We can transform the system into a first order one as in (2.5):
$$\dot x + Ax = 0 \qquad \text{where } A = \begin{bmatrix} 0 & -I \\ L & gL \end{bmatrix}, \quad x = \begin{bmatrix} x_c \\ \dot x_c \end{bmatrix}$$
In this case $\mathrm{rank}(A) \neq \mathrm{rank}(A^2)$, so we cannot use the same approach as for the first order dynamics. Let's first study the sign of the eigenvalues of the system: using the block matrix properties (2.3), we can write:
$$\det\begin{pmatrix} H_1 & H_2 \\ H_3 & H_4 \end{pmatrix} = \det(H_1)\det(H_4 - H_3H_1^{-1}H_2)$$
and apply it to $A - sI$:
$$\det\begin{pmatrix} -sI & -I \\ L & gL - sI \end{pmatrix} = \det(-sI)\det\left\{gL - sI - L\left(-\tfrac{1}{s}I\right)(-I)\right\} =$$
$$= \det(-sI)\det\left(gL - sI - \tfrac{1}{s}L\right) = \det(-sI)\det\left(\left(g - \tfrac{1}{s}\right)L - sI\right) =$$
$$= \det(-sI)\left(g - \tfrac{1}{s}\right)^n\det\left(L - \frac{s}{g - \frac{1}{s}}\,I\right)$$
So if $\lambda_i = \frac{s_i}{g - \frac{1}{s_i}}$ is an eigenvalue of $L$, then $s_i$ is an eigenvalue of $A$. We can now solve the latter equation to find the eigenvalues of $A$ as a function of those of $L$:
$$s_i^2 - g\lambda_i s_i + \lambda_i = 0 \qquad s_{1,2_i} = \frac{g\lambda_i \pm \sqrt{g^2\lambda_i^2 - 4\lambda_i}}{2}$$
If all the eigenvalues of $L$ are real, we know that $\lambda_i \geq 0$, so if $g^2\lambda_i^2 < 4\lambda_i$ the square root is imaginary and $\mathrm{Re}\{s_i\} = \frac{g\lambda_i}{2} \geq 0$ since $g \geq 0$. If $g^2\lambda_i^2 \geq 4\lambda_i$ the square root is a real number, but $\sqrt{g^2\lambda_i^2 - 4\lambda_i} \leq g\lambda_i$, so also in this case $s_i \geq 0$. If the eigenvalues of $L$ are in general complex, a stricter bound on $g$ that guarantees the eigenvalues $s_i$ have non-negative real part can be found in the literature [3], in particular
$$g^2 > \mathrm{Im}^2(\lambda_i) \qquad \forall\, i \mid \lambda_i \neq 0$$
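When the same Laplacian $L$ weights both the position and the velocity terms, this relation is easy to verify numerically. The sketch below (assuming numpy; $L$ and $g$ are illustrative values) compares the eigenvalues of $A$ with the roots of $s_i^2 - g\lambda_i s_i + \lambda_i = 0$ for every eigenvalue $\lambda_i$ of $L$, and checks that their real parts are non-negative.

```python
# Eigenvalues of A = [[0, -I], [L, gL]] versus the roots of s^2 - g*lam*s + lam = 0.
import numpy as np

L = np.array([[2., -1., -1.],
              [0.,  1., -1.],
              [0.,  0.,  0.]])
g = 0.5
n = L.shape[0]
A = np.block([[np.zeros((n, n)), -np.eye(n)],
              [L,                 g * L]])

roots = []
for lam in np.linalg.eigvals(L):
    roots.extend(np.roots([1.0, -g * lam, lam]))   # s^2 - g*lam*s + lam = 0

print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.array(roots)))            # same six values
print(np.all(np.linalg.eigvals(A).real >= -1e-9))  # non-negative real parts: True
```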
If the two Laplacian matrices are not the same, the explicit computation of the eigenvalues is not trivial and can be done using numerical tools. This shows that all the eigenvalues of the system have non-negative real part, but we still have to prove consensus.
To this end, let's consider the solution in the time domain $x(t) = e^{-At}x_0$. The exponential can be replaced by its Taylor series:
$$e^{-At} = I + (-A)t + (-A)^2\frac{t^2}{2} + (-A)^3\frac{t^3}{3!} + \dots$$
Now, substituting the canonical Jordan form $A = TJT^{-1}$, we get:
$$e^{-At} = I + (-TJT^{-1})t + (-TJT^{-1})^2\frac{t^2}{2} + (-TJT^{-1})^3\frac{t^3}{3!} + \dots$$
Since all the powers can be written as $(TJT^{-1})^n = TJ^nT^{-1}$, the whole exponential can be rewritten as:
$$e^{-At} = Te^{-Jt}T^{-1}$$
From now on, let's denote $(-A)$ by $A$ for the sake of simplicity, so that all the eigenvalues of $A$ have non-positive real part. The Jordan matrix $J$ can be written as $J = \mathrm{diag}(J_0, J_1, \dots, J_n)$, where $J_i$ is the Jordan block associated with the $i$-th eigenvalue. If we consider $J_0$ as the Jordan block of the 0 eigenvalue, since it has algebraic multiplicity 2 and geometric multiplicity 1, it is equal to:
$$J_0 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$
Moreover, $e^{\mathrm{diag}(J_i)t} = \mathrm{diag}(e^{J_i t})$, so we can write:
$$e^{Jt} = \begin{bmatrix} e^{J_0 t} & 0 \\ 0 & e^{Ft} \end{bmatrix}$$
where $F$ is the block matrix containing all the non-zero eigenvalues of $A$. Notice that $J_0^n = 0$ for any natural $n > 1$, so:
$$e^{Jt} = \begin{bmatrix} I + J_0 t & 0 \\ 0 & e^{Ft} \end{bmatrix}$$
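As a small sanity check (assuming scipy), the nilpotent Jordan block indeed produces only the polynomial term:

```python
# e^{J0 t} = I + J0*t for the 2x2 zero-eigenvalue Jordan block, since J0^2 = 0.
import numpy as np
from scipy.linalg import expm

J0 = np.array([[0., 1.],
               [0., 0.]])
t = 3.7
print(np.allclose(expm(J0 * t), np.eye(2) + J0 * t))   # True
```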
Let's now consider the final value by taking the limit; since the non-zero eigenvalues collected in $F$ have negative real part, the block $e^{Ft}$ vanishes and:
$$\lim_{t\to\infty} e^{Jt} \approx \begin{bmatrix} I + J_0 t & 0 \\ 0 & 0 \end{bmatrix}$$
Now we can say that
$$\lim_{t\to\infty} e^{At} \approx T\begin{bmatrix} I + J_0 t & 0 \\ 0 & 0 \end{bmatrix}T^{-1} = T\begin{bmatrix} \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} & 0 \\ 0 & 0 \end{bmatrix}T^{-1}$$
$T$ is a matrix whose columns are the right generalized eigenvectors of $A$, while $T^{-1}$ is a matrix whose rows are the left generalized eigenvectors of $A$. Since in $J$ we put the two zero eigenvalues in the first two places, the corresponding left and right eigenvectors are in the first two rows of $T^{-1}$ and the first two columns of $T$, and they are respectively $l_1 = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 \end{bmatrix}^T$, $l_2 = \begin{bmatrix} 0 & 1 & 0 & \dots & 0 \end{bmatrix}^T$, $r_1 = \begin{bmatrix} 1 & 0 & 1 & 0 & \dots \end{bmatrix}^T$ and $r_2 = \begin{bmatrix} 0 & 1 & 0 & 1 & \dots \end{bmatrix}^T$. This can be seen intuitively by considering that if we take a vector $v$ with all entries zero except one set to 1, then $v^TA = 0$ if the 1 is in the same position as the zero row of $L$. Moreover, if we take a vector $v$ with half of the entries set to 1 and the other half set to 0 ($\begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 \end{bmatrix}^T$ for instance), then $Av = 0$. The Jordan form then "rearranges" the variables so that $x_i$ and $\dot x_i$ alternate, so the entries of the eigenvectors follow that pattern as well. Now, calling $x_l$ and $\dot x_l$ the initial conditions of the leader, we can compute:
$$\lim_{t\to\infty} e^{At}x_0 \approx T\begin{bmatrix} \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} & 0 \\ 0 & 0 \end{bmatrix}T^{-1}x_0 = \begin{bmatrix} x_l + \dot x_l t \\ \dot x_l \\ x_l + \dot x_l t \\ \dot x_l \\ \vdots \end{bmatrix}$$
where $x_0 = \begin{bmatrix} x_l & \dot x_l & x_{c0}(1) & \dot x_{c0}(1) & x_{c0}(2) & \dot x_{c0}(2) & \dots \end{bmatrix}^T$ is the initial conditions vector, with the initial position and velocity alternated for each agent.
This establishes consensus, since the positions and velocities of all the agents (leader included) become the same. Moreover, notice that the position diverges as a ramp, and that if $\dot x_l = 0$ all the agents converge to the same constant position. Finally, it is useful to write this latter result as
$$\lim_{t\to\infty} e^{At}x_0 \approx \begin{bmatrix} L^+ & L^+t \\ 0 & L^+ \end{bmatrix}x_0'$$
where $x_0' = \begin{bmatrix} x_c(0) \\ \dot x_c(0) \end{bmatrix}$.
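This closing formula can be verified numerically on the 3-node leader graph used before. The sketch below (assuming scipy; the gain and the initial conditions are arbitrary) builds the dynamics matrix of $\dot x = -Ax$ with the ordering $x_0' = [x_c(0);\, \dot x_c(0)]$ and compares $e^{Mt}x_0'$ at a large $t$ with the block expression.

```python
# Verify lim e^{At} x0' ~ [[L+, L+ t], [0, L+]] x0' on the 3-node leader graph.
import numpy as np
from scipy.linalg import expm

L = np.array([[2., -1., -1.],
              [0.,  1., -1.],
              [0.,  0.,  0.]])
L_plus = np.array([[0., 0., 1.],
                   [0., 0., 1.],
                   [0., 0., 1.]])
g, n, t = 2.0, 3, 50.0

M = np.block([[np.zeros((n, n)), np.eye(n)],    # x = [x_c; x_c_dot] ordering
              [-L,               -g * L]])
xc0     = np.array([1.0, -2.0, 0.7])            # positions, leader (3rd) at 0.7
xc0_dot = np.array([0.5, -0.3, 0.1])            # velocities, leader ramps at 0.1
x0 = np.concatenate([xc0, xc0_dot])

lhs = expm(M * t) @ x0
rhs = np.block([[L_plus,            t * L_plus],
                [np.zeros((n, n)),  L_plus]]) @ x0
print(lhs)                                # positions ~ 0.7 + 0.1*t = 5.7, velocities ~ 0.1
print(np.allclose(lhs, rhs, atol=1e-6))   # True
```

Setting the leader's initial velocity to zero removes the ramp and all the agents converge to the leader's constant position, as noted above.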