
Anno accademico 2008-2009


Facoltà di Ingegneria

Tesi di laurea specialistica in Ingegneria dell’automazione

Cooperative control of 3D mobile agents

with limited sensing capabilities

Relatori:

Prof. Ing. Andrea Caiti

Dott. Ing. Emanuele Crisostomi

Controrelatore:

Prof. Ing. Alberto Landi

Supervisore estero:

Dr Guy-Bart Stan

Candidato: Stefano Falasca


Sommario

The study of cooperating systems requires the use of techniques drawn from several branches of engineering, computer science and mathematics. This work proposes a mathematical formulation to be used for the definition and analysis of a particular class of groups of vehicles. Fleets of agents in 3d space are considered; their possibilities for cooperation are analysed when communication is limited to the acquisition, through sensors with a limited field of view, of the position of other vehicles. In order to study the effects of the communication limits, the rendezvous problem is formulated, and a solution to it is provided. The results obtained are in line with those traditionally obtained in the field of cooperative control.

Abstract

The field of cooperative control for groups of vehicles lies within several different engineering, computer sciences and mathematical branches. This work proposes a mathematical framework to be used in defining and analysing a particular class of vehicles’ groups. Fleets of agents moving in the 3d space are considered; their ability to cooperate when no communication but po-sition sensing—using a limited field of view—is possible, is analysed. The statement of a slightly modified version of the rendezvous problem for the considered system and an algorithm to solve it are used as a mean to ex-plore the effects of the communication limits. The obtained results are on the same level with traditional results of cooperative control.


Acknowledgements

I would like to thank Prof. A. Caiti and Dr. E. Crisostomi who, after suggesting a highly interesting topic together with avenues of investigation and sources, arranged for me to do my thesis abroad at the University of Cambridge and supported me through the whole period I spent with Dr. G.-B. Stan, now a fellow of Imperial College London. And, of course, I express my great gratitude to Dr. Stan for his invaluable guidance in each step of my research, from the very first development of my initial ideas to the final writing of the work: a precious mentor in both method and technical matters, never forgetting rigour in the presentation of the results, who led me to obtain the definitive approval of my Italian supervisor, Prof. Caiti.


Contents

1 Introduction
2 Problem definition
  2.1 Assumptions
    2.1.1 Environment
    2.1.2 Vehicle
    2.1.3 Sensing and computation
    2.1.4 Leading structure
  2.2 Mathematical definition
    2.2.1 The sensing agents
    2.2.2 The leading function
    2.2.3 Timing issues
    2.2.4 Directed Acyclic Graphs
  2.3 Goal
    2.3.1 Visibility of a pair of points
    2.3.2 The Gram matrix
    2.3.3 Visibility of a pair of points via Gram matrices
    2.3.4 The 3-dimensional case
    2.3.5 The overall goal
3 Results
  3.1 Mathematical analysis
    3.1.1 Connection maintaining
    3.1.2 Leading function constraints
    3.1.3 Gram matrix realisability
  3.2 Proposed algorithm
  3.3 Simulation results
4 Discussion
  4.1 Following behaviour
  4.2 Position tuning behaviour
  4.3 On leader changing
5 Conclusions
A Software solutions
  A.1 Naming conventions
  A.2 Memory management
    A.2.1 Example
  A.3 Data containers
    A.3.1 Example
  A.4 Global referencing of agents
  A.5 Vector and matrix representation
  A.6 Algorithms
B Vector and matrix extensions
  B.1 Vectors

Chapter 1

Introduction

The synthesis of cooperating behaviours for groups of mobile robots has been capturing the interest of several researchers in recent years. The field of cooperative control is considered to be a relatively new one, even if it was pioneered in the late 1980s by, among a few others, [8]. Thanks to the great advances in computer technology and microelectronics, the realisation of low cost vehicles is now possible and the fields of application of cooperating groups of vehicles have grown. For this reason there has been increased research interest in systems composed of multiple autonomous mobile robots exhibiting cooperative behaviour. The theoretical framework supporting such studies is still in its formative stages, although some tools (e.g. digraphs, invariance principles, etc.) are almost directly borrowed from other disciplines and used to tackle a certain set of problems that arise.

In the preliminary stages of research in this field, a great source of ideas came from biological systems. Interesting cooperative behaviours arise in biological networks; this is true regardless of the level of resolution at which biological systems are observed: cooperative behaviours can be found all the way from interactions among molecules and cells to the behavioural ecology of animal groups. Influences from the biological world can easily be recognised in the definition of goals too. While the first works (e.g. [8], [6]) explored cooperation issues in order to tackle practical problems such as robot-aided manufacturing or space exploration, the focus eventually moved towards the study of which classes of problems can be dealt with via cooperative control techniques. Of course, several practical scenarios have been proposed in the related literature, spanning from the surveillance of geographical areas to the accomplishment of tasks such as fire extinguishing. Practical applications of cooperating systems usually rely on the composition of several base behaviours previously treated in the literature.


Cooperative control is aimed at finding control laws for autonomous vehicles; the term autonomous, here, indicates their ability to take decisions to some degree. Autonomous vehicles are not subject to direct human control, whether by on-board operators (if any) or by remote ones. This is the main reason why autonomous vehicles are useful for accomplishing hazardous tasks (e.g. exploring contaminated areas) or for operating in hostile environments (space or submarine exploration).

The field of cooperative control lies within several different engineering, computer science and mathematical branches. Control theory, optimization, networking, and parallel and distributed computation are usually involved to some extent in every work on cooperative control. Moreover, every cooperative system is a complex engineering system, during the realisation of which several problems have to be solved (e.g. designing an appropriate computation platform, real time implementation of the algorithms, design of the vehicles and their actuation, design of low level control laws for the vehicles, appropriate sensorization, etc.). For this reason it is clear that a certain level of abstraction is needed in treating cooperative control, especially from a theoretical point of view.

The theory of cooperative control mainly involves two aspects, namely the definition of models and the accomplishment of tasks; the explicit link between the two is to be found in algorithms. The modelling portion is aimed at mathematically defining the system and the means of interaction; it usually involves a geometrical definition of the vehicle and of its sensing capabilities (sensors are almost always supposed to be the payload of the vehicles), the topology and mechanisms of communication, the means of controlling the vehicles and some computational structure aimed at permitting cooperation. As for the tasks, there is a given set of goals that has become an accepted basis of behaviours; rendezvous, flocking, deployment, consensus and target tracking are some examples. An algorithm is a means of exploiting the modelled system, and particularly the given computational structure, in order to achieve a given task.

It is easy to see that what makes the field of cooperative control different from that of distributed computation is the assumption that the processing units are vehicles; thus it is not surprising that geometrical aspects play an important role in defining both the model and the tasks. Moreover, as vehicles move (or, equivalently, while the network evolves) the interconnection topology changes; for this reason models are specifically designed to account for spatially-related issues. Geometrical and communication issues are now briefly covered.


Vehicles are generally thought to move in a 2d space1. More precisely, models are often stated, in their early stages, as being relative to the 3d space, whilst it is rare to find trace of 3d motion when algorithm design is concerned. In [4], which is the most comprehensive resource currently available on this topic as far as the author knows, 3d models are formulated while only some problems are effectively solved in the 3d space; the most important ones (for instance deployment, to which an entire chapter is devoted) are only stated for the 2d case. The author's belief is that the reason for this is to be found, apart from the increased complexity deriving from a 3d formulation, in the possibility of casting several cooperative control-related problems into path planning tasks: obstacle avoidance, in fact, is often considered within this field and traditional path planning algorithms are intended to work within a 2d space. Of course, increased complexity and simulation issues also play an important role as causes of this trend2.

As pointed out before, it is almost mandatory to consider changes in the interconnection topology where cooperative control is concerned. For this reason, constraints imposed by limits in the communication capabilities of vehicles are traditionally explored. While some research works assume all-to-all communication among vehicles, it is more frequent to find studies where whether two vehicles can communicate or not depends on their relative distance being smaller than a prescribed value. Communication constraints are often tightened when particular emphasis is put on the environmental characteristics: a non convex environment, possibly filled with obstacles, is often supposed to interact with the communication capabilities, in that vehicles not too distant from each other can communicate only if their line of sight is not obstructed. The constraints described hitherto are completely dictated by geometrical considerations. A different approach to the problem, on the other hand, is based on considering the network topology as being dictated by an external super-entity; conditions on the presence and persistence of links are usually assumed (e.g. "if the network topology is such that a spanning tree for the network always exists. . . "). Again, [8] depicted a very detailed yet complex scenario by introducing a limited field of view in the 3d space.

However, as far as the author knows, limited field of view problems have not been properly taken into account in the cooperative control related literature up to now. This is mainly because completely characterising the implications of having a limited field of view is far from being a completed task. [9], for example, is a very recent paper taking into account this complex topic;

1 For the sake of precision it should be said that "vehicles are thought to lie on a plane", since vehicles' orientation is often taken into account when non-holonomic constraints are assumed to be in place.

2 It is probably needless to say that prototyping a land vehicle is far simpler than prototyping an aerial, underwater or space vehicle.

it characterises optimal trajectories for the parking problem; yet a deeper understanding of the implications may be needed before such constraints can be used for a task as complex as cooperative control.

In addition, the author's belief is that the 3d space should be given greater attention; some preliminary work is necessary in order to lead the 3d case towards a maturity comparable to that of the 2d case. It is possible to foresee that several purely geometrical studies will have to be carried out before complex topics such as dynamics and non-holonomic constraints can be taken into account. In this regard, robotic modelling tools may play an important role.

Two main threads trigger the interest of the author as far as the ongoing debate on cooperative control is concerned: the exploration of what it is possible to achieve by means of groups of cooperating vehicles, and the definition, if not of a proper and complete mathematical framework, at least of what mathematical models are required to describe and how they are supposed to do so.

As for the exploration of the abilities of cooperative control systems, a particular framework is considered. Unidirectional, content-limited communication is supposed to be in place; this means that not only is the network topology geometrically dictated (in a very sharp way, as we will see), but also the content of communication is supposed to be limited to partial information regarding the vehicles' state (communication will, in fact, be limited to having the position of vehicles as its object: no algorithmic information will be transmitted at any time). It will be seen that such limited conditions nevertheless allow the vehicles to cooperate to some extent.

As pointed out above, different mathematical models are used for different portions of the overall system; the author's belief is that the widely accepted dichotomy between what is network-related (and usually represented by means of graphs) and algorithms has to be overcome. More precisely, network models should be used whenever problems such as channel failures, propagation delays, transmission errors and the like have to be taken into account, which is usually not the case; on the contrary, topological issues should be directly considered in defining, designing and analysing both tasks and algorithms. A frequent approach, instead, consists of checking the proposed algorithm for desirable properties when topological variations occur; even when this is not the case, models do present a division between the network aspects and the algorithmic ones.

The contributions of this work are threefold: first of all, it considers 3d agents having a limited field of view and communication capabilities limited to position sensing, in order to explore their ability to cooperate; secondly, when it comes to defining the problem and analysing it, a novel approach is used (of course, several established techniques are largely used throughout the work); finally, it solves the rendezvous problem within the proposed framework as a means to check the proposed framework for coherency.

This document has the following structure: in chapter 2 the mathematical definitions of both the system and the goal are given; chapter 3 shows a possible way of analysing the goal and uses the results of such an analysis in order to devise an algorithm aimed at accomplishing the defined task; finally, in chapter 4 the results are discussed. The order of presentation of topics has been chosen with the aims of brevity and clarity in mind.


Chapter 2

Problem definition

This work is about the cooperative control of the motion of several autonomous vehicles. Its main efforts are devoted to overcoming two main difficulties, namely: the lack of communication (which is, in turn, substituted by sensing) and the limited field of view of each vehicle.

Vehicles move using their sensors in order to obtain data. They then process the collected information using an algorithm in order to decide how to move at any given time. In the context of this work, vehicles move in a perfectly friendly environment (where there is nothing with which they could collide) and are equipped with perfect sensors. The vehicles themselves are perfect in that they have no dynamics and are fully actuated.

Yet several problems will arise because of the limited field of view of the vehicles’ sensors. What follows shall try to devise a strategy to let vehicles cooperate within this framework.

One of the first concerns when designing such algorithms will be how to maintain the connection among (part of) the vehicles, as this is the only way in which a cooperative behaviour can be established and maintained.

This chapter contains three sections. The first one is aimed at describing all the assumptions that define the entities with which we will work; the second one contains a mathematical definition of the models that will be used throughout the work; the goal of cooperation will be defined in the last one.

2.1 Assumptions

There is a set of vehicles (or agents) moving in the 3-dimensional space.


In order to maintain a cooperative behaviour, some connection maintaining strategy has to be put in place. This is the reason motivating the following discussion. Every agent may have a leader (being another agent) and may be the leader of any number of other agents (even zero). Generally, there is no way for an agent to know whether it is a leader or not; we will see that for this reason every agent must act as if it were. The primary objective of agents with a leader is to maintain visual contact with it. The leader itself must act in a way that allows them to do so; this means that agents' movements have to satisfy two conditions:

• allow following agents to keep connectivity;

• keep connectivity with the leader (if any).

Visual contact is not a symmetrical relation; that is the reason why we must distinguish between the roles of the two agents being in visual contact. We will call one of them the sensing agent, and the other one will be referred to as the sensed agent. Whether there is visual contact between a sensing agent and a sensed agent or not depends on their relative position and on the sensing agent's attitude.

At time k every agent will decide the position and attitude it will be in at time k + 1, knowing the position of its leader (if any) at time k. That is the only sensed information on which the moving strategy can rely.

In order to have a comprehensive outline of the framework we are working in, several things must be discussed. It is important to know the environment in which the agents are going to operate and how they decide their movements, as well as how they are able to move. Some preliminary considerations about the leading structure will then be made.

2.1.1 Environment

The environment within which the agents are supposed to move is assumed to contain nothing but the agents themselves; this means that there are no objects to collide with. It is also assumed to be infinitely extended, so that no attention has to be paid to avoiding the crossing of boundaries.

These rather simplifying assumptions are usually implicitly made when tackling problems like the ones that will be discussed here; see for example [7] or [10].


2.1.2 Vehicle

In this work the words vehicle and agent will be used with slightly different meanings. The first one will be used when the actual vehicle is referred to, whilst agent will mainly be used with the meaning of the representation of a vehicle2. Such a distinction is, however, not strictly enforced throughout the text, except where the context does not make the difference in meaning self-evident. Each vehicle is assumed to occupy a null portion of space, so that no collision between vehicles is possible. Every agent describes a vehicle's position in space and its orientation3.

Every vehicle can change its position and attitude with discrete timing; such operations are assumed to be synchronized and, therefore, all vehicles change their positions at the same time. Every vehicle is assumed to be able to change both its position and attitude arbitrarily: i.e. a first order, fully actuated model with no constraints on the inputs will be used.

2.1.3 Sensing and computation

Every agent is intended to choose its new position according to some algorithm based on some sensed information. Two kinds of sensing are assumed to be possible:

• each agent is supposed to be able to sense its actual position and attitude at every time;

• each agent is supposed to be able to sense the position4 of a certain (time-varying) set of agents.

The set of sensed agents is determined by the field of view of the sensing agent and by its current position and attitude. This field of view is the portion of space delimited by a cone and lying within some given minimum and maximum distance.

Sensing activities are assumed to be free of noise; perfectly precise information is, thus, assumed to be available at any time.

Each agent has the following computation capabilities: a processor with the ability to allocate both continuous and discrete states and to perform operations on them. To be more precise, we shall assume that each processor is capable of executing basic instructions (e.g. arithmetic operations both

2 Or, as an extension, the vehicle intended as its processing capabilities and algorithm.

3 This will lead to a geometric model of the vehicle having 5 degrees of freedom.

4 And its id; this will be explained in section 2.1.4.


on scalars and vectors/matrices, comparison, conditional instructions, . . . ) and that such capabilities are exploited by some software library providing a means for both discrete-time and continuous-time dynamic simulation (i.e. solving sets of ODEs on a finite horizon). Since the algorithms the agents are intended to use are not too complex, it will be assumed that every agent has enough computational capability to execute them in order to convert the information sensed at time k into the new position to be occupied at time k + 1. Since the focus here is not on such issues (the branch of parallel computation would be much more interested in those aspects) we can formalise this by saying that sensing and computation are both assumed to be instantaneous and cost-free operations.

2.1.4 Leading structure

Agents move according to a given leading structure; this means that the algorithm may ask an agent to consider one of the sensed agents to be its leader and to act accordingly. The leading structure may be fixed or time-varying (depending on the algorithm of choice). Several aspects have to be analysed in order to explain this feature.

First of all, agents are supposed to be uniquely identifiable by an id; moreover, it is assumed that the sensing of an agent leads to the sensing of its id, so that the position of the leader, if there is one, can be identified among those of the sensed agents. The position of an agent and its id are then considered to be external information (meaning that they can be sensed by other agents), and every other piece of information regarding the state of an agent (including its orientation) will be considered internal information.

As for the distinction between a fixed leading structure and a time-varying one, a simple consideration has to be made: in the first case it is possible to predetermine, for each agent, the list of agents of which it is a leader; this information will thus be considered to be known during the whole evolution of the agents' network. In the second case, on the contrary, some changes in the leading structure will be allowed; a strategy ensuring that full visual connectivity is maintained will be devised, and some predetermined information may be needed for this purpose.

Finally, not every agent is supposed to act in the same way; this means that some agents may have a leader whilst others may not be led by anyone. Moreover, we will allow an agent to change its status in this respect during the evolution.


2.2 Mathematical definition

The definition of a mathematical framework plays an important role in this work, both as a means of proposing a viewpoint that differs from what is usually found in the related literature and as a tool for the subsequent analysis.

Three main entities are defined throughout this section: the sensing agent, the leading function and the directed acyclic graph.

The sensing agent is a point of the space the vehicles move within; moreover, it describes the sensing capabilities of a vehicle. Several ways are possible to describe the data we want a sensing agent to represent; the one used in this work happens to be simpler than some alternatives we tried, as far as the subsequent analysis is concerned.

The leading function is a means of describing a hierarchy among agents. It directly incorporates some acyclicity conditions aimed at guaranteeing full connectivity over the resulting network.

Finally, the directed acyclic graph is a representation of algorithms aimed at pointing out their requirements in terms of communication. It turns out to be very expressive and will be analysed in connection with the leading function.

2.2.1 The sensing agents

First of all we need to know how to represent a vehicle. Our definition of an agent is aimed at capturing its position and attitude.

Definition An agent is a 5-uple of real numbers

A = (x, y, z, φ, θ)    (2.1)

such that φ ∈ (−π, π], θ ∈ [−π/2, π/2]. We will say the vector

i_A = (cos(φ) cos(θ), sin(φ) cos(θ), −sin(θ))^T    (2.2)

to be the sensing axis of the agent. Moreover we will call

p_A = (x, y, z)^T    (2.3)

the position of the agent.


Figure 2.1: Agent

The previous definition tells us some interesting things that are worth pointing out. While defining a complete reference frame in the 3-dimensional space would have required us to use 6 parameters, we are using only 5 of them; this reflects the need to define only one axis in terms of its position and orientation. The meaning of the position related parameters is trivially understood; the remaining parameters (namely φ and θ) turn out to be (part of) a very common way of expressing a 3-dimensional transformation, generally known as the Roll-Pitch-Yaw (RPY) representation. This is used to express a complete reference frame in terms of three angles (the roll angle, the pitch angle and the yaw angle). In our representation the roll is considered to be irrelevant and the x axis (the roll one) is expressed in terms of the remaining two parameters. Figure 2.1 shows an agent together with all the quantities involved in its definition.
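As a concrete check of equation (2.2), the sensing axis can be computed directly from the two angles. The following Python sketch (the function name is ours, not taken from the thesis software) verifies that i_A is always a unit vector:

```python
import math

def sensing_axis(phi, theta):
    """Unit vector i_A of equation (2.2), built from the two
    orientation angles of an agent (the roll angle is irrelevant)."""
    return (math.cos(phi) * math.cos(theta),
            math.sin(phi) * math.cos(theta),
            -math.sin(theta))

# The sensing axis has unit norm for any admissible phi, theta.
ix, iy, iz = sensing_axis(math.pi / 4, -math.pi / 6)
norm = math.sqrt(ix ** 2 + iy ** 2 + iz ** 2)
assert abs(norm - 1.0) < 1e-12
```

The unit-norm property is what allows the same vector to serve as the cone axis in the definition below.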

The reason why we have defined the sensing axis of an agent is, as the name suggests, that it plays an important role in defining what is the set of other agents that it can sense. This will be explained in a few moments after a preliminary definition:

Definition A cone is a 6-uple

C = (x, y, z, m, M, α)    (2.4)

such that √(x² + y² + z²) = 1, 0 < m < M, α ∈ (0, π]. The vector i_C = (x, y, z)^T will be said to be the cone axis.


Figure 2.2: Cone

This definition is aimed at describing a portion of space. As we will see, this is the region in which agents must reside in order to be sensed. The first three parameters (x, y and z) represent the orientation of the cone (as we will see shortly, this will be the same as the sensing axis of the agent the cone is related to). The m and M values are the minimum and maximum sensing distances, while α represents the semi-aperture of the cone. In order to show what a cone looks like, figures 2.2 and 2.3 show a cone in two different configurations. The cone shown in figure 2.2 has null values for the x, y and z parameters; the one shown in figure 2.3, instead, has a non-null position, represented by the green vector in the figure.

With this definition in mind we will now describe a sensing agent. This is aimed at describing our situation, in which every agent is equipped with a sensor that allows it to see everything in front of it provided that

• the sensed object is at a distance d from the agent such that m ≤ d ≤ M;

• the angle between the agent's sensing axis and the line of sight is smaller than some α_max.

Definition Given an agent A = (x, y, z, φ, θ) and a cone C = (x, y, z, m, M, α) such that i_A = i_C, we will say that A is a sensing agent having cone C.

In order to shorten the notation, when the context makes it clear whether we are referring to an agent or to a sensing agent, we will say that A = (x, y, z, φ, θ, m, M, α) is an agent; this notation is implicitly intended to refer to a sensing agent A = (x, y, z, φ, θ) having cone C = (cos(φ) cos(θ), sin(φ) cos(θ), −sin(θ), m, M, α).


Figure 2.3: Cone

It is now possible to define the set of visible agents in terms of what is visible by a sensing agent.

Definition Given an agent A_1 = (x_1, y_1, z_1, φ_1, θ_1) and a sensing agent A_2 = (x_2, y_2, z_2, φ_2, θ_2, m, M, α), we will say that A_1 is visible by A_2 (or, equally, that A_2 senses A_1) if, given p = p_{A_1} − p_{A_2} and d = ‖p‖, it is:

• m ≤ d ≤ M and

• d cos(α) ≤ ⟨p, i_{A_2}⟩.

Notice that the previous definition does not rely on φ_1 and θ_1 and, thus, we have:

Lemma 1 Let A_1 and Â_1 be two agents such that p_{A_1} = p_{Â_1} and let A_2 be a sensing agent with cone C; Â_1 is visible by A_2 if and only if A_1 is visible by A_2.
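The visibility test above translates almost literally into code. The Python sketch below is an illustrative assumption (names and tuple layout are ours, following the shorthand notation A = (x, y, z, φ, θ, m, M, α)); note that, as Lemma 1 states, the sensed agent contributes only its position:

```python
import math

def is_visible(p_sensed, sensing):
    """Visibility test from the definition above. 'sensing' is a tuple
    (x, y, z, phi, theta, m, M, alpha); the sensed agent enters only
    through its position p_sensed (Lemma 1)."""
    x, y, z, phi, theta, m, M, alpha = sensing
    # The cone axis coincides with the sensing axis, eq. (2.2).
    axis = (math.cos(phi) * math.cos(theta),
            math.sin(phi) * math.cos(theta),
            -math.sin(theta))
    p = (p_sensed[0] - x, p_sensed[1] - y, p_sensed[2] - z)
    d = math.sqrt(sum(c * c for c in p))
    if not (m <= d <= M):
        return False
    # d cos(alpha) <= <p, i_A2>: the line of sight lies within the cone.
    return d * math.cos(alpha) <= sum(pc * ac for pc, ac in zip(p, axis))

# An agent looking along the x axis (phi = theta = 0) with m = 1, M = 10
# and alpha = pi/4 sees a point straight ahead, but not one behind it
# and not one closer than the minimum sensing distance m.
a2 = (0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 10.0, math.pi / 4)
assert is_visible((5.0, 0.0, 0.0), a2)
assert not is_visible((-5.0, 0.0, 0.0), a2)
assert not is_visible((0.5, 0.0, 0.0), a2)
```

The two conditions are checked in the same order as in the definition: the distance band first, then the angular constraint via the inner product.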

It is worth specifying how such a framework is intended to describe the actual behaviour of moving vehicles. Since vehicles' positions change at discrete times, every vehicle should be described by a different agent at each time; moreover, the same agent can represent more than one vehicle at different times9. The sensing abilities of a vehicle are not intended to change during the evolution of the system. In order to make the notation clearer we will refer to the (sensing) agent A_1(k) = (x_1(k), y_1(k), z_1(k), φ_1(k), θ_1(k), m_1, M_1, α_1) as the agent that represents the vehicle having id 1 at time k. Since most of the time the sensing parameters will be equal for all of the vehicles, we will omit the subscripts for m, M and α when not needed.

2.2.2 The leading function

What follows is to be intended as a description of what has been referred to as the leading structure (see section 2.1.4).

Let A = {A_1, A_2, . . . , A_n} be the set of all agents and let f : A → [1, . . . , n] ⊂ N be an injection10.

Definition11 The leading function is a map ldr : [1, . . . , n] × N → [0, 1, . . . , n] such that ldr(i, k) ≠ i, ldr(ldr(i, k), k) ≠ i, . . . , ldr(ldr(. . . ldr(i, k) . . . , k), k) ≠ i, . . . and ∀ k ∃! i : ldr(i, k) = 0.

Given an agent A_i ∈ A, the leading function has the following meaning:

• if ldr(f(A_i), k) = 0 the agent has no leader at time k;

• otherwise the agent is led by f^{−1}(ldr(f(A_i), k)) at time k.

We will say that a leading function is stable if ∃ k̂ : k, j ≥ k̂ ⇒ ldr(i, k) = ldr(i, j) ∀ i; this allows agents to change their leaders only up to a given instant k̂.
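The conditions of this definition can be checked mechanically for a fixed instant k: following the chain of leaders from any agent must never return to that agent, and exactly one agent must be leaderless. A minimal Python sketch (representing ldr(·, k) as a dictionary; the names are ours, not from the thesis):

```python
def is_valid_leading_function(ldr_k):
    """Check, for one instant k, the two conditions of the definition:
    no agent is reached again by iterating ldr from itself (acyclicity)
    and exactly one agent i has ldr(i, k) = 0 (no leader)."""
    n = len(ldr_k)
    # Exactly one leaderless agent.
    if sum(1 for i in range(1, n + 1) if ldr_k[i] == 0) != 1:
        return False
    # Following the leader chain from any agent must terminate at the
    # leaderless agent without revisiting anyone.
    for i in range(1, n + 1):
        seen = {i}
        j = i
        while ldr_k[j] != 0:
            j = ldr_k[j]
            if j in seen:
                return False
            seen.add(j)
    return True

# The chain 3 -> 2 -> 1 (agent 1 leaderless) is valid; a 2-cycle is not.
assert is_valid_leading_function({1: 0, 2: 1, 3: 2})
assert not is_valid_leading_function({1: 0, 2: 3, 3: 2})
```

The acyclicity condition is exactly what makes the hierarchy a forest rooted at the single leaderless agent, which is what guarantees full connectivity can be maintained along leader chains.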

2.2.3 Timing issues: algorithm, leading function and timing

Some notation needs to be introduced in order to discuss the timing issues. All the vehicles move according to an algorithm; the algorithm generates, for each of them, at each time, an input; every vehicle thus has a sequence of inputs generated by the algorithm (i.e. the i-th vehicle's input is {dcs_{A_i}(k)}_{k≥0}). As we said above, the algorithm produces such inputs for every vehicle and, therefore, it generates a set of sequences:

9. Here we are somehow suggesting that no two vehicles should occupy the same location at the same time.

10. Since the number of elements in A equals the number of elements in [1, . . . , n] ⊂ N, f is easily seen to be a bijection.

11. Within this definition both [1, . . . , n] and [0, 1, . . . , n] are intended to be subsets of N.

{{dcsAi(k)}k≥0 | Ai ∈ A};

we will refer to this set as the set of evolving sequences. Every decision may, of course, depend on some information (and it will, in fact, depend on—part of—the sensed information); such a functional dependence is not explicit within the introduced notation.

Being completely general in including the functional dependencies in the introduced notation would be very cumbersome; the complexity of the notation, moreover, would force us to make simplifying assumptions when using it. For this reason we now explicitly assume that dcsAi(k) only depends on Ai(k) and pf−1(ldr(f(Ai),k))(k) (the position at time k of the leader of Ai; in order to shorten the notation, the expression "pldr(Ai)(k)" will sometimes be used as an alternative); we therefore omit to show such implicit dependencies. Moreover, since within this work the vehicles are considered to be modelled by a first-order, fully actuated system, the algorithm decision dcsAi(k) is assumed to be directly translated into the agent's new position and attitude (i.e. dcsAi(k) = Ai(k + 1)).

Section 2.2.4 will introduce a graphic notation aimed at describing this kind of statement.

2.2.4 Algorithm representation via Directed Acyclic Graphs

Graphs are traditionally used as a means of representing network topologies; this is particularly true in the cooperative control literature. In this work no such use of graphs is made, because whether two agents are connected is either trivially known to be true13 or extremely difficult to establish. Moreover, such a condition may not stay true or false during the whole network evolution.

Directed Acyclic Graphs (DAGs) will be used in order to represent algorithms; this is a technique typically used in the context of parallel computing. The interested reader is referred to the existing literature14 on the topic if more details are needed.

A DAG is a directed graph with no positive cycles in it (i.e. it is not possible to follow a directed, non-empty path leading from one node to the node itself). The DAG is represented as G = (N, A) where N = {1, 2, . . . , ‖N‖}15 is the set of all nodes and A = {(i1, j1), (i2, j2), . . . , (i‖A‖, j‖A‖)} is the set of all arcs.

13. When the sensed agent happens to be the leader of the sensing one.

14. A highly regarded text on this topic is [1].

15. ‖N‖ is here intended to represent the number of elements in the set N. We are somehow assuming, here, that the number of nodes is finite.

In our usage of DAGs each node represents an operation17 performed in executing some algorithm, while arcs are used to express data dependencies; that is, the operation represented by node j needs the result of the operation represented by node i if and only if (i, j) ∈ A.

Such a representation is usually independent from the schedule. The schedule is the map that expresses which processor is intended to execute a given operation. In our context, however, each operation is thought of as being performed by a particular agent, so that there is no need to specify this relation in advance. Figure 2.2.4 represents a DAG with no arcs and is aimed at showing that operations are intended to be executed sequentially by each agent (i.e. each node represents the computation made, at a given time, by an agent in order to choose where to move towards).

We will now consider a very simple example in order to show a complete DAG. Two agents, namely 1 and 2, move cooperatively for 5 units of time. Agent 1 leads agent 2. Agent 1 moves according to an incremental law of motion (e.g. "at each step move in some direction and double your distance from a given point"); agent 2 moves with some law depending on its previous position and the sensed position of its leader (e.g. "move to the centroid of your current position and your leader's current position"). Figure 2.2.4 shows the resulting DAG.
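The behaviour of this two-agent example can be sketched in a few lines; the initial positions below are illustrative and not taken from the text.

```python
import numpy as np

# A minimal sketch of the two-agent example: agent 1 (the leader) doubles
# its distance from the origin at each step; agent 2 moves to the centroid
# of its own position and its leader's current (sensed) position.
p1 = np.array([1.0, 0.0, 0.0])   # leader, illustrative initial position
p2 = np.array([0.0, 4.0, 0.0])   # follower, illustrative initial position

for k in range(5):               # 5 units of time
    p1_next = 2.0 * p1           # incremental law of motion for agent 1
    p2_next = 0.5 * (p2 + p1)    # centroid rule, using the leader's position at time k
    p1, p2 = p1_next, p2_next

print(p1, p2)
```

Each loop iteration corresponds to one "column" of nodes in the DAG, with the arc from agent 1's node at time k to agent 2's node at time k + 1 realised by the use of `p1` inside the update of `p2`.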

In section 2.2.3 some statements have been made. They can be conveniently represented by using DAGs. What has been said there is that an algorithm decides, at time k, the position and attitude for every vehicle to be in at time k + 1, and that this information is based on the current position

17. Here an operation is not constrained to be an elementary one; in fact we are interested in representing high-level operations. These can be thought of as the execution of some routine defining the algorithm of the single agent.

and attitude of the considered vehicle and on its leader's position. Let's consider, for example, three agents (namely 1, 2 and 3) moving cooperatively over 3 units of time. The leading function is specified as follows:

ldr(i, 0) = { 0 if i = 1; 1 if i = 2; 2 if i = 3 },  ldr(i, 1) = { 0 if i = 3; 3 if i = 2; 2 if i = 1 },  . . . (2.5)

Notice that, since only 3 units of time are being considered, there is no need to define ldr(i, k) for k = 2, 3, . . .: we only need to specify that the function ldr(·, ·) is, indeed, a leading function. Figure 2.2.4 shows the DAG for this example. The graphic nature of the representation has been exploited by using a different shape for arcs where only partial information is transmitted (i.e. what has been called the external information). Introducing such a distinction among arcs right into the definition of a DAG would not be of any help for our concerns.

2.3 Goal

This section defines the cooperative goal for the network of agents. The objective of this work is to devise an algorithm that, when executed by every vehicle, makes the agents move towards a set of points with particular properties; conditions on the initial configurations will, of course, be required to hold.

The target set of points can be loosely described as one which allows every agent to sense both its leader—if any—and a common target point (that can, in turn, be considered to be the origin of the reference frame). A precise definition of this goal will be given.

The section starts with the description of the problem of sensing two different points, in order to find necessary and sufficient conditions for such a situation to arise; it then introduces the Gram matrix and shows how it can be used to express the conditions worked out. It will turn out that the simple problem described up to that point is such that the goal of the network of vehicles can be decomposed into several sub-problems of that kind; following such an approach, a definition of the main goal is then given.

2.3.1 Visibility of a pair of points—general discussion

In section 2.2 we have seen that a point can be sensed by an agent if and only if the point lies within a given cone and within some given minimum and maximum distance from the agent. We shall now examine what happens if we want an agent to sense two distinct points. The answer is indeed trivial—they can be seen if and only if both of them lie in the sensing region; a deeper analysis of the problem is, however, required.

Figure 2.5: example configuration (the origin, the vectors v1, v2 and v2 − v1, and the angles θ and β).


Figure 2.5 shows an example configuration18. We want to devise the conditions that have to be met by v1 and v2 so that an agent with position v1 can sense both the origin and v2, provided it has an appropriate sensing axis. In what follows the notation used in section 2.2 shall be used. Two necessary conditions are easily found:

m ≤ ‖v1‖ ≤ M
m ≤ ‖v2 − v1‖ ≤ M. (2.6)

A third condition can be worked out by considering angles. It is intuitively clear that whenever −2α ≤ θ ≤ 2α it is possible to find a sensing axis that allows the agent having position v1 to sense both the origin and v2; on the other side, if |θ| > 2α, either the origin or v2 (or both of them) cannot be sensed, whatever the sensing axis. This is easily verified; as figure 2.6 shows, if v2 is visible by an agent having position v1 and sensing axis iA, and since γ1 = θ + γ, it is:

|θ| > 2α ⇒ |γ1| > 2α − α = α, (2.7)

where the fact that γ < α has been used. Of course the same reasoning applies if |θ| > 2α and iA is such that the origin is visible.

Figure 2.6: example

18. Since only three points are involved in this reasoning, it is always possible to reformulate the problem as a 2-dimensional one by projecting the vectors onto the plane that contains the origin, v1 and v2.

It is easy to see that what follows is a set of necessary and sufficient conditions:

m ≤ ‖v1‖ ≤ M
m ≤ ‖v2 − v1‖ ≤ M
−2α ≤ θ ≤ 2α. (2.8)

2.3.2 The Gram matrix

When geometric configurations19 are involved, lengths, distances and angles can be expressed in terms of the Gram matrix.

Definition A configuration (of n vectors) over Rm is a set of vectors a1, a2, . . . , an ∈ Rm. Moreover, we shall say that li = ‖ai‖ is the length of the i-th vector; that dij = ‖ai − aj‖ is the distance between ai and aj; and that ρij = aiTaj / (‖ai‖ ‖aj‖) is the correlation coefficient between ai and aj.

Definition The Gram matrix for a configuration of n vectors over Rm is given by:

G = {Gij} = ATA,  A = [a1 . . . an]. (2.9)

It is, indeed, Gij = aiTaj and, in particular, Gii = li2.
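As a small illustration of the definition, the quantities li, dij and ρij can be recovered from G alone, without going back to the vectors themselves; the two vectors below are arbitrary.

```python
import numpy as np

# Sketch: recovering lengths, distances and correlation coefficients of a
# configuration from its Gram matrix G = A^T A, as in equation (2.9).
a1 = np.array([3.0, 0.0, 0.0])
a2 = np.array([0.0, 4.0, 0.0])
A = np.column_stack([a1, a2])
G = A.T @ A

l1 = np.sqrt(G[0, 0])                            # length of a1
d12 = np.sqrt(G[0, 0] + G[1, 1] - 2 * G[0, 1])   # distance between a1 and a2
rho12 = G[0, 1] / np.sqrt(G[0, 0] * G[1, 1])     # correlation coefficient
print(l1, d12, rho12)   # 3.0 5.0 0.0
```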

It should be noted that the one given here is not the definition of the Gram matrix classically given within some fields (e.g. optimization theory, system theory, . . . ); often the Gram matrix is defined by considering what we would call a configuration of n vectors over Rn and by building it as it has been presented here (i.e. the classical Gram matrix is a Gram matrix such that A is a square matrix). It is easily seen, then, that the classical Gram matrix is a special case of what we have defined. The reader is referred to [2, pp. 405–410] or [3] for more details.

The Gram matrix plays a very important role in a variety of different fields; for this reason its many applications have yielded the establishment of several results. An enumeration of the properties of the Gram matrix, as well as of the related results, is beyond the goals of this treatment; therefore only the useful ones will be introduced when necessary.

19. We will soon properly define a configuration.

2.3.3 Visibility of a pair of points—the Gram matrix approach

Equations 2.8 can be modified in order to take advantage of the definition of a Gram matrix for the problem now being examined.

With reference to figure 2.5 it is easy to see that:

‖v2 − v1‖ cos(θ) = ‖v1‖ + ‖v2‖ cos(π − β) (2.10)

and, therefore, in order for the condition |θ| ≤ 2α to hold, it must be21:

‖v2 − v1‖ cos(2α) + ‖v2‖ cos(β) ≤ ‖v1‖; (2.11)

the necessary and sufficient conditions have thus become:

m ≤ ‖v1‖ ≤ M
m ≤ ‖v2 − v1‖ ≤ M
‖v2 − v1‖ cos(2α) + ‖v2‖ cos(β) ≤ ‖v1‖, (2.12)

where α (and hence cos(2α)), m and M are known, and ‖v1‖, ‖v2‖, ‖v2 − v1‖ and cos(β) can be obtained from a specifically designed Gram matrix. Notice that, in fact, if we consider a Gram matrix for a configuration such that vi and vj are the vectors of interest (e.g. vi = v1 and vj = v2) it is: ‖v1‖ = li, ‖v2‖ = lj, ‖v2 − v1‖ = dij = dji and cos(β) = ρij = ρji; it is easy to see (it suffices to consider that every component of the Gram matrix is Gij = viTvj) that li = √Gii, dij = √(Gii + Gjj − 2Gij) and ρij = Gij/√(Gii Gjj); this leads us towards the following lemma.

Lemma 2 Given two vectors v1 and v2, the problem of finding the sensing axis iA = (iAx, iAy, iAz)T for a sensing agent with position pA = v1 and cone C = (iAx, iAy, iAz, m, M, α) such that it senses both the origin and v2, has a solution if and only if:

m ≤ √G11 ≤ M
m ≤ √(G11 + G22 − 2G12) ≤ M
√(G11 + G22 − 2G12) cos(2α) + √G22 · G12/√(G11 G22) ≤ √G11, (2.13)

where G = {Gij} is the Gram matrix for the configuration of the 2 vectors v1, v2 over R2.

21. Please remember that α ∈ [−π/2, π/2].

The previous lemma can be more compactly expressed in terms of lengths, distances and correlation coefficients by rewriting the conditions as:

m ≤ l1 ≤ M
m ≤ d12 ≤ M
d12 cos(2α) + l2 ρ12 ≤ l1. (2.14)

2.3.4 The 3-dimensional case

When considering two vectors v1 and v2 in the 3-dimensional space and trying to figure out the difference between this case and what has been described so far, it turns out that the same considerations apply. Angles and lengths will continue to be able to describe the conditions for the visibility of both v2 and the origin from v1, and so will the Gram matrix. The only difference is that G = {Gij} will now be the Gram matrix for a configuration of 2 vectors over R3, the vectors being, again, v1 and v2.
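The conditions of Lemma 2, in the compact form of equation 2.14, translate directly into a numeric test that works unchanged in the 3-dimensional case; `pair_visibility` is a hypothetical helper name, not used in the thesis.

```python
import numpy as np

def pair_visibility(v1, v2, m, M, alpha):
    """Sketch of the test in Lemma 2 / equation (2.14): can an agent at v1,
    with sensing range [m, M] and semi-aperture alpha, sense both the origin
    and v2 for some choice of sensing axis?"""
    A = np.column_stack([np.asarray(v1, float), np.asarray(v2, float)])
    G = A.T @ A                               # Gram matrix of (v1, v2)
    l1, l2 = np.sqrt(G[0, 0]), np.sqrt(G[1, 1])
    d12 = np.sqrt(G[0, 0] + G[1, 1] - 2 * G[0, 1])
    rho12 = G[0, 1] / (l1 * l2)
    return (m <= l1 <= M and m <= d12 <= M
            and d12 * np.cos(2 * alpha) + l2 * rho12 <= l1)
```

With a wide field of view the test passes for nearby points, while a narrow semi-aperture makes the angular condition fail even when the distance conditions hold.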

2.3.5 The overall goal

We shall now show the role played by the discussion presented so far in defining the global goal of the set of vehicles.

As previously mentioned, the goal that we want to describe can be loosely stated as "given a stable22 leading structure we want every vehicle to move to a position such that it senses both its leader—if any—and some target point23". This goal is easily seen to be amenable to being decoupled into several problems of the kind described in section 2.3.1 (we will refer to these as pair visibility problems).

First of all we must consider a configuration of vectors over R3, namely pA1(k), . . . , pAn(k), where A1, . . . , An is the set of all vehicles and k is the time. We will express the goal by putting constraints on the values of some elements of the Gram matrix for such a configuration, Gij(k)24. The first

22. We recall what stable means in this context: ∃ k̂ : k, j ≥ k̂ ⇒ ldr(i, k) = ldr(i, j) ∀i; this, in turn, includes the case when the leading function does not vary with time.

23. The target point will always be assumed to be the origin of the reference frame.

24. For the sake of simplicity we will assume, in what follows, that f : Ai → i is the map of choice between the set of all agents A and the set [1, . . . , n] ⊂ N; more details can be found in section 2.2.2. Moreover, every vehicle is assumed to be equipped with a sensor such that the field of view's limits are m (minimum distance), M (maximum distance) and α (semi-aperture); considering different limits for each agent would imply a huge amount of notation while not changing the overall meaning of the whole discussion.


thing to do is stating that, in order for our goal to be reached at time k, it must be:

m ≤ √Gii(k) ≤ M ∀i. (2.15)

Moreover, if ldr(i, k) = j, we want the following conditions to hold:

m ≤ √(Gii(k) + Gjj(k) − 2Gij(k)) ≤ M
√(Gii(k) + Gjj(k) − 2Gij(k)) cos(2α) + √Gjj(k) · Gij(k)/√(Gii(k) Gjj(k)) ≤ √Gii(k); (2.16)

these, together with the condition on Gii, add up to the conditions for the single pair visibility problem. When an agent has no leader, no additional conditions regarding it will be put in place; it will, therefore, only be constrained to lie within some given minimum and maximum distance from the origin (as for equation 2.15). Our goal can now be fully stated.

Definition Given a set of evolving sequences for positions {{dcsAi(k)}k≥0 | Ai ∈ A}, we say that the network of agents reaches the goal at time k̂ if and only if l > k̂ ⇒ {Gij(l)} satisfies the stated conditions25; where {Gij(k)} is the Gram matrix for the configuration pA1(k), . . . , pAn(k) over R3.

25. Here it is assumed that the goal is reached only if the desired conditions keep holding indefinitely.
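The goal just defined can be checked numerically at a single time instant. The sketch below uses a hypothetical helper, `goal_reached`; the third condition of 2.16 is algebraically simplified to dij cos(2α) + Gij/√Gii ≤ √Gii, and equal sensing parameters are assumed for all agents, as in the text.

```python
import numpy as np

def goal_reached(P, ldr_k, m, M, alpha):
    """Sketch: check the goal conditions at one time instant.

    P: n x 3 array, row i-1 holding the position of agent i.
    ldr_k: dict i -> leader id (0 = free agent).
    """
    G = P @ P.T                               # Gram matrix of the configuration
    n = P.shape[0]
    for i in range(1, n + 1):
        li = np.sqrt(G[i - 1, i - 1])
        if not (m <= li <= M):                # distance from the origin in [m, M]
            return False
        j = ldr_k[i]
        if j == 0:
            continue                          # free agent: no extra conditions
        Gii, Gjj, Gij = G[i - 1, i - 1], G[j - 1, j - 1], G[i - 1, j - 1]
        dij = np.sqrt(Gii + Gjj - 2 * Gij)
        if not (m <= dij <= M):
            return False
        if dij * np.cos(2 * alpha) + Gij / np.sqrt(Gii) > np.sqrt(Gii):
            return False
    return True
```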

Chapter 3

Results

This chapter will translate the mathematical framework defined so far into the accomplishment of the actual cooperation goal.

The first section of this chapter is aimed at thoroughly analysing the feasibility of the proposed goal. Within the second section a distributed algorithm for the accomplishment of the rendezvous task is introduced. Finally, simulation results are presented.

3.1 Mathematical analysis

The purpose of this section is analysing the feasibility of the goal. Three aspects are considered, namely: how to keep connectivity among vehicles; the constraints that this objective imposes as far as dynamic changes in the leading structure are concerned; finally, some conditions for the overall goal to be feasible are derived.

Maintaining connectivity plays a very important role. As being in visual contact is the only way in which a cooperative behaviour can be established, keeping (part of) the visual links that are in place—if any—is the only means to maintain cooperation. The first concern when designing cooperative algorithms is to devise some connection maintaining strategy.

In order to allow vehicles to dynamically choose a new leader relying on available information, it is necessary to find out the conditions that the algorithm in charge of choosing the new leader must satisfy, so as to guarantee that the overall network is always fully connected.

Thirdly, an iterative, distributed way of building a feasible Gram matrix is devised; it is not a method able to build every feasible Gram matrix, but it is, indeed, guaranteed to produce a solution to the overall goal.


3.1.1 Connection maintaining

The problem to be analyzed here is to establish whether it is possible or not to let every agent be able to keep visual contact with its leader. Every agent should consider itself as a leader and act so that the connection maintaining strategy—if any—of its followers is feasible. Moreover, as we will see that it is, in fact, possible to do so, we want to characterize the set of admissible inputs.

A first result can be reached based on the definitions we have already given; it can be useful as an aid to think about what we will see in the remainder of this section.

Lemma 3 Given an agent A1 = (x1, y1, z1, φ1, θ1), a vector (x2, y2, z2)T and a couple of positive real numbers m, M, it is possible to find a sensing agent A2 = (x2, y2, z2, φ2, θ2) having cone:

C = (x2, y2, z2, m, M, α) (3.1)

such that A2 senses A1, if and only if m ≤ ‖pA1 − pA2‖ ≤ M.

Lemma 3 tells us that whether it is possible or not to find an agent, lying at a given position, that senses a given agent depends only on the distance between the point and the given agent.

Proof The only if statement is part of the very definition of sensing; we need to prove the converse. First of all we show that, given a vector p = (x, y, z)T ∈ R3, it is always possible to find two angles φ ∈ (−π, π] and θ ∈ [−π/2, π/2] such that:

(1/d) (x, y, z)T = (cos(φ) cos(θ), sin(φ) cos(θ), − sin(θ))T (3.2)

where d = ‖p‖. For instance, if z2 = d2 (and, therefore, x = y = 0), φ = 0, θ = arcsin(−z/d) is a solution; otherwise we can find the solution by noting that cos2(θ) = 1 − sin2(θ) ≠ 0, cos(θ) > 0 and using the following identity:

1/(d2 − z2) (x2, y2)T = 1/(x2 + y2) (x2, y2)T = (cos2(φ), sin2(φ))T (3.3)

which allows us to write φ = arctan2( sign(y)√(y2/(x2 + y2)), sign(x)√(x2/(x2 + y2)) ). The angle θ is trivially found as θ = arctan2(−z, √(x2 + y2)). Taking p = pA1 − pA2 and choosing φ2, θ2 as above gives a sensing axis along which A1 lies; since m ≤ ‖pA1 − pA2‖ ≤ M by hypothesis, A1 is sensed by A2.
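The construction used in this proof—recovering φ and θ from a direction vector—can be sketched as follows; `direction_to_angles` is a hypothetical helper name.

```python
import math

def direction_to_angles(p):
    """Sketch of the construction in the proof of Lemma 3: given a nonzero
    p = (x, y, z), find phi in (-pi, pi] and theta in [-pi/2, pi/2] with
    p/||p|| = (cos(phi)cos(theta), sin(phi)cos(theta), -sin(theta))."""
    x, y, z = p
    phi = math.atan2(y, x)                      # azimuth
    theta = math.atan2(-z, math.hypot(x, y))    # elevation, cos(theta) >= 0
    return phi, theta
```

`math.atan2` handles all quadrants at once, which subsumes the sign bookkeeping done explicitly in the proof.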

In order to reach further results we will need to introduce the concept of limited sensing; this is presented here, as opposed to presenting it in section 2.2, since it is not needed in order to describe our framework; its definition is merely functional to the statement of the following results.

Definition Given two cones

C1 = (x1, y1, z1, m1, M1, α1), C2 = (x2, y2, z2, m2, M2, α2) (3.4)

we will say that they are congruent if there exists ρ ∈ R, |ρ| < min{m1, m2}, such that:

• iC1 = iC2 and

• M1 − M2 = m2 − m1 = (m2/2) (cos(α2) − cos(α1)) = ρ.

Moreover, we will write C1 − C2 = ρ and C2 − C1 = −ρ.

In the remainder of this section we will often need to define a second cone associated with a sensing vehicle; this cone Ci will be smaller than the vehicle's one (call it C) in the sense that:

• C and Ci will be congruent;

• ∃ρ > 0 : C − Ci = ρ.

In particular, we will consider whether a sensed vehicle could or could not be sensed if a limited sensing condition were in place (i.e. if Ci were the sensing agent's cone). The limited sensing is completely described by the parameter ρ, so that it will be clear what we will be referring to when using sentences like "reducing, by the amount ρ, the sensing capability of the agent". Figure 3.1 shows a 2d representation of what has been described so far.

The previous discussion leads us towards the following result regarding the ability of agents to maintain backward connectivity:

Theorem 4 Given a sensing agent A2 having outer cone C, a cone Ci such that 0 < ρ = C − Ci and an agent A1 being visible by A2 even with ρ-limited sensing capabilities, every agent Â1 for which ‖pÂ1 − pA1‖ ≤ ρ is sensed by A2 (by means of its actual sensing cone).

This theorem basically ensures that a strong condition on the relation between a sensing agent and a sensed agent guarantees the existence of a weak relation between the sensing agent and every agent not too distant from the previously considered sensed agent.

Figure 3.1: A 2d representation of the limited sensing.

Proof Let p = pA1 − pA2, s = pÂ1 − pA1 and p̂ = pÂ1 − pA2 = s + p; since mi − ρ ≤ ‖p‖ − ‖s‖ ≤ ‖p + s‖ ≤ ‖p‖ + ‖s‖ ≤ Mi + ρ, it is true that m ≤ ‖p̂‖ ≤ M.

All that is left for us to do now is to show that ⟨p̂, iA2⟩ ≥ ‖p̂‖ cos(α). We consider the following inequalities:

‖p + s‖ cos(α) ≤ (‖p‖ + ‖s‖) cos(α) ≤ (‖p‖ + ρ) cos(α) ≤ ‖p‖ cos(α) + ρ
‖p‖ (cos(αi) − cos(α)) ≥ mi (cos(αi) − cos(α)) ≥ 2ρ
−ρ ≤ ⟨s, iA2⟩ (3.5)

Since from the second inequality it can be inferred that ‖p‖ cos(α) + ρ ≤ ‖p‖ cos(αi) − ρ, the first inequality can be expanded by saying ‖p + s‖ cos(α) ≤ ‖p‖ cos(αi) − ρ; by using the third inequality it is possible to write:

‖p + s‖ cos(α) ≤ ‖p‖ cos(αi) + ⟨s, iA2⟩ ≤ ⟨p, iA2⟩ + ⟨s, iA2⟩ (3.6)

and, therefore:

‖p + s‖ cos(α) ≤ ⟨p̂, iA2⟩. (3.7)

The next step towards the establishment of our final result is the statement of an existence theorem for the forward connectivity maintaining problem (i.e. the connection maintaining problem as far as a follower agent is considered).

Theorem 5 Given a sensing agent A2 having cone

C = (x, y, z, m, M, α), (3.8)

a reduced sensing cone

Ci = (xi, yi, zi, mi, Mi, αi), (3.9)

such that C − Ci = ρ, and an agent A1 being sensed by A2, there exists a sensing agent Â2 such that ‖pÂ2 − pA2‖ ≤ ρ that would sense A1 even if its sensing capabilities were reduced by an amount ρ.

Proof As a first step we will find a suitable position for the agent Â2, i.e. a vector pÂ2 such that:

‖pÂ2 − pA2‖ ≤ ρ
mi ≤ ‖pA1 − pÂ2‖ ≤ Mi; (3.10)

then lemma 3 leads us to the completion of our proof. Let pÂ2 = pA2 + t (pA1 − pA2)/‖pA1 − pA2‖ = pA2 + t (pA1 − pA2)/d. By such a choice we can now write:

‖pA1 − pÂ2‖ = ‖pA1 − pA2‖ |1 − t/d| = |d − t|. (3.11)

Being A1 partially sensed by A2, we can also write:

m ≤ d ≤ M. (3.12)

We now claim that d − t ≥ 0, so that asking mi ≤ |d − t| ≤ Mi is equivalent to asking d − Mi ≤ t ≤ d − mi. If we take t = min{d − mi, ρ} we are sure that t ≤ d − mi and, since inequality 3.12 holds, we also know that d − Mi ≤ M − Mi = ρ. It is trivial to see that d − Mi ≤ d − mi. We can therefore conclude that:

d − Mi ≤ min{d − mi, ρ} ≤ d − mi. (3.13)

Our claim (d − t ≥ 0) must now be shown to be true. If t = min{d − mi, ρ} = d − mi we have d − t = mi ≥ 0; else, being t = min{d − mi, ρ} = ρ, we have d ≥ ρ + mi and then d − t ≥ mi ≥ 0.

Using lemma 3 we can ensure that it is possible to find a sensing agent Â2 whose position vector is pÂ2 and whose inner cone is

Ci = (xi, yi, zi, mi, Mi, αi) (3.14)

such that A1 is fully visible by Â2.

The reader should notice, now, that the main connection between this theorem and what has been said before is to be found in the statement ‖pÂ2 − pA2‖ ≤ ρ; this allows, in fact, a following agent not to worry about whether it leads some other agent or not; for this reason this is quite a strong result.

With the previous discussion in mind we are eventually ready to state a complete result; it directly follows from theorem 5 and theorem 4.

Corollary 6 Given an agent A1 being sensed by a sensing agent A2, there exists a sensing agent Â2 : ‖pÂ2 − pA2‖ ≤ ρ such that every agent Â1 : ‖pÂ1 − pA1‖ ≤ ρ is sensed by Â2.

The previous results can be interpreted as the statement of existence of an admissible set of points for every agent to move towards at each step, so that connection maintaining is guaranteed. Notice that this is true regardless of the leading structure of choice. The description we obtain from corollary 6, however, is not an explicit one; it would, on the contrary, be useful to have an easy-to-work-with characterization of the feasible region; this is what the remainder of the section is about.

An explicit description of the feasible set is not easy to obtain; this is usually the case when such problems are dealt with. The reader is referred, as an example, to [7]. The authors of [7] face a problem for which they can claim the feasible region to be approximable with a convex set; moreover, the convex set is entirely contained within the admissible set; this turns out to be enough for their purpose. In this work a similar approach is used: a feasible, convex approximation of the feasible set is given.

The problem is again divided in two steps; we will first consider a result regarding the attitude of the agents and then move towards the problem of considering agents' positions.

Lemma 7 Given φ ∈ [−π, π], θ ∈ [−π/2, π/2], α ∈ [0, π/2] and β, γ ∈ [−α/√2, α/√2], the angle between the sensing axes of the agents A = (x, y, z, φ, θ) and Â = (x, y, z, φ + β, θ + γ) is not greater than α.

Proof From the very definition of agent we have:

iA = (cos(φ) cos(θ), sin(φ) cos(θ), − sin(θ))T
iÂ = (cos(φ + β) cos(θ + γ), sin(φ + β) cos(θ + γ), − sin(θ + γ))T. (3.15)

It turns out that iATiÂ = cos(β) cos(γ) + sin(θ) sin(θ + γ) (1 − cos(β)); we will now show that this quantity is not smaller than cos(α) by finding the minimum of f(θ, β, γ) = iATiÂ over the given allowable region (in what follows the stated conditions are intended to hold). First of all we note that:

minθ f(θ, β, γ) = cos(β) cos(γ) − sin2(γ/2) (1 − cos(β)) = f(−γ/2, β, γ) (3.16)

and thus we are interested in finding minβ,γ cos(β) cos(γ) − sin2(γ/2) (1 − cos(β)). We now note that the value:

minβ cos(β) cos(γ) − sin2(γ/2) (1 − cos(β)) (3.17)

is reached when β = ±α/√2, and thus we now want to find minγ cos(α/√2) cos(γ) − sin2(γ/2) (1 − cos(α/√2)). This last value is, again, reached when γ = ±α/√2, leading us to write:

min f(θ, β, γ) = g(α) = cos2(α/√2) − sin2(α/(2√2)) (1 − cos(α/√2)). (3.18)

Since g(α) − cos(α) : [0, π/2] → R is a positive function, the statement is true.
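The positivity of g(α) − cos(α), which the last step of the proof relies on, can be sanity-checked numerically (this is, of course, not a substitute for the proof):

```python
import math

# Numeric sanity check of the last step of Lemma 7: g(alpha) >= cos(alpha)
# on [0, pi/2], with g as in equation (3.18).
def g(alpha):
    a = alpha / math.sqrt(2)
    return (math.cos(a) ** 2
            - math.sin(alpha / (2 * math.sqrt(2))) ** 2 * (1 - math.cos(a)))

# sample the interval [0, pi/2] on a uniform grid
ok = all(g(k * (math.pi / 2) / 1000) >= math.cos(k * (math.pi / 2) / 1000)
         for k in range(1001))
print(ok)
```

The margin g(α) − cos(α) is of order α⁴ near the origin, so the check is tight for small α but still holds in floating point.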

We will now state the result concerning the position of the agents.

Lemma 8 Given a sensing agent A2 having cone Co = (x, y, z, mo, Mo, αo), a cone Ci = (x, y, z, mi, Mi, αi) such that Co − Ci = ρ, an agent A1 being partially sensed by A2, a number t = (max{−ρ, d − Mi} + min{ρ, d − mi})/2 and a vector δ : ‖δ‖ ≤ (min{ρ, d − mi} − max{−ρ, d − Mi})/2 (where p = pA1 − pA2 and d = ‖p‖), it is true that for pA = pA2 + t p/d + δ it is possible to find at least one agent A having position vector pA that fully senses A1. Moreover, ‖pA − pA2‖ ≤ ρ.

Proof In what follows d = ‖pA1 − pA2‖.

First of all we have to show that {δ : ‖δ‖ ≤ (min{ρ, d − mi} − max{−ρ, d − Mi})/2} is a non-empty set. That is readily done by showing that max{−ρ, d − Mi} ≤ min{ρ, d − mi}. Since A1 is partially sensed by A2 we have mo ≤ d ≤ Mo and then:

−ρ ≤ ρ (3.19)
−ρ = mo − mi ≤ d − mi (3.20)
d − Mi ≤ Mo − Mi = ρ (3.21)
d − Mi ≤ d − mi. (3.22)

As a second step we notice that d − t ≥ 0 (it is trivial to show by direct computation). We now write the distance:

pA1 − pA = (pA1 − pA2) (1 − t/d) − δ (3.23)

and notice that:

|d − t − ‖δ‖| ≤ ‖pA1 − pA‖ ≤ d − t + ‖δ‖; (3.24)

and then:

mi ≤ |d − min{ρ, d − mi}| ≤ ‖pA1 − pA‖ ≤ d − max{−ρ, d − Mi} ≤ Mi. (3.25)

This leads us toward the following result.

Theorem 9 Given: positive constants mo, Mo, αo, mi, Mi, αi, ρ such that, if C1 = (cx, cy, cz, mo, Mo, αo) and C2 = (cx, cy, cz, mi, Mi, αi) are two cones, then they are congruent and C1 − C2 = ρ; two points pA1 and pA2 such that mo ≤ ‖pA1 − pA2‖ = d ≤ Mo; a vector δ : ‖δ‖ ≤ (min{ρ, d − mi} − max{−ρ, d − Mi})/2; and two angles β, γ ∈ [−αo/√2, αo/√2]; then, if we set t = (max{−ρ, d − Mi} + min{ρ, d − mi})/2, (x, y, z)T = pA2 + t (pA1 − pA2)/d + δ and (px, py, pz)T = pA1 − (x, y, z)T, and if φ̂ = arctan2(py, px) and θ̂ = arctan(−pz/√(px2 + py2)), the sensing agent A = (x, y, z, φ̂ + β, θ̂ + γ) with sensing axis iA = (ax, ay, az)T having cones Co = (ax, ay, az, mo, Mo, αo) and Ci = (ax, ay, az, mi, Mi, αi) fully senses A1.

The proof is omitted since it is straightforward to connect this problem to the previous lemmas.
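The statement of Theorem 9 is itself a recipe. A sketch follows, with the hypothetical helper name `feasible_pose`; δ, β and γ are assumed to satisfy the bounds stated in the theorem.

```python
import math
import numpy as np

def feasible_pose(pA1, pA2, m_o, M_o, m_i, M_i, rho, delta, beta, gamma):
    """Sketch of the construction in Theorem 9: build one position/attitude
    from which A1 is sensed even with rho-limited sensing. delta, beta,
    gamma parametrise the feasible set (bounds assumed to hold)."""
    pA1, pA2 = np.asarray(pA1, float), np.asarray(pA2, float)
    d = np.linalg.norm(pA1 - pA2)
    assert m_o <= d <= M_o                    # hypothesis of the theorem
    t = (max(-rho, d - M_i) + min(rho, d - m_i)) / 2.0
    pos = pA2 + t * (pA1 - pA2) / d + np.asarray(delta, float)
    px, py, pz = pA1 - pos                    # direction towards A1
    phi = math.atan2(py, px) + beta
    theta = math.atan2(-pz, math.hypot(px, py)) + gamma
    return pos, phi, theta
```

Choosing δ = 0 and β = γ = 0 yields the "central" element of the convex approximation; nonzero values sweep the rest of it.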

3.1.2 Leading function constraints

Let us introduce this section with a brief informal discussion. Imagine a classical directed graph representing the interconnections among vehicles. When only one leader per agent is allowed, and a cycle is present in the directed graph representing data passing in the network of vehicles, two situations are possible: either

• there are no free agents, or

• there exist at least two un-connected components of the connectivity graph.

Moreover, if the network is made of n vehicles and (exactly) n − 1 data links, a free agent will always be present, and there will be un-connected components if and only if cycles are present. It is easy to obtain such results by using graphs in the conventional way to represent our problem. Such an analysis, however, is also possible by taking advantage of the DAG definition given above; such reasoning will be useful in order to obtain further results. Figure 3.2 shows a portion of a DAG where two time instants have been considered (namely k and k + 1); as should be noticed at once, several odd edges have been added to the DAG as a substitute for horizontal forward links. As strange as such additional edges may seem, it is clear that no information regarding the status of the network

Figure 3.2: A DAG with backward link substitution.

at time k + 1 can be intended to be considered in making any choice at time k. By introducing them, however, some results can be stated. First of all, it is clear that the graph in figure 3.2 has m = 2n nodes where a usual graph would have had n of them; moreover, if the network is made of exactly n − 1 links, the DAG will have m − 1 = 2n − 1 edges. Therefore the DAG will have unconnected components if and only if some directed cycle is present in it. Moreover, since the additional backward edges connect every node of the right side of the DAG with its correspondent node on the left side, un-connected components—if any—will be formed by components belonging to both the DAG's sides. This leads us to say that a cycle in the DAG with added backward edges is present if and only if the network has un-connected components3. These are the reasons why the conditions:

ldr(i, k) ≠ i, . . . , ldr(ldr(. . . ldr(i, k) . . . , k), k) ≠ i, . . .
∀ k ∃! i : ldr(i, k) = 0 (3.26)

have been introduced in defining the leading function (see definition 2.2.2). For the sake of completeness it should be said that the second condition in 3.26 would have been introduced for another reason, too; in order to understand why, consider the following example of a leading function:

have been introduced in defining the leading function (see definition 2.2.2). For the sake of completeness it should be said that the second condition in 3.26 would have been introduced for another reason, too; in order to understand why, consider the following example of a leading function:

ldr(i, k) = { i − 1   if i ≠ 0 and agent i senses agent i − 1 at time k
            { 0       otherwise,
(3.27)

where we have assumed that f : Ai ↦ i is the map of choice between the set of all agents A and the set [1, . . . , n] ⊂ N; more details can be found in

³ The presence of exactly one free agent, as well as the DAG having exactly m − 1 edges, are implicit assumptions.


section 2.2.2. Such a leading function admits a quite simple interpretation that can be expressed, roughly speaking, by saying: agents should move in a queue, whenever they manage to. Of course, as for the identically null leading function, every algorithm implements a connection maintaining strategy⁴ when this leading function is considered. The second condition in 3.26 is aimed at preventing a leading function from being circumvented by switching agents into a free operation mode, as in this example.
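A minimal sketch of this queue-forming leading function, together with a mechanical check of the first-order conditions in 3.26, might look as follows. This is only an illustration: the sensing predicate `senses` is a hypothetical stand-in for the vehicles' limited field-of-view sensors, and agents are identified with their indices 1, . . . , n.

```python
def ldr_queue(i, k, senses):
    """Leading function of eq. (3.27): agent i follows agent i - 1
    whenever it senses it, and runs free (leader 0) otherwise.
    `senses(i, j, k)` is a hypothetical sensing predicate."""
    if i != 0 and senses(i, i - 1, k):
        return i - 1
    return 0

def satisfies_326(ldr, agents, k):
    """First-order conditions (3.26) at time k: iterating the leading
    function from any agent never returns to that agent (no cycles),
    and exactly one agent is free (leader 0)."""
    n = len(agents)
    for i in agents:
        j, steps = ldr(i, k), 0
        while j != 0 and steps <= n:
            if j == i:
                return False        # agent i transitively leads itself
            j, steps = ldr(j, k), steps + 1
    return sum(ldr(i, k) == 0 for i in agents) == 1
```

With an always-true sensing predicate, `ldr_queue` produces a single chain in which agent 1 is the unique free agent, so `satisfies_326` returns `True`; a mutual-leadership assignment such as {1 ↦ 2, 2 ↦ 1} makes it fail.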

The expressed conditions are first-order conditions, meaning that they only involve the definition of the leading function at a fixed time instant k (or, which is the same, two time instants on the DAG). An ∞-order condition has been introduced as well when defining what a stable leading function is (see what follows definition 2.2.2). Some second-order conditions on the leading function will be derived throughout this section.
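The counting argument used earlier (a stage with m nodes and m − 1 edges is connected exactly when it is acyclic) is the standard tree characterisation of graphs, and can be verified computationally. The sketch below treats one stage of the DAG, additive backward edges included, as an undirected edge list; the node labels and example edges are illustrative, not taken from figure 3.2.

```python
def is_connected(n_nodes, edges):
    """Depth-first search over an undirected edge list."""
    adj = {u: [] for u in range(n_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n_nodes

def has_cycle(n_nodes, edges):
    """Union-find: a cycle appears exactly when an edge joins two
    nodes that already lie in the same component."""
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True
        parent[ru] = rv
    return False

# With m = 6 nodes and m - 1 = 5 edges the two properties are
# complementary, as claimed.
tree = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
looped = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5)]
assert is_connected(6, tree) and not has_cycle(6, tree)
assert not is_connected(6, looped) and has_cycle(6, looped)
```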

Up to this point the reader has not been encouraged (nor suggested) to think of the leading function as being dynamically generated by the agents themselves; it is, however, highly desirable for it to be dynamic, as opposed to being entirely defined in advance; this is true, at least, when time-varying leading functions are considered. In terms of the notation introduced in section 2.2.3, the decision taken by the algorithm will no longer be completely translated into the position and attitude of a vehicle; it will also contain some information about the leading function, i.e. dcsAi(k) = (Ai(k + 1), ldr(i, k + 1)).

Our goal is to define how the leading function can be generated on the fly. We want it to be generated in a distributed way (i.e. agent i should carry out, at time k, computations involving only information available to it and producing the value of ldr(i, k + 1)). Moreover, we want the DAG generated by ldr(i, k + 1) to be connected.
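As a hedged illustration of what such a distributed computation could look like, agent i might compute its extended decision, the pair (Ai(k + 1), ldr(i, k + 1)), from purely local data. This is one plausible rule, not the algorithm developed in this work: the names `plan_motion` and `visible_agents`, and the keep-or-switch policy, are all assumptions.

```python
def plan_motion(view):
    """Placeholder for the motion-planning part of the algorithm;
    here it simply keeps the current pose."""
    return view["pose"]

def decide(i, view, k):
    """One distributed decision step for agent i. `view` holds only
    information locally available to agent i at time k: its pose, its
    current leader, and the set of agents it currently senses."""
    next_pose = plan_motion(view)
    visible = view["visible_agents"]
    current = view["leader"]
    if current in visible:
        next_leader = current          # keep the leader while it is sensed
    elif visible:
        next_leader = min(visible)     # otherwise adopt any sensed agent
    else:
        next_leader = 0                # fall back to free operation mode
    return next_pose, next_leader      # the extended decision of agent i
```

Note that `decide` uses no global quantity: everything it reads is in `view`, which is the distributedness requirement stated above.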

It is clear that, since the decision about ldr(i, k + 1) has to be made, the DAG will be modified by including all the information used at time k (i.e. it will show data passing aimed at computing both the new value of the leading function and the new position and attitude of each vehicle). Let us consider, as an example, the scenario represented in figure 3.3, where dashed edges indicate links that are only used within the computation of the new value of the leading function. Agents 2 and 3 in the figure are imagined to sense every other agent (at the shown time instant); in the shown example they both change their leaders at time k. Please notice that, while the solid portion of the DAG regarding the time interval [k, k + 1] (we will call this the k ÷ k + 1 stage of the DAG) has no cycles in it (we are implicitly considering the additive backward links to be in place), so that full connectivity has been maintained, this cannot be known at time k by

⁴ The meaning of the expression implementing a connection maintaining strategy will be discussed.


Figure 3.3: A DAG representing the information passed in order to dynamically generate the leading function.

means of distributed information. The decided value ldr(2, k + 1), in fact, is not supposed to be known by agent 3, just as agent 2 does not know what the value of ldr(3, k + 1) has been chosen to be; how to guarantee the distributedness of the algorithm will be discussed.
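The risk can be made concrete with a toy example (the agent indices follow figure 3.3, but the decision values are invented for illustration): if agents 2 and 3 both switch leaders at time k using only local information, nothing prevents them from electing each other, which closes a cycle in the k ÷ k + 1 stage and therefore breaks connectivity.

```python
def has_ldr_cycle(ldr_map):
    """Follow each chain of leaders (0 = free agent ends a chain) and
    report whether some chain returns to its starting agent."""
    for start in ldr_map:
        seen, j = set(), ldr_map[start]
        while j != 0 and j not in seen:
            if j == start:
                return True
            seen.add(j)
            j = ldr_map[j]
    return False

# Each agent decided in isolation; neither 2 nor 3 could know the
# other's new choice at time k.
ldr_next = {1: 0, 2: 3, 3: 2}     # 2 elects 3, and 3 elects 2
assert has_ldr_cycle(ldr_next)    # a cycle: connectivity is lost
```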

Figure 3.4: A DAG’s composite node representing a vehicle in more detail; three tasks are executed by the vehicle.

Some aspects have to be clarified in order to allow the reader to understand the meaning of what has been said so far. Although we will keep using the DAGs as shown in figure 3.3, a more complete representation is possible and has to be introduced. Consider figure 3.4, where a schematic representation of the computation tasks of a vehicle is given; this basically is a graphical
