
Report

R. Baldoni, A. Milani, L. Querzoni, S. Scipioni, S. Tucci-Piergiovanni

Università di Roma “La Sapienza"

February 23, 2007

1 Introduction

Middleware for data distribution is a natural match, and often a fundamental architectural building block, for a large class of real-time, mission- and safety-critical application domains, such as industrial process control, air traffic control, and defense systems.

These application domains are characterized by real-time information that flows from sensors to controllers and from controllers to actuators. Timeliness and reliability of data distribution are essential for the correctness and safety of such systems. If the system fails to deliver data on time, instability may arise, resulting in threats to either infrastructure or human lives.

Historically, most pub/sub middleware standards, e.g., the Common Object Request Broker Architecture (CORBA) Event Service (CosEvent) [15], the CORBA Notification Service (CosNotification) [16], and the Java Message Service (JMS) [10], as well as most proprietary solutions, have lacked the support necessary for real-time, mission- and safety-critical systems. The main limits of these solutions are either limited or nonexistent support for Quality of Service (QoS), or the lack of architectural properties that promote dependability and survivability, e.g., the absence of single points of failure.

Recently, in order to fill this gap, the Object Management Group (OMG) has standardized the Data Distribution Service (DDS) [7]. This standard gathers the experience of proprietary real-time pub/sub middleware solutions independently engineered and evolved within the industrial process control and defense systems application domains. The resulting standard is based on a completely decentralized architecture and provides an extremely rich set of configurable QoS.

Currently, several vendors provide their own DDS implementations, each characterized by additional services and proprietary extensions to the standard. At present, the DDS specification does not address interoperability among different vendors' implementations. Hence, integrating multiple DDS applications based on heterogeneous implementations requires the development of custom bridges between them.

For this reason, the OMG is currently working on defining a companion specification for DDS, describing a standard interoperability protocol that will allow DDS implementations from different vendors to exchange messages. The definition of


such a protocol should take into account the requirements on which the DDS specification is built: efficient and scalable information diffusion, absence of single points of failure, efficient discovery of publishers and subscribers, and enforcement of QoS contracts. While all these issues can easily be dealt with within a single implementation, it is not simple to identify the requirements that guarantee interoperability among different solutions. This imposes finding a trade-off between the efficiency achievable through proprietary extensions and the generality induced by interoperability needs.

This document is an extended report on the activities carried out on the above topics during the first year of collaboration between Università di Roma “La Sapienza” and Selex-SI. In this report we identify the main challenges in the realization of a scalable, QoS-driven, interoperable data distribution middleware. In particular, we identify three basic DDS features, related respectively to scalable diffusion, timeliness, and data availability, and then analyze how their implementation could influence the interoperability protocol.

2 Data Distribution Service

In this section we provide a basic explanation of the DDS specification, introducing the concepts used in the next sections. DDS is based on the abstraction of a Global Data Space (GDS) (see Figure 1), where publishers and subscribers respectively write (produce) and read (consume) data. In the remainder of this section we characterize the entities that constitute this global data space.

Figure 1: DDS Global Data Space.

Topic. A topic defines a type that can be legally written on the GDS. In the present standard, topics are restricted to non-recursive types defined by means of the OMG Interface Definition Language (IDL). The DDS provides the ability to distinguish topics of the same type by relying on a simple key. Finally, topics can be associated with specific QoS.

Publisher. Topics allow the definition of the application data model, as well as the association of QoS properties with it. Publishers, on the other hand, provide a means


of defining data sources. A publisher can declare the intent of generating data with an associated QoS and write the data in the GDS. The publisher's declared QoS has to be compatible with that defined by the topic.

Subscriber. Subscribers read topics in the global data space for which a matching subscription exists (the rules that define a matching subscription are described below).

Subscription. A subscription is the logical operation which glues a subscriber to its matching publishers. In the DDS, a matching subscription has to satisfy two different kinds of conditions. One set of conditions relates to concrete features of the topic, such as its type, name, key, and actual content. The other set relates to QoS: the matching follows a requested/offered model in which the requested QoS has to be the same as, or weaker than, the offered one. As an example, a matching subscription for a topic which is distributed reliably can request the topic to be distributed either reliably or best effort.
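The requested/offered rule can be sketched in a few lines. This is our own illustration, not the normative DDS matching algorithm: the level names follow the specification's RELIABILITY policy, but the numeric ranking and function name are our encoding.

```python
# Hypothetical sketch of the DDS requested/offered (RxO) matching rule for
# the RELIABILITY QoS: a subscription matches when the requested level is
# the same as, or weaker than, the offered one.

# Levels ordered from weakest to strongest (ranks are our own encoding).
RELIABILITY_RANK = {"BEST_EFFORT": 0, "RELIABLE": 1}

def reliability_matches(offered: str, requested: str) -> bool:
    """True iff the requested reliability is <= the offered one."""
    return RELIABILITY_RANK[requested] <= RELIABILITY_RANK[offered]

# A reliable publisher satisfies both best-effort and reliable subscribers...
assert reliability_matches("RELIABLE", "BEST_EFFORT")
assert reliability_matches("RELIABLE", "RELIABLE")
# ...but a best-effort publisher cannot satisfy a reliable subscriber.
assert not reliability_matches("BEST_EFFORT", "RELIABLE")
```

The same ordering idea extends to the other RxO policies, each with its own partial order of values.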

Quality of Service. One of the key distinguishing features of the DDS compared to other pub/sub middleware is its extremely rich QoS support. By relying on a rich set of QoS policies, the DDS provides the ability to control and limit (1) the use of resources, such as network bandwidth and memory, and (2) many non-functional properties of the topics, such as persistence, reliability, and timeliness. In Section 5 we provide an overview of the most interesting QoS defined by the DDS and classify them with respect to the aspect they allow one to control. In particular, we highlight the QoS that are most difficult to implement in a WAN environment.

3 Activity Report

3.1 Overview

During this year we studied the DDS and its application environment in depth. We identified several weaknesses of the DDS specification that arise in Wide-Area Networks and in a Network-Centric scenario. The definition of the Network-Centric scenario is one of the activities conducted in collaboration with Selex-SI. Our contribution can be divided into several phases:

1. We analyzed the DDS specification and its interoperability protocol.

2. We analyzed the behavior of DDS in WANs and identified several challenges related to scalability and to the implementation of QoS policies in a WAN environment. We give an in-depth view of these problems in Section 3.2.

3. We worked with Selex-SI to define a working environment for DDS applications.

This environment, analyzed in NATO documents, should be highly dynamic and composed of several networks that can be connected and deployed only for a mission. A black core network is the heart of this scenario, and around it


rotate clouds of networks composed of national, regional, mobile, and ad-hoc sub-networks. The resulting scenario is very dynamic and requires DDS to have several adaptive mechanisms that avoid expensive manual configuration. This phase should produce several requirements to be used in the testbed and in the analysis of the solutions developed during the collaboration. Currently we have identified some actors, such as data centers, laptops, and sensors, characterized by different degrees of mobility, uptime, and resource availability. The actors have different roles in the scenario, to which interesting measurements correspond.

The environment is further characterized by the QoS required by the participant entities and by the topics used. These QoS are defined in terms of the ordering and persistence properties defined in the DDS specification, and in terms of the Expected Data Type used by the entities. An Expected Data Type defines the set of QoS required by a specific class of topics and the characteristics of the event source in terms of size and rate of produced events.

4. Finally, we chose to face one of the problems raised by the Network-Centric scenario and the DDS weaknesses: data ordering. A basic building block for realizing data ordering is clock synchronization. We therefore analyzed several algorithms for clock synchronization in distributed systems, and we are currently developing a new algorithm based on the self-synchronization properties of coupled oscillators. An analysis of this problem and of the proposed solutions can be found in Section 5.2.

3.2 Limits of the DDS specification in WAN settings

We analyzed the DDS specification and the Real-Time Publish Subscribe (RTPS) interoperability protocol in order to verify whether the current standard can support a large number of subscribers. We then examined QoS support in a WAN environment.

So far we have identified two classes of problems related to the DDS and RTPS specifications:

1. Limits of the DDS specification for supporting scalability: the DDS specification is well suited for controlled environments (e.g., LANs) where latency, workload, and other network parameters are well known, but it can present scalability problems.

2. Challenging QoS implementation in WANs: some QoS policies cannot be easily implemented in a WAN.

In the following sections we analyze and motivate the above problems, propose possible solutions and new architectural components, and present current and future work on both classes of problems.


4 Supporting Scalability for DDS in WAN settings

The ability to support a large number of subscribers and a high message rate is one of the main objectives of the DDS specification. Currently, however, publishers and subscribers are organized in an overlay according to a relationship of mutual knowledge.

Figure 2: DDS direct linked topology

In fact, the current DDS semantics is based on a simple overlay structure where a publisher of a topic is directly linked with all the subscribers for the same topic. This solution can be implemented in a simple and efficient way, and it is therefore the natural first option for basic, small-scale applications. However, when the number of subscribers grows, as well as the update rate of the data objects, this basic overlay organization presents obvious scalability limits.

A solution to this problem is to organize the subscribers into a hierarchy (i.e., a tree overlay rooted at the publisher) where each subscriber also acts as a publisher for the subscribers at the lower level. If the tree degree is bounded, each node processes a smaller number of messages and can therefore achieve a higher throughput.

The increase in latency due to the additional network hops can be balanced by the lower time required for a publisher to send a data sample to all the interested subscribers. Moreover, since each entity knows only a bounded number of other entities, the update process becomes roughly manageable in logarithmic time.
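This trade-off can be made concrete with a small back-of-the-envelope sketch (our own illustration, not part of the specification): the flat overlay forces the publisher to send one copy per subscriber, while a bounded-degree tree caps the per-node load and reaches the whole group in a logarithmic number of hops.

```python
def messages_sent_direct(n_subscribers: int) -> int:
    """Flat overlay: the publisher sends one copy of each sample per subscriber."""
    return n_subscribers

def tree_hops(n_subscribers: int, degree: int) -> int:
    """Forwarding hops needed to reach every subscriber when the publisher
    and each subscriber forward to at most `degree` children; this grows
    logarithmically with the group size."""
    reached, frontier, hops = 0, 1, 0      # frontier = nodes able to forward
    while reached < n_subscribers:
        frontier *= degree                 # every frontier node forwards d copies
        reached += frontier
        hops += 1
    return hops

# 10,000 subscribers: the flat overlay makes the publisher send 10,000
# messages, whereas a degree-10 tree caps every node at 10 sends and
# reaches everyone in 4 hops.
assert messages_sent_direct(10_000) == 10_000
assert tree_hops(10_000, degree=10) == 4
assert tree_hops(6, degree=2) == 2
```

The extra hops are the latency price paid for the bounded per-node fan-out, which motivates the deadline adaptation discussed below.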

We abstract this requirement into an additional DDS component, the Interoperable Discovery Service (IDS). This service extends the DDS discovery service with support for building and maintaining the subscriber tree overlay. The service is interoperable in the sense that it manages overlays including entities belonging to different DDS implementations.

An implementation of the interoperable discovery protocol, directly at the level of the interoperability protocol, can be based on one of the several distributed constructions of tree overlays; possibly, each implementation can have its own protocol. The interoperability protocol should be extended with the notion of an Indirection Endpoint, which is


Figure 3: DDS IDS component

an additional endpoint type representing a subscriber also capable of forwarding messages. Primitives to add/remove nodes are also required as a basic interface between implementations. Note that most end-to-end QoS should be adapted to the tree-based diffusion mechanism. For example, the DEADLINE property, specified as the maximum latency from the publisher to a subscriber, has to be translated into more restrictive latency constraints for each level of the tree.
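One possible translation of the end-to-end deadline into per-level constraints is a proportional split. The function below is a hypothetical sketch of that idea, not a mechanism defined by the specification; the weights could be uniform or reflect the expected latency of each tree level.

```python
def split_deadline(deadline_ms: float, level_weights: list) -> list:
    """Split an end-to-end DEADLINE into per-hop budgets proportional to
    the given weights (e.g., the expected latency of each tree level).
    A sketch of the idea, not a normative DDS mechanism."""
    total = sum(level_weights)
    return [deadline_ms * w / total for w in level_weights]

# Equal weights: a 100 ms publisher-to-leaf deadline split evenly over 4 hops.
assert split_deadline(100.0, [1, 1, 1, 1]) == [25.0, 25.0, 25.0, 25.0]
# A WAN hop can be budgeted more generously than two LAN hops.
assert split_deadline(100.0, [3, 1, 1]) == [60.0, 20.0, 20.0]
```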

Tests on a DDS implementation. In order to validate the theoretical analysis of the scalability problem, as well as of the other challenges, we need to develop a real testbed for at least one DDS implementation. To this end, our group participates in the RTI University Program and can therefore use the development suite of RTI's NDDS software to develop applications and tests that analyze the behavior of NDDS in a WAN environment. Existing benchmarking projects for DDS implementations cover only Local Area Network scenarios, which limits the usability of these benchmarks.

Currently we are developing test sessions for NDDS, a proprietary implementation of DDS realized by RTI. These tests analyze the behavior of NDDS in several WAN testbeds and expose possible weaknesses of this DDS implementation.

We want to compare the behavior of NDDS in a WAN environment with some results obtained by Selex-SI in a LAN testbed, and also to show the impact of a real WAN environment on a current implementation of DDS.

Consequently we are working on different aspects:

• We are identifying a set of throughput measurements, for example transfer rate and delivery rate, and latency measurements, e.g., the distribution of message latency and event diffusion time. These measurements will be repeated and reported for different message data sizes and event rates. With these tests we want to show the behavior of NDDS when it is used in a large-scale system, where the number of subscribers to a topic can be huge, and when it is under stress conditions.

• We are identifying a testbed characterized by three different scales: national,


continental, and world-wide. Each scale has a different number of nodes and different geographical distances among them.

• We are identifying several parameters related to the RTI product and to the working scenario: for example, the QoS related to reliability (best-effort or reliable communication), the message data size, and the number of subscribers. A further relevant working parameter is the event rate (number of publications of new events per second).

We are currently developing throughput and latency test applications using the NDDS development suite. We want to use these applications to estimate the measurements described above and to aggregate the data related to each subscriber in order to define the global latency of event diffusion.

PlanetLab. The testbed that we want to use in these tests is PlanetLab [2], an open network composed of 700 nodes located in 300 sites, supporting research by both universities and industry. Using PlanetLab gives us a real testbed where workload, latency, and other network parameters are similar to those of a production environment. Possible critical behaviors of NDDS could validate our theoretical analysis and show that the current standard may need to be modified and enhanced.

In a workshop organized by Selex-SI and the Università di Roma “La Sapienza”, we introduced PlanetLab and its ability to support the testing of advanced prototypes of distributed applications in a wide-scale, real setting. During this workshop we presented a demo of a network monitoring application built on an open-source DDS implementation,

Ocera Real-Time Ethernet (ORTE). The publisher side of this application monitors the PlanetLab node where it is running and sends subscribers an alert when the CPU is overloaded. In the same workshop we also showed some simulation results of a basic clock synchronization protocol running on PlanetLab; these results are reported in Section 5.2.

5 Challenges for QoS implementation in WAN settings

In the following we present the fundamental QoS defined in the DDS specification. These properties are a key distinguishing feature of DDS compared to other pub/sub middleware standards, but they can pose serious challenges when implemented in a WAN environment. The principal reason is the lack of control, in a WAN, over latency, workload, and other network parameters. We present three broad blocks of QoS defined in the DDS standard and, in the next sections, analyze challenges and solutions for these classes of problems.

Data Availability

The DDS provides the following QoS policies for controlling data availability.


• The DURABILITY QoS policy provides control over the lifetime of the data written on the GDS.

• The LIFESPAN QoS policy controls the interval of time during which a data sample is valid.

• The HISTORY QoS policy provides a means to control the number of data samples that have to be kept available for the readers.
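The HISTORY policy with "keep last" semantics behaves like a bounded buffer per instance. The toy model below is our own sketch (class and method names are ours, not the normative DDS API) showing how only the most recent samples survive for late-joining or slow readers.

```python
from collections import deque

class HistoryCache:
    """Toy model of the HISTORY QoS with KEEP_LAST semantics: only the most
    recent `depth` samples are kept for the readers. A sketch for
    illustration; real DDS caches track instances, keys, and lifespans."""

    def __init__(self, depth: int):
        self._samples = deque(maxlen=depth)

    def write(self, sample):
        self._samples.append(sample)    # oldest sample is evicted when full

    def read(self):
        return list(self._samples)

cache = HistoryCache(depth=3)
for s in range(5):                      # write samples 0..4
    cache.write(s)
assert cache.read() == [2, 3, 4]        # only the last 3 samples survive
```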

Data Delivery

The DDS provides several QoS policies which control how data is delivered and who is allowed to write a specific topic. More specifically, the following QoS policies are defined.

• The RELIABILITY QoS policy allows applications to control the level of reliability associated with data diffusion.

• The DESTINATION_ORDER QoS policy controls the order of changes made by publishers to an instance of a given topic. Specifically, the DDS allows changes to be ordered according to either the source or the destination time-stamp.

• The OWNERSHIP QoS policy controls the number of writers permitted for a given topic.

Data Timeliness

The DDS provides a set of QoS policies which control the timeliness properties of distributed data. Specifically, the supported QoS are described below.

• The DEADLINE QoS policy allows applications to define the maximum inter-arrival time for data.

• The LATENCY_BUDGET QoS policy provides a means for the application to communicate to the middleware the level of urgency associated with a data communication.
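The DEADLINE policy, as a maximum inter-arrival time, can be checked on the subscriber side by comparing consecutive arrival times. The sketch below is a hypothetical, simplified model (names are ours; real implementations use middleware timers and listener callbacks rather than this kind of polling).

```python
class DeadlineMonitor:
    """Toy subscriber-side check of the DEADLINE QoS: count a miss whenever
    two consecutive samples arrive more than `deadline` seconds apart.
    An illustrative sketch, not a DDS API."""

    def __init__(self, deadline: float):
        self.deadline = deadline
        self.last_arrival = None
        self.missed = 0

    def on_sample(self, arrival_time: float):
        if (self.last_arrival is not None
                and arrival_time - self.last_arrival > self.deadline):
            self.missed += 1    # here the middleware would notify a listener
        self.last_arrival = arrival_time

m = DeadlineMonitor(deadline=1.0)
for t in [0.0, 0.5, 1.2, 3.0]:   # the 1.2 -> 3.0 gap exceeds the deadline
    m.on_sample(t)
assert m.missed == 1
```

In a WAN, where inter-arrival jitter is large and clocks are not shared, even this simple check already depends on the clock synchronization issues discussed in Section 5.2.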

In this section we analyze the challenges hidden behind the realization of an interoperable DDS and propose some basic solutions. The challenges are identified according to two classes of requirements addressed by the DDS specification:

• Data availability: requirements for QoS properties related to the reliable delivery of events, which require persistently storing messages for retransmission and data objects for surviving subscriber failures: data samples must be preserved and might survive their publishers and subscribers. Current DDS implementations only provide each subscriber with a copy of the data samples it is interested in, but a DDS implementation should include a component capable of storing copies of data objects and algorithms to keep them consistent. We are not currently working on this component.


• Timeliness and ordering: requirements for QoS properties related to deadline-constrained message delivery and to controlling the order of changes made by publishers to an instance of a given topic. The enforcement of these policies is based on the capability of the DDS implementation to determine the time elapsed by a message from source to destination. This requires time-stamps and clock-synchronization algorithms in order to establish a correct order among data samples. This component is analyzed in Section 5.2.

We abstract the component addressing the timeliness and ordering requirements into a service architecture. This service encapsulates a specific solution that may introduce specific modifications into the interoperability protocol.

5.1 Timeliness and Ordering

The enforcement of QoS policies regarding data timeliness, such as DEADLINE and LATENCY_BUDGET, or the presence of the DESTINATION_ORDER policy, is based on the capability of the DDS implementation to determine the time elapsed by messages from source to destination. This obviously requires a logical global clock, which in practice means a synchronization mechanism between the clocks of the entities composing the DDS.

Reliable clock synchronization is a well-studied problem in distributed systems, and several solutions have been proposed. The challenges posed by the DDS scenario are the possible lack of a time reference accessible by all nodes and the possibly large number of nodes in the system. Moreover, as far as this report is concerned, a further challenge is how to achieve clock synchronization through a standard protocol involving different implementations of DDS.

The simplest solution is to organize the DDS application into domains, where each domain corresponds to an isolated and self-contained environment in which the presence of an intra-domain synchronization technique can be assumed. For example, we can assume that a domain uses a single DDS implementation distributed in a controlled network, where either a common protocol such as NTP or a proprietary one can run. In any case, a time reference will exist for each domain, guaranteeing bounded clock drift for each domain participant. The DDS should also have an inter-domain synchronization component that runs a clock synchronization algorithm between the time references of each domain. Inter-domain synchronization should be able to face failures of the time references, so it must interact with the intra-domain component in order to replace failed nodes. Intra- and inter-domain synchronization together constitute the Interoperable Clock Synchronization Service (ICSS) we consider in our extended architecture.

As an alternative, implementation as an extension of the interoperability protocol requires adding specific message types. In this case the synchronization protocol can be based on a pluggable transport, allowing the use of multicast or proprietary protocols where available.

We are currently working on a clock synchronization algorithm that can be used for inter-domain or intra-domain protocols. We believe that solving the clock synchronization problem is a basic building block for addressing ordering and timeliness issues. In the next section


we extend the analysis of clock synchronization algorithms and define future work on this topic.

5.2 Clock Synchronization

Clock synchronization is a fundamental problem related to the requirement of ordering properties, in particular to the DESTINATION_ORDER policy. Several solutions as extensions of the interoperability protocol were presented in Section 5.1. In this section we focus on innovative distributed clock synchronization algorithms. The necessity for clock synchronization arises from the drift experienced by the internal clocks of computers belonging to a network: even if initially set accurately, real clocks will differ after some amount of time because they count time at slightly different rates. In a centralized system the solution is trivial: a master server dictates the system time. In a distributed system the problem is harder, because a global time is not easily known. The most used clock synchronization solution on the Internet is the Network Time Protocol (NTP), a client-server architecture based on UDP message passing. To synchronize its clock using this protocol, a user needs to be aware of one or more NTP servers.

Online lists of publicly accessible NTP servers can be used to find a nearby NTP server.

To avoid the reliability problem induced by this centralized approach, a user can choose several NTP servers, improving the reliability of the system in case one of them becomes unreachable or its clock unreliable. In large-scale systems, or in a network-centric approach, the presence of central servers is neither suitable nor advisable.

This is because the presence of an NTP daemon cannot be guaranteed in every process involved in the distributed computation, and because servers can become bottlenecks.

We want to synchronize the physical clocks of the processes to a common value, exploiting the auto-synchronization property shown by networks of coupled oscillators. In these networks the oscillators cooperatively agree on the same clock and, after a transient, start to oscillate synchronously.

The phenomenon of collective synchronization makes enormous systems of oscillators spontaneously lock to a common frequency, despite the inevitable differences in the natural frequencies of the individual oscillators. Biological examples include networks of pacemaker cells in the heart, congregations of synchronously flashing fireflies, and crickets that chirp in unison. There are also many examples in physics and engineering, such as arrays of lasers or microwave oscillators. A fruitful approach was pioneered by Winfree [20], who formulated the problem in terms of a population of interacting oscillators with weak coupling and nearly identical oscillators, and introduced a model of coupled oscillation. Using numerical simulations and analytical approximations, Winfree discovered that such oscillator populations can exhibit the temporal analogue of a phase transition. When the spread of natural frequencies is large compared to the coupling, the system behaves incoherently, with each oscillator running at its natural frequency. As the spread is decreased, the incoherence persists until a certain threshold is crossed; then a small cluster of oscillators suddenly freezes into synchrony.
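This threshold behavior can be reproduced with the standard mean-field coupled-oscillator (Kuramoto-type) model. The simulation below is our own sketch with arbitrary parameter values, not the algorithm under development: with weak coupling the order parameter r stays small (incoherence), while above the threshold the population locks and r approaches 1.

```python
import cmath
import math
import random

def simulate_oscillators(n, coupling, steps=2000, dt=0.01, seed=1):
    """Euler integration of the mean-field coupled-oscillator model:
        dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
    Returns the final order parameter r in [0, 1]; r near 1 means the
    oscillators have locked to a common phase."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.5) for _ in range(n)]   # spread of natural freqs
    for _ in range(steps):
        # Mean field: r * e^{i psi} = (1/N) * sum_j e^{i theta_j}
        z = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

# Weak coupling: the population stays incoherent.
assert simulate_oscillators(50, coupling=0.1) < 0.5
# Strong coupling: the population freezes into synchrony.
assert simulate_oscillators(50, coupling=4.0) > 0.8
```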

Much work on coupled oscillators has considered globally connected oscillator networks, where each oscillator is coupled with every other oscillator, e.g., [11, 18].


Some research has been done on non-standard topologies: Satoh [17] performed experiments comparing networks of oscillators with two-dimensional-lattice and random-graph topologies, noticing that the system becomes globally synchronous much more effectively in the random case. In fact, Matthews et al. [14]

note that the coupling strength required for global frequency locking in a random network is the same as that required in the fully interconnected case. Consequently, we can use a random network of coupled oscillators without particular problems as regards cooperative synchronization. A random network of processes can be obtained through several group membership protocols, such as those proposed in [1, 19]. These protocols use a proactive approach to generate a graph topology similar to a random graph.

They simply change, randomly and periodically, the endpoints of the edges in the graph. We want to use this proactive approach to keep the system synchronized despite the drift of the internal clocks of the computers connected to the system. In fact, modern computers have quartz clocks, and the drift among different computers is usually on the order of one second per day. This drift can lead the system to an incoherent state if the synchronization procedure is not repeated continuously.

The behavior is similar to that of biological systems, where continuous synchronization is performed by fireflies or by the bright cells in seaweeds.

In these systems, removing the coupling rapidly leads to an incoherent state. Consequently, we chose a proactive approach to guarantee the coherence of the system; a similar one is presented in [11, 18]. Our system must cooperate to achieve clock synchronization, and an evolutionary-like algorithm can be the right choice to reach this objective. In detail, similarly to biological systems, when some nodes agree on the same clock the protocol increases the coupling strength of that value, i.e., the probability that another process agrees on it. Achieving clock synchrony in wide-area networks has not been a widely studied topic: several works by Dolev et al. [5, 4, 6, 9] propose and analyze self-synchronization protocols. They follow a deterministic approach, using the synchronization properties of pulse oscillators, the simplest model representing the behavior of an oscillator.

Most distributed systems encountered in practice are asynchronous, i.e., they do not guarantee a bound on message communication delays. But traditional deterministic fault-tolerant clock synchronization algorithms, such as those of [13, 12], assume bounded communication delays; thus, they cannot be directly used to synchronize clocks in asynchronous systems. This problem is also present in the self-synchronization protocol presented in [4]. Moreover, these protocols typically require the transmission of broadcast messages each time the clocks are synchronized, so they cannot easily scale to larger networks. Probabilistic clock synchronization was proposed in [3] as a means to synchronize clocks in the presence of unbounded communication delays.

However, [3] discusses the use of probabilistic remote clock reading only to achieve external clock synchronization. The goal of our research is first to provide a probabilistic clock synchronization protocol, and then to achieve fault-tolerant internal clock synchronization using a biological clock synchronization paradigm.
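As background, the remote clock reading at the core of the probabilistic approach of [3] can be sketched as follows. This is a simplified, hypothetical rendering (function and variable names are ours): assuming symmetric delays, the client estimates the server's offset from one request/reply exchange, with an error bounded by half the round-trip time; the probabilistic protocol then retries until the round-trip is short enough for the desired precision.

```python
def remote_clock_offset(t_request, t_server, t_reply):
    """Estimate a remote clock's offset in the style of Cristian's remote
    clock reading: t_request and t_reply are local client timestamps,
    t_server is the timestamp in the server's reply (all in seconds).
    Assuming symmetric delays, the error is bounded by half the RTT."""
    round_trip = t_reply - t_request
    estimated_server_now = t_server + round_trip / 2
    offset = estimated_server_now - t_reply
    error_bound = round_trip / 2
    return offset, error_bound

# Client sends at 10.000 s, server stamps 12.010 s, reply arrives at 10.020 s:
offset, err = remote_clock_offset(10.000, 12.010, 10.020)
assert abs(offset - 2.0) < 1e-9   # server clock is ahead by about 2 s
assert abs(err - 0.010) < 1e-9    # uncertainty = RTT / 2 = 10 ms
```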

We want to propose architectural and algorithmic solutions for every challenge presented in this report.

Preliminary results were obtained through mathematical simulation, using Octave, a MatLab-like software, and applying coupled-oscillator equations. Results obtained


using random-graph and 2D-lattice topologies are shown in Figure 4.

Figure 4: Octave simulations. (a) 2D lattice; (b) random graph.

These graphs were produced in an ideal, failure-free setting where propagation and transfer delays are negligible. These simulations aim at showing that internal clock synchronization can be obtained also in weakly connected topologies (random graphs), and not only in cliques (the topology most used in clock synchronization papers).

Other results were obtained using PlanetLab and an alpha release of the clock synchronization application. This application, developed in the C programming language, synchronizes the virtual clocks of PlanetLab nodes with a precision of hundreds of milliseconds.

The behavior of the application, running on 8 nodes, is shown in Figure 5.

Figure 5: Clock Synchronization on PlanetLab

These preliminary results, obtained in ideal and realistic settings, show that scalable internal clock synchronization in a dynamic Internet-like environment can be obtained using an approach based on the theory of coupled oscillators.

Future work on clock synchronization. Concerning clock synchronization, we are working on several parallel aspects related to theoretical studies, simulations


of algorithms, and real testbeds. This work should produce an application usable in a real Internet-like environment. First, we are studying the theoretical problem of clock synchronization, and we want to prove some basic properties in settings based on assumptions such as synchronous requests and timed asynchronous channels. These assumptions allow us to work on properties that would be very difficult to analyze in a real environment.

Concurrently, we are developing a testbed based on a Java simulator, PeerSim [8], in order to analyze the correctness, convergence and scalability of our algorithms in non-ideal settings. In these simulations we can inject, at different phases, faults, different delay distributions, or attacks, and study the behavior of our protocols in a controlled environment. We can thus isolate potential weaknesses of a protocol (for example, related to scalability or adversarial attacks) and study different solutions.
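As an illustration of the kind of protocol behavior these simulations explore, the following is a minimal sketch (not our actual protocol) of push-pull averaging gossip over clock estimates; node count, round count and the synchronous-round model are simplifying assumptions.

```python
import random

def gossip_average_round(values, rng):
    """One synchronous round of push-pull averaging gossip: every node
    contacts a uniformly random peer and both adopt the average of their
    current clock estimates. Pairwise averaging preserves the global sum,
    so repeated rounds drive every estimate towards the initial mean."""
    n = len(values)
    for i in range(n):
        j = rng.randrange(n)
        avg = (values[i] + values[j]) / 2.0
        values[i] = values[j] = avg
    return values

rng = random.Random(42)
clocks = [rng.uniform(0.0, 1000.0) for _ in range(64)]  # initial clock offsets
target = sum(clocks) / len(clocks)                      # the value they converge to
for _ in range(60):
    gossip_average_round(clocks, rng)
# every estimate is now very close to the initial mean `target`
```

In a simulator such as PeerSim, the interesting experiments replace the uniform peer choice with a membership protocol and perturb the rounds with message delays, churn and faulty values.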

Finally, we are developing an application, currently written in C++, that we will test on PlanetLab. This application and testbed should validate our studies in a real working scenario and produce an application useful in Internet-based, large-scale distributed systems.

References

[1] A. Allavena, A. Demers, and J. Hopcroft. Correctness of a gossip-based membership protocol. In 24th ACM Symposium on the Principles of Distributed Computing (PODC 2005), 2005.

[2] PlanetLab Consortium. Planetlab. http://www.planet-lab.org/.

[3] F. Cristian. A probabilistic approach to distributed clock synchronization. Distributed Computing, 3:146–158, 1989.

[4] A. Daliot, D. Dolev, and H. Parnas. Self-stabilizing pulse synchronization inspired by biological pacemaker networks. In Sixth Symposium on Self-Stabilizing Systems, pages 32–48, 2003.

[5] A. Daliot, D. Dolev, and H. Parnas. Linear time byzantine self-stabilizing clock synchronization. Technical Report TR2003-89, School of Engineering and Computer Science, The Hebrew University of Jerusalem, December 2003.

[6] S. Dolev. Possible and impossible self-stabilizing digital clock synchronization in general graphs. Journal of Real-Time Systems, 12(1):95–107, 1997.

[7] Object Management Group. Data distribution service for real-time systems spec- ification, 2002.

[8] PeerSim Group. PeerSim: a peer-to-peer simulator. http://peersim.sourceforge.net/.

[9] T. Herman and S. Ghosh. Stabilizing phase-clocks. Information Processing Letters, 5(6):585–598, 1994.


[10] Sun Microsystems Inc. Java Message Service API, rev. 1.1, 2002.

[11] Y. Kuramoto. Chemical Oscillations, Waves and Turbulence, chapter 5. Springer-Verlag, 1984.

[12] L. Lamport and P. M. Melliar-Smith. Synchronizing clocks in the presence of faults. Journal of the ACM, 32(1):52–78, January 1985.

[13] J. Lundelius Welch and N. Lynch. A new fault-tolerant algorithm for clock synchronization. Information and Computation, 77(1):1–36, 1988.

[14] P.C. Matthews, R.E. Mirollo, and S.H. Strogatz. Dynamics of a large system of coupled nonlinear oscillators. Physica D, 52:293, 1991.

[15] Object Management Group. CORBA event service specification, version 1.1. OMG Document formal/2000-03-01, 2001.

[16] Object Management Group. CORBA notification service specification, version 1.0.1. OMG Document formal/2002-08-04, 2002.

[17] K. Satoh. Computer experiment on the cooperative behaviour of a network of interacting nonlinear oscillators. Journal of the Physical Society of Japan, 58:2010, 1989.

[18] S.H. Strogatz and R.E. Mirollo. Phase-locking and critical phenomena in lattices of coupled nonlinear oscillators with intrinsic frequencies. Physica D, 31:143–168, 1988.

[19] S. Voulgaris, D. Gavidia, and M. van Steen. CYCLON: Inexpensive membership management for unstructured P2P overlays. Journal of Network and Systems Management, 13(2):197–217, June 2005.

[20] A.T. Winfree. Biological rhythms and the behavior of populations of coupled oscillators. Journal of Theoretical Biology, 16:15–42, 1967.


A List of Activities

1. Analysis of DDS and RTPS specification

2. Definition of a working environment for large-scale DDS applications, in collaboration with Selex-SI, based on the Network-Centric approach developed in NATO documents.

• Production of a scenario characterized by a set of actors and by the QoS required by the participating entities and the topics used

3. Identification of several challenges related to

(a) Scalability

• Identification of an additional DDS architectural component, the Interoperable Discovery Service, and an RTPS endpoint, the Indirection Endpoint

• Development of a testbed for DDS implementations in WAN settings

– Definition of a set of interesting measurements, working parameters and application settings.

– Participation in the RTI University Program and consequent access to their NDDS development suite

– Development of test applications based on NDDS to analyze its behavior in dynamic, world-wide scale settings

– Participation in the PlanetLab Consortium in order to access the PlanetLab network

– Analysis of the first results obtained by preliminary test applications running on national nodes of the PlanetLab network

(b) Implementation of QoS policies in a WAN environment, in particular:

• Data Availability: identification of an architectural component capable of storing data objects to enforce this policy. We are not currently working on this component.

• Timeliness and Ordering: identification of the requirements on time-stamps and clock-synchronization algorithms needed to enforce these policies.

4. Analysis of solutions for a Clock Synchronization protocol, a basic building block to realize Data Ordering

(a) Analysis of the Clock Synchronization problem in Wide-Area Large-Scale Distributed Systems

(b) Identification of possible solutions based on gossip and fully-local mechanisms

(c) Analysis of the first results obtained through mathematical simulation and a test application running on PlanetLab nodes


5. Organization of a workshop on PlanetLab by Selex-SI and the University of Rome “La Sapienza”, where speakers

(a) introduced a future vision of large-scale distributed systems

(b) introduced PlanetLab and its ability to support the testing of advanced prototypes of distributed applications in a wide-scale real setting

(c) presented a demo of a network monitoring application built on an open-source DDS implementation

(d) showed some simulation results of a basic clock synchronization protocol running on PlanetLab
