
Evaluation of Unstructured Overlay Maintenance Protocols under Churn

R. Baldoni S. Bonomi A. Rippa L. Querzoni S. Tucci Piergiovanni A. Virgillito

Dipartimento di Informatica e Sistemistica

Universit`a di Roma “La Sapienza”

Via Salaria 113, 00198 Roma, Italia

Abstract

An overlay network is formed on top of – and generally independently from – the underlying physical computer network, by the peers (nodes) of a P2P system. The dynamics of peers is taken into account by devising appropriate overlay maintenance protocols that handle the joining and leaving of peers in the overlay. Due to the need for scaling in the number of nodes, overlay maintenance protocols have been simulated only in environments showing a very restricted behavior with respect to the possible concurrent and interleaved execution of join/leave operations.

In this paper we compare two overlay maintenance protocols well suited to unstructured P2P systems, namely SCAMP and Cyclon, in an event-based simulation setting including concurrent and interleaved join and leave operations as well as variable message transfer delay. This simulation setting reveals surprising results for both protocols. In particular, under a continuous and concurrent replacement of nodes, permanent partitioning of the overlay arises after a very small number of join/leave operations.

1. Introduction

P2P systems are at present a widespread technology as well as a hot research topic. A P2P system is a highly dynamic distributed system in which nodes perpetually join and leave. Owing to these characteristics, a P2P system can reach a potentially unbounded scale with a transient population of nodes.

Overlay maintenance is a fundamental problem in peer-to-peer (P2P) systems. An overlay is a logical network built on top of – and generally independently from – the underlying physical computer network, by the peers (nodes) of the P2P system. Any overlay should exhibit a topology able to support a P2P application in an efficient and scalable manner while maintaining a satisfactory level of reliability.

Unstructured overlay networks have emerged as a viable solution to settle such issues [3, 4, 5, 8, 10], effectively supporting large-scale dissemination and flooding-based content searching. An unstructured overlay shows good global properties such as connectivity (for reliability) and low diameter and constant degree (for scalability) without relying on a deterministic topology. To cope with the inherent P2P dynamics, however, a so-called overlay maintenance protocol (OMP) is needed. The main goal of any OMP is to arrange the overlay so as to preserve, as much as possible, its desired global properties over time, despite the continuous and interleaved process of arrival/departure of nodes, i.e., churn. Amongst the most popular OMPs that do not use a central server we cite [3, 4, 5, 10].

All the above cited works include an experimental evaluation of the protocols in which basic topological properties of the overlay are evaluated, such as resilience to failures (reliability) and distribution of node degree (scalability). However, to the best of our knowledge, the conditions under which these experiments are run cover only a limited range of possible dynamic behaviors. For example, one typical experimental scenario (as considered in [5, 7, 10]) is divided into two phases: first, all nodes in the system join and, successively, a portion of the nodes leaves the system simultaneously. Moreover, each phase is divided into rounds and each node executes at most one join/leave operation atomically per round, i.e., the execution of two operations in the same round cannot interleave. This type of experiment is intended to simplify the simulation in order to scale it in the number of processes (such simulations usually reach 100,000 nodes) and then to evaluate, for example, the portion of nodes that can simultaneously leave the overlay without partitioning the overlay topology.

We believe that a further step is required to analyze the characteristics of OMPs in scenarios that admit more dynamic behaviors, actually mimicking the dynamics occurring in realistic P2P environments. The challenge is to check whether the reliability and scalability properties are still preserved in this more severe setting. For example, a high interleaving between joins and leaves could permanently spoil the overlay connectivity, leading to a higher probability of node isolation and partitioning. Moreover, unpredictable message delays (typical in a wide-area network) can provoke interleaving between messages sent in different rounds of a protocol, causing inconsistencies in the views at nodes that, again, have a negative effect on the overlay properties.

In this paper we present the results of a first attempt at evaluating OMP behavior under churn. In particular, the focus is on evaluating the overall robustness of the protocols over time despite continuous overlay node replacements, i.e., new nodes join the system while others leave. We chose two particular protocols, namely SCAMP [5] and Cyclon [10], as representatives of two different approaches to overlay maintenance: reactive maintenance, where the protocol undertakes actions to rearrange the overlay only upon the arrival of nodes, and proactive maintenance, where each node continuously gossips membership information (i.e., its view) among its logical neighbors. Proactive maintenance protocols offer better resilience to high churn, in terms of concurrent join/leave operations per time unit, at the price of a persistent activity of nodes that induces a constant overhead on the network. Reactive maintenance protocols are better suited to environments showing a "moderate" number of concurrent operations per time unit, where they eliminate the gossip overhead in periods of inactivity.

The protocols were implemented in the same simulation environment, namely Peersim [1]. Differently from other works using the same tool [10, 7], where simulations were performed following a round-based approach, here we use an event-based approach, in order to introduce aspects such as join/leave interleaving and unpredictable message delays. All such elements induce a high degree of concurrency in a simulation run, which was not present in the round-based simulations. This shift forces a reduction in the size of the P2P systems that can be analyzed (on the order of thousands of nodes) due to the enormous resource consumption. Nevertheless, P2P systems formed by thousands of nodes are big enough to point out the main characteristics of each OMP under churn.

Starting from an ideal P2P overlay network, the results of the simulations show the difficulty of the tested protocols in facing continuous node replacement. Permanent partitioning starts to occur when only a low percentage of the nodes forming the initial P2P network has been replaced by new joining nodes. This result is surprising when compared to churn-free (i.e., no join/leave interleaving) simulations of the same protocols, which showed partitioning only when a high percentage of nodes left the system concurrently. Though, as expected, the proactive approach of Cyclon proves more resilient to churn than SCAMP, its ability to recover full connectivity strictly depends on the frequency of gossiping. Concerning SCAMP, the resistance to churn depends on the percentage of initial nodes that remain in the system. If this percentage is below 90%, the churn process tends to disaggregate the overlay topology quite early, leaving connected only a very small fraction of the nodes in the system.

We believe that this work, though not intended to represent a comprehensive simulation study, clearly indicates that the impact of churn on OMPs deserves further study.

2. Protocols Description

The common characteristic of all OMPs is that each node maintains a limited number of links to other nodes in the system. We call this set of links the view of the node. The views should be such that the graph resulting from interpreting links in the views as arcs and nodes as vertices is connected. OMPs differ among themselves in the techniques they use for building and maintaining the views. We consider decentralized OMPs, i.e., protocols that do not require central coordination. In this Section we describe in detail the two protocols that are the subject of our study.

2.1. SCAMP

SCAMP [5] is a gossip-based protocol whose main innovative feature is that the size of the view is adaptive w.r.t. the a-priori unknown size of the whole system. More precisely, the view size in SCAMP is logarithmic in the size of the whole system. The protocol consists of mechanisms for nodes to join and leave, and to recover from isolation. The following is a brief description of these mechanisms.

Data Structures. Each node maintains two lists: a PartialView of nodes it sends messages to, and an InView of nodes it receives messages from, namely nodes that contain its node-id in their PartialViews.

Join Algorithm. New nodes join the overlay by sending a join request to an arbitrary member, called a contact. They start with a PartialView consisting of just their contact. When a node receives a new join request, it forwards the new node-id to all members of its own PartialView. It also creates c additional copies of the new join request (c is a design parameter that determines the proportion of failures tolerated) and forwards them to randomly chosen nodes in its PartialView. When a node n receives a forwarded join request, provided the subscription is not already present in its PartialView, it integrates the new node in its PartialView with probability p = 1/(1 + |PartialView_n|). If it decides not to keep the new node, it forwards the join request to a node randomly chosen from its PartialView. If a node i decides to keep the join request of node j, it places the id of node j in its PartialView. It also sends a message to node j telling it to keep the node-id of i in its InView.
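
The probabilistic integration rule above can be sketched as follows. This is a minimal Python sketch, not the paper's Peersim code; `integrate_join` and `scamp_join` are hypothetical helper names, and views are modeled as plain sets:

```python
import random

def integrate_join(nodes_views, receiver, new_id, rng, max_hops=1000):
    """SCAMP forwarded-join rule (sketch): a node keeps the new subscription
    with probability p = 1/(1 + |PartialView|); otherwise it forwards the
    request to a random member of its PartialView."""
    current = receiver
    for _ in range(max_hops):
        view = nodes_views[current]
        if new_id not in view and rng.random() < 1.0 / (1 + len(view)):
            view.add(new_id)          # current keeps a link to the newcomer
            return current            # (it would also notify new_id's InView)
        current = rng.choice(sorted(view))  # forward the request onward
    raise RuntimeError("join request never integrated")

def scamp_join(nodes_views, new_id, contact, c, rng):
    """New node joins via `contact`: its PartialView starts as {contact};
    the contact fans the request out to its whole view plus c extra copies."""
    nodes_views[new_id] = {contact}
    targets = list(nodes_views[contact]) + [
        rng.choice(sorted(nodes_views[contact])) for _ in range(c)
    ]
    return [integrate_join(nodes_views, t, new_id, rng) for t in targets]

rng = random.Random(42)
views = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # tiny bootstrap overlay
keepers = scamp_join(views, new_id=3, contact=0, c=1, rng=rng)
assert all(3 in views[k] for k in keepers)
```

Note how the view-size-dependent probability makes well-connected nodes less likely to accept the newcomer, which is what drives the logarithmic view growth.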

Leave Algorithm. The leaving node orders the ids in its PartialView as i(1), i(2), ..., i(l) and the ids in its InView as j(1), j(2), ..., j(l). The leaving node informs nodes j(1), j(2), ..., j(l − c − 1) to replace its id with i(1), i(2), ..., i(l − c − 1) respectively (wrapping around if (l − c − 1) > l). It informs nodes j(l − c), ..., j(l) to remove it from their lists without replacing it with any id.
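
The replacement mapping above can be sketched as follows, assuming the views are kept as ordered lists. This is an illustrative Python sketch (hypothetical data layout, not the paper's code):

```python
def scamp_leave(nodes_views, in_views, leaver, c):
    """SCAMP leave rule (sketch): the first l-c-1 InView members replace the
    leaver's id with successive PartialView members (wrapping around if
    needed); the remaining c+1 members simply drop it."""
    pv = nodes_views[leaver]              # i(1)..i(l)
    iv = in_views[leaver]                 # j(1)..j(l)
    n_replace = max(len(iv) - c - 1, 0)
    for k, j in enumerate(iv):
        nodes_views[j].remove(leaver)     # every InView member drops the leaver
        if k < n_replace:
            replacement = pv[k % len(pv)]     # wrap around the PartialView
            if replacement != j and replacement not in nodes_views[j]:
                nodes_views[j].append(replacement)
    del nodes_views[leaver], in_views[leaver]

# node 0 leaves; nodes 2 and 3 have 0 in their views (0's InView)
views = {0: [1, 2], 1: [2], 2: [0], 3: [0]}
in_views = {0: [2, 3]}
scamp_leave(views, in_views, leaver=0, c=0)
assert views == {1: [2], 2: [1], 3: []}   # node 2 rewired to 1, node 3 just drops 0
```

The point of the scheme is that most of the leaver's incoming links are rewired rather than lost, so the in-degrees of the remaining nodes stay roughly balanced.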

Recovery from isolation. A node becomes isolated when all nodes containing its identifier in their PartialViews have either failed or left. In order to reconnect such nodes, a heartbeat mechanism is used. Each node periodically sends heartbeat messages to the nodes in its PartialView. A node that has not received any heartbeat message in a long time re-joins through an arbitrary node in its PartialView.

2.2. Cyclon

Cyclon [10] follows a proactive approach, where nodes perform continuous periodical gossiping with their neighbors in the overlay. The periodical gossiping phase (named "shuffle cycle") aims at randomly mixing the views of neighbor nodes. Clearly, joins are managed in a reactive manner, while voluntary departures of nodes are handled like failures (no leave algorithm is provided). A failure detection mechanism is provided in order to clean the views of failed nodes.

Data Structures. Each node maintains only a single view of nodes it can gossip with (i.e., it corresponds to SCAMP's PartialView). The size of the view is fixed and can be set arbitrarily. Each node in the view is associated with a local age, indicating the number of shuffle cycles during which the node has been present in the view.

Join Algorithm. A node A joins by choosing one node (the contact) at random among those already present in the network. The contact then starts a set of independent random walks. The number of random walks is equal to the view size, while the number of steps per random walk is a parameter of the algorithm. When each random walk terminates, the last visited node, say B, adds A to its view by replacing one node, say C, which is added to A's view using an empty slot.
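
The random-walk join can be sketched as follows (an illustrative Python sketch based on the description above, with a hypothetical `cyclon_join` helper; it is not the paper's Peersim code):

```python
import random

def cyclon_join(views, new_id, contact, view_size, walk_len, rng):
    """Cyclon join (sketch): the contact launches `view_size` independent
    fixed-length random walks; the node where each walk ends swaps one of
    its view entries for the newcomer, and the displaced entry fills an
    empty slot in the newcomer's view."""
    views[new_id] = []
    for _ in range(view_size):
        node = contact
        for _ in range(walk_len):              # fixed-length random walk
            if not views[node]:
                break
            node = rng.choice(views[node])
        if node == new_id or not views[node] or new_id in views[node]:
            continue                           # nothing sensible to swap here
        displaced = rng.choice(views[node])    # the entry C replaced by A
        views[node][views[node].index(displaced)] = new_id
        if displaced != new_id and displaced not in views[new_id]:
            views[new_id].append(displaced)    # C fills a slot in A's view

rng = random.Random(7)
views = {i: [(i + 1) % 6, (i - 1) % 6] for i in range(6)}   # bidirectional ring
cyclon_join(views, new_id=6, contact=0, view_size=3, walk_len=4, rng=rng)
assert views[6] and any(6 in v for k, v in views.items() if k != 6)
```

The swap keeps the total number of links constant, which is how Cyclon preserves a fixed view size even under joins.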

Shuffle Algorithm. The shuffle algorithm is executed periodically at each node. A shuffle cycle is composed of three phases. In the first phase a node A, after increasing the age of all the nodes in its view, chooses as its shuffle target, B, the node with the highest age in its view. Then, A sends to B a shuffle message containing l − 1 nodes randomly chosen from A's view, plus A itself. In the second phase, B, upon receiving the shuffle message from A, replaces l − 1 nodes in its view (chosen at random) with the l nodes received from A and sends them back to A. In the final phase A replaces the nodes previously sent to B with those received from it. Overall, the result of one shuffle cycle is an exchange of l links between A and B. The link initially present from A to B is also reversed after the shuffle.
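
The three phases can be sketched as a single synchronous exchange (a minimal Python sketch of the cycle described above; message delays and the concurrency issues discussed next are deliberately omitted):

```python
import random

def shuffle(views, ages, a, l, rng):
    """One Cyclon shuffle cycle initiated by node `a` (sketch).
    views: id -> list of neighbor ids; ages: id -> {neighbor: age}."""
    # Phase 1: age the view and pick the oldest neighbor as the target.
    for n in views[a]:
        ages[a][n] = ages[a].get(n, 0) + 1
    b = max(views[a], key=lambda n: ages[a][n])
    candidates = [n for n in views[a] if n != b]
    sent = rng.sample(candidates, k=min(l - 1, len(candidates)))
    sent.append(a)                        # A sends l-1 entries plus itself
    # Phase 2: B swaps l-1 random entries for the received ones.
    replaced = rng.sample(views[b], k=min(l - 1, len(views[b])))
    views[b] = [n for n in views[b] if n not in replaced]
    views[b] += [n for n in sent if n != b and n not in views[b]]
    for n in views[b]:
        ages[b].setdefault(n, 0)
    # Phase 3: A replaces what it sent (and its link to B) with B's entries.
    views[a] = [n for n in views[a] if n not in sent and n != b]
    views[a] += [n for n in replaced if n != a and n not in views[a]]
    for n in views[a]:
        ages[a].setdefault(n, 0)
    return b

rng = random.Random(3)
views = {0: [1, 2], 1: [2, 0], 2: [0, 1]}
ages = {i: {} for i in range(3)}
b = shuffle(views, ages, a=0, l=2, rng=rng)
assert b not in views[0]    # the link A -> B has been removed at A ...
assert 0 in views[b]        # ... and B now points back to A (link reversed)
```

Targeting the oldest neighbor is what lets Cyclon eventually purge links to departed nodes without an explicit leave protocol.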

Handling Concurrency. In the specification given in [10], no action is defined for the scenario of two (or more) concurrent shuffle cycles, e.g., when a node A, during a shuffle cycle in progress with B, is selected as a target node by receiving a shuffle message from C. If concurrency is considered, the nodes sent by A to B can be modified by the concurrent shuffle involving A and C. In our implementation, we extend the original specification to address this situation: in case the nodes to be replaced by A are no longer in its cache, it replaces some nodes chosen at random.

3. Simulation Study

In this Section we present the details of our simulation study. Results of the two protocols are presented separately¹. We point out that this work is not intended to be a comparison between the two protocols, since they were designed for different purposes². The simulations aim only at showing the behavior of these different protocols under conditions of churn and concurrency.

3.1. Experimental Setting

The simulation study was carried out by developing the two algorithms in Peersim [1]. The event-driven mode of Peersim was used for both protocols. Event-driven simulations in Peersim are based on a logical clock. At each time unit t of the clock one or more events can be scheduled.

The scheduled events are: join invocation, leave invocation, send and receive of messages.

Differently from the cycle-driven mode of Peersim, the event-driven mode allows us to introduce concurrency. In particular, (i) the unsynchronized execution of joins, leaves and shuffle cycles, and (ii) the random delay between the send and receive of a message, allow joins, leaves and shuffle cycles to take a variable amount of time to execute, increasing the possibility of overlapping.

Simulations for both protocols were carried out as follows. A run of a protocol is divided into three periods: creation, churn and stability. During the creation period, nodes join until reaching a given value N. Neither leaves nor overlapping joins occur during this phase. During the churn period, nodes continuously join and leave the network at a given churn rate C, i.e., at each unit of time C nodes invoke a join and C nodes invoke a leave. The churn period ends when 3000 joins and 3000 leaves have occurred. Thus, the duration of the churn period varies as a function of the churn rate and, at the end of this period, the total number of nodes in the overlay is still N. N was set to 1000 in all experiments³. Message delay varies uniformly at random between 1 and 10 time units. 10 independent runs were made for each experiment.

¹Both protocol implementations were validated by comparison with the ones presented in [10] and [5] respectively. This comparison is shown in Appendix A.

²SCAMP was originally targeted at the construction of overlays for large-scale information dissemination, for which the reactive nature of the protocol is more appropriate, while Cyclon is in general suited to applications requiring a constant sampling of the nodes in the network, e.g., searching, monitoring, etc.
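
The creation/churn timeline can be sketched as an event schedule (an illustrative Python sketch of the structure described above, not actual Peersim code; per-message delivery delays of 1–10 time units would be drawn on top of this):

```python
import random

def build_schedule(n, churn_rate, total_ops, rng):
    """Simulation timeline (sketch): a creation period joining N nodes one
    per time unit (no leaves, no overlap), then a churn period issuing C
    joins and C leaves per time unit until `total_ops` joins (and as many
    leaves) have been invoked. Returns (time, event, node-id) tuples."""
    events, next_id, t = [], 0, 0
    for _ in range(n):                      # creation period
        events.append((t, "join", next_id))
        next_id += 1
        t += 1
    alive, issued = list(range(n)), 0
    while issued < total_ops:               # churn period
        for _ in range(churn_rate):
            events.append((t, "join", next_id))
            alive.append(next_id)
            next_id += 1
            leaver = alive.pop(rng.randrange(len(alive)))  # uniform choice
            events.append((t, "leave", leaver))
            issued += 1
        t += 1                              # churn lasts total_ops / C time units
    return events

sched = build_schedule(n=100, churn_rate=2, total_ops=300, rng=random.Random(1))
joins = sum(1 for _, e, _ in sched if e == "join")
leaves = sum(1 for _, e, _ in sched if e == "leave")
assert joins == 400 and leaves == 300   # population is back to N = 100
```

Because the churn period always issues the same total number of operations, a higher churn rate simply compresses them into a shorter interval of logical time.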

The metrics we focus on are (i) the average percentage of reached nodes (R) and (ii) the overlay clustering at the end of the stability period.

Evaluating average reachability. This metric is defined as the average number of nodes that can be reached from any node in the overlay, with respect to the total number of nodes. It is obviously related to the connectivity of the overlay graph, as any value lower than 100% indicates that at least one node cannot be reached by at least one other node. We evaluated the effect of the variation of the churn rate C (C = 2, 4, 8, 16, 32, 64) on R along the time (each point in these experiments is taken every 100 join and 100 leave invocations). In order to ease the comparison between experiments simulating different churn rates, the churn period has been made equal for every churn rate by expressing the execution time as a normalized time (τ = (t · C / 3000) · 100). Thus, a same value of normalized time corresponds to the same number of invoked joins and leaves for any churn rate. Both protocols have been evaluated with leaving nodes chosen uniformly at random in all experiments. Experimental results show that at the end of the churn period the set of 1000 initial nodes is almost completely replaced (on average, 4% of the initial nodes remain during the entire simulation). For SCAMP we have also evaluated the impact on R of different policies in the choice of the leaving nodes, while for Cyclon the impact on R of different shuffle frequencies.
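
Both R and the normalized time can be computed directly (a minimal Python sketch; the τ formula is a reconstruction from the definitions above, and `avg_reachability` treats the views as a directed graph):

```python
from collections import deque

def avg_reachability(views):
    """Average percentage of other nodes reachable from each node over
    the directed overlay graph (the metric R described above)."""
    nodes = list(views)
    n = len(nodes)
    total = 0
    for s in nodes:
        seen, q = {s}, deque([s])
        while q:                          # BFS over outgoing links
            u = q.popleft()
            for v in views.get(u, []):
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        total += len(seen) - 1            # other nodes reached from s
    return 100.0 * total / (n * (n - 1)) if n > 1 else 100.0

def normalized_time(t, churn_rate, total_ops=3000):
    """tau = (t * C / 3000) * 100: equal tau means an equal number of
    invoked joins/leaves regardless of the churn rate; tau = 100 marks
    the end of the churn period."""
    return 100.0 * t * churn_rate / total_ops

ring = {i: [(i + 1) % 5] for i in range(5)}   # directed ring: fully reachable
assert avg_reachability(ring) == 100.0
ring[0] = []                                  # cut one link
assert avg_reachability(ring) < 100.0
assert normalized_time(t=1500, churn_rate=2) == 100.0
```

For example, for C = 2 the 10%-replacement point (100 joins, i.e., t = 50) maps to τ = 10/3, matching the values quoted in the text.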

Overlay clustering. In order to highlight the type of overlay connectivity when R is lower than 100%, we also show the clustering of the overlay at the end of the stability period: the percentage of nodes forming the maximum connected component (main cluster), the percentage of isolated nodes, and the percentage of nodes forming clusters with dimension less than 6% of the overlay⁴.
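
The three percentages can be derived from the component sizes of the overlay graph (an illustrative Python sketch; treating the directed views as undirected for component detection is our assumption, as the paper does not state the exact connectivity definition used):

```python
def cluster_stats(views):
    """Split the overlay into connected components (over the undirected
    version of the view graph) and report the quantities plotted in
    Figures 1(c)/1(d): main-cluster share, share of small clusters
    (size < 6% of the overlay, excluding singletons), isolated nodes."""
    adj = {u: set() for u in views}
    for u, vs in views.items():           # symmetrize the directed views
        for v in vs:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    seen, sizes = set(), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:                      # DFS over one component
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        sizes.append(comp)
    n = sum(sizes)
    main = max(sizes)
    isolated = sum(1 for c in sizes if c == 1)
    small = sum(c for c in sizes if 1 < c < 0.06 * n)
    return {"main": 100.0 * main / n,
            "small": 100.0 * small / n,
            "isolated": 100.0 * isolated / n}

# a 4-node cycle plus one isolated node
views = {0: [1], 1: [2], 2: [3], 3: [0], 4: []}
stats = cluster_stats(views)
assert stats["main"] == 80.0 and stats["isolated"] == 20.0
```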

3.2. Evaluation of SCAMP

The results of the experiments for SCAMP are presented in Figures 1(a), 1(c) and 1(e). For SCAMP we chose a heartbeat period equal to 50 time units.

In the first experiment (Figure 1(a)) we tested the average reachability R along the time under churn. The plot clearly illustrates the dependence of R on C, showing how churn can permanently disrupt the overlay connectivity. For churn rates higher than 4, at the end of each run R is close to 0%, meaning that the topology is entirely fragmented into small-sized partitions and many nodes become permanently isolated, as shown in Figure 1(c). We remark the great difference with the results shown in [5], in which R is equal to 99% even after 50% of the nodes have been removed from the network. In our scenario, R starts to deviate from 99% when only 10% of the nodes have been replaced (for C = 2, after this substitution, i.e., for τ = 10/3, R = 98.9%). The main reason behind this behavior is the poor connectivity of the nodes replacing the old ones during the churn period. Initially, the overlay is formed by 1000 well-connected nodes. After the replacement of 10% of the nodes, for C = 2, we have a degradation in connectivity since the new 10% is poorly connected with respect to the replaced 10%. The reason lies in the fact that the old 10% was obtained in an ideal manner during the creation period, while the new 10% has been added to an overlay suffering from node departures and simultaneous joins (nodes joining concurrently are connected among themselves through the initial overlay, disrupted by the deletion of some nodes). During the churn period connectivity keeps degrading with the progressive replacement of nodes in the overlay. While R is greater than 80%, the higher the velocity of replacement, the worse the connectivity shown by the replacing part of the overlay, e.g., for C = 2 after the replacement of 10% we have R = 98.9%, while with a higher churn rate C = 4, after the replacement of the same 10%, we have R = 98%. However, for lower values of R, the slopes of the different curves become almost the same, pointing out a sub-linear degradation of R with respect to C and the dominating effect of the quantity of replaced nodes versus the velocity of their replacement. Interestingly, after the churn stops (τ = 100) there is a small rise in R for churn rates lower than 8, witnessing the effect of the heartbeat mechanism during the stability period.

³Appendix B shows that changing the initial size while keeping the same ratio between N and C leads to the same experimental results.

⁴As we will see later, no cluster of size higher than 6% is ever created.

It is clear that, under these conditions, the presence of a well-connected cluster of nodes not subject to replacement becomes critical for the protocol. To test this effect, in the second experiment we consider a variable percentage of nodes to be "permanent", i.e., nodes joining during the creation period and never leaving the overlay, with a fixed churn rate C = 2. Figure 1(e) shows the results of the experiment when changing the percentage of permanent nodes. The values chosen were 0%, 10%, 50% and 90%. In the "Random" curve, the nodes leaving the overlay were chosen at random, as in the previous experiment⁵. The plot shows the positive effect of the permanent nodes on R. The percentage of reached nodes during the stability period is always higher than the number of permanent nodes, meaning that the presence of a fixed connected cluster helps new joining nodes remain connected to the main cluster.

⁵Random performs better than 0% because some permanent nodes (on average, 4%) are present.


[Figure 1. Experimental Results.
(a) SCAMP: variation of R along time with different churn rates (C = 2, 4, 8, 16, 32, 64).
(b) Cyclon: variation of R along time with different churn rates (C = 2, 4, 8, 16, 32, 64).
(c) SCAMP: overlay clustering (main cluster, clusters with dim. < 6%, isolated nodes) at the end of the stability period with different churn rates.
(d) Cyclon: overlay clustering (main cluster, clusters with dim. < 6%, isolated nodes) at the end of the stability period with different churn rates.
(e) SCAMP: variation of R along time with different percentages of permanent nodes (0%, 10%, 50%, 90%, Random) with fixed churn rate C = 2.
(f) Cyclon: variation of R along time with different shuffle timers (20, 40, 80, 160, 320 time units) with fixed churn rate C = 2.]


3.3. Evaluation of Cyclon

All experiments with Cyclon use a view size of 7, approximately the logarithm of the system size. The shuffle length l is 2, while the length of the random walks in the join is 5.

In the first experiment (Figure 1(b)), we tested the effect of the variation of the churn rate C on R, while Figure 1(d) shows the clustering of the overlay in all cases. The shuffle period is set to 20 time units. Again, a severe churn rate permanently disrupts the overlay connectivity, with nodes getting isolated from the largest partition. As a comparison with the results presented in [10], where R starts to decrease when 75% of the nodes are removed, in our experiments R is lower than 100% starting from the first point (10% of substituted nodes at τ = 10/3). There is an important difference with SCAMP: the churn rate affects the trend of R more significantly. The overall number of replacements is unimportant (with C = 2, R remains almost 100% regardless of the number of overall replacements); the dominating effect is the velocity of the replacement, since it impacts the efficiency of the shuffle mechanism: a slower replacement implies a higher number of shuffle cycles.

In Figure 1(f), we test the effect of varying the shuffling period. The churn rate is fixed to C = 2 and the shuffling timer varies from 20 to 320 time units. As expected, R decreases faster with higher shuffling periods. The convergence in the stability period is also slower. Finally, it is interesting to compare the results in Figures 1(b) and 1(f), focusing on the curves where the number of operations between two shuffle periods is the same. For instance, let us observe the curve for C = 8 in Figure 1(b) (Shuffle Timer = 20 and C = 8) and the curve for Shuffle Timer = 80 in Figure 1(f) (Shuffle Timer = 80 and C = 2): in both experiments there are approximately 160 join and 160 leave invocations before a node shuffles. The fact that R is always higher in the first test indicates that the velocity of replacement dominates over the number of replacements, making the shuffle less effective even though it is performed more frequently.

4. Related Work

Several distributed OMPs supporting gossip-based dissemination have been proposed [4, 5, 3, 10, 2]. These protocols provide each node with a small local view of the overlay membership, and membership information spreads in an epidemic style [6]. However, [4, 5] do not explicitly take into account the rate at which the overlay changes. In [3] the authors express, through an analytical study, the expected time for the overlay to partition as a function of (i) the overlay size, (ii) the local view size and (iii) the overlay changing rate (called churn rate). The local view size needs to be larger than the churn rate for the expected time until partitioning to be exponential in the square of the local view size. The proposed protocol, however, has not been evaluated through an event-based simulation study, i.e., under concurrency and random message delays.

For completeness we also cite [2], since it is the algorithm that first introduced the main features of Cyclon: shuffle cycles and random walks. Cyclon improves on [2] through the aging mechanism.

This work extends and deepens the first results presented in [9], in which we began to evaluate the SCAMP behavior under churn with a very small overlay (only 100 nodes).

5. Concluding Remarks

The aim of the paper has been to test the robustness of the overlays obtained from the SCAMP and Cyclon protocols, with the precise intent of stressing each protocol under severe churn in order to determine its breakdown behavior.

Other aspects need further investigation. For example, in our experiments we assumed that the nodes initially joining the system are not "disturbed" by concurrent leaves. This leads to the construction of an ideal initial overlay network; the problem is how one could set up a network with thousands of nodes in that way. A more realistic model should take this into account, to see the effect of operation interleaving starting at a very early stage, when the size of the P2P system is on the order of a more realistic tens of nodes.

References

[1] Peersim, http://peersim.sourceforge.net/.

[2] A. Stavrou, D. Rubenstein, and S. Sahu, A Lightweight, Robust P2P System to Handle Flash Crowds, IEEE Journal on Selected Areas in Communications 22 (2004).

[3] A. Allavena, A. Demers, and J. E. Hopcroft, Correctness of a Gossip Based Membership Protocol, Proceedings of the 24th ACM Annual Symposium on Principles of Distributed Computing (PODC 2005), 2005, pp. 292–301.

[4] P. Th. Eugster, R. Guerraoui, S. B. Handurukande, P. Kouznetsov, and A.-M. Kermarrec, Lightweight Probabilistic Broadcast, ACM Transactions on Computer Systems 21 (2003), no. 4, 341–374.

[5] A. Ganesh, A. Kermarrec, and L. Massoulie, Peer-to-Peer Member- ship Management for Gossip-based Protocols, IEEE Transactions on Computers 52 (2003), no. 2, 139–149.

[6] R. A. Golding and K. Taylor, Group Membership in the Epidemic Style, Tech. Report UCSC-CRL-92-13, 1992.

[7] M. Jelasity, R. Guerraoui, A. Kermarrec, and M. van Steen, The Peer Sampling Service: Experimental Evaluation of Unstructured Gossip-based Implementations, Proceedings of Middleware 2004, 2004.

[8] G. Pandurangan, P. Raghavan, and E. Upfal, Building Low-Diameter P2P Networks, IEEE Symposium on Foundations of Computer Science (FOCS 2001), 2001, pp. 492–499.

[9] R. Baldoni, A. Noor Mian, S. Scipioni, and S. Tucci Piergiovanni, Churn Resilience of Peer-to-Peer Group Membership: a Performance Analysis, Proceedings of the International Workshop on Distributed Computing, December 2005.

[10] S. Voulgaris, D. Gavidia, and M. van Steen, CYCLON: Inexpensive Membership Management for Unstructured P2P Overlays, Journal of Network and Systems Management 13 (2005), no. 2.


Appendix A

In this Appendix we show the results of the comparison of our SCAMP and Cyclon implementations against the original ones, respectively the GKM implementation [5] and the VGvS implementation [10].

We compared the two implementations of Cyclon using the original one provided in the Peersim library [1]. We simulated the following: starting from an initial regular random graph of 1000 nodes, we measure the in-degree distribution (the in-degree of a node is the number of nodes it receives messages from) after 100 shuffle cycles without overlapping shuffle cycles (cache size 20 and shuffle length l = 5).

For SCAMP, as we do not have the original implementation available, we ran our simulations with the same parameters under which the original results were obtained (degree distribution of 100,000 initial nodes after the removal of 50% of the nodes). Our experiments (Figures 2, 3) returned the same distribution of node degrees as the original experiments, meaning that our implementation is consistent with the protocols' original specifications.

Appendix B

In this Appendix we show the results for R obtained starting from different values of N while maintaining the same ratio between C and N. We compared the curve obtained with N = 1000 and C = 2 with the results obtained doubling the parameters (N = 2000 and C = 4) and halving them (N = 500, C = 1). Figures 4 and 5 show how, both for Cyclon and SCAMP, keeping the ratio between N and C unaltered leads to the same impact on R.

[Figure 2. Comparison between the in-degree distribution of our Cyclon implementation and the original one (VGvS implementation): number of nodes vs. in-degree.]

[Figure 3. Comparison between the degree distribution of our SCAMP implementation and the original one (GKM implementation): number of nodes vs. PartialView size.]

[Figure 4. Cyclon: variation of R along time t for different N and the same ratio N/C (N = 500, C = 1; N = 1000, C = 2; N = 2000, C = 4).]

[Figure 5. SCAMP: variation of R along time t for different N and the same ratio N/C (N = 500, C = 1; N = 1000, C = 2; N = 2000, C = 4).]
