Assessing Data Availability of Cassandra in the Presence of non-accurate Membership∗

Leonardo Aniello, Silvia Bonomi, Marta Breno, Roberto Baldoni
Cyber Intelligence and Information Security Research Center and
Department of Computer, Control, and Management Engineering “Antonio Ruberti”
University of Rome “La Sapienza”, via Ariosto 25, 00185, Rome, Italy
{aniello, bonomi, breno, baldoni}@dis.uniroma1.it

∗The work presented in this paper has been partially funded by the Italian Ministry for Education, University and Research through the PRIN project TENACE.

ABSTRACT

Data Centers are evolving to adapt to emerging IT trends such as Big Data and Cloud Computing, which push for increased scalability and improved service availability. Among the side effects of this kind of evolution, the proliferation of new security breaches represents a major issue that usually does not get properly addressed, since the focus tends to be kept on developing an innovative high-performance technology rather than making it secure. Consequently, new distributed applications deployed on Data Centers turn out to be vulnerable to malicious attacks. This paper analyzes the vulnerabilities of the gossip-based membership protocol used by Cassandra, a well-known distributed NoSQL database. Cassandra is widely employed as a storage service in applications where very large data volumes have to be managed. An attack exploiting such weaknesses is presented, which impacts Cassandra’s availability by affecting both the latency and the successful outcome of requests. A lightweight solution is also proposed that prevents this threat from succeeding at the price of a negligible overhead.

1. INTRODUCTION

Within the context of cloud-based storages, Cassandra [10] has gained a primary place during the last few years thanks to its proven capability to ensure great service availability, to easily allow on-the-fly cluster reconfigurations and to nicely fit today’s flexibility requirements on data modeling.

For example, Adobe employs Cassandra as a distributed edge cache for its Adobe AudienceManager, a solution for managing customers’ data assets with the aim of helping them perform web analytics, web content management and online advertising [7]. Such a cache has to constantly store from about 700 up to 900 million unique visitor profiles, and it has to deliver impressive performance: a latency of 12 milliseconds or less for more than 95% of requests. The wide employment of solutions where Cassandra is deployed in data centers and the present alarming situation of data center security [16, 6] make any attack on Cassandra a suitable goal for cyber criminals. [13] describes confidentiality and privacy concerns related to information stored by Cassandra that could lead to data leakage through, for example, CQL (Cassandra Query Language) injection attacks.

In this paper, we focus our attention on attacks that exploit vulnerabilities of the Cassandra gossip-based membership protocol with the aim of corrupting the knowledge about the actual set of Cassandra nodes. Such vulnerabilities are due to the lack of proper mechanisms for ensuring both integrity and authentication of the messages exchanged within the membership protocol. The integrity issue enables the forging of gossip messages by a byzantine node so as to make a dead node seem alive to the other nodes, allowing for the injection of zombie nodes. On the other hand, the inadequacy of the provided authentication means allows for the creation of some gossip messages from scratch. Specifically, bogus shutdown messages are exploited by byzantine nodes to make alive nodes appear as dead, which translates to the introduction of ghost nodes. The practical result of the presence of both zombie and ghost nodes is that client requests can be routed within a cluster either towards dead nodes or away from living ones, so as to make part of the requests fail, actually reducing Cassandra’s main feature, namely availability, with unpredictable performance fluctuations at application level.

As an example, in the case of interactive applications (e.g., Adobe AudienceManager users’ profiles), web servers providing dynamic contents may fail to generate web pages because failures occur when retrieving the required data from Cassandra. This degrades the user experience and consequently the reputation of the applications themselves, making users leave them and look for alternatives. When Cassandra is used as storage for back-end computations involving analytics, data mining or user profiling (e.g., the Hadoop/HBase grid processing infrastructure of Adobe AudienceManager), the proposed attack can make an elaboration miss some input or output values because of request failures, so such an elaboration can either fail or lose accuracy. Our evaluations point out that the presence of only one byzantine node injecting non-accurate membership information can cause huge latency and throughput instability. The paper also shows that a simple solution, consisting in signing specific information in the gossip messages, can prevent such attacks from taking place with negligible overhead.


The rest of the paper is structured as follows: Section 2 describes Cassandra; Section 3 introduces the attacks; Section 4 presents our evaluations; Section 5 defines the proposed solution and related evaluations; Section 6 outlines the related work and Section 7 concludes the paper.

2. CASSANDRA

Cassandra [10] is an open source distributed storage system, designed to manage huge volumes of data while achieving high availability and scalability across many servers with no single point of failure. Users can specify the level of consistency they need, ranging from eventual consistency (typical of NoSQL systems) up to strong consistency.

Within Cassandra, each piece of data is replicated across multiple distinct replicas (nodes) of the cluster according to a configuration parameter known as the replication factor R, which represents the number of nodes holding a copy of a specific datum. Keeping such data consistent requires that the obtained overlay structure remains connected; considering the potentially large scale of a Cassandra deployment, connectivity among nodes is maintained through a gossip-based membership protocol.

2.1 Executing Read and Write Operations

Cassandra provides two primitives to read and write data, namely: get(key) and put(key,value).

When performing a request, a client can express the desired consistency level, i.e., the number C of replicas that have to answer a request in order to consider it successfully executed (e.g., one, a quorum, and so on). Moreover, the client can contact any node in the cluster, not necessarily a replica owning the requested data; such node will act as the coordinator c of the request, taking care of its execution according to R and C.

A read operation includes the following steps: (i) c identifies the R replicas responsible for the required data and ranks them according to their proximity and response time; (ii) c sends a complete read request to the highest ranked (most responsive) replica and additional read digest requests to the remaining ones; (iii) as soon as c receives a number of replies equal to C (including the reply with the actual data), it returns the result to the client.
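To make the counting concrete, below is a minimal sketch of this read path in Java. All types and helper methods (rankedReplicasFor, sendFullRead, sendDigestRead, repliesReceived, dataReply) are hypothetical stand-ins introduced for illustration, not Cassandra's internal API.

```java
// Hypothetical sketch of a coordinator executing a read with replication
// factor R and consistency level C; none of these names are Cassandra's own.
import java.util.List;

abstract class ReadCoordinatorSketch {
    abstract List<String> rankedReplicasFor(String key);      // the R replicas, best first
    abstract void sendFullRead(String replica, String key);   // complete read request
    abstract void sendDigestRead(String replica, String key); // read digest request
    abstract int repliesReceived(long timeoutMillis);         // blocks up to the timeout
    abstract byte[] dataReply();                              // the actual data received

    byte[] read(String key, int c) {
        List<String> replicas = rankedReplicasFor(key);       // (i) rank the R replicas
        sendFullRead(replicas.get(0), key);                   // (ii) data from the best one,
        for (String r : replicas.subList(1, replicas.size())) //      digests from the rest
            sendDigestRead(r, key);
        if (repliesReceived(10_000) < c)                      // (iii) need C replies
            throw new IllegalStateException("read failed");   //      (10 s default timeout)
        return dataReply();
    }
}
```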

In the case of a write operation, the coordinator c contacts all the replicas owning the specified data and waits for the number of replies established by the consistency level C.

The operation terminates when (i) c receives C replies, or (ii) a timeout expires (10 sec. by default), or (iii) not enough replicas are found alive. In the first case, the write is considered successfully executed; otherwise, it is considered failed. Whenever an operation is successfully executed, any stale replicas will eventually be aligned by Cassandra.

2.2 Membership protocol

A Cassandra deployment consists of a collection of heterogeneous nodes connected through a DHT, i.e., a logical ring.

Each node locally maintains a view of the cluster composition and, to keep the ring connected, every second it runs a gossip-based membership protocol (Figure 1), exchanging messages with up to three other nodes (one alive, one unreachable and a seed). The aim is to keep track of the alive nodes (i.e., responsive nodes) and the unreachable nodes (i.e., the maybe-gone unresponsive nodes) in order to maintain an accurate and updated membership.

Figure 1: Message exchange in a single round of gossip between an initiator node n_i and a receiver node n_j; in 1.a n_i deliberately leaves announcing its departure, in 1.b n_i crashes and eventually n_j discovers the leave.

This gossiping has to capture nodes that deliberately leave the ring or fail. In the first case (Figure 1.a), the node announces its departure by broadcasting a shutdown message to every other node, which consequently removes the leaving node from the set of alive nodes and puts it in the list of unreachable ones. In the second case (Figure 1.b), the other nodes will eventually detect the departure as they will no longer receive any messages from the failed node.

During a gossip round, three messages are exchanged between an initiator node and a receiver node, namely GossipDigestSynMsg, GossipDigestAckMsg and GossipDigestAck2Msg; in addition, a special message is defined to propagate the voluntary leave of a node (i.e., GossipShutdownMsg).

The first three messages exploit two main data structures for propagating membership information: gossip digests and EndpointStates.

The first one is a tuple representing “historical” information about a generic node i. It contains i’s IP address, its generation (an integer representing the number of crash-and-recovery/join-leave events i went through) and its version (an integer representing the progress of the computation at the current generation), and it is structured as follows: <IPaddress_i, generation_i, version_i>.

The EndpointState represents the complete internal state of a generic node i, with its HeartBeatState_i (containing generation and version), the list of states i went through (ApplicationStateList_i), a flag used by the owner node to mark i as dead or alive (isAlive_i) and a local timestamp (UpdateTimestamp_i) used by the local failure detection mechanism to discover unavailable nodes. It is structured as follows: <HeartBeatState_i, ApplicationStateList_i, isAlive_i, UpdateTimestamp_i>; it is generated, updated and first diffused by i itself, and then propagated by every node that receives it.
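For reference, the two structures can be sketched compactly as Java records; the field names follow the paper's notation rather than Cassandra's actual source code.

```java
// Sketch of the gossip data structures described above (paper notation,
// not Cassandra's exact classes).
import java.util.List;

record GossipDigest(String ipAddress, int generation, int version) {}

record HeartBeatState(int generation, int version) {}

record EndpointState(HeartBeatState heartBeat,          // generation and version
                     List<String> applicationStateList, // states node i went through
                     boolean isAlive,                    // owner-local liveness mark
                     long updateTimestamp) {}            // for local failure detection
```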

The first message exchanged is the GossipDigestSynMsg, which is sent by the initiator and contains a list of digests, one digest for each node in the initiator’s view.

When the GossipDigestSynMsg is delivered, the receiver checks every digest contained in the list and, using its local information about node i (stored in i’s local EndpointState), compares local and remote generation and version numbers in order to determine which is more up to date: if the received ones are newer, the receiver contacts the initiator asking for the complete updated status; conversely, the receiver sends a status update to the initiator. Both are sent through a GossipDigestAckMsg, which contains either update requests made by the receiver in the form of a list of digests, or status updates for the initiator in the form of pairs <IPaddress_i, EndPointState_i>, or both.

Upon the reception of the GossipDigestAckMsg, the initiator performs two operations: (i) it updates its local view by replacing old information with that contained in the received EndPointStates; (ii) it creates a list of pairs <IPaddress_i, EndPointState_i> for each node i in the received digest list (whose update is being requested) and sends it back to the receiver through a GossipDigestAck2Msg. As soon as the receiver gets this message, it updates its local information with that contained in the list of EndPointStates, and the gossip exchange ends.
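The generation/version comparison is the heart of this exchange. The following sketch shows how a receiver could split an incoming digest list into update requests and status updates for the GossipDigestAckMsg; it reuses the record types sketched above, and the surrounding plumbing (the view map, method names) is assumed for illustration.

```java
// Sketch of the receiver-side handling of a GossipDigestSynMsg: for each
// digest, decide whether to request the full state or push our newer one.
import java.util.*;

class GossipReceiverSketch {
    final Map<String, EndpointState> view = new HashMap<>();

    void handleSyn(List<GossipDigest> digests,
                   List<GossipDigest> requests,            // ends up in the AckMsg
                   Map<String, EndpointState> updates) {   // ends up in the AckMsg
        for (GossipDigest d : digests) {
            EndpointState local = view.get(d.ipAddress());
            int gen = local == null ? -1 : local.heartBeat().generation();
            int ver = local == null ? -1 : local.heartBeat().version();
            boolean remoteNewer = d.generation() > gen
                    || (d.generation() == gen && d.version() > ver);
            if (remoteNewer) requests.add(d);              // ask initiator for full state
            else if (local != null)
                updates.put(d.ipAddress(), local);         // send our fresher state back
        }
    }
}
```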

When a node i departs from the cluster, it broadcasts to all the nodes in its view a GossipShutDownMsg (Figure 1.a) containing its IP address and a field representing its new state, i.e., <IPaddress_i, SHUTDOWN>; such departure will then be propagated to all the nodes as part of the information contained in the EndPointStates (isAlive field).

2.3 Gossip-based membership vulnerabilities

We found that the vulnerabilities of the membership protocol are mainly related to the lack of any mechanism to authenticate gossip messages and check their integrity, i.e., any node trusts the information received and assumes that messages cannot be forged. As a consequence, each node assumes that (i) the EndPointState for node i has been actually generated by i itself and contains only correct information; (ii) a digest <IPaddress_i, generation_i, version_i> really concerns node i and has been generated on the basis of the EndPointState created by i itself; (iii) the shutdown message of a node i has been really sent by i itself. Thus, if an attacker controls a node m that can forge a digest, an EndPointState or a shutdown message and send them on behalf of some other node, then the attacker can (at least temporarily) bias the views of other nodes.

3. GOSSIP-BASED ATTACKS

In the following, we present two possible attacks where an adversary orchestrates one or more malicious Cassandra nodes that spread fake membership information into the network, trying to lower the accuracy of the membership protocol in order to affect the performance of the upper-level storage mechanism.

The first attack makes use of EndpointStates and gossip digests to exploit the temporary crash/leave of nodes and create zombies; the second one exploits bogus shutdown messages to create ghosts.

In the first attack, an attacker m initially behaves as any correct Cassandra node, but as soon as a node i leaves (either due to a crash or to maintenance), m starts diffusing false information about i. By manipulating its local state for i, m can lead other nodes to believe that i is still alive: at each gossip round, m updates i’s local heartbeat state by incrementing the version number, then creates a fake digest and finally sends it out to its gossip partners.
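A sketch of this forging step, reusing the record types from Section 2.2; the view map standing in for m's local membership state is an assumption made for illustration.

```java
// Sketch of the zombie attack: at each gossip round the malicious node m
// bumps the dead node's version and re-advertises it as alive.
import java.util.Map;

class ZombieAttackSketch {
    GossipDigest forgeDigest(String deadNodeIp, Map<String, EndpointState> view) {
        EndpointState s = view.get(deadNodeIp);
        HeartBeatState hb = s.heartBeat();
        // Pretend the dead node is still making progress at its last generation.
        HeartBeatState fake = new HeartBeatState(hb.generation(), hb.version() + 1);
        view.put(deadNodeIp, new EndpointState(fake, s.applicationStateList(),
                                               true, System.currentTimeMillis()));
        return new GossipDigest(deadNodeIp, fake.generation(), fake.version());
    }
}
```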

In the second attack, m aims to simulate the temporary death of other (alive) nodes by sending shutdown messages on their behalf. In particular, m selects a victim node i and starts sending shutdown messages on its behalf to a subset of cluster nodes, excluding i itself. Upon the reception of the shutdown message, and according to the gossip shutdown procedure, every receiver node j automatically takes i out of the set of alive nodes and puts it in the set of unreachable ones. Note that i remains in the list of unreachable nodes until j receives a new heartbeat from i.
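The ghost attack is even simpler to sketch; sendShutdown below is a hypothetical helper standing in for the transmission of a forged GossipShutdownMsg.

```java
// Sketch of the ghost attack: m sends a forged <IPaddress_i, SHUTDOWN>
// to a set of nodes, excluding the victim i itself.
import java.util.Collection;

abstract class GhostAttackSketch {
    abstract void sendShutdown(String target, String claimedSender); // hypothetical

    void injectGhost(String victimIp, Collection<String> targetIps) {
        for (String ip : targetIps)
            if (!ip.equals(victimIp))
                sendShutdown(ip, victimIp); // forged shutdown on i's behalf
    }
}
```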

3.1 Impact of inaccurate membership

The execution of both read and write operations is influenced by the replication factor and by the consistency level specified by the client. The proposed attacks undermine the membership accuracy in such a way that either (i) available replicas seem to be unavailable (ghosts) or (ii) unavailable replicas seem to be available (zombies). For example, suppose that a client c makes a read request cr with consistency level one (meaning that the request coordinator only needs to wait for the complete data from a single replica), and also suppose that (i) in the cluster there is a malicious node m carrying out a zombie attack; (ii) m is pushing false information about some dead node r so that the coordinator believes that r is alive; (iii) r is the replica identified by the coordinator as the one in charge of serving the data read request. According to the read mechanism of Cassandra, the coordinator would ask only r for the data but would never get a reply because r is dead, so the client request cr would inevitably fail after the expiration of the preconfigured timeout.

Similarly, suppose that the same client c now makes a write request cw with consistency level quorum and that (i) the replication factor in the cluster is R = 3; (ii) there exists a malicious node m executing the ghost attack; (iii) m is pushing false shutdown messages to everyone; (iv) the coordinator receives cw right after the reception of the shutdown messages for all the nodes in the ring. In this case, the coordinator would need two replies from the highest ranked replicas, but it would see none available due to the received shutdown messages, so the request cw would immediately fail. Note that the ghost attack causes temporary corruptions of the view at every targeted node, but if the client request occurs while the node has a corrupted view, then the execution of the operation is very likely to fail. Note also that the stronger the imposed consistency level, the greater the damage caused by this behavior.

4. EXPERIMENTS

The goal of the experimental evaluations we carried out is to quantify the impact of the attack on the interaction of a client with a Cassandra cluster. Both interactive applications (e.g., web servers providing dynamic contents) and batch computations (e.g., long-running elaborations for analytics and data mining) usually access data stored in Cassandra through some client library such as Astyanax [12].

These libraries provide a pool of threads for connecting to Cassandra, where each thread connects to a specific node and issues read and write requests sequentially. For this reason, we considered as reference case a sequential batch of 1000 writes followed by 1000 reads performed by a single client connected to a node having a corrupted membership view. The choice of a sequential batch of requests, rather than concurrent streams of requests sent by a multi-threaded client to distinct nodes, allows us to better analyze the actual consequences of the attack on the capability of a node with a non-accurate membership view to act as request coordinator.

Figure 2: Percentage of failed read requests for distinct consistency levels with respect to the number of ghost nodes disseminated by the malicious node during the gossip protocol.

Within this context, we measure the percentage of failed read/write requests and their throughput over time from the client’s point of view. We collect these measures in distinct scenarios differing by the consistency level selected by the client and by the percentage of nodes that are maliciously signaled as dead (ghost nodes). We ran our tests on a Cassandra cluster with 16 nodes. Each node is equipped with 2x2800 MHz CPUs, 2 GB RAM, 18 GB of disk storage and runs Ubuntu 12.04 as operating system. One of the nodes runs the malicious version of the Cassandra service and waits for a node to be shut down in order to begin its byzantine activity. For each test, we shut down a node and wait a small amount of time to allow the attacker to gossip wrong information. Then, we launch the batch of writes and reads towards a node with a poisoned membership view. In each experiment, the malicious node spreads both (i) one zombie node and (ii) a given percentage of ghost nodes, which changes for each test (e.g., 25% of ghost nodes means that one node out of four is maliciously signaled as dead). Fig. 2 shows the percentages of failed read requests as the number of ghost nodes changes, for distinct consistency levels. The most evident result is that the attack becomes more effective as the consistency gets stricter. This was reasonable to expect because the tighter the consistency is, the more alive replicas are required, and the practical effect of the attack is indeed to lower the number of nodes that seem to be running. The impact of the number of ghosts when the consistency is fixed is trickier to understand. We ran three batches for each distinct consistency level and number of ghosts, and the large values of standard deviation reported in Fig. 2 suggest that there is no precise relationship holding.

As also suggested by the example discussed in Section 3.1, the number of ghosts seen by a node really depends on the interleaving of the gossip messages sent by the attacker with those sent by the other nodes. This is a non-deterministic process because there is no synchronization at all among the scheduling of gossiping operations of distinct nodes. Furthermore, which replicas are required to fulfill a read request is determined by the key specified at runtime in the request itself, so it cannot be foreseen whether a poisoned coordinator receiving a request at a certain instant would have a membership view where enough of the needed replicas are correctly seen as alive.

We also investigated how the attack affects the batches of requests over time. Figure 3 reports, at each second, how many read requests have been successfully completed at that time since the beginning of the batch, both for a safe scenario, where no attack is executed, and for one where the membership is being corrupted. The consistency level is set to one and the time scale is logarithmic. The most evident result is that the completion time of the test batch is more than one order of magnitude larger when the attack takes place, which leads to a huge decrease of the throughput. This is due to the very few requests for which the coordinator asks the data to the zombie node (see Section 2 for the read path). Indeed, when a data request is sent from the coordinator to the zombie node, the read remains blocked at the client side until a timeout of 10 seconds expires at the coordinator because no reply has been received yet. In the case of consistency level set to quorum, the completion time of the test batch is two orders of magnitude larger when the attack takes place (see Fig. 5).

Figure 3: Comparison between safe and attacked scenario on successful reads count over time when consistency level is one. Note that batch completion time is about 19 times larger when the attack takes place.

Figure 4: Comparison between safe and attacked scenario on successful writes count over time when consistency level is one. Note that some writes fail when the attack takes place.

The impact of these delays can be terrible for interactive and data mining applications. Using the same notation, Fig. 4 and Fig. 6 detail how a batch of writes is affected by the attack over time, in the case of consistency level one and quorum respectively. The figures show a similar behavior. There is no extension of the completion time, because a write only fails when all the required replicas are seen as dead by the coordinator, and this can be detected immediately without having to wait for any timeout to expire. The only effect of the attack is that some writes fail, which again


Figure 5: Comparison between safe and attacked scenario on successful reads count over time when consistency level is quorum. Note that batch completion time is about 80 times larger when the attack takes place.

Figure 6: Comparison between safe and attacked scenario on successful writes count over time when consistency level is quorum. Note that some writes fail when the attack takes place.

can be quite damaging for applications that need to persist data on stable storage.

5. ROBUST CASSANDRA

This section proposes a possible solution to remedy the absence of an authentication mechanism within Cassandra. The basic idea is to modify the original membership protocol, forcing nodes to sign critical information through asymmetric cryptography.

5.1 Authenticated gossip

In this modified version of the protocol, each Cassandra node has a private key σ, used to create the signature, and a set of public keys ρ_i (one for each cluster node), used to verify signed information.

The core idea is to use a new gossip digest object whose content is the node’s IP address plus the original digest object signed with σ, in the form <IPaddress_i, <IPaddress_i, gen_i, vers_i>_σ>, where the second field is i’s signature. At each gossip round, a node i: (i) updates its heartbeat, (ii) generates the signature, (iii) creates its digest and (iv) sends it out in the list of gossip digests. When node j receives the digest, it checks which node the digest is related to by looking at the IP address sent in the clear, then selects the public key ρ_i associated with i and tries to verify the remaining part of the digest. If the digest was really sent by i, then j correctly obtains the signed content, otherwise j can discard it. When node j gets the content of the signature, it compares the IP address contained in it with the one sent in the clear, then executes the original gossip protocol. If j has to store newer information about i, it also stores the most recent signature received from it in i’s local EndpointState, in order to be able to create digests about i and propagate the related membership information. Let us notice that the proposed mechanism also works in the presence of a more powerful attacker able to spoof IP addresses.

Figure 7: Comparison between original and robust Cassandra on provided read request latency over time when consistency level is one.

Figure 8: Comparison between original and robust Cassandra on provided write request latency over time when consistency level is one.
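A minimal sketch of the resulting sign/verify flow, using the standard Java JCA Signature API; the choice of SHA256withRSA and the digest serialization are assumptions, as the paper does not prescribe a concrete scheme.

```java
// Sketch of the authenticated digest <IPaddress_i, <IPaddress_i, gen_i, vers_i>_sigma>:
// node i signs its digest with its private key sigma; a receiver verifies it with
// the public key rho_i selected via the IP address sent in the clear.
import java.nio.charset.StandardCharsets;
import java.security.*;

class AuthenticatedDigestSketch {
    static byte[] sign(GossipDigest d, PrivateKey sigma) throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA"); // assumed scheme
        s.initSign(sigma);
        s.update(encode(d));
        return s.sign();
    }

    static boolean verify(GossipDigest d, byte[] sig, PublicKey rhoI)
            throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(rhoI);
        s.update(encode(d));
        return s.verify(sig); // a failed check means the digest is discarded
    }

    private static byte[] encode(GossipDigest d) { // assumed serialization
        return (d.ipAddress() + "|" + d.generation() + "|" + d.version())
                .getBytes(StandardCharsets.UTF_8);
    }
}
```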

Evaluations. We evaluated the impact of this enhanced version of the membership protocol on the performance delivered by Cassandra. Using the same setting described in Section 4, we ran a set of batches for different levels of consistency, where each batch consists of 100,000 writes followed by as many reads. We then evaluated the average request latency over time and made a comparison with the performance provided when employing the original membership protocol. Fig. 7 shows how the average read latency varies over time for both the original Cassandra and the robust version when the consistency level is one. As the figure makes evident, the overhead entailed by the enhanced protocol is negligible: on average, at steady state (after 20 seconds from the beginning of the batch), the latency provided by robust Cassandra is only 0.31% higher. Such a minor increase is mainly due to the signing and verification operations performed on the digests, and the overhead is that small thanks to the very low frequency at which they are executed. Fig. 8 reports a similar result for the latency of the writes. Also in this case, the difference in performance is practically imperceptible: the growth of the average latency at steady state is about 0.17%.

6. RELATED WORK

Traditional solutions to build byzantine-tolerant storages can be divided into two categories: replicated state machines [14] and byzantine quorum systems [11, 3, 1]. Both approaches are based on the idea that the state of the storage is replicated among processes; the main difference lies in the number of replicas involved simultaneously in the state maintenance protocol. These solutions assume a stable membership, while our attacks inject uncertainty into the local view of each node, which makes it hard for traditional BFT protocols to ensure correctness over time.

An intrusion-tolerant system with proactive recovery [5, 15] maintains a correct behavior of a replicated service even if an attacker controls up to f replicas (with total number of nodes n ≥ 3f + 1), by using a byzantine replication protocol and a software rejuvenation mechanism. The latter ensures that each replica is periodically rejuvenated with a new diverse software configuration, removing the effects of any prior intrusion and therefore preserving the above invariant over time. The attack shown in this paper is actually an intrusion where the presence of a single byzantine node can inject major performance instability. Note that the byzantine node does not try to compromise the consistency protocol; rather, it attacks the underlying membership protocol. Software rejuvenation can help in reducing the bad effects of such an attack. However, not all the fake information injected by the byzantine node before being stopped by the intrusion-tolerant system is removed by the system itself (i.e., membership will remain inaccurate for a long time due to the persistent presence of zombie nodes), and the byzantine node could exploit the time before being stopped by the rejuvenation mechanism to install itself into another executable: a script could be used to automatically compromise a Cassandra node again right after the rejuvenation phase.

The mechanisms used to spread fake information in this paper are inspired by the malicious behaviors of a node running a gossip membership protocol studied by Bortnikov et al. in [4] for unstructured peer-to-peer networks. Other papers provide practical solutions to the membership accuracy problem in unstructured peer-to-peer networks, including [2, 9].

Finally, the attack shown in this paper has some relationship with the Sybil attack [8], even though it is much simpler to deploy than a Sybil attack, because a malicious node does not need to create and manage multiple identities. Additionally, the attack shown in our work is also more difficult to detect than a Sybil one, as it can be confused with erroneous information coming from the failure detection module. Nevertheless, the enhanced version of the membership protocol described in Section 5 can help to mitigate the effects of a Sybil attack, as asymmetric cryptography can be used to validate an identity.

7. CONCLUSIONS

Cloud-based distributed databases are becoming the standard storage solution for a large class of emerging applications. The growing complexity of the data centers where such databases are deployed and the general trend of overlooking security aspects in new applications, together with the considerable value of stored data and the relevance of the services relying on these storages, make attacking them a very tempting goal.

This paper presented a possible attack on Cassandra, a widely employed NoSQL distributed storage. The attack exploits vulnerabilities of its gossip-based membership protocol to corrupt nodes’ membership views, with the aim of making some read and write requests fail randomly, with a consequent relevant degradation of the performance delivered to the applications above. The attack degrades the expected performance (in terms of throughput and latency) of read operations by one or two orders of magnitude, depending on the selected consistency level, while write operations see an increased failure rate. Additionally, the attack is very difficult to reveal with an intrusion detection system, as the bad behavior could have been generated by some module of the Cassandra system itself (e.g., failure detection). A lightweight solution is proposed to protect Cassandra from this attack; it employs asymmetric cryptography and entails minimal overhead, as shown by the experiments.

8. REFERENCES

[1] R. Baldoni, S. Bonomi, and A. S. Nezhad. An algorithm for implementing BFT registers in distributed systems with bounded churn. In SSS ’11, 2011.
[2] R. Baldoni, M. Platania, L. Querzoni, and S. Scipioni. Practical uniform peer sampling under churn. In ISPDC ’10, 2010.
[3] R. A. Bazzi. Synchronous byzantine quorum systems. Distributed Computing, 13(1):45–52, Jan. 2000.
[4] E. Bortnikov, M. Gurevich, I. Keidar, G. Kliot, and A. Shraer. Brahms: byzantine resilient random membership sampling. In PODC ’08, 2008.
[5] M. Castro and B. Liskov. Practical byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst., 20(4):398–461, Nov. 2002.
[6] Cisco Systems. Cisco Annual Security Report, 2013. Available: http://www.cisco.com/en/US/prod/vpndevc/annual_security_report.html.
[7] DataStax. Case Study: Adobe, 2011. Available: http://www.datastax.com/wp-content/uploads/2012/11/DataStax-CS-Adobe.pdf.
[8] J. R. Douceur. The Sybil attack. In First International Workshop on Peer-to-Peer Systems, IPTPS ’01, pages 251–260, London, UK, 2002. Springer-Verlag.
[9] G. P. Jesi, A. Montresor, and M. van Steen. Secure peer sampling. Computer Networks, 2010.
[10] A. Lakshman and P. Malik. Cassandra: structured storage system on a P2P network. In PODC ’09, 2009.
[11] D. Malkhi and M. Reiter. Byzantine quorum systems. Distributed Computing, 11(4):203–213, Oct. 1998.
[12] Netflix. Astyanax, Cassandra Java client, 2012. Project homepage: https://github.com/Netflix/astyanax.
[13] L. Okman, N. Gal-Oz, Y. Gonen, E. Gudes, and J. Abramov. Security issues in NoSQL databases. In TRUSTCOM ’11, 2011.
[14] F. B. Schneider. Implementing fault-tolerant services using the state machine approach: a tutorial. ACM Computing Surveys, 22(4):299–319, Dec. 1990.
[15] P. Sousa, A. Bessani, M. Correia, N. Neves, and P. Verissimo. Highly available intrusion-tolerant services with proactive-reactive recovery. IEEE Transactions on Parallel and Distributed Systems, 2010.
[16] Symantec Corporation. State of the Data Center Survey, Global Results, 2012. Available: http://bit.ly/OHGNw0.
