
Quantifying Eventual Consistency with PBS

By Peter Bailis, Shivaram Venkataraman, Michael J. Franklin, Joseph M. Hellerstein, Ion Stoica

Communications of the ACM, Vol. 57 No. 8, Pages 93-102
DOI: 10.1145/2632792


Data replication results in a fundamental trade-off between operation latency and consistency. At the weak end of the spectrum of possible consistency models is eventual consistency, which provides no limit to the staleness of data returned. However, anecdotally, eventual consistency is often "good enough" for practitioners given its latency and availability benefits. In this work, we explain this phenomenon and demonstrate that, despite their weak guarantees, eventually consistent systems regularly return consistent data while providing lower latency than their strongly consistent counterparts. To quantify the behavior of eventually consistent stores, we introduce Probabilistically Bounded Staleness (PBS), a consistency model that provides expected bounds on data staleness with respect to both versions and wall clock time. We derive a closed-form solution for version-based staleness and model real-time staleness for a large class of quorum-replicated, Dynamo-style stores. Using PBS, we measure the trade-off between latency and consistency for partial, non-overlapping quorum systems under Internet production workloads. We quantitatively demonstrate how and why eventually consistent systems frequently return consistent data within tens of milliseconds while offering large latency benefits.
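To make the version-based bound concrete, the following sketch computes the probability that a read returns a value at most k versions stale under the quorum model sketched above: N replicas, each write acknowledged by W of them, each read collecting responses from R replicas chosen uniformly at random, and successive writes treated as independent. The function name and example parameters are ours, for illustration only.

from math import comb

def p_consistent_k(n: int, r: int, w: int, k: int) -> float:
    """Probability that a read quorum of size r returns a version at
    most k versions stale, given n replicas and write quorums of size w.

    A read misses a given write when its r replicas avoid all w
    replicas that acknowledged that write: C(n-w, r) / C(n, r).
    The read is more than k versions stale only if it misses each of
    the last k writes, treated here as independent events.
    """
    if r + w > n:
        return 1.0  # strict quorums: read and write quorums always overlap
    p_miss = comb(n - w, r) / comb(n, r)
    return 1.0 - p_miss ** k

# Dynamo-style example: n=3 replicas, reads and writes each wait for one ack.
for k in (1, 2, 3):
    print(f"k={k}: P(at most k versions stale) = {p_consistent_k(3, 1, 1, k):.3f}")

With n = 3 and r = w = 1, a single read returns the latest version only one time in three, yet the probability of being within three versions of the latest already exceeds 70%; this is the flavor of expectation that PBS makes precise.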


1. Introduction

Modern distributed data stores need to be scalable, highly available, and fast. These systems typically replicate data across different machines, and increasingly across datacenters, for at least two reasons: first, to provide availability when components fail and, second, to provide improved performance by serving requests from multiple replicas. Configuring and maintaining replicated data has significant consequences for application and data store design [1]. Performance at scale is critical for a large class of applications and, in practice, increased latencies may correspond to large amounts of lost revenue [22]. For example, at Amazon, 100 ms of additional latency resulted in a 1% drop in sales [15], while 500 ms of additional latency in Google's search resulted in a corresponding 20% decrease in traffic [16]. However, lowering latency in distributed data stores has a cost: contacting fewer replicas for each operation can adversely impact achievable semantic guarantees.
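The latency side of this trade-off is easy to illustrate. In a Dynamo-style store, a coordinator forwards each operation to all N replicas and returns once the first R (for reads) or W (for writes) acknowledgments arrive, so an operation's latency is the R-th (or W-th) fastest response time. The simulation below is a sketch under assumed conditions: the exponential response-time model, its 5 ms mean, and the function name are our illustrative choices, not measurements from the paper.

import random

def quorum_latency_ms(n: int, r: int, trials: int = 100_000,
                      mean_ms: float = 5.0) -> float:
    """Median latency of waiting for the first r of n replica responses,
    with per-replica response times drawn i.i.d. from an exponential
    distribution (an assumed model, not measured data)."""
    samples = []
    for _ in range(trials):
        times = sorted(random.expovariate(1.0 / mean_ms) for _ in range(n))
        samples.append(times[r - 1])  # the r-th fastest response
    samples.sort()
    return samples[trials // 2]

for r in (1, 2, 3):
    print(f"N=3, R={r}: median latency ~ {quorum_latency_ms(3, r):.2f} ms")

Waiting on a single replica (R = 1) is consistently the cheapest configuration, while a full quorum (R = N) pays for the slowest straggler; this is precisely the latency benefit that draws operators toward partial quorums despite their weaker guarantees.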
