
Crash Consistency

By T. S. Pillai, V. Chidambaram, R. Alagappan, S. Al-Kiswany, A. C. Arpaci-Dusseau, R. H. Arpaci-Dusseau

Communications of the ACM, Vol. 58 No. 10, Pages 46-51



The reading and writing of data, one of the most fundamental aspects of any von Neumann computer, is surprisingly subtle and full of nuance. For example, consider access to a shared memory in a system with multiple processors. While a simple and intuitive approach known as strong consistency is easiest for programmers to understand [14], many weaker models are in widespread use (for example, x86 total store ordering [22]); such approaches improve system performance, but at the cost of making reasoning about system behavior more complex and error prone. Fortunately, a great deal of time and effort has gone into thinking about such memory models [24], and, as a result, most multiprocessor applications are not caught unaware.

Similar subtleties exist in local file systems—those systems that manage data stored in your desktop computer, on your cellphone [13], or that serve as the underlying storage beneath large-scale distributed systems such as Hadoop Distributed File System (HDFS) [23]. Specifically, a pressing challenge for developers trying to write portable applications on local file systems is crash consistency (that is, ensuring application data can be correctly recovered in the event of a sudden power loss or system crash).
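To make the challenge concrete, consider the widely used pattern for updating a file crash-consistently under POSIX semantics: write the new contents to a temporary file, flush it to stable storage, atomically rename it over the target, and then flush the parent directory so the rename itself survives a crash. The sketch below (the function name `atomic_update` and file paths are illustrative, not from the article) shows this pattern; omitting either `fsync` call is exactly the kind of portability bug that makes crash consistency hard to get right.

```python
import os

def atomic_update(path, data):
    """Replace the contents of `path` with `data` crash-consistently.

    After a crash at any point, a reader should observe either the old
    contents or the new contents in full, never a torn or empty file
    (assuming POSIX rename atomicity).
    """
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)          # force the new file's data to stable storage
    finally:
        os.close(fd)
    os.rename(tmp, path)      # atomically switch readers to the new contents
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)         # persist the directory entry for the rename
    finally:
        os.close(dfd)
```

Even this "textbook" sequence relies on file-system-specific guarantees (for example, whether rename is persisted in order with earlier writes), which is precisely the kind of variation across local file systems the article examines.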
