
Technical Perspective: A Recipe for Protecting Against Speculation Attacks

By Timothy Sherwood

Communications of the ACM, Vol. 64 No. 12, Page 104


There has been a great deal written about the threat posed by Spectre- and Meltdown-style attacks to our computing infrastructure. The authors of "How to Live in a Post-Meltdown and -Spectre World" (Communications, Dec. 2018, p. 40) rightly note that "Meltdown and Spectre were particularly difficult to patch" and that "the scope of vulnerabilities such as Meltdown and Spectre is so vast that it can be difficult to address." There are many nuances to such an attack (see "Spectre Attacks: Exploiting Speculative Execution," Communications, July 2020, p. 93), but part of the reason these attacks are so problematic is that they describe a new recipe for attacks. Specifically, they show how to use a fundamental aspect of machine operation, speculation, against the memory read protections enforced by that very same machine. While any given instance of the attack might rely on the peculiarities of a specific memory hierarchy or software organization, the recipe itself is surprisingly general.
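The shape of that recipe can be seen in a toy model (a sketch of my own for illustration, not code from either paper; the memory layout, the secret value, and the set standing in for the cache are all invented): a bounds check is bypassed speculatively, the out-of-bounds value steers a secret-dependent cache fill, and the attacker recovers the value from cache state that the misprediction squash never rolls back.

```python
# Toy model of the Spectre "recipe" (illustrative only, not a real exploit).
CACHE_LINES = 256
MEMORY = list(range(16)) + [42]   # a 16-entry array; a "secret" sits just past it

def victim(index):
    """Models a mispredicted bounds check: the access happens speculatively."""
    cache = set()
    # The architectural check (index < 16) will eventually squash this access,
    # but speculation runs ahead and performs it anyway:
    value = MEMORY[index]            # speculative out-of-bounds read
    cache.add(value % CACHE_LINES)   # secret-dependent cache fill
    # Register state is rolled back on squash; cache state (our covert
    # channel) is not.
    return cache

def attacker(cache):
    """Probes every line; the 'fast' (cached) one encodes the secret."""
    for line in range(CACHE_LINES):
        if line in cache:            # models a fast, already-cached probe
            return line
    return None

leaked = attacker(victim(index=16))  # index just past the array bound
```

The point of the sketch is the division of labor: the hardware's own optimism performs the forbidden read, and an ordinary, legal observation of shared state carries it out.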

Many new solutions to these attacks have been proposed since the vulnerability was disclosed, but most address only specific instances of the vulnerability rather than the underlying problem. They can block a particular set of attacks, but not new instances of the recipe. A simple tuning of parameters, a change of exfiltration path, or the use of other micro-architectural conflicts can defeat many of these approaches. Unlike a bug or a bit-flip error, an adversary will purposefully and intelligently find new, unprotected paths to work around a countermeasure. An approach capable of providing long-term protection needs to speak to the fundamental issues at the heart of this new class of attacks. While the following paper is not the end of speculation-based attacks, it may be the beginning of the end.

Nearly everything in a machine short of commit (typically the very last stage of the processor pipeline) happens speculatively, and "make the common case fast" is a mantra etched into the mind of everyone up and down the hardware stack. So, must we give up speculation and timing variation entirely to address speculation-based attacks? One would certainly hope not, because the performance impact would be dire. Instead of abandoning speculation, the following paper proposes a new computer architecture that intelligently limits the impact of speculation in very specific ways: it allows enough predictive execution that reasonable performance can be maintained, while also guaranteeing the memory hierarchy cannot be used as part of such an attack. The key idea is that it is "safe to execute and selectively forward the results of speculative instructions" that read data you wish to keep secret, as long as you can also "prove that the forwarded results do not reach potential covert channels." This is easy to write, harder to realize, and harder still to prove. While I leave it to the authors to describe how the technique works, I find it satisfying both as an engineer (in that I find it very implementable) and as a scientist (because it is formally grounded in a way that gets at a fundamental aspect of computer security). The authors provide one of the very first solutions that is a real recipe for fixing this problem more generally. There is a reason this work has already been honored with a Best Paper Award at the IEEE/ACM International Symposium on Microarchitecture and selected for IEEE Micro's Top Picks of 2019.
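One can sketch the flavor of that key idea in a few lines (my own illustrative reading, not the authors' mechanism; every name here is invented): tag the result of a speculative load as tainted, refuse to forward tainted values to anything that could act as a covert channel, and clear the taint once the load is known to be non-speculative.

```python
# A toy sketch of the "forward only when provably safe" discipline
# (illustrative; not the paper's actual hardware design).

class Value:
    def __init__(self, v, tainted=False):
        self.v = v
        self.tainted = tainted   # True while the producing load is speculative

MEMORY = {0x10: 7}

def speculative_load(addr):
    # The load executes early, but its result carries a taint bit.
    return Value(MEMORY[addr], tainted=True)

def may_forward_to_load(value):
    # Using a tainted value as a load address could imprint it on the
    # memory hierarchy, so forwarding is withheld.
    return not value.tainted

def resolve(value):
    # The branch (or other speculation) resolved: the load is now
    # non-speculative, so the value is safe to forward anywhere.
    value.tainted = False
    return value

v = speculative_load(0x10)
blocked = not may_forward_to_load(v)   # forwarding is delayed while tainted
resolve(v)
allowed = may_forward_to_load(v)       # safe once speculation has resolved
```

The attractive property, and the thing the paper actually proves rather than sketches, is that execution still runs ahead: only the narrow set of forwardings that could reach a covert channel is delayed.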

Looking forward, I cannot help but look back. The threat posed by timing channels, albeit in a different context, was described in this very publication 48 years ago in "A Note on the Confinement Problem" (Communications, Oct. 1973, p. 613). Butler Lampson notes that a "service can transmit information which a concurrently running process can receive by observing the performance of the system," and goes on to warn that the "communication channel thus established is a noisy one, but the techniques of information theory can be used to devise an encoding which will allow the information to get through reliably no matter how small the effects of the service on system performance are, provided they are not zero." This statement remains as true today as it was then. The fact that a channel is noisy, slow, unpredictable, or seemingly difficult to set up is little comfort when one understands that our field has spent decades developing robust means of communication under just such conditions. Two options appear to remain: we accept that arbitrary system state (for example, memory values) will always leak over time and learn to live in that new reality, or we develop new software and hardware techniques, like the ones proposed here, that thoughtfully and even provably limit information flows. Perhaps a mix of these two options is inevitable, but a hardware/software system that can maintain our existing abstractions while providing real clarity on where information can and cannot escape is a noble objective. This paper gives me real hope that such an objective is truly achievable.
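Lampson's warning can be made quantitative with the standard binary symmetric channel from information theory (textbook material, not from either paper): a channel that flips each transmitted bit with probability p has capacity 1 - H(p) bits per use, which is strictly positive for every p other than 0.5. However noisy the timing channel, some encoding gets bits through.

```python
# Capacity of a binary symmetric channel: C = 1 - H(p), where H is the
# binary entropy function. Positive whenever the flip probability p != 0.5.
from math import log2

def binary_entropy(p):
    """H(p) in bits; H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Bits of information per channel use that survive the noise."""
    return 1.0 - binary_entropy(p)

# Even a channel that flips 49% of its bits still carries information,
# just slowly; only a perfectly random channel (p = 0.5) carries none.
residual = bsc_capacity(0.49)
```

This is exactly why "the channel is noisy" is no defense: as long as the effect on system performance is not zero, the capacity is not zero either.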



Timothy Sherwood is a professor in the Department of Computer Science at the University of California, Santa Barbara, CA, USA. He is also the co-founder of Tortuga Logic.



To view the accompanying paper, visit doi.acm.org/10.1145/34912011

Copyright held by author.
