Opinion
Security and Privacy

Computer Security Is Broken: Can Better Hardware Help Fix It?

Computer security problems have far exceeded the limits of the human brain. What can we do about it?

In 1967, the Silver Bridge collapsed into the Ohio River during rush hour. Instead of redundancy, the bridge relied on high-strength steel, so the failure of a single eyebar was catastrophic.a Today’s computing devices resemble the Silver Bridge, but are far more complicated. They have billions of lines of code, logic gates, and other elements that must work perfectly; otherwise, adversaries can compromise the system. The individual failure rates of many of these components are small, but aggregate complexity makes vulnerability statistically certain.

This is a scaling problem. Security-critical aspects of our computing platforms have seen exponential increases in complexity, rapidly overwhelming defensive improvements. The futility of the situation leads to illogical reasoning that structural engineers would never accept, such as claiming that obviously weak systems are “strong” simply because we are ignorant as to which specific elements will fail.


Security Building Blocks

To build strong security systems, we need reliable building blocks. Cryptographic algorithms are arguably the most important building block we have today. Well-designed algorithms can provide extraordinary strength against certain attacks. For example, Diffie-Hellman, RSA, and triple DES—known since the 1970s—continue to provide practical security today if sufficiently large keys are used.


To build strong security systems, we need better building blocks.


Cryptographic protocols can greatly simplify security by removing the need to trust communications channels, but in practice they have proven deceptively tricky. In 1996, I co-authored the SSL v3.0 protocol for Netscape, which became the basis for the TLS standard. Despite nearly 20 years of extensive analysis, researchers have identified new issues and corner cases in SSL/TLS even relatively recently. Still, I believe we are reaching the point of having cryptographically sound protocols. Although breakthroughs in quantum computing or other attacks are conceivable, I am cautiously optimistic that current versions of the TLS standard (when conservative key sizes and configurations are chosen) will resist cryptanalysis over many decades.

Unfortunately, my optimism does not extend to actual implementations. Most computing devices running SSL/TLS are riddled with vulnerabilities that allow adversaries to make an end-run around the cryptography. For example, errant pointers in device drivers or bugs in CPU memory management units can destroy security for all software on a device. To make progress, we need another building block: simple, high-assurance hardware for secure computation.


Secure Computation in Hardware

Consider the problem of digitally signing messages while securing a private key. Meaningful security assurances are impossible for software implemented on a typical PC or smartphone due to reliance on complex and untrustworthy components, including the hardware, operating system, and so forth. Alternatively, if the computation is moved to isolated hardware, the private key’s security depends only on the logic of a comparatively simple hardware block (see Figure 1). The amount of security-critical logic is reduced by many orders of magnitude, turning an unsolvable security problem into a reasonably well-bounded one.
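As a hedged sketch of this idea (not any vendor's actual interface), the isolated block can be pictured as exposing a single narrow operation while the key never crosses the boundary. Here HMAC-SHA256 stands in for a real signature scheme, and all names are illustrative:

```python
import hashlib
import hmac
import os

class SecureSignBlock:
    """Hypothetical fixed-function signing block (illustrative only).

    The key is generated inside the block and never exported; the rest
    of the system sees only one operation: sign(message) -> tag.
    """
    def __init__(self):
        self.__key = os.urandom(32)  # created inside the security perimeter

    def sign(self, message: bytes) -> bytes:
        # The only exposed operation; no API returns the key itself.
        return hmac.new(self.__key, message, hashlib.sha256).digest()

block = SecureSignBlock()
tag = block.sign(b"firmware v1.2")
```

The security-critical surface shrinks to this one interface: a compromise of the operating system can request signatures, but cannot read the key out.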

In the 1990s, I started investigating secure hardware, assuming that simple mathematical operations like encryption and digital signing would be straightforward to secure. The problems I encountered were a lot more interesting and less intuitive than I expected.

I noticed small data-dependent correlations in timing measurements of devices’ cryptographic operations. Cryptographic algorithms are extremely brittle: they are very difficult to break by analyzing their input and output messages alone, but can fail if attackers obtain any other information about the computation. The timing variations violated the algorithms’ security model, and in practice allowed me to factor RSA keys and break other algorithms.b
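The mechanism is easy to illustrate. In textbook square-and-multiply modular exponentiation (the core operation of RSA), each 1 bit of the secret exponent triggers an extra multiply, so the amount of work, and hence the running time, depends on the key. A minimal Python sketch of the operation count, not attack code:

```python
def modexp(base, exp, mod):
    """Left-to-right square-and-multiply, counting operations."""
    result, ops = 1, 0
    for bit in bin(exp)[2:]:
        result = result * result % mod  # square on every bit
        ops += 1
        if bit == "1":
            result = result * base % mod  # extra multiply only on 1 bits
            ops += 1
    return result, ops

# More 1 bits in the exponent -> more multiplies -> a longer runtime,
# which is exactly the data-dependent variation a timing attack exploits.
_, ops_sparse = modexp(7, 0b1000000, 101)
_, ops_dense = modexp(7, 0b1111111, 101)
```

Constant-time implementations remove this dependence, but naive code like the above leaks.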

I bought the cheapest analog oscilloscope from Fry’s Electronics and placed a resistor in the ground input of a chip doing cryptographic operations. The scope showed power consumption varying with the pattern of branches taken by the device’s processor. I could easily identify the conditions for these branches—the bits of the secret key. Upgrading to a digital storage oscilloscope enabled far more advanced analysis methods. With my colleagues Joshua Jaffe and Benjamin Jun, I developed statistical techniques (Differential Power Analysis, or DPA) to solve for keys by leveraging tiny correlations in noisy power consumption or RF measurements.c
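A toy simulation conveys the statistical idea. This correlation-based scoring is a simplified relative of DPA (closer to what is now called correlation power analysis); the leakage model, noise level, and trace count are all illustrative:

```python
import random

def hw(x):
    """Hamming weight: number of 1 bits."""
    return bin(x).count("1")

SECRET = 0x5A  # the key byte the attacker wants to recover

def measure(pt):
    # Simulated device: "power" leaks the Hamming weight of
    # (plaintext XOR key), buried in Gaussian noise.
    return hw(pt ^ SECRET) + random.gauss(0, 1.0)

random.seed(1)
pts = [random.randrange(256) for _ in range(1000)]
traces = [measure(p) for p in pts]

def corr(xs, ys):
    # Pearson correlation between predicted leakage and measurements.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Score every key guess; the correct guess predicts the leakage best,
# so tiny correlations in noisy measurements single out the key.
best = max(range(256), key=lambda g: corr([hw(p ^ g) for p in pts], traces))
```

With enough traces, averaging washes out the noise and `best` converges on the secret byte, which is why unprotected implementations fall to these attacks even when each individual measurement looks like noise.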

Side channels weren’t the only issue. For example, scan chains and other test modes can be abused by attackers. Researchers and pay TV pirates independently discovered that glitches and other computation errors can be devastating for security.d

Fortunately, practical and effective solutions to these issues have been found and implemented. For example, nearly 10 billion chips are made annually with DPA countermeasures. Although unexpected new categories of attack may yet be discovered, based on what we know, a well-designed chip can be robust against non-invasive attacks. Strategies for addressing invasive attacks have also improved greatly, although they still generally assume some degree of security through obscurity.


Adding Secure Computation to Legacy Architectures

Today’s computing architectures are too entrenched to abandon, yet too complex to secure. It is practical, however, to add extra hardware where critical operations can be isolated. Actual efforts to do this vary wildly in cost, programming models, features, and level of security assurance.

Early attempts used standalone security chips, such as SIM cards in mobile devices, TPMs in PCs, and conditional access cards in pay TV systems. These were limited to single-purpose applications that could bear the cost—typically a dollar or more. The security chip’s electrical interface was also a security risk, for example allowing pay TV pirates to steal video decryption keys for redistribution.


Better hardware foundations can enable a new evolutionary process that is essential for the technology industry’s future.


Another strategy is to add security modes to existing designs. Because the existing processor and other logic are reused, these approaches add almost no die area. Unfortunately, this reuse brings significant security risks due to bugs in the shared logic and separation mechanisms. Intel’s Software Guard Extensions (SGX)e leave almost all of the (very complexf) processor in the security perimeter, and do not even appear to mitigate side channel or glitch attacks. Trusted Execution Environments (TEEs) typically use ARM’s TrustZone CPU mode to try to isolate an independent “trusted” operating system, but security dependencies include the CPU(s), the chip’s test/debug modes, the memory subsystem/RAM, the TEE operating system, and other high-privilege software.

The approach I find most compelling is to integrate security blocks onto large multi-function chips. These cores can create an intra-chip security perimeter that does not trust the RAM, legacy processors, operating system, or other logic. In addition to providing much better security integration than separate chips, on-die cores cost 1–2 orders of magnitude less. Examples of on-chip security hardware include Apple’s Secure Enclave, AMD’s Secure Processor, and Rambus’s CryptoManager Cores. Depending on the application, a security core may offload specific functions like authentication, or can be programmable. Over time, these secure domains can improve and evolve to take on a growing range of security-sensitive operations.


Limits of Human Comprehension

Security building blocks must be simple enough for people to comprehend their intended security properties. With remarkable regularity, teams working on data security dramatically overestimate what can be implemented safely. I set a requirement when working on the design of SSL v3.0 that a technically proficient person could read and understand the protocol in one day. Despite this, multiple reviewers and I missed several important but subtle design problems, some of which were not found until many years later.

Vulnerability risks grow with the number of potential interactions between elements in the system. If interactions are not constrained, risks scale as the square of the number of elements (see Figure 2).

Although secure hardware blocks may appear to be simple, they are still challenging to verify. Formal methods, security proofs, and static analysis tools can help to some extent by augmenting our human brains. Still, there are gaps between these methods and the messiness of the real world. As a result, these approaches need to be combined with careful efforts to keep complexity under control. For example, expanding a system from 8 to 11 elements approximately doubles the number of potential interactions. A tool that eliminated 50% of defects would be extraordinary, yet this modest increase in complexity could exhaust its benefits. Nevertheless, these approaches can help us extend our abilities as we work to create new building blocks.
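The arithmetic behind the 8-to-11 example is simple: with unconstrained pairwise interactions, n elements yield n(n−1)/2 potential interactions.

```python
def interactions(n):
    # Unconstrained pairwise interactions among n system elements.
    return n * (n - 1) // 2

pairs_8 = interactions(8)    # 28 potential interactions
pairs_11 = interactions(11)  # 55 -- roughly double, from adding just 3 elements
```

Adding three elements nearly doubles the interaction count, which is why even a tool that eliminated half of all defects could be consumed by a seemingly modest growth in system size.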


Security and the Technology Industry’s Future

The benefits of feature enhancements to existing products have diminishing returns. The most important capabilities get implemented first. Doubling the lines of code in a word processor will not make the program twice as useful. A smartphone with two CPUs is only a little better than a phone with one.

From a security perspective, extra features can create risks that scale faster than the increase in complexity (see Figure 3). As a result, the technology industry faces a troubling curve: as complexity increases, the benefits from added features are undermined and ultimately overwhelmed by risk. When more advanced products become less valuable, innovation stalls.

Many applications are near or already beyond the point where value begins to decline without new approaches to security. Current efforts to build the ‘Internet of Things’ or ‘Smart Cities’ using conventional hardware and operating systems will produce connected systems that are plagued with hidden vulnerabilities. The costs of coping with these weaknesses, and the serious failures that will occur anyway, can easily overwhelm the benefits.

Secure compute building blocks do not eliminate the link between complexity and risk, but can fundamentally change the risk calculus. Critical functions can scale independently from each other and from the rest of the system. Each security-relevant use case is exposed to dramatically less overall complexity and can be optimized separately on the value/complexity curve.

Although Moore’s Law is slowing, transistor costs will continue to fall. If hardware budgets for security stay constant, the number of security blocks that can be added to each chip will increase exponentially. These blocks can be tailored for different use cases. They can also be organized with redundancy to avoid single points of failure like the one that doomed the Silver Bridge.

Better hardware foundations can enable a new evolutionary process that is essential for the technology industry’s future. Problems that are unsolvable today due to unconstrained interconnectedness can be isolated. Over time, secure domains can improve and evolve, addressing a growing range of security-sensitive needs. Eventually, claims of security may even be based on a realistic understanding of what humans can successfully implement.


Figures

F1 Figure 1. A simple fixed-function secure computation block.

F2 Figure 2. Vulnerability risks increase with the number of potential interactions between system elements.

F3 Figure 3. Increased complexity increases risk.

