Inside Risks: Risks of Relying on Cryptography

Cryptography is often treated as if it were magic security dust: "sprinkle some on your system and it becomes secure, and you stay secure as long as the key length is large enough: 112 bits, 128 bits, 256 bits" (I’ve even seen companies boast of 16,000 bits). "Sure, there are always new developments in cryptanalysis, but we’ve never seen an operationally useful cryptanalytic attack against a standard algorithm. Even the best analyses of DES are no better than brute force in most operational situations. As long as you use a conservative published algorithm, you’re secure."

This just isn’t true. We’ve seen attacks that go around the mathematics of cryptography and beyond traditional cryptanalysis, breaking systems in ways that are new, different, and unexpected. For example:

  • Using information about the timing, power consumption, and radiation of a device while it executes a cryptographic algorithm, cryptanalysts have been able to break smart cards and other would-be secure tokens. These are called "side-channel attacks" (a minimal timing-leak sketch appears after this list).
  • By forcing faults during operation, cryptanalysts have been able to break even more smart cards. This is called "failure analysis." Similarly, cryptanalysts have been able to break other algorithms based on how systems respond to legitimate errors.
  • One researcher was able to break messages signed with RSA when they were formatted according to the PKCS standard. He did not break RSA, but rather the way it was used. Just think of the beauty: we don’t know how to factor large numbers efficiently, and we don’t know how to break RSA. But if you use RSA in a common way, then in some implementations it is possible to break the security of RSA … without breaking RSA.
  • Cryptanalysts have broken many systems by attacking the pseudorandom number generators used to supply cryptographic keys. The cryptographic algorithms might be secure, but the key-generation procedures were not. Again, think of the beauty: the algorithm is secure, but the method used to produce its keys has a weakness, which means there aren’t as many possible keys as there should be (see the key-generation sketch after this list).
  • Researchers have broken cryptographic systems by looking at the way different keys are related to each other. Each key might be secure, but the combination of several related keys can be enough to cryptanalyze the system.
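
To make the side-channel point concrete, here is a minimal sketch in Python (with hypothetical function names, not code from any system the column discusses) of how a naive secret comparison leaks timing information, and how a constant-time comparison removes the leak:

    import hmac

    def naive_compare(secret: bytes, guess: bytes) -> bool:
        # Returns as soon as a byte differs, so the running time reveals how
        # many leading bytes of the guess are correct -- exactly the kind of
        # out-of-band signal a side-channel attack exploits.
        if len(secret) != len(guess):
            return False
        for s, g in zip(secret, guess):
            if s != g:
                return False
        return True

    def constant_time_compare(secret: bytes, guess: bytes) -> bool:
        # hmac.compare_digest examines every byte regardless of mismatches,
        # removing the timing signal the naive version leaks.
        return hmac.compare_digest(secret, guess)

An attacker who can measure response times precisely enough can recover the secret one byte at a time from the naive version; the mathematics of the underlying algorithm is never touched.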

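The key-generation weakness is just as easy to illustrate. The sketch below (again Python, and again a hypothetical example rather than any specific system described here) seeds a general-purpose pseudorandom number generator with a guessable timestamp, so a nominally 128-bit key is drawn from a keyspace small enough to search; the alternative draws the key from the operating system’s cryptographic generator instead:

    import random
    import secrets
    import time

    def weak_key() -> bytes:
        # Deterministic Mersenne Twister seeded with a coarse, guessable value:
        # an attacker who can estimate when the key was generated can simply
        # re-derive it, so the effective keyspace is tiny compared with 2**128.
        rng = random.Random(int(time.time()))
        return bytes(rng.getrandbits(8) for _ in range(16))

    def strong_key() -> bytes:
        # 16 bytes (128 bits) from the OS cryptographically secure generator.
        return secrets.token_bytes(16)
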
The common thread through all of these exploits is that they push the envelope of what constitutes cryptanalysis by using out-of-band information to determine the keys. Before side-channel attacks, the open cryptographic community did not think about using information other than plaintext and ciphertext to attack algorithms. After the first article on the subject appeared, researchers began to look at invasive side channels, attacks based on introducing transient and permanent faults, and other side channels. Suddenly there was a whole new way to do cryptanalysis.

Several years ago, I was talking with an NSA employee about a particular exploit. He explained how a system was broken; it was a sneaky attack, one that I didn’t think should even count. "That’s cheating," I said. He looked at me as if I’d just arrived from Neptune.

"Defense against cheating" (that is, not playing by the assumed rules) is one of the basic tenets of security engineering. Conventional engineering is about making things work. It’s the genesis of the term "hack," as in "He worked all night and hacked the code together." The code works; it doesn’t matter what it looks like. Security engineering is different; it’s about making sure things don’t do something they shouldn’t. It’s making sure security isn’t broken, even in the presence of a malicious adversary who does everything in his power to make sure things don’t work in the worst possible way at the worst possible times. A good attack is one that the engineers never thought about.

Defending against these unknown attacks is impossible, but the risk can be mitigated with good system design. The mantra of any good security engineer is: "Security is not a product, but a process." It’s more than designing strong cryptography into a system; it’s designing the entire system so that all security measures, including cryptography, work together. It’s designing the entire system so that when an unexpected attack arrives, the system can be upgraded and resecured. It’s never a matter of "if a security flaw is found," but rather "when a security flaw is found."

This isn’t a temporary problem. Cryptanalysts will forever be pushing the envelope of attacks. And whenever crypto is used to protect massive financial resources (especially with worldwide master keys), these violations of designers’ assumptions can be expected to be exploited more aggressively by malicious attackers. As our society becomes more reliant on a digital infrastructure, the process of security must be designed in from the beginning.
