
Researchers Probe Security Through Obscurity

Obfuscation protects code by making it so impenetrable that access to it won't help a hacker understand how it works.

An attempt to prevent the reverse engineering of software has led to an explosion of ideas that could reshape the world of cryptography, even though the core objective of hiding how software operates may prove too difficult to achieve in practice.

Companies would like to protect the intellectual property stored in their software by hiding the way it operates. One option is to encrypt the code held in nonvolatile memory so that it appears in the clear only when decrypted, ready for execution. Some embedded processors now include on-chip decryption engines so that program instructions never appear on the memory bus, where they could be intercepted by a hacker armed with a logic analyzer. Yet there is still the potential for attackers to use techniques such as side-channel analysis to recover and use the encryption key.
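
As a rough illustration of this decrypt-before-execute idea, the Python sketch below stores a program encrypted at rest and decrypts it only at load time. It uses the third-party cryptography package's Fernet recipe as a software stand-in for an on-chip decryption engine; the key handling is purely illustrative, and, as noted above, the key itself would remain a target for side-channel attacks.

```python
# Sketch: keep program code encrypted at rest; decrypt only at load time.
# Requires the third-party "cryptography" package (pip install cryptography).
# A real embedded system would do this in an on-chip decryption engine,
# with the key kept out of reach of the memory bus.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in hardware, a fused or derived device key
cipher = Fernet(key)

source = b"print('proprietary routine running')"
stored_blob = cipher.encrypt(source)  # what would sit in nonvolatile memory

# At run time: decrypt and execute; plaintext instructions exist only briefly.
plaintext = cipher.decrypt(stored_blob)
exec(compile(plaintext, "<decrypted>", "exec"))
```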

Obfuscation takes a different approach, making the code in principle so impenetrable and misleading that access to it does not help the attacker understand how it works. Researchers define useful obfuscation by requiring that the program be so unintelligible that an adversary can derive no more information from the obfuscated program than they could from a black-box implementation, one they can only query on inputs and observe on outputs.
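
The black-box notion can be pictured with a toy Python sketch (the names and values here are made up for illustration, and a Python class of course provides no genuine hiding): the adversary is limited to querying inputs and observing outputs, and virtual black box obfuscation demands that handing over the obfuscated code reveal nothing beyond what such queries already give away.

```python
# Toy picture of "black-box" (oracle) access: the adversary may only query
# the function, never inspect how it is computed. VBB obfuscation asks that
# the obfuscated code leak nothing beyond what such queries reveal.
# (Illustrative only; a Python class provides no genuine hiding.)
SECRET_THRESHOLD = 42            # internal detail an obfuscator must hide

def _proprietary_check(x: int) -> int:
    return 1 if x >= SECRET_THRESHOLD else 0

class BlackBox:
    """Oracle interface: callers see only input/output behavior."""
    def __init__(self, func):
        self._func = func

    def query(self, x: int) -> int:
        return self._func(x)

oracle = BlackBox(_proprietary_check)
print(oracle.query(10), oracle.query(100))   # 0 1 -- all the adversary gets
```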

The virtual black box (VBB) is a concept developed in a 2001 paper by Boaz Barak, then at the Weizmann Institute of Science, and co-workers at several U.S. universities. The paper laid the groundwork for today’s work, but at the same time seemed to dash any hopes: it appeared to prove it is mathematically impossible to provide general-purpose VBB obfuscation. The obfuscation techniques in use today can rely only on heuristics.

Cryptography researcher Sanjam Garg of the University of California, Los Angeles (UCLA), says, “Obfuscation is something for which practitioners are currently using ad hoc and insecure techniques. In other words, the need for obfuscation is so dire that practitioners are willing to even use insecure tools with the hope that they might at least help somewhat.”

Until recently, it looked as though vulnerable, ad hoc obfuscation would be all that software developers could hope for. However, scientists, including members of the team that worked on the 2001 paper, have explored ways to find chinks in the armor of the impossibility proof. Garg argues the paper’s proof is based on a highly unusual form of code that is almost never encountered in real life.

“The fact that we cannot VBB obfuscate, that is good for our understanding, but does not limit us in any real way. It is possible that we might be able to VBB obfuscate everything that is more natural,” Garg says.

The question is the degree to which a mathematically proven technique can obfuscate arbitrary pieces of source code into chunks of software as unfathomable as a black box.

Brent Waters, from the department of computer science at the University of Texas at Austin, says, “It is very possible that almost all of the interesting programs and functionalities that we would care about can be VBB-obfuscated. The problem is, right now we do not know how to characterize which programs can be VBB-obfuscated. We have strong reason to believe that some weak functionalities can be, and that some contrived ones cannot. This leaves a huge amount of ground in the middle that is unknown.”

A breakthrough came last year, quickly followed by a barrage of papers that built on the work. UCLA-based Amit Sahai, who worked with Barak on the original paper, teamed with other authors, including Waters and Garg, to create a framework under which attackers cannot tell apart the obfuscations of two programs that compute exactly the same function. Barak and colleagues had named this property indistinguishability obfuscation (IO) in the 2001 paper.

Cornell University post-doctoral researcher Elette Boyle, who presented work on a related form of obfuscation at the Theory of Cryptography Conference in San Diego earlier this year, described it as a “seminal paper” that threw open the doors of obfuscation research once again. “After this work, there has been a flood of new results, but it’s a little hard to work with. You really have to have circuits that are completely equivalent; for the same input, they have the same output,” she says.

Although it appears to have limited applicability, since it only guarantees that an attacker cannot tell apart the obfuscations of two functionally equivalent programs, the existence of a usable IO technique demonstrated a new angle of attack on the impossibility proof. The problem is building on this base to create a general-purpose theory of obfuscation. Even for the form of IO developed initially by Garg and colleagues, the types of programs it could handle were severely limited. And the attacker in the proof was assumed to perform only a restricted set of actions, a limitation a real-world adversary would not respect.
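
That guarantee is usually expressed as a distinguishing game, sketched below in Python with a placeholder obfuscator (the two programs and the game itself are illustrative, not part of any real construction): two functionally equivalent programs are obfuscated, and the attacker should do no better than chance at guessing which one produced the code it is shown.

```python
# Sketch of the indistinguishability game behind IO (placeholder obfuscator).
# P0 and P1 compute the same function in different ways; a secure IO scheme
# would make their obfuscations computationally indistinguishable.
import random

def P0(x: int) -> int:
    return (x + x) % 7            # one way to compute 2x mod 7

def P1(x: int) -> int:
    return (2 * x) % 7            # a functionally equivalent alternative

def toy_obfuscate(prog):
    # Placeholder: a real IO scheme would output scrambled code, not the program.
    return prog

def game(adversary) -> bool:
    b = random.randrange(2)
    obf = toy_obfuscate(P0 if b == 0 else P1)
    guess = adversary(obf)        # adversary may run obf on inputs of its choice
    return guess == b             # IO: success rate should stay near 1/2

print(game(lambda obf: random.randrange(2)))  # blind guessing wins half the time
```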


In the space of less than a year, researchers have expanded the range of programs that IO and related forms of obfuscation can handle, and greatly reduced the need for artificial assumptions. Yet Garg says it could still take more than a decade before obfuscation moves beyond IO to forms that truly hide the internal workings of software. It will be hard to maintain the pace of development seen over the past year, he says.

“We need multiple breakthroughs and unfortunately, breakthroughs happen at random times and it is hard to predict when they will happen. There has been a lot of activity since last summer but, to be completely honest, this is an outcome of a gold rush,” Garg says.

Even though it does not make general-purpose obfuscation possible, the emergence of IO has suggested other possibilities to researchers in the field of encryption.

“When the IO definition first was introduced, also in the Barak et al. paper, it did not receive much attention, relatively, as it was not clear how useful it was. However, recent work shows that it is a very powerful tool for designing cryptography,” Waters says.

Researchers have shown that IO can form the underpinnings of new encryption techniques. One possible application is “functional encryption,” which would make it possible to selectively hide parts of the same block of data from different users by issuing them different decryption keys, rather than today’s all-or-nothing approach, in which a single key unlocks the entire file.
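
The interface researchers have in mind can be sketched as follows. This is a deliberately insecure, toy stand-in (the setup, keygen, encrypt, and decrypt routines and the JSON “ciphertext” are placeholders, not a real scheme): the point is only to show the shape of functional encryption, in which the holder of a key for a function f learns f of the data and, in a real construction, nothing more.

```python
# Toy interface for functional encryption (insecure stand-in: keys simply
# carry the function they are allowed to compute, and the "ciphertext" is
# plain JSON). A real scheme would guarantee that a key for f reveals
# f(data) and nothing else about the encrypted record.
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class FunctionKey:
    f: Callable[[dict], object]   # the only view of the data this key allows

def setup():
    return "master-public-key", "master-secret-key"   # placeholders

def keygen(msk, f) -> FunctionKey:
    return FunctionKey(f)

def encrypt(mpk, record: dict) -> bytes:
    return json.dumps(record).encode()   # placeholder "ciphertext"

def decrypt(key: FunctionKey, ct: bytes):
    record = json.loads(ct.decode())
    return key.f(record)                 # reveal only f(record)

mpk, msk = setup()
ct = encrypt(mpk, {"name": "Alice", "salary": 95000, "dept": "R&D"})
hr_key = keygen(msk, lambda r: r["salary"])    # HR learns only the salary
audit_key = keygen(msk, lambda r: r["dept"])   # auditors learn only the dept
print(decrypt(hr_key, ct), decrypt(audit_key, ct))
```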

Says Waters, “There is a fundamental gap between how we think of sharing data and the actual mechanisms for doing it.” What we have today, he adds, are engineering solutions that work around this problem. “This is where the science of cryptography comes in: rethinking what we really mean by encryption.”

Using cryptography built on IO, researchers such as Waters have shown that more flexible types of encryption are possible. For example, IO-based functional encryption makes it theoretically possible to issue new keys that unlock different segments of the data even after the data has been encrypted under a master key. Boyle and other researchers have suggested further types of encryption derived from IO that would allow data to be unlocked by new kinds of keys, such as the solution to an NP-hard mathematical problem. Although the new types of encryption that have emerged so far may not have immediate applications, researchers are confident the work will greatly expand the applicability of encryption.

The ultimate hope is that work in this field will sidestep the need to use specialized functions, not just to provide a level of obfuscation but to build stronger security than we have today. Potentially, the schemes will not lean so heavily on assumptions about computational complexity, so that the arrival of more powerful computers does not weaken the encryption we employ.

Even if further research dashes hopes of building a mathematically proven general-purpose software obfuscator, usable forms have already emerged that may make it possible to hide key elements of a program and deliver at least partially on the promise of mathematically proven obfuscation.

“Imagine you have some buggy piece of software that crashes on bad inputs; you could apply a patch that filters out the bad inputs, and obfuscate it so that people can’t look at it and learn the bad inputs that will crash the software,” says Omer Paneth, a Ph.D. student in the computer science department of Boston University who is researching cryptography. Paneth says this activity is possible if the filter is a member of the family of “evasive functions,” functions that return zero for almost every input, but a useful result for a select few.
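
Paneth’s filter example can be made concrete with a standard trick, sketched below (illustrative only, far from provably secure obfuscation, and the “crashing inputs” are hypothetical): the patch stores only salted hashes of the known bad inputs, so reading it does not directly reveal which inputs crash the software.

```python
# Sketch of an "evasive" input filter: it returns 1 only for a handful of
# known-bad inputs and 0 for everything else. In a shipped patch, only the
# salt and digests would be distributed; the bad strings appear here solely
# to build the demo. (Illustrative only; not provably secure obfuscation.)
import hashlib
import os

SALT = os.urandom(16)
_BAD_INPUT_DIGESTS = {
    hashlib.sha256(SALT + bad.encode()).hexdigest()
    for bad in ("crash-me", "0xDEADBEEF")      # hypothetical crashing inputs
}

def is_bad_input(user_input: str) -> int:
    """Evasive function: 1 for the rare crashing inputs, 0 almost everywhere."""
    digest = hashlib.sha256(SALT + user_input.encode()).hexdigest()
    return 1 if digest in _BAD_INPUT_DIGESTS else 0

def patched_entry_point(user_input: str) -> None:
    if is_bad_input(user_input):
        return                      # silently drop inputs that would crash
    print(f"processing {user_input!r}")

patched_entry_point("hello")        # processed
patched_entry_point("crash-me")     # filtered out
```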

Evasive functions make it hard for an adversary to learn anything from their behavior, and they also escape the impossibility proof provided by the 2001 paper, according to Paneth. Yet researchers such as Garg believe the science will expand further to allow more widespread use of obfuscation.


“I would refrain from suggesting that we will be able to run Microsoft Office in an obfuscated form very soon, but I don’t see why programs of a reasonable size cannot be obfuscated. This will need a huge research as well as a huge engineering effort, but I don’t see why this couldn’t be done,” Garg says.


Further Reading

Barak, B., Goldreich, O., Impagliazzo, R., Rudich, S., Sahai, A., Vadhan, S., Yang, K.
On the (Im)possibility of Obfuscating Programs, Advances in Cryptology (2001); 2010 revision: http://www.wisdom.weizmann.ac.il/~oded/PS/obf4.pdf

Garg, S., Gentry, C., Halevi, S., Raykova, M., Sahai, A., Waters, B.
Candidate Indistinguishability Obfuscation and Functional Encryption for All Circuits, 54th Annual Symposium on Foundations of Computer Science (FOCS) (2013)

Sahai, A., Waters, B.
How to Use Indistinguishability Obfuscation: Deniable Encryption, and More, IACR Cryptology ePrint Archive 2013: 454 (2013) http://eprint.iacr.org/2013/454

Barak, B., Bitansky, N., Canetti, R., Kalai, Y.T., Paneth, O., Sahai, A.
Obfuscation for Evasive Functions, Theory of Cryptography, Lecture Notes in Computer Science, Volume 8349, pp. 26-51 (2014)

Gentry, C., Lewko, A., Sahai, A., Waters, B.
Indistinguishability Obfuscation from the Multilinear Subgroup Elimination Assumption, IACR Cryptology ePrint Archive 2014: 309 (2014) http://eprint.iacr.org/2014/309


Figures

Figure. Actual obfuscated text from Ixia Connect’s JavaScript Obfuscations Contest.

