
Privacy-Preserving Network Forensics

Privacy-preserving attribution of IP packets can help balance forensics with an individual's right to privacy.

Research in network security has traditionally focused on defense—mechanisms designed to impede an adversary. However, paraphrasing security expert Butler Lampson, practical security requires a balance between defense and deterrence. While defense may block an adversary’s current attacks, only an effective deterrent can prevent the adversary from choosing to attack in the first place. But creating such a deterrent is usually predicated on an effective means of attribution—tying an individual to an action.

In the physical world, this link is established through concrete evidence (such as DNA, fingerprints, and writing samples), but the Internet has no such robust forensic trail. Indeed, functional anonymity is implicit in the Internet’s architecture, since the lowest-level identifiers—network addresses (IP addresses)—are inherently virtual and insecure. It can be extremely challenging to attribute an online action to a physical origin, let alone to a particular individual. Reducing this expectation of anonymity even slightly can potentially disincentivize a range of criminal activity and lengthen the effective lifetime of defense mechanisms.

Compelling though this line of thinking may be, there is a natural tension between the need for attribution and user expectations of privacy. While the public generally appreciates that criminal acts should be subject to scrutiny, civil libertarians are considerably less sanguine about exposing identifying information as a matter of course. Indeed, a recently leaked document, allegedly of International Telecommunications Union provenance, lends credence to libertarian fears, motivating the need for network-level “IP trace-back” capabilities with a government’s desire to identify anonymous political opponents.12 Though this is but one example, it is time to explore technical solutions that balance the enforcement interests of the state and the privacy interests of individuals.

We seek to achieve such a balance by introducing a new network-layer capability we call privacy-preserving forensic attribution. We propose a packet-level cryptographic signature mechanism allowing properly authorized parties to examine any packet, even those logged months prior, unambiguously identifying the physical machine that sent it. However, absent express authorization, packet signatures do not expose identifying information. Finally, we enforce the correct use of this mechanism by allowing any network element to verify the validity of the signatures it receives.

Our goal is principally to assess the viability of privacy-preserving attribution. Over the past four years, we have built a prototype system called Clue to explore the practical engineering challenges of building a privacy-preserving forensic-attribution capability. This illuminating experience revealed the architectural requirements of our approach while forcing us to confront the challenges of the underlying cryptographic overhead. Surprisingly, we found that much of the overhead can be hidden or amortized through careful protocol design alone. Thus, even our untuned user-level software prototype adds less than 30ms of latency to interactive traffic and achieves bulk TCP throughput exceeding 17Mbps. Moreover, this throughput, which is significantly greater than a typical broadband access connection, is limited by the speed of the receiver; aggregate server throughput can be considerably greater. While numerous challenges remain, our research demonstrates the feasibility of privacy-preserving forensic attribution, encouraging wider consideration of our approach.

Motivating Scenarios

Forensic attribution would create a fundamentally new network-layer capability with numerous potential applications, including the subset we survey here. For certain types of crimes, law-enforcement officers routinely face the challenge of mapping between traffic received at some point in the network and the physical device of origin. Establishing this mapping would allow investigators to determine whether the same device was used in multiple crimes, whether a particular activity was perpetrated by a known device, and potentially even the location of a targeted device via IP geolocation.

A concrete example comes from the following email one of the authors received from a corporal in the Rhode Island State Police Department: “We are currently attempting to locate an international fugitive who is wanted for [a violent crime]. We have identified a Pocket PC device attached to the Internet which the [fugitive] is apparently using …”

Though unable to discuss the details of the case publicly, we can consider the model: The police have a single packet trace known to be from the fugitive’s device (perhaps a threatening email message) and now seek to determine if other threatening messages were also sent from the same device, thereby identifying the fugitive’s current IP address and, hence, geographic area of operation.

It is also increasingly common for authorities to recover computer equipment when suspects are taken into custody. Tying such equipment to other online actions, especially beyond a reasonable doubt, is challenging absent a strong forensic identifier; such an identifier would allow a recovered laptop to be directly and unambiguously bound to particular messages logged by law enforcement.

Background and Related Work

The value of forensic attribution—use of technical means to establish the presence of a person or object at a crime scene after the fact—has a long history in law enforcement, dating to the late 19th century. Lacking an eyewitness to a crime, forensic methods often become a critical tool in an investigation.

Forensic professionals, security researchers, and Internet industry leaders alike recognize that Internet crime poses a special challenge for forensic attribution. Unlike physical evidence (such as fingerprints and DNA), digital objects are, prima facie, not unique. The Internet architecture places no technical restrictions on how a host generates packets, so every bit of a packet can be trivially manipulated in subtle ways to hide its provenance.

Indeed, criminals have long spoofed the source address of their Internet traffic to conceal their activity.7,16 While a range of systems has been proposed to detect and/or block IP source-address spoofing, such systems are deployed inconsistently, and none are foolproof, even in their ideal embodiment. A long line of literature has focused on tracing spoofed packets back to their source,17,19 but these approaches are motivated by network operational needs and deliver only topological path information, an even more abstract property than an IP address.

More important, IP addresses are not unique identifiers, even when used as intended. An IP address represents a topological location in the network for the purpose of routing, not as a way to specify a physical endpoint. It is common for protocols (such as DHCP, Mobile IP, and NAT) to dynamically change the mapping between IP address and physical machine as part of their normal use. While some mappings are logged, this data is commonly retained for only a limited period. “The Internet,” David Aucsmith wrote, “provides criminals two of the most coveted qualities: anonymity and mobility.”3

While we are unaware of other published attempts to provide network-level forensic attribution to physical hosts, a number of related research projects make similar use of cryptographic mechanisms. The source-authentication systems, or “packet passports,” of Liu et al.14 and “Accountable Internet Protocol” of Andersen et al.1 both use cryptographic identifiers. However, these systems focus on ensuring the consistency and topological validity of the IP source address itself to prevent address spoofing and address neither user privacy concerns nor the long-term physical linkage required for forensic attribution.

Design Goals

Clue reflects the following basic requirements:

Physical names. Attribution must provide a link to a physical object (such as the sending computer). A physical computer can be tied to an owner through sales and maintenance records. Moreover, given continuous ownership, a physical computer may be reused in multiple attacks; identifying the computer allows the attacks to be linked, even if the machine itself is never recovered. Finally, a physical computer accretes physical forensic evidence as a side effect of its use. Indeed, much of this article was written on a laptop with extensive fingerprint evidence on the screen and, upon examination, a range of hair and skin samples beneath the keyboard. If this laptop were found, it could be unambiguously linked to one of the authors via DNA or fingerprint comparisons;

Per-packet granularity. The best deterrence is when attribution is universal, applied equally to every packet. Moreover, by providing this capability at the network layer, attribution is transparently provided to all higher-layer protocols and applications. That is, there is an inherent benefit in not tying forensic attribution to any particular higher-level network construct. Forensic attribution is most effective when provided as a fundamental building block on which arbitrary higher-level protocols, services, and applications can be built;

Unimpeachability. While we would be satisfied if a new attribution capability simply offered investigative value for those pursuing criminals, we hope that any attribution mechanism would be accepted as sufficiently accurate and trustworthy to provide evidentiary value in the courtroom as well. We therefore seek strong cryptographic mechanisms that are not easily repudiated; and

Indefinite lifetime. On the Internet, as in the physical world, many crimes are not detected until long after they are committed. Placing unnecessary restrictions on the time window for forensic discovery will undoubtedly be exploited by criminals to their advantage. Even today, many online criminals are savvy about the practical investigatory delays imposed by different bilateral mutual legal assistance treaties, locating their data accordingly. Thus, it should be possible to examine a packet and unambiguously attribute its origin long after the packet is received—even months or years later.

These requirements bring us to an architecture in which each packet is self-identifying—tagged with a unique nonforgeable signature identifying the physical machine that sent it. While such attribution does not definitively identify the person originating a packet, it is the critical building block for subsequent forensic analysis, investigation, and correlation, as it provides a beachhead onto the physical scene of the crime. We presuppose that sites with sufficient risk and/or value at stake will check such signatures, associate them with higher-level transactions, and log them for enough time to cover their risk. Building such a capability is straightforward using conventional digital signatures and some form of public-key infrastructure, albeit with some performance cost—and one significant drawback: complete lack of privacy.

Privacy requirements. The approach we’ve just described would allow anyone receiving such a packet to attribute its physical origin. There is also a history of vigorous opposition to such capabilities. For example, in early 1999, Intel Corporation announced that new generations of its popular Pentium microprocessors would include a new feature—the Processor Serial Number (PSN)—a per-processor unique identifier intended as a building block for future security applications. Even though this feature was completely passive, public-interest groups quickly identified potential risks to privacy stemming from an available globally unique identifier. In April 2000, Intel abandoned plans to include PSN in future versions of its microprocessors.

We thus posit another critical requirement for a practical forensics tool:

Privacy. To balance the need for forensic attribution against the public’s interest in privacy, packet signatures must be non-identifying, in a strong sense, to an unprivileged observer. Moreover, the signatures must not serve as an identifier (even an opaque one). As such, distinct packets sent from the same source must carry different signatures. Internet users should have at least the same expectation of anonymity they have today, except for authorized investigations.


Unlike physical evidence (such as fingerprints and DNA), digital objects are, prima facie, not unique.


A strawman solution to this problem is to digitally sign each packet using a per-source key that is in turn escrowed with a trusted third party. Indeed, the ill-fated Clipper chip used such an approach. If a single third party is not widely trusted (likely, given past experience), then the scheme may accommodate multiple third parties responsible for different sets of machines and/or a secret-sharing approach in which multiple third parties collaborate to generate the keying material needed to validate the origin of a signature; for example, in the U.S., both the Department of Justice and the American Civil Liberties Union might be required to agree that an investigation is warranted (a toy version of such splitting appears after the attributability requirement below). However, this approach also harbors a critical vulnerability. Since, by design, a normal observer cannot extract information from a packet signature, nothing prevents adversaries from incorrectly signing their packets or attaching random “signatures,” rendering any attempt at post-hoc attribution useless. Thus, to be practical, our attribution architecture is motivated by a final requirement:

Attributability. To enforce the attribution property, any observer on the network must be empowered to verify a packet signature—to prove the packet could be attributed if necessary—though performing this proof must not reveal any information about the physical originator itself. This requirement has a natural fate-sharing property, since the decision to verify a packet is made by the recipient, the party with a future interest in having an attribution capability.
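To make the secret-sharing variant of the strawman concrete, the following Python fragment sketches a minimal two-party XOR split of the group manager's key msk; the 2-of-2 structure and variable names are illustrative (a deployed system would likely use a general threshold scheme rather than this toy):

    import os

    msk = os.urandom(32)                      # group manager's opening key
    share_a = os.urandom(32)                  # held by one escrow party
    share_b = bytes(x ^ y for x, y in zip(msk, share_a))  # held by the other

    # Neither share alone reveals anything about msk; reconstruction
    # requires both parties to cooperate (e.g., to authorize an
    # investigation they jointly agree is warranted).
    recovered = bytes(x ^ y for x, y in zip(share_a, share_b))
    assert recovered == msk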

Remaining challenges. As important as our design goals are, so, too, are our non-goals—what we do not attempt to accomplish. For one, our work is not designed to address IP-address spoofing. While there is operational value in preventing spoofing or allowing easier filtering of DDoS attacks, the virtual nature of IP addresses makes them inherently ill-suited for forensic purposes. More significant, our work is limited to attributing the physical machine that sent a particular packet and not necessarily the complete causal chain of events leading to the packet being generated. This distinction is common to most kinds of forensic investigations (such as unraveling offshore shell accounts in forensic accounting or insider communication in securities fraud investigations) but can manifest easily in the Internet context; for example, an attack might be laundered through one or more intermediate nodes, either as part of a legitimate anonymizing overlay network (such as Tor) or via proxies installed on compromised hosts, botnets, or other intermediaries.

In practice, unraveling complex dependencies is simultaneously critically important and fundamentally challenging. Previous work explored how such “stepping-stone” relationships may be inferred in the network,20 and a similar approach—attributing each causal link hop-by-hop—could be employed with our architecture as well. However, unambiguously establishing such causality is not possible at the network layer alone and will ultimately require both host support and, inevitably, manual investigation. While we have a vision for how such host services should be structured, it represents future work beyond the scope of this article.

Architecture

While these requirements appear challenging, there is a well-known cryptographic tool—a group signature11—that unties this particular Gordian knot. Here, we briefly review the properties of group signatures, describing their basic application to forensic packet attribution and the design issues resulting from this architecture.

Group signatures. A group signature provides the property that if a member of a group signs a message, anyone can verify that the signature was created by a group member but cannot determine which one, without the cooperation of a privileged third party, the group manager.

We describe group signatures using the formalized model of Bellare et al.,5 as well as a definitional extension due to Boneh et al.8 Specifically, a group-signature scheme assumes a group manager and a group of n unprivileged members, denoted 1, 2, …, n. The group manager has a secret key msk, each group member i ∈ {1, …, n} has its own secret signing key sk[i], and there is a single public signature-verification key pk.

The group manager uses a KeyGen operation to create pk, msk, and the member keys sk, distributing them appropriately. Subsequently, if a group member i uses its secret key sk[i] to Sign a message m and gets back a signature σ, anyone with access to the global signature-verification key pk can Verify that (m, σ) is a valid message-signature pair under the secret key of some group member. The group manager can use msk to recover the identity i of the signer via the Open operation.
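This interface can be summarized in a short Python sketch. The construction below is a deliberately insecure stand-in whose only purpose is to show how KeyGen, Sign, Verify, and Open fit together; all names are ours, and a real scheme (such as the BBS scheme discussed later) would replace the cryptographic internals entirely:

    import hashlib, hmac, os

    class ToyGroupScheme:
        """Interface illustration only. A shared MAC key plays the role of
        pk, and a masked identity blob only the manager can open plays the
        role of the traceable part. It provides NONE of the security of a
        real group signature: the blob is constant per member, so these
        signatures are linkable, and any member could frame another. A real
        scheme re-randomizes every signature, giving the non-identifying
        property required earlier."""

        def __init__(self, n):                      # KeyGen
            self.msk = os.urandom(32)               # manager's opening key
            self.pk = os.urandom(32)                # group verification key
            self.sk = {i: (self.pk, self._blob(i)) for i in range(1, n + 1)}

        def _blob(self, i):
            nonce = os.urandom(16)
            pad = hashlib.sha256(self.msk + nonce).digest()[:4]
            masked = bytes(a ^ b for a, b in zip(i.to_bytes(4, "big"), pad))
            return nonce + masked                   # 20 bytes

    def sign(sk_i, m):                              # Sign(sk[i], m) -> sigma
        pk, blob = sk_i
        return blob + hmac.new(pk, blob + m, hashlib.sha256).digest()

    def verify(pk, m, sigma):                       # Verify(pk, m, sigma)
        blob, tag = sigma[:20], sigma[20:]
        want = hmac.new(pk, blob + m, hashlib.sha256).digest()
        return hmac.compare_digest(tag, want)

    def open_sig(msk, sigma):                       # Open(msk, sigma) -> i
        nonce, masked = sigma[:16], sigma[16:20]
        pad = hashlib.sha256(msk + nonce).digest()[:4]
        return int.from_bytes(bytes(a ^ b for a, b in zip(masked, pad)), "big")

    scheme = ToyGroupScheme(n=3)
    s = sign(scheme.sk[2], b"packet bytes")
    assert verify(scheme.pk, b"packet bytes", s) and open_sig(scheme.msk, s) == 2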

The two principal security properties for group-signature schemes—full-anonymity and full-traceability—imply other properties, including unforgeability, exculpability, and framing-resistance.5 A group-signature scheme is CCA-fully anonymous if a set of colluding members cannot learn information about a signer’s identity i, even when the adversaries are allowed to Open the signatures of all messages other than the target message-signature pair. A group-signature scheme is fully traceable if a set of colluding members cannot create a valid message-signature pair (m, σ) that the group manager cannot trace back to one of the colluding parties; that is, either Verify(pk, m, σ) fails, meaning the signature is invalid, or Open(msk, m, σ) returns the identity of one of the colluding members.

Basic packet attribution. We apply group signatures to our problem in the following way: Each machine is a member of some group and provided with a secret signing key. Exactly how groups are constructed, and who is authorized to open the resulting signatures, is very much a policy issue, but one pragmatic approach is for each computer manufacturer to define a group across the set of machines it sells. A manufacturer-centered design is particularly appealing because it sidesteps the key-distribution problem, as manufacturers now commonly include trusted platform modules that encode unique cryptographic information in each of their machines. Moreover, a tamper-resistant implementation helps prevent theft of a machine’s signing key. This approach also implies the manufacturer would act as group manager in any investigation, executing Open under subpoena, or would escrow its msk (or shares thereof) with third parties.

Given a secret signing key, each machine uses it to sign the packets it sends. This signature covers all non-variant protocol fields and payload data. The name of the group and the per-packet signature are included in a special packet header field that is part of the network layer. Any recipient can examine the header, using Verify to validate that a packet was correctly signed by a member of the associated group (and hence could be authenticated by the group manager). The verify step does not require protected key material and, by virtue of fate sharing, need not be part of the trusted computing base.
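As a rough illustration of what signing the “non-variant protocol fields” entails, the sketch below canonicalizes a packet's nonvolatile fields into a deterministic byte string that signer and verifier can both reproduce. The field names and the volatile set here are hypothetical, not the prototype's actual selection:

    def canonical_bytes(pkt: dict, volatile=("ttl", "checksum")) -> bytes:
        """Deterministically serialize the fields a Clue-like layer signs,
        excluding fields mutated in flight (assumed set for illustration)."""
        parts = []
        for field in sorted(pkt):                 # fixed order on both ends
            if field in volatile:
                continue
            parts.append(field.encode() + b"=" + repr(pkt[field]).encode())
        return b"|".join(parts)

    # Hypothetical packet; sign/verify would be the group-signature
    # operations (see the interface sketch above).
    pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "ttl": 64,
           "checksum": 0, "payload": "GET / HTTP/1.1"}
    msg = canonical_bytes(pkt)
    # sigma = sign(sk_i, msg); ok = verify(pk, msg, sigma)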

Design. An implementation of group-signature-based packet attribution must address several other challenges before deployment is practical:

Replay. The basic approach we’ve outlined does not prevent an adversary from replaying messages sent (and signed) by other legitimate parties or shuffling the order in which a node receives the messages from other legitimate parties. In some cases, such replayed packets are immediately discarded by the receiving protocol stack or application (such as due to misaligned sequence numbers or routing information). On the other hand, an adversary might be able to mount an untraceable DoS attack or maliciously change application behaviors by replaying or shuffling others’ packets over time. We therefore desire some mechanism to bind these packets to a particular point in time.

A possible solution would have the sender include a monotonically increasing counter in each packet and the receiver discard any packet whose source-counter pair duplicates one already seen. However, a naive implementation of this approach might require the receiver to maintain the list of source-counter pairs indefinitely (even across reboots). Instead, we assume loosely synchronized clocks; the signer includes the current time in each outgoing packet, and the receiver validates freshness directly. To handle jitter and network delays, as well as possible inconsistencies among different devices’ perception of time, one might employ a hybrid approach, including both a counter and the time in each packet, as in the sketch below.
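A minimal Python sketch of such a hybrid freshness check, assuming a bound MAX_SKEW on clock skew plus network delay (the constant and names are ours):

    import time

    MAX_SKEW = 5.0          # assumed bound on clock skew plus network delay
    last_counter = {}       # per-source high-water marks

    def fresh(src, counter, ts, now=None):
        """Accept only if the timestamp is recent and the counter advances.
        A deployed version would use a sliding-window bitmap (as in IPsec
        anti-replay) to tolerate benign reordering."""
        now = time.time() if now is None else now
        if abs(now - ts) > MAX_SKEW:
            return False                      # stale or future-dated
        if counter <= last_counter.get(src, -1):
            return False                      # replayed (or reordered)
        last_counter[src] = counter
        return True

    assert fresh("10.0.0.1", 1, time.time())
    assert not fresh("10.0.0.1", 1, time.time())      # replay rejected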

Revocation. To ensure verifiable packets are attributable to a single physical machine, we assume the group-signature secret keys are stored in tamper-resistant hardware and cannot be copied to other devices. However, we anticipate that some secret signing keys will inevitably be compromised. There are two general frameworks for revoking these secret signing keys, due to Ateniese et al.2 and to Camenisch and Lysyanskaya.9 Since most parties in our system are both signers and verifiers, we adopt the Camenisch-Lysyanskaya approach,9 in which secret keys are revoked by globally updating the group public key and locally updating each unrevoked party’s secret signing key. In this scheme, verifiers need not maintain a list of individual revocations, but public-key updates must be applied universally to ensure all subsequent signatures can be verified.
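Schematically, the client-side bookkeeping for this style of revocation looks as follows; the update_pk and update_sk callables stand in for the scheme's actual algebraic update operations, which we do not reproduce here:

    class UnrevokedMember:
        """Each revocation event publishes an update: verifiers adopt the
        new group public key, and every unrevoked member transforms its
        own secret signing key locally (Camenisch-Lysyanskaya style)."""

        def __init__(self, pk, sk):
            self.pk, self.sk, self.epoch = pk, sk, 0

        def apply_updates(self, updates):
            # 'updates' is the ordered list of revocation events fetched
            # by polling a revocation service.
            for ev in updates:
                if ev["epoch"] <= self.epoch:
                    continue                      # already applied
                self.pk = ev["update_pk"](self.pk)
                self.sk = ev["update_sk"](self.sk)
                self.epoch = ev["epoch"]

    # Toy usage with placeholder update functions:
    m = UnrevokedMember(pk="pk0", sk="sk0")
    m.apply_updates([{"epoch": 1, "update_pk": lambda p: p + "'",
                      "update_sk": lambda s: s + "'"}])
    assert (m.pk, m.sk, m.epoch) == ("pk0'", "sk0'", 1)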

Middlebox modification. Middleboxes, like network address translators (NATs), create a conundrum. In our architecture, senders sign all nonvolatile contents of outgoing packets, including the source address. Thus, packets traversing a NAT will no longer verify, as their contents have been changed. While some might consider this a shortcoming, it is a requirement of true attribution; signers can attest only to contents they transmitted. The only other option in our framework is to deem the source address volatile and exclude it from the packet signature. Doing so would imply the source address has no significance beyond being a routing locator; unfortunately, this is not the case in today’s Internet, where end hosts use source addresses to demultiplex incoming connections, as well as to associate flows with the appropriate IPsec associations.

This tension has been observed many times in the past, yielding two architecturally pure alternatives: future Internet architectures can either remove end host dependence on IP source addresses or make the presence of middleboxes explicit. For the time being, deployments requiring NAT-like functionality must make a trade-off between deployability and completeness, choosing between removing source addresses from the signature—thereby limiting the scope of the attribution—and encapsulating the original, signed packets in an IP-in-IP tunnel, exposing the middlebox to the receiver.

Related questions concern virtualization technologies. In a virtualized environment, the underlying machine must sign packets. Though technically feasible, we do not expand on specific approaches here.

Clue

We developed the Clue prototype to explore the systems issues related to implementing a real-world group-signature scheme. Clue uses Stanford’s Pairing-Based Cryptography (PBC) Library15 for group-signature operations, which in turn uses the GNU Multiple Precision arithmetic library (GMP). To explore group signatures in the context of real network packets, we implemented Clue as a module in the Click Modular Router.13 Because PBC and GMP are designed as user-mode libraries, Clue employs Click as a user-mode process rather than as a Linux or BSD kernel module. While this architecture incurs some performance penalty on its own, the cryptographic operations dominate the user-mode transitions in practice, and the user-mode penalty does not interfere with the fundamental system-design issues.

Figure 1 outlines the packet trailer used by the current prototype. The Clue module is mostly a straightforward packet-transformation element. When signing, the module performs the following four tasks:

  • Collects nonvolatile elements of an IP packet;
  • Adds an 8B local NTP-derived timestamp to implement replay detection;
  • Feeds the resulting data as input to the group-signature library to generate a signature; and
  • Appends the signature (and additional optimization information) to the original packet, adjusting the IP length field accordingly.

Tasks like recalculating checksums are left to other standard Click elements in the pipeline. Similarly, when verifying, the module performs the following five tasks (a sketch of the trailer handling appears after this list):

  • Validates a packet’s freshness from its timestamp;
  • Collects the nonvolatile elements of an IP packet;
  • Strips the Clue trailer from the end of the packet;
  • Feeds the resulting data and signature to the group signature library; and
  • Pushes the original packet to one of two output ports, depending on whether verification was successful.
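The sketch below illustrates the trailer handling common to both pipelines, packing and stripping a trailer with the sizes quoted in our evaluation (an 8B timestamp, a 20B window hash, and a 195B signature). The exact field layout and the 4B group-identifier width are assumptions, as Figure 1 is not reproduced here:

    import struct

    SIG_LEN = 195     # BBS signature size with our parameters
    H_LEN = 20        # windowed-verification hash carried in the trailer
    # Assumed layout: group id, timestamp, window hash, signature.
    TRAILER = struct.Struct(f"!IQ{H_LEN}s{SIG_LEN}s")

    def append_trailer(pkt: bytes, gid: int, ts: int, h: bytes, sig: bytes):
        # The caller must also grow the IP length field by TRAILER.size;
        # checksum recalculation is left to later pipeline elements.
        return pkt + TRAILER.pack(gid, ts, h, sig)

    def strip_trailer(pkt: bytes):
        body, tail = pkt[:-TRAILER.size], pkt[-TRAILER.size:]
        gid, ts, h, sig = TRAILER.unpack(tail)
        return body, gid, ts, h, sig

    p = append_trailer(b"ip-packet", 7, 2**32, b"\0" * H_LEN, b"\0" * SIG_LEN)
    assert strip_trailer(p)[0] == b"ip-packet"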

Clue implements Boneh et al.’s revocation scheme,8 polling a well-known revocation service. As might be expected, cryptographic overhead is high, dominating most performance measures. While we have little doubt that more-efficient group-signature schemes will emerge, that implementations will become faster, and that hardware implementations would provide more capacity, here we focus on the optimization opportunities arising from the interaction between the dynamics of the network protocols themselves and the underlying cryptographic primitives. We first describe the particular group-signature construction we use, then a series of optimizations we’ve implemented.

BBS short-group signatures. The Clue prototype uses the Boneh et al.8 short-group-signature scheme, which exhibits comparatively short signatures relative to group-signature schemes based on the Strong-RSA assumption of Baric and Pfitzmann.4 We also refine the BBS group-signature scheme for use with Clue’s optimizations. The following summarizes the basic BBS scheme at a level sufficient to understand our optimizations:

The BBS Sign algorithm (on input a group public key pk, the signer’s secret key sk[i], and a message m) first obtains a source of randomness v, derives values for the variables T1, T2, T3, R1, R2, R3, R4, R5 from v, sk[i], and pk, then computes the value c as c ← H(m, T1, T2, T3, R1, R2, R3, R4, R5), where ← denotes assignment from right to left and H is a hash function; for the security proofs, BBS model H as a random oracle.6,8 The signing algorithm then outputs

σ ← (T1, T2, T3, c, sα, sβ, sχ, sδ1, sδ2)    (1)

where sα, sβ, sχ, sδ1, sδ2 are functions of c, v, and sk[i]. The BBS Verify algorithm, on input a group public key pk, a message m, and a signature σ = (T1, T2, T3, c, sα, sβ, sχ, sδ1, sδ2), derives R′1, R′2, R′3, R′4, R′5 from pk and σ, computes c′ as c′ ← H(m, T1, T2, T3, R′1, R′2, R′3, R′4, R′5), accepting the signature as valid exactly when c = c′. None of Clue’s optimizations or extensions modify the BBS KeyGen or Open algorithms; we therefore do not survey their details here.

Optimizations. The following optimizations exploit artifacts of the BBS scheme itself, as well as properties of network protocols and clients; some of these optimizations may be of independent interest.

Precomputation (for sender). Clue takes advantage of client workloads to reduce the overhead of Sign on the sending critical path. The Sign operation has two components: computing the Tj and Rj values, which are independent of the packet data, and using these values to sign the packet. The Tj and Rj computation by far dominates the overhead of Sign, so taking it off the critical sending path by precomputing these values greatly improves signing throughput. Most client workloads consist of applications with low average sending rates (such as email, Web browsing, and remote login), allowing signature precomputation to overlap I/O. Indeed, over long time scales, the CPU load of clients and servers alike is dominated by idle time—an effect further magnified by multicore processors. Thus, periods of idleness can be exploited to buffer signature precursors for subsequent periods of activity.
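The following Python sketch captures the structure of this optimization; make_token and finish are placeholders for the expensive Tj/Rj precomputation and the cheap message-dependent completion of Sign, respectively:

    from collections import deque

    class PrecomputedSigner:
        """Decouples the message-independent step of Sign from the cheap
        message-dependent completion (placeholder callables)."""

        def __init__(self, make_token, finish):
            self.make_token, self.finish = make_token, finish
            self.pool = deque()

        def refill(self, target=64):
            # Run from an idle loop or spare core, off the sending path.
            while len(self.pool) < target:
                self.pool.append(self.make_token())

        def sign(self, msg):
            # Critical path: fall back to on-demand work only if drained.
            token = self.pool.popleft() if self.pool else self.make_token()
            return self.finish(token, msg)

    s = PrecomputedSigner(make_token=lambda: "token",
                          finish=lambda t, m: (t, m))
    s.refill()
    assert s.sign(b"pkt") == ("token", b"pkt")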

Windowed verification (for receiver). Clue takes advantage of the streaming nature of network protocols like TCP, amortizing verification over multiple packets of data to reduce the overhead of Verify on the receive critical path. Deriving the Rj values from pk and σ creates a significant fixed overhead for Verify, independent of the amount of signed data. When using Verify, the receiver’s attribution layer can accumulate a window of packets (such as a flight of TCP segments) and verify them all together, amortizing the per-packet verification overhead. We stress that the signer signs every window of k packets, even overlapping windows, and that the verifier has the option of either verifying the packets individually or verifying any window of its choosing. However, this verification optimization slightly increases the length of a signature.

To accommodate this scenario, we modify the BBS scheme as follows: Using our modified scheme, a verifier can choose to verify the signature on the j-th packet Pj in isolation (such as when no other packets are waiting to be verified or when there is packet loss) or verify in batch the signatures on a window of k packets Pj–k+1, …, Pj. Clue achieves this goal by, on the signing side, first hashing the initial k–1 packets Pj–k+1, …, Pj–1 to a value h, then signing h||Pj as before, finally including h in the resulting signature tuple; here || denotes string concatenation, the hash function H′ used to compute h is distinct from H, and Pj is implicitly prefixed with a fixed-width length field. To avoid trivial hash collisions in h when hashing the packets Pj–k+1, …, Pj–1, Clue also prepends each packet with a 4B length field, then concatenates the resulting length fields and packets together. Including h in the signature allows the receiver to verify the signature over the j-th packet Pj in isolation (by verifying the signature over h||Pj). To verify the signature over the entire window Pj–k+1, …, Pj, the receiver first recomputes h from the preceding packets, then proceeds as in the single-packet case.
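A sketch of the windowed construction, with SHA-256 standing in for H′ and a placeholder sign/verify pair standing in for the group-signature operations (the prototype's 20B hash field suggests a SHA-1-sized digest; we use SHA-256 here purely for illustration):

    import hashlib

    def window_hash(prev_pkts):
        # h over P_{j-k+1} .. P_{j-1}; each packet is prefixed with a 4B
        # length field to avoid trivial hash collisions.
        d = hashlib.sha256()
        for p in prev_pkts:
            d.update(len(p).to_bytes(4, "big"))
            d.update(p)
        return d.digest()

    def sign_window(sign, prev_pkts, pj):
        h = window_hash(prev_pkts)
        return h, sign(h + pj)                 # signature covers h || P_j

    def verify_isolated(verify, h, pj, sig):
        return verify(h + pj, sig)             # check P_j alone via carried h

    def verify_batch(verify, prev_pkts, pj, sig):
        # One signature check attests to all k packets in the window.
        return verify(window_hash(prev_pkts) + pj, sig)

    # Placeholder signer/verifier (a MAC, not a group signature):
    sign = lambda m: hashlib.sha256(b"sk" + m).digest()
    verify = lambda m, s: s == sign(m)
    h, sig = sign_window(sign, [b"p1", b"p2"], b"p3")
    assert verify_isolated(verify, h, b"p3", sig)
    assert verify_batch(verify, [b"p1", b"p2"], b"p3", sig)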

In the Clue prototype, the window size k is a parameter provided to the IP layer. We modified our TCP implementation to adaptively set k to match the sender’s congestion window. This setting maximizes performance, as it reflects the largest number of packets that can be amortized together without expecting a packet loss (which would forfeit the benefit of amortized verification).

Asynchronous verification (for receiver). The Clue prototype can also overlap computation with network delay, reducing the protocol serialization caused by verification; for example, rather than wait until Verify completes on a TCP packet before sending an ACK, TCP can optimistically send the ACK first, overlapping it with the Verify computation. Implementing this feature is inherently a layer violation, since the Clue prototype allows TCP ACK processing to proceed independently of IP-layer verification, but Clue still prevents unverified data packets from being passed to the application.
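The control flow can be sketched as follows; the callback names are illustrative, and the prototype implements this inside Click rather than with a Python thread pool:

    from concurrent.futures import ThreadPoolExecutor

    pool = ThreadPoolExecutor(max_workers=2)

    def on_receive(pkt, verify, send_ack, deliver):
        """Optimistically ACK, then gate delivery to the application on
        the (slow) Verify result."""
        send_ack(pkt)                           # overlaps ACK with Verify
        future = pool.submit(verify, pkt)
        future.add_done_callback(
            lambda f: deliver(pkt) if f.result() else None)

    delivered = []
    on_receive(b"pkt", verify=lambda p: True,
               send_ack=lambda p: None, deliver=delivered.append)
    pool.shutdown(wait=True)
    assert delivered == [b"pkt"]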

Incremental verification (for receiver). Given the computational costs of the Verify algorithm, under some circumstances (such as DoS attacks) Clue may wish to quickly reject packets that are not attributable. While the Clue prototype cannot completely erase the cost of verification, it can decrease the time needed to reject a nonverifiable packet by a factor of approximately three, at the expense of increased signature size, by making Verify incrementally verifiable. The average time to process and reject a nonattributable packet decreases, though the time to accept a legitimate packet remains essentially unchanged.

Clue’s incrementally verifiable version of the BBS group-signature scheme builds on our earlier observations that (1) the bulk of the computation in Verify is spent computing R′1, …, R′5, and (2) an implementation can derive R′1, …, R′5 in parallel. Technically, we change Equation 1 to

σ ← (T1, T2, T3, c, sα, sβ, sχ, sδ1, sδ2, R1, R2, R3, R4, R5)

We then revise the Verify algorithm to, on input a signature σ, first compute c″ ← H(m, T1, T2, T3, R1, R2, R3, R4, R5) and immediately reject if c″ ≠ c. The modified verification algorithm then derives the variables R′1, R′2, R′3, R′4, R′5 from pk and T1, T2, T3, c, sα, sβ, sχ, sδ1, sδ2 in random order, immediately rejecting if R′j ≠ Rj. Finally, the modified algorithm accepts the signature as valid, since failure to reject implies c = H(m, T1, T2, T3, R′1, R′2, R′3, R′4, R′5).
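The skeleton below captures the resulting control flow: a cheap hash comparison first, then the expensive R′j derivations in random order with early rejection. H and derive_R are placeholders for the scheme's hash and per-value recomputation:

    import random

    def incremental_verify(m, sigma, H, derive_R):
        # sigma now carries the R values alongside the rest of the tuple.
        T, c, s, Rs = sigma
        if H(m, T, Rs) != c:
            return False                        # cheap check: c'' != c
        for j in random.sample(range(len(Rs)), len(Rs)):
            if derive_R(j, T, c, s) != Rs[j]:   # random order, early reject
                return False
        return True  # every R'_j matched, so c = H(m, T, R'_1..R'_5) holds

    # Toy instantiation with the cryptography elided (R_j must equal j):
    H = lambda m, T, Rs: hash((m, T, tuple(Rs)))
    derive_R = lambda j, T, c, s: j
    good = ("T", H(b"m", "T", (0, 1, 2, 3, 4)), "s", [0, 1, 2, 3, 4])
    bad = ("T", "wrong-c", "s", [0, 1, 2, 3, 4])
    assert incremental_verify(b"m", good, H, derive_R)
    assert not incremental_verify(b"m", bad, H, derive_R)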

Other potential optimizations. A large class of related optimizations relax security guarantees in exchange for performance; for example, the receiver could randomly verify a packet with probability 1/n for some n. However, we have explicitly chosen not to explore such optimizations at this time, since our goal here is to examine an extreme point in the design space—where the attributability of each and every packet is enforced. For the same reasons, we have not yet explored protocol-specific fate-sharing optimizations (such as only signing and verifying TCP SYN packets). Such optimizations could dramatically reduce overhead, albeit in exchange for some increased risk of nonattributability (such as via TCP connection hijacking).

Evaluation

Having described and evaluated Clue’s security properties, we now turn to quantifying the overhead of our implementation’s basic security operations and measure its effect on TCP performance. We present these benchmarks to demonstrate the feasibility of our approach. We do not evaluate all aspects of Clue, leaving full consideration of revocation to future work.

Our user-level software prototype provides acceptable performance when using the optimizations described earlier. Clue adds about 30ms of end-to-end delay to sending a packet. For interactive applications like SSH, this extra delay is insignificant to users. Clue achieves a bulk TCP throughput of 17.5Mbps, which is greater than that enjoyed by the average wide-area Internet user. A typical Internet user browsing the Web using Clue would experience roughly the same performance as without using Clue.

Experimental setup. For our experiments, we use three hosts in a sender-delay-receiver configuration. The delay host runs Linux 2.6 with a hardware configuration of dual-2.8GHz Pentiums with 2GB of RAM and the NIST Net emulation package10 to introduce link delay. The sender is a dual-3.4GHz Pentium with 4GB of RAM, and the receiver runs dual-3.0GHz Pentiums with 16GB of RAM. Both sender and receiver run the Click-based implementation of Clue (discussed earlier) over Linux 2.6, using the default values for send and receive buffer sizes.

For all experiments, we use the d277699–175–167 parameter file prepackaged with PBC, yielding a group signature scheme with strength roughly equivalent to a standard 1,024-bit RSA signature.8 The BBS scheme outputs signatures 195B long using this parameter file.


Without a plausible threat of accountability, the normal social processes that disincentivize criminal behavior cannot function.


Microbenchmarks. We start by measuring the overhead of the basic cryptographic operations Sign and Verify and their variants, as described earlier. The table here reports the average time across 100 iterations of these operations on the receiver. The first column of results (“1 packet”) reports overheads when executing on a single 1,277B packet as input; we chose 1,277B because the packet, the 195B basic BBS signature, the 8B timestamp, and an extra 20B for the windowed-signing optimization together yield a 1,500B packet. The second column (“8 packets”) reports results with eight packets as input; one of Clue’s optimizations amortizes overhead across windows of multiple packets. In either case, the per-packet overhead is sufficiently small (10ms–30ms total) to be unnoticeable in interactive traffic but substantial enough to have a significant effect on bulk TCP performance.

The precomputation optimization for the sender separates signature computation from signing the packet. The “precomp sign” result measures the step that remains on the critical path—signing using a set of precomputed values—and shows that almost all of the overhead of Sign comes from generating the message-independent cryptographic values (the “precomputation” step), not from computing the message-dependent part of the signature or signing the packet itself. In our bulk-transfer experiments, we show that removing signature computation from the critical path of sending packets yields a significant increase in throughput. Similarly, the row labeled “verify” reports the average time to verify a single signed packet of the same size; in our Clue implementation, verification is about 2.5x slower than signing.

The remaining two rows in the table measure the performance of Clue’s incremental-verification scheme, designed to defend against the flood of invalid packets described earlier. The “incremental verify” row is the time required to verify a valid packet signature using this scheme; it is essentially identical to the original verification operation, introducing negligible CPU overhead in the common case. In contrast, “corrupted incremental verify” measures the average time required to reject a corrupted signature. Using incremental verification, Clue achieves a 70% reduction in rejection overhead relative to the original scheme.

The only significant difference between the eight-packet times and the single-packet times occurs when signing a packet using precomputed values, and arises from hashing the extra data in the additional packets. Note, however, that this cost is still roughly two orders of magnitude less than any other operation, so we do not observe any additional impact on bulk throughput. Amortizing the attribution operations over multiple packets is thus a key mechanism for reducing receive overhead. In the experiments discussed in the following section, we show that large windows combined with precomputed signatures can dramatically improve performance over basic Sign and Verify alone.

TCP throughput. Bulk TCP throughput is an important performance metric for many Internet applications. Experimentally, our goal is to evaluate the effect of attribution on TCP throughput in our Clue prototype. We measure the TCP throughput performance of the attribution implementation relative to various baseline configurations across a range of network round-trip times (RTTs). In Clue, the implementation of attribution achieves a throughput within a factor of 1.2 of a Click forwarder at typical Internet RTTs. While privacy-preserving attribution has a non-negligible effect on bulk throughput on today’s client systems, the cost is not prohibitive and will continue decreasing over time, as CPU performance increases more quickly than typical Internet bandwidth.

We conduct ttcp benchmarks between the sender and receiver, forwarding traffic through the delay host. For every test configuration, we run each individual transfer for at least 20 seconds. We require the sender to transfer all its data before closing the connection, timing the transfer from when the sender connects to the receiver to when the sender receives the FIN from the receiver. Figure 2 outlines the results of the experiments for a number of configurations. We vary the round-trip time between sender and receiver on the x-axis and plot the throughput achieved using the ttcp application benchmark on the y-axis; note the y-axis is a log scale, each point is the average of five runs, and error bars show the standard deviation.

As an upper bound, the “Linux” curve plots the forwarding rate of the default Linux networking stack on our hardware. To provide a more realistic baseline for our Clue implementation, we also show the performance of an unmodified user-level Click installation (“Proxy”); Click forwards packets received on its input to its output without processing. The difference between “Proxy” and “Linux” shows the overhead of interposing in the network stack at user level, including the copying overhead of crossing the kernel boundary. However, an optimized, kernel-level packet-attribution implementation need not suffer this overhead. Though not shown, we also measured the performance of the provided Click IPsec module and found it indistinguishable from the “Proxy” configuration.

The “Sign+Verify” line corresponds to the baseline performance of Clue using individual Sign and Verify operations on each IP datagram. Given the times required for Sign and Verify in the table, one would expect the 29ms Verify operation to limit long-term bulk throughput to a maximum of 0.35Mbps. Not surprisingly, Clue’s implementation of the default “Sign+Verify” attribution process restricts bulk TCP throughput to approximately 0.33Mbps, independent of the RTT.
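The bound follows directly from the measured verification latency and our 1,277B benchmark payload:

    (1,277 B/packet × 8 bits/B) / 0.029 s/packet ≈ 352,000 bits/s ≈ 0.35 Mbps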

The poor performance of “Sign+Verify” motivates the optimizations described earlier. While precomputation dramatically decreases the overhead at the sender, in isolation it has only a modest effect on TCP throughput, as performance is still receiver-limited. Similarly, asynchronous verification allows the receiver to issue ACKs immediately, but the potential for improvement is bounded by the effective decrease in flow RTT. Indeed, precomputation and asynchronous verification are most effective when combined with windowed verification, which together have the potential to move the performance bottleneck back to the sender.

The line in Figure 2 labeled “Precomp+Async+Win-8” is the performance of the Clue prototype when combining the three optimizations while using a fixed window size of eight packets. In theory, the larger the window size, the less overhead verification imposes. Indeed, progressively increasing the window size continues to increase throughput—to a point; most benefits are achieved with a window of 64 packets, as indicated by the line “Precomp+Async+Win-64” in Figure 2, exceeding 17.5Mbps at 20ms. Recall that windowed verification proceeds only in the absence of loss; if a packet is lost in a window, the remaining packets must be verified individually, negating any potential for improvement. Hence, our Clue implementation dynamically adjusts the window size to match the sender’s TCP congestion window. The “Precomp+Async+AdaptiveWin” line in Figure 2 shows that its performance approaches the baseline for all but the smallest RTTs; at an RTT of 80ms—typical of TCP connections on the Internet18—this combination achieves a throughput of 9.6Mbps, within a factor of 1.2 of “Proxy” itself, and exceeds the capacity of most consumer broadband links.

Conclusion

Much of the Internet’s success can be attributed to its minimalist architecture. However, the related architectural freedoms also represent ripe vulnerabilities for adversaries trying to exploit the network to their own ends. Chief among them is the lack of accountability for user actions. Without a plausible threat of accountability, the normal social processes that disincentivize criminal behavior cannot function. We suggest modifying the Internet architecture to proactively enable network forensics while preserving the privacy of network participants under normal circumstances.

Our approach ensures that authorized parties can determine the physical identity of the hardware originating any given IP packet; that no other party can determine the identity of the originating physical hardware; and that all network participants can simultaneously verify that a packet is well-formed and attributable by the trusted authority. While still some distance from being practical, our technique may be a viable and promising foundation for future research. A separate research strand must still consider the broader contextual issues surrounding such a solution, ranging from the role of manufacturers to international law.

Acknowledgments

We thank Hovav Shacham of the University of California, San Diego, for advice and comments. This work is funded in part by National Science Foundation grants CNS-0627157 and CNS-0722031.

Figures

Figure 1. Clue packet-trailer format; shaded fields are explained in the section on optimizations.

Figure 2. TCP throughput performance for combined optimizations; y-axis is in log scale.

Tables

Table. Overheads of cryptographic operations for both one- and eight-packet windows.

References

    1. Andersen, D., Balakrishnan, H., Feamster, N., Koponen, T., Moon, D., and Shenker, S. Accountable Internet Protocol. In Proceedings of the ACM SIGCOMM Conference (Seattle, Aug. 19–21). ACM Press, New York, 2008, 339–350.

    2. Ateniese, G., Tsudik, G., and Song, D. Quasi-efficient revocation of group signatures. In Financial Cryptography, M. Blaze, Ed. (Southampton, Bermuda, Mar. 11–14). Springer-Verlag, Berlin, 2002, 183–197.

    3. Aucsmith, D. The digital crime scene: A software prospective. In Proceedings of the CyberCrime and Digital Law Enforcement Conference (New Haven, CT, Mar. 26–28, 2004).

    4. Baric, N. and Pfitzmann, B. Collision-free accumulators and fail-stop signature schemes without trees. In Advances in Cryptology EUROCRYPT '97, W. Fumy, Ed. (Konstanz, Germany, May 11–15). Springer-Verlag, Berlin, 1997, 480–494.

    5. Bellare, M., Micciancio, D., and Warinschi, B. Foundations of group signatures: Formal definitions, simplified requirements, and a construction based on general assumptions. In Advances in Cryptology EUROCRYPT '03, E. Biham, Ed. (Warsaw, May 4–8). Springer-Verlag, Berlin, 2003, 614–629.

    6. Bellare, M. and Rogaway, P. Random oracles are practical: A paradigm for designing efficient protocols. In Proceedings of the ACM Conference on Computer and Communications Security (Fairfax, VA, Nov. 3–5). ACM Press, New York, 1993, 62–73.

    7. Bellovin, S.M. Security problems in the TCP/IP protocol suite. ACM SIGCOMM Computer Communication Review 19, 2 (Apr. 1989), 32–48.

    8. Boneh, D., Boyen, X., and Shacham, H. Short group signatures. In Advances in Cryptology CRYPTO 2004, M. Franklin, Ed. (Santa Barbara, CA, Aug. 15–19). Springer-Verlag, Berlin, 2004, 41–55.

    9. Camenisch, J. and Lysyanskaya, A. Dynamic accumulators and applications to efficient revocation of anonymous credentials. In Advances in Cryptology CRYPTO 2002, M. Yung, Ed. (Santa Barbara, CA, Aug. 18–22). Springer-Verlag, Berlin, 2002, 61–76.

    10. Carson, M. and Santay, D. NIST Net: A Linux-based network-emulation tool. ACM SIGCOMM Computer Communication Review 33, 3 (July 2003), 111–126.

    11. Chaum, D. and van Heyst, E. Group signatures. In Advances in Cryptology EUROCRYPT '91, D.W. Davies, Ed. (Santa Barbara, CA, Apr. 8–11). Springer-Verlag, Berlin, 1991, 257–265.

    12. International Telecommunications Union. Traceback Use Cases and Requirements; http://politechbot.com/docs/itu.traceback.use.cases.requirements.091108.txt

    13. Kohler, E., Morris, R., Chen, B., Jannotti, J., and Kaashoek, M.F. The Click modular router. ACM Transactions on Computer Systems 18, 3 (Aug. 2000), 263–297.

    14. Liu, X., Yang, X., Wetherall, D., and Anderson, T. Efficient and secure source authentication with packet passports. In Proceedings of the Second Workshop on Steps to Reducing Unwanted Traffic on the Internet (San Jose, CA, July 7). USENIX, Berkeley, CA, 2006.

    15. Lynn, B. Pairing-Based Cryptography Library. Stanford University, Palo Alto, CA, 2006; http://crypto.stanford.edu/pbc/

    16. Moore, D., Voelker, G.M., and Savage, S. Inferring Internet denial of service activity. In Proceedings of the USENIX Security Symposium (Washington, D.C., Aug. 13–17). USENIX, Berkeley, CA, 2001, 9–22.

    17. Savage, S., Wetherall, D., Karlin, A.R., and Anderson, T. Practical network support for IP traceback. In Proceedings of the ACM SIGCOMM Conference (Stockholm, Aug. 28–Sept. 1), ACM Press, New York, 2000, 295–306.

    18. Shalunov, S. TCP Over WAN Performance Tuning and Troubleshooting, 2005; http://shlang.com/writing/tcp-perf.html

    19. Snoeren, A.C., Partridge, C., Sanchez, L.A., Jones, C.E., Tchakountio, F., Schwartz, B., Kent, S.T., and Strayer, W.T. Single-packet IP traceback. IEEE/ACM Transactions on Networking 10, 6 (Dec. 2002), 721–734.

    20. Zhang, Y. and Paxson, V. Detecting stepping stones. In Proceedings of the USENIX Security Symposium (Denver, Aug. 14–17). USENIX, Berkeley, CA, 2000, 171–184.
