In cyberspace it's easy to get away with criminal fraud, easy to steal corporate intellectual property, and easy to penetrate governmental networks. Last spring the new Commander of USCYBERCOM, NSA's General Keith Alexander, acknowledged for the first time that even U.S. classified networks have been penetrated.2 Not only do we fail to catch most fraud artists, IP thieves, and cyber spies; we don't even know who most of them are. Yet every significant public and private activity (economic, social, governmental, military) depends on the security of electronic systems. Why has so little happened in 20 years to alter the fundamental vulnerability of these systems? If you're sure this insecurity is either (a) a hoax or (b) a highly desirable form of anarchy, you can skip the rest of this column.
Presidential Directives to Fix This Problem emerge dramatically like clockwork from the White House echo chamber, chronicling a history of executive torpor. One of the following statements was made in a report to President Obama in 2009, the other by President George H.W. Bush in 1990. Guess which is which:
"Telecommunications and information processing systems are highly susceptible to interception, unauthorized electronic access, and related forms of technical exploitation, as well as other dimensions of the foreign intelligence threat."
"The architecture of the Nation's digital infrastructure, based largely on the Internet, is not secure or resilient. Without major advances in the security of these systems or significant change in how they are constructed or operated, it is doubtful that the United States can protect itself from the growing threat of cybercrime and state-sponsored intrusions and operations."
Actually, it doesn't much matter which is which.a In between, for the sake of nonpartisan continuity, President Clinton warned of the insecurities created by cyber-based systems and directed in 1998 that "no later than five years from today the United States shall have achieved and shall maintain the ability to protect the nation's critical infrastructures from intentional acts that would significantly diminish" our security.6 Five years later would have been 2003.
In 2003, as if in a repeat performance of a bad play, the second President Bush stated that his cybersecurity objectives were to "[p]revent cyber attacks against America's critical infrastructure; [r]educe national vulnerability to cyber attacks; and [m]inimize damage and recovery time from cyber attacks that do occur."7
These Presidential pronouncements will be of interest chiefly to historians and to Congressional investigators who, in the aftermath of a disaster that we can only hope will be relatively minor, will be shocked, shocked to learn that the nation was electronically naked.
Current efforts in Washington to deal with cyber insecurity are promising, but so was Sisyphus' fourth or fifth trip up the hill. These efforts are moving at a bureaucratically feverish pitch (which is to say, slowly) and so far they have produced nothing but more declarations of urgency and more paper. Why?
Lawsuits and Markets
Change in the U.S. is driven by three things: liability, market demand, and regulatory (usually federal) action. The role and weight of these factors vary in other countries, but the U.S. experience may nevertheless be instructive transnationally since most of the world's intellectual property is stored in the U.S., and the rest of the world perceives U.S. networks as more secure than we do.5 So let's examine each of these three factors.
Liability has been a virtually nonexistent factor in achieving greater Internet security. This may be surprising until you ask: Liability for what, and who should bear it? Software licenses are enforceable, whether shrink-wrapped or negotiated, and they nearly always limit the manufacturer's liability to the cost of the software. So suing the software manufacturer for allegedly lousy security would not be worth the money and effort expended. What are the damages, say, from finding your computer is an enslaved member of a botnet run out of Russia or Ukraine? And how do you prove the problem was caused by the software rather than your own sloppy online behavior?
Deciding what level of imperfection is acceptable is not a task you want your Congressional representative to perform.
Asking Congress to make software manufacturers liable for defects would be asking for trouble: All software is defective, because it's so astoundingly complicated that even the best of it hides surprises. Deciding what level of imperfection is acceptable is not a task you want your Congressional representative to perform. Any such legislation would probably drive some creative developers out of the market. It would also slow down software development, which would not be all bad if it led to higher security. But the general public has little or no understanding of the vulnerabilities inherent in poorly developed applications. On the contrary, the public clamors for rapidly developed apps with lots of bells and whistles, so an equipment vendor that wants to control this proliferation of vulnerabilities in the name of security is in a difficult position.
Banks, merchants, and other holders of personal information do face liability for data breaches, and some have paid substantial sums for data losses under state and federal statutes granting liquidated damages for breaches. In one of the best known cases, Heartland Payment Systems may end up paying approximately $100 million as a result of a major breach, not to mention millions more in legal fees. But the defendants in such cases are buyers, not makers and designers, of the hardware and software whose deficiencies create many (but not all) cyber insecurities. Liability presumably makes these companies somewhat more vigilant in their business practices, but it doesn't make hardware and software more secure.
Many major banks and other companies already know they have been persistently penetrated by highly skilled, stealthy, and anonymous adversaries, very likely including foreign intelligence services and their surrogates. These firms spend millions fending off attacks and cleaning their systems, yet no forensic expert can honestly tell them that all advanced persistent intrusions have been defeated. (If you have an expert who will say so, fire him right away.)
In an effective liability regime, insurers play an important role in raising standards because they tie premiums to good practices. Good automobile drivers, for example, pay less for car insurance. Without a liability dynamic, however, insurers play virtually no role in raising cyber security standards.
If liability hasn't made cyberspace more secure, what about market demand? The simple answer is that the consuming public buys on price and has not been willing to pay for more secure software. In some cases the aftermath of identity theft is an ordeal. In most instances of credit card fraud, however, the bank absorbs 100% of the loss, so its customers have little incentive to spend more for security. (In Britain, where the customer rather than the bank usually pays, the situation is arguably worse because banks are in a better position than customers to impose higher security requirements.) Most companies also buy on price, especially in the current economic downturn.
Unfortunately we don't know whether consumers or corporate customers would pay more for security if they knew the relative insecurities of the products on the market. As J. Alex Halderman of the University of Michigan recently noted, "most customers don't have enough information to accurately gauge software quality, so secure software and insecure software tend to sell for about the same price."3 This could be fixed, but doing so would require agreed metrics for judging products and either the systematic disclosure of insecurities or a widely accepted testing and evaluation service that enjoyed the public's confidence. Consumer Reports plays this role for automobiles and many other consumer products, and it wields enormous power. The same day Consumer Reports issued a "Don't buy" recommendation for the 2010 Lexus GX 460, Toyota took the vehicle off the market. If the engineering and computer science professions could organize a software security laboratory along the lines of Consumer Reports, it would be a public service.
Federal Action
Absent market- or liability-driven improvement, there are eight steps the U.S. federal government could take to improve Internet security, and none of them would involve creating a new bureaucracy or intrusive regulation:
- Use the government's enormous purchasing power to require higher security standards of its vendors. These standards would deal, for example, with verifiable software and firmware, means of authentication, fault tolerance, and a uniform vocabulary and taxonomy across the government in purchasing and evaluation. The Federal Acquisition Regulations, guided by the National Institute of Standards and Technology, could drive higher security into the entire market by ensuring federal demand for better products.
- Amend the Privacy Act to make it clear that Internet Service Providers (ISPs) must disclose to one another and to their customers when a customer's computer has become part of a botnet, regardless of the ISP's customer contract, and may disclose that fact to a party that is not its own customer. ISPs may complain that such a service should be elective, at a price. That's equivalent to arguing that cars should be allowed on the highway without brakes, lights, and seatbelts. This requirement would generate significant remedial business.
- Define behaviors that would permit ISPs to block or sequester traffic from botnet-controlled addresses, not merely from the botnet's command-and-control center.
- Forbid federal agencies from doing business with any ISP that is a hospitable host for botnets, and publicize the list of such companies.
- Require bond issuers that are subject to the jurisdiction of the Federal Energy Regulatory Commission to disclose in the "Risk Factors" section of their prospectuses whether the command-and-control features of their SCADA networks are connected to the Internet or other publicly accessible network. Issuers would scream about this, even though a recent McAfee study plainly indicates that many of those that follow this risky practice think it creates an "unresolved security issue."1 SCADA networks were built for isolated, limited-access systems. Allowing them to be controlled via public networks is rash. This point was driven home forcefully this summer by discovery of the "Stuxnet" computer worm, which was specifically designed to attack SCADA systems.4 Yet public utilities show no sign of upgrading their typically primitive systems.
- Increase support for research into attribution techniques, verifiable software and firmware, and the benefits of moving more security functions into hardware.
- Definitively remove the antitrust concern when U.S.-based firms collaborate on researching, developing, or implementing security functions.
- Engage like-minded governments to create international authorities to take down botnets and make naming-and-addressing protocols more difficult to spoof.
Political Will
These practical steps would not solve all problems of cyber insecurity, but they would dramatically improve security. Nor would they involve government snooping, reengineering the Internet, or other grandiose schemes. They would require a clear-headed understanding of the risks to privacy, intellectual property, and national security when an entire society relies for its commercial, governmental, and military functions on a decades-old information system designed for a small number of university and government researchers.
Translating repeated diagnoses of insecurity into effective treatment would also require the political will to marshal the resources and effort necessary to do something about it. The Bush Administration came by that will too late in the game, and the Obama Administration has yet to acquire it. After his inauguration, Obama dithered for nine months over the package of excellent recommendations put on his desk by a nonpolitical team of civil servants from several departments and agencies. The Administration's lack of interest was palpable; its hands are full with a war, health care, and a bad economy. In difficult economic times the President naturally prefers invisible risk to visible expense and is understandably reluctant to increase costs for business. In the best of times cross-departmental (or cross-ministerial) governance would be extremely difficult, and not just in the U.S. Doing it well requires an interdepartmental organ of directive power that can muscle entrenched and often parochial bureaucracies, and in the cyber arena, we simply don't have it. The media, which never tires of the cliché, told us we were getting a cyber "czar," but the newly created cyber "Coordinator" actually has no directive power and has yet to prove his value in coordinating, let alone governing, the many departments and agencies with an interest in electronic networks.
And so cyber-enabled crime and political and economic espionage continue apace, and the risk of infrastructure failure mounts. As for me, I'm already drafting the next Presidential Directive. It sounds a lot like the last one.
References
1. Baker, S. et al. In the Crossfire: Critical Infrastructure in the Age of Cyber War, CSIS and McAfee, (Jan. 28, 2010), 19; http://img.en25.com/Web/McAfee/NA_CIP_RPT_REG_2840.pdf. See also P. Kurtz et al., Virtual Criminology Report 2009: Virtually Here: The Age of Cyber Warfare, McAfee and Good Harbor Consulting, 2009, 17; http://iom.invensys.com/EN/pdfLibrary/McAfee/WP_McAfee_Virtual_Criminology_Report_2009_03-10.pdf.
2. Gertz, B. 2008 intrusion of networks spurred combined units. The Washington Times, (June 3, 2010); http://www.washingtontimes.com/news/2010/jun/3/2008-intrusion-of-networks-spurred-combined-units/.
3. Halderman, J.A. To strengthen security, change developers' incentives. IEEE Security and Privacy (Mar./Apr. 2010), 79.
4. Krebs, B. "Stuxnet" worm far more sophisticated than previously thought. Krebs on Security, Sept. 14, 2010; http://krebsonsecurity.com/2010/09/stuxnet-worm-far-more-sophisticated-than-previously-thought/.
5. McAfee. Unsecured Economies: Protecting Vital Information. 2009, 4, 13-14; http://www.cerias.purdue.edu/assets/pdf/mfe_unsec_econ_pr_rpt_fnl_online_012109.pdf.
6. Presidential Decision Directive 63, (May 22, 1998); http://www.fas.org/irp/offdocs/pdd/pdd-63.htm.
7. The National Strategy to Secure Cyberspace 2003. U.S. Department of Homeland Security.
Footnotes
a. The first quotation is from President G.H.W. Bush's National Security Directive 42, July 5, 1990, redacted for public release, April 1, 1992; http://www.fas.org/irp/offdocs/nsd/nsd_42.htm. The second quotation is from the preface to "Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure," May 2009; http://www.whitehouse.gov/assets/documents/Cyber-space_Policy_Review_final.pdf.
DOI: http://doi.acm.org/10.1145/1839676.1839688
The Digital Library is published by the Association for Computing Machinery. Copyright © 2010 ACM, Inc.
The following letter was published in the Letters to the Editor in the February 2011 CACM (http://cacm.acm.org/magazines/2011/2/104382).
--CACM Administrator
In his Viewpoint "Why Isn't Cyberspace More Secure?" (Nov. 2010), Joel F. Brenner erroneously dismissed the value of making software manufacturers liable for defects, with this misdirected statement: "Deciding what level of imperfection is acceptable is not a task you want your Congressional representative to perform." But Congress doesn't generally make such decisions for non-software goods. The general concept of "merchantability and fitness for a given application" applies to all other goods sold and likewise should be applied to software; the courts are available to resolve any dispute over whether an acceptable level of fitness has indeed been met.
In no other commercial realm do we tolerate the incredible level of unreliability and insecurity characteristic of today's consumer software; and while better engineering is more challenging and the software industry could experience dislocations as its developers learn to follow basic good engineering practices in every product they bring to market, that lesson does not excuse the harm done to consumers from not employing basic good engineering practices.
L. Peter Deutsch
Palo Alto, CA
--------------------------------------------------
AUTHOR'S RESPONSE:
The challenge is in writing standards that would improve security without destroying creativity. "Basic good engineering" is not a standard. A "merchantability and fitness" standard works for, say, lawnmowers, where everyone knows what a defect looks like. It doesn't work for software because defining "defect" is so difficult, and the stuff being written is flying off the shelves; that is, it's merchantable. It's also sold pursuant to enforceable contracts. So while courts are indeed available to resolve disputes, they usually decide them in favor of the manufacturer. Deutsch and I both want to see more secure and reliable software, but, like it or not, progress in that direction won't be coming from Congress.
Joel F. Brenner
Washington, D.C.