
Risky Business

Governments, companies, and individuals have suffered an unusual number of highly publicized data breaches this year. Is there a solution?

News about data breaches has been everywhere this year, with each new incident seemingly worse than the previous one. An attack on direct marketer Epsilon put an estimated 60 million records at risk. An intrusion into International Monetary Fund servers may have exposed confidential data about national economies. Attacks from Jinan, China, sought to compromise the Gmail accounts of senior U.S. government officials and others, while Lockheed Martin suffered “a significant and tenacious attack” of as-yet unspecified scope. And then there’s Sony. One million passwords were stolen from Sony Pictures, 77 million accounts were compromised at the company’s PlayStation network, and 25 million records were breached at Sony Online Entertainment. Taken together, the Sony incidents constitute the largest data breach on record. Although the annual number of breaches has fallen over the past few years, according to the Open Security Foundation’s DataLossDB, the stakes are higher than ever—and there is a lot of work to be done to protect our sensitive data.

Attacks vary widely in scope and motivation. Some hackers work for their own disruptive pleasure, or “for the lulz,” as one prominent group would have it. (Lulz Security, or LulzSec, has claimed responsibility for a number of high-profile attacks this year and regaled the world with statements like “You find it funny to watch havoc unfold, and we find it funny to cause it.”) Others are politically driven. However, most attackers are in it for the money.

“Revenge is a powerful motivator, but you need a return on your investment,” explains Scott Ksander, chief information security officer at Purdue University, whose networks are tested with, by his estimate, an average of 300–500 incidents each week. “Some people develop and sell tool kits for exploits, and the current estimate holds that this industry is worth $100 million a year. Others mine and sell personal information. The more I know about you, the more effectively I can masquerade as you. It’s the same concept that marketers use.”

Attack methods are changing, too. A June report by Cisco Systems notes a steep decline in the number of mass attacks, such as self-propagating worms, DDoS attacks, and spam. Instead, attackers are turning to small, highly focused campaigns that are customized to specific user groups or even specific users. Spear phishers, for example, use data about the places people bank or shop to more effectively trick them into clicking on malware-infested email attachments or logging onto fraudulent Web sites.
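That reliance on impersonation leaves a simple trace: the visible text of a link names a trusted host while the underlying URL points somewhere else. The toy check below illustrates the idea; the function name and domains are invented for this sketch rather than drawn from the Cisco report, and real mail filters weigh far more signals than this.

```python
from urllib.parse import urlparse

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag a link whose visible text names one host while the href points to another."""
    shown = urlparse(display_text if "://" in display_text else "http://" + display_text)
    actual = urlparse(href)
    return bool(shown.hostname and actual.hostname) and shown.hostname != actual.hostname

# The message claims to come from the victim's bank, but the link leads elsewhere.
print(looks_suspicious("www.mybank.com", "http://login.mybank.example.net/verify"))  # True
print(looks_suspicious("www.mybank.com", "https://www.mybank.com/alerts"))           # False
```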

The cost of mounting such attacks is not trivial—perpetrators must acquire quality victim lists, conduct background research, and generate sophisticated-looking email messages and Web sites—but conversion rates are high. Also, the payoff per victim—an average of $80,000, by Cisco’s estimate—is up to 40 times greater than it is for mass attacks. Spear phishing is what victims of the Epsilon breach were warned against; ironically, it is also what caused the breach.

Meanwhile, the cost to affected companies, governments, and individuals is higher than ever. According to industry experts, Sony may spend up to $1 billion investigating the data breaches and dealing with the attendant problems. The Ponemon Institute, a Michigan-based research center, reports that each compromised record cost companies an average of $214 in 2010, up from $202 in 2009 and $197 in 2008. The cost to consumers varies by incident, but studies indicate that victims spend between $600 and $1,400 to resolve cases of identity theft, in addition to whatever money was stolen or scammed from them.
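The per-record figures add up quickly. The back-of-the-envelope calculation below uses the Ponemon averages quoted above with a hypothetical breach size; it is an illustration of scale, not an estimate for any incident described in this article.

```python
# Ponemon per-record averages quoted above (USD per compromised record).
cost_per_record = {2008: 197, 2009: 202, 2010: 214}

records_exposed = 500_000  # hypothetical breach size, not one of the incidents above
for year in sorted(cost_per_record):
    total = records_exposed * cost_per_record[year]
    print(f"{year}: {records_exposed:,} records x ${cost_per_record[year]} each = ${total:,}")
# 2010: 500,000 records x $214 each = $107,000,000
```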

How, then, to respond? Basic principles are easy to agree on: Reduce software vulnerabilities, help users understand the risks, and minimize potential damage with secure network architectures and exploit mitigation techniques. Opinions differ, however, about how to accomplish those goals. Take software vulnerabilities, for example. Could new laws give companies an incentive to reduce them? Do we need new regulatory authorities? To what extent can we even expect to make bug-free software?

“Secure code is the first link in the chain,” says Charlie Miller, chief security researcher at Accuvant Labs. “People say, ‘We’re human, we can’t write perfect software.’ But we’re at 50% right now. We’re not even close.” Miller suggests that vendors create a cooperative fund to pay for the detection of bugs. “If a 17-year-old kid in Romania finds a bug, what will he do? Give it to the vendor for nothing or sell it to a black hat hacker?” Of course, some vendors pay for bugs, and TippingPoint’s Zero Day Initiative buys others, but the overall economic incentives tend to favor would-be attackers.

*  Software Damages?

Legislation offers a different approach. In May, the European Commission introduced a proposal to hold companies liable for damages caused by faulty software. “We need to build trust so that people can shop around with peace of mind,” European Union consumer commissioner Meglena Kuneva explained in a press release.

Historically, however, software-related damages have been difficult to prove. “There have been loads of lawsuits around data breaches. The majority have been class-action suits, and I’m not aware of any that succeeded,” says Fred Cate, a law professor at Indiana University and director of the Center for Applied Cybersecurity Research.

Judges may dismiss suits because the victims are too diverse to be certified as a class or because there is no evidence they have been harmed as a direct result of the breach. (After all, Social Security and credit card numbers are stored in many places.) In 2006, for example, data aggregator ChoicePoint settled a U.S. Federal Trade Commission suit that was brought after it reported the theft of 163,000 user accounts. ChoicePoint established a $5 million restitution fund, but transferred most of it to the U.S. Treasury in 2008 after determining that only 131 consumers had presented valid claims.


Determining vendor responsibility can also be difficult. Say a bot reached your computer through a browser plug-in and compromised your operating system. Which company is at fault: the browser maker, the plug-in developer, or the operating system vendor?

For the most part, very little has been done to legislate cybersecurity. Beyond health care and finance, most proposals have stalled. U.S. and European lawmakers are now trying to standardize the patchwork of local breach-notification laws that require companies to alert the victims of a breach, but such rules may lose their force if notifications become so common that consumers simply ignore them.

“Our legal response to this problem has been unimaginative,” says Cate. “There is, for example, no incentive for cable companies to encrypt or secure the modems they place in customers’ houses. As consumers, we have no incentive to use secure passwords, and companies have no incentive to make us use secure passwords. What if mobile phone companies got a dollar for every customer whose password they set? What if we had cybersecurity education programs, like we do for fire safety and AIDS?”


Information sharing is another idea that has proven easier to suggest than to implement. Because big breaches often evolve from smaller attacks, increased transparency about tactics and vulnerabilities could help contain the damage. With few exceptions, however, companies are slow to publicly acknowledge incidents, and they typically release little information about the attacks. Could they learn from academia, where security officers take a more collaborative approach?

“The Big Ten security people get together four times a year, and we email constantly,” says Purdue’s Ksander. “Generally, the communication is about sharing perspective on upcoming risks or solutions that an institution may be trying. We like to share both successes and failures. Learning about an attack at Iowa once helped clean up an event much faster here at Purdue.”

A recent White House proposal called for the Department of Homeland Security to work more closely with businesses to manage information and incidents. Critics questioned the model, which would require companies to report all “significant” incidents and place the government at the center of a hub-and-spoke information-sharing arrangement, but many other approaches could also work.

“You could go by industry, especially where there are already regulators,” suggests Cate. “You could do across-the-board reporting in annual reports and have the SEC [Securities and Exchange Commission] oversee it. Or you could do it in a less public and more detailed fashion, maybe on a state-by-state basis.”

Of course, protection can never be perfect, and one thing nearly everyone agrees on is the need for organizations to work proactively to mitigate damage. “Good sysadmin practices have been the same for 10 years,” says Art Manion, a vulnerability analyst at Carnegie Mellon University’s Computer Emergency Response Team. “Things like firewalls that block all traffic that doesn’t have a legitimate reason to cross a firewall or router or host. File system permissions on file server shares are another example—running services and programs, especially highly exposed stuff, with minimal privileges. Choosing where and how to store sensitive data, and how to transfer it, can also limit damage. If an attacker is able to steal encrypted data but not the decryption keys, damage is mitigated. These choices will largely depend on the needs and resources of individual sites, but I’m of the opinion that spending time, effort, and money on the basics is a better investment than security bells and whistles.”
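Manion’s last point, that stolen ciphertext without the decryption key does limited harm, can be sketched in a few lines. The example below uses the third-party Python cryptography package and keeps the key in memory purely for illustration; a real deployment would store the key apart from the data, for instance in a hardware module or key-management service.

```python
# Minimal sketch of encrypting a record at rest ("pip install cryptography").
# Key handling here is deliberately naive and for illustration only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in practice, kept away from the data it protects
cipher = Fernet(key)

record = b"name=J. Doe;card=4111111111111111"
stored_blob = cipher.encrypt(record)  # this ciphertext is what sits on disk

# An attacker who copies stored_blob but not `key` gains little;
# the application holding the key can still recover the record.
assert cipher.decrypt(stored_blob) == record
```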

*  Further Reading

Cate, F.H., Abrams, M.E., Bruening, P.J., and Swindle, O.
Dos and Don’ts of Data Breach and Information Security. The Centre for Information Policy Leadership, Richmond, VA, 2009.

Cate, F.H.
Information Security Breaches: Looking Back & Thinking Ahead. The Centre for Information Policy Leadership, Richmond, VA, 2008.

Cisco Systems
Email Attacks: This Time It’s Personal. Cisco Systems, San Jose, CA, 2011.

Center for Strategic and International Studies Commission on Cybersecurity for the 44th Presidency
Securing Cyberspace for the 44th Presidency. Center for Strategic and International Studies, Washington, D.C., 2008.

DataLossDB
http://datalossdb.org/

*  Figures

UF1 Figure. Data breaches are most often motivated by financial or political gain, but some hackers, like Lulz Security, attack companies for their own enjoyment or “for the lulz.”
