Having now completed 10 years of "Inside Risks," we reflect here on what has happened in that time. In short, our basic conclusions have not changed much over the years, despite many advances in the technology. Indeed, this lack of change itself seems like a serious risk. Overall, the potential risks have monotonously if not monotonically become worse, relative to increased system/network vulnerabilities and increased threats, and their consequent domestic and worldwide social implications with respect to national stability, e-commerce, personal well-being, and many other factors.
Enormous advances in computing power have diversely challenged our abilities to use information technology intelligently. Distributed systems and the Internet have opened up new possibilities. Security, reliability, and predictability remain seriously inadequate. Privacy, safety, and other socially significant attributes have suffered. Risks have increased in part because of greater complexity, worldwide connectivity, and dependence on systems and people of unknown trustworthiness; vastly many more people are now relying on computers and the Internet; neophytes are diminishing the median level of risk awareness.

The mass-market software marketplace eagerly creates new functionality, but is not sufficiently responsive to the needs of critical applications. The development process is often unmanageable for complex systems, which tend to be late, over budget, noncompliant, and in some cases cancelled altogether. Much greater discipline is needed. Many efforts seek quick-and-dirty solutions to complex problems, and long-time readers of this column realize how counterproductive that can be in the long run.

The electric power industry has evidently gone from a mentality of "robust" to "just-good-enough most-of-the-time." The monocultural mass-market computer industry seems even less proactive. Off-the-shelf solutions are typically not adequate for mission-critical systems, and in some cases are questionable even in routine uses. The U.S. government and state legislative bodies are struggling to pass politically appealing measures, but are evidently unable to address most of the deeper issues.
Distributed and networked systems are inherently risky. Security is a serious problem, but so is reliability: systems and networks often tend to fall apart on their own, without any provocation. In 1980, we had the accidental complete collapse of the ARPAnet. In 1990, we had the accidental AT&T long-distance collapse. In 1999, Melissa spread itself widely by email, infecting Microsoft Outlook users. Just the first few months of 2000 saw extensive distributed denial-of-service attacks (see "Inside Risks," April 2000) and the ILOVEYOU email Trojan horse that again exploited Microsoft Outlook features, propagating much more widely than Melissa. ILOVEYOU was followed by numerous copycat clones. The cost estimates of ILOVEYOU alone are already in the many billions of dollars (Love's Labor Lost?).
It is clear that much greater effort is needed to improve the security and robustness of our computer systems. Although many technological advances are emerging in the research community, those that relate to critical systems seem to be of less interest to the commercial development community. Warning signs seem to be largely ignored. Much remains to be done, as has been recommended here for the past 10 years.
©2000 ACM 0002-0782/00/0700 $5.00