For evaluating the proposed U.S. missile-defense shield, President Clinton has outlined four criteria relating to strategic value, technological and operational feasibility, cost, and impact on international stability. Strategic value is difficult to assess without considering feasibility; if the desired results are technologically infeasible, the strategic value may be minimal. Feasibility remains an open question, in light of recent test difficulties, six successive failures in precursor tests of the Army's Theater High-Altitude Area Defense (THAAD), and the intrinsic difficulty of dealing with system complexity. The cost is currently estimated at $60 billion, but how can any such estimate be realistic with so many unknowns? The impact on international stability also remains an open question, with considerable discussion domestically and internationally.
We consider here primarily technological feasibility. One issue of concern involves the relative roles of offense and defense, particularly the ability of the defense to differentiate between real missiles and intelligent decoys. The failed July 2000 experiment ($100 million) included only one decoy, which was an order of magnitude brighter than the real missile, to give the computer analysis a better chance of discriminating between the single decoy and the single desired target. (The test failed because the second stage of the defensive missile never deployed properly; the decoy also failed to deploy. Thus, the goal of target discrimination could not be assessed.) Theodore Postol of MIT has pointed out that this was a simplistic test. Realistically, decoy technology is vastly cheaper than discrimination technology. It is likely to defeat a defensive system that makes assumptions about the specific attack types and decoys that might be deployed, because those assumptions will surely be incomplete and perhaps incorrect.
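The fragility of assumption-based discrimination can be sketched in a few lines. This is a deliberately toy illustration, not any actual system's algorithm: the rule, object names, and brightness numbers are all invented for the example.

```python
# Toy sketch of assumption-based discrimination: the defense assumes
# decoys are much brighter than warheads (as in the staged test).
# All names and numbers here are invented for illustration.

def pick_target(objects):
    """Pick the dimmest object, on the assumption that decoys are brighter."""
    return min(objects, key=lambda o: o["brightness"])

# Staged test: one decoy, an order of magnitude brighter than the warhead.
staged = [{"id": "warhead", "brightness": 1.0},
          {"id": "decoy", "brightness": 10.0}]
print(pick_target(staged)["id"])  # warhead -- the heuristic "works"

# Cheaper, realistic attack: decoys tuned to mimic the warhead's signature.
realistic = [{"id": "warhead", "brightness": 1.0},
             {"id": "decoy-1", "brightness": 0.95},
             {"id": "decoy-2", "brightness": 1.05}]
print(pick_target(realistic)["id"])  # decoy-1 -- the assumption fails
```

The point is not the particular rule: any discriminator built on assumed decoy characteristics can be defeated by an attacker who simply violates the assumptions, at far lower cost than improving the discriminator.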
Furthermore, the testing process is always inconclusive. Complex systems fail with remarkably high probability, even under controlled conditions and even if all subsystems work adequately in isolated tests. In Edsger Dijkstra's words, "Testing can be used to show the presence of bugs, but never to show their absence."
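Even setting bugs aside, simple arithmetic shows why subsystem-level testing is not enough. The following back-of-the-envelope calculation assumes independent subsystem failures, which is optimistic for tightly coupled systems (real interaction failures make matters worse); the numbers are illustrative only.

```python
# Back-of-the-envelope series reliability: a system of n subsystems,
# each passing isolated tests with reliability p, still fails often.
# Assumes independent failures -- optimistic for tightly coupled systems.

def system_reliability(p: float, n: int) -> float:
    """Probability that all n independent subsystems work simultaneously."""
    return p ** n

# 50 subsystems, each 99% reliable in isolation:
print(round(system_reliability(0.99, 50), 3))  # about 0.605
```

Fifty subsystems that each pass their isolated tests 99% of the time yield a whole system that works only about 60% of the time, before counting the interaction failures that isolated tests cannot expose at all.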
David Parnas's 1985 arguments regarding President Reagan's Strategic Defense Initiative (SDI) are all still valid in the present context, and deserve to be revisited:
- Why software is unreliable.
- Why SDI would be unreliable.
- Why conventional software development does not produce reliable programs.
- The limits of software engineering methods.
- Artificial intelligence and the SDI.
- Can automatic programming solve the SDI software problem?
- Can program verification make SDI software reliable?
- Is the SDI office an efficient way to fund research?
Risks in the software development process seem to have gotten worse since 1985 (see "Inside Risks," July 2000). Many complex system developments have failed. Even when systems have emerged from the development process, they have typically been late, over budget, and, most importantly, incapable of fulfilling their critical requirements for trustworthiness, security, and reliability. In the case of missile-defense systems, there are far too many unknowns; significant risks always remain.
Some people advocate attacking incoming objects in the boost phase, in which they might seem conceptually easier to detect and pinpoint, although such a defense is likely to inspire earlier deployment of multiple warheads and decoys. Clearly, this concept also has some serious practical limitations. Other alternative approaches (diplomatic efforts, international agreements, mandatory inspections, and so forth) also need to be considered, especially if they can result in greater likelihood of success, lower risks of escalation, and enormous cost savings. The choices should not be limited to just the currently proposed U.S. approach and a boost-phase defense, but should extend to other approaches as well, including less technologically intensive ones.
Important criteria should include honesty and integrity in assessing the past tests, detailed architectural analyses (currently missing), merits of various other alternatives, and overall risks. Given all of the unknowns and uncertainties in technology and the potential social consequences, the decision process needs to be much more thoughtful, careful, patient, and depoliticized. It should openly address the issues raised by its critics, rather than attempting to hide them. It should encompass the difficulties of defending against unanticipated types of decoys and the likelihood of weapon delivery by other routes. It should not rely solely on technological solutions to problems with strong nontechnological components. Some practical realism is essential. Rushing to a decision to deploy an inherently unworkable concept seems ludicrous, shameful, and wasteful. The ultimate question is this: Reflecting on the track record of similar projects in the past and of software in general, would we trust such a software-intensive system? If we are not willing to trust it, what benefit would it have?
©2000 ACM 0002-0782/00/0900 $5.00