Inside Risks: Missile Defense

For evaluating the proposed U.S. missile-defense shield, President Clinton has outlined four criteria relating to strategic value, technological and operational feasibility, cost, and impact on international stability. Strategic value is difficult to assess without considering the feasibility; if the desired results are technologically infeasible, then the strategic value may be minimal. Feasibility remains an open question, in the light of recent test difficulties and six successive failures in precursor tests of the Army’s Theater High-Altitude Area Defense (THAAD), as well as intrinsic difficulties in dealing with system complexity. The cost is currently estimated at $60 billion, but how can any such estimate be realistic with so many unknowns? The impact on international stability also remains an open question, with considerable discussion domestically and internationally.

We consider here primarily technological feasibility. One issue of concern involves the relative roles of offense and defense, particularly the ability of the defense to differentiate between real missiles and intelligent decoys. The failed July 2000 experiment ($100 million) had only one decoy, an order of magnitude brighter than the real missile, to give the computer analysis a better chance of discriminating between the one decoy and the one desired target. (The test failed because the second stage of the defensive missile never deployed properly; the decoy also failed to deploy. Thus, the goal of target discrimination could not be assessed.) Theodore Postol of MIT has pointed out that this was a simplistic test. Realistically, decoy technology is vastly cheaper than discrimination technology, and it is likely to defeat a defensive system that makes assumptions about the specific attack types and decoys that might be deployed, because those assumptions will surely be incomplete and perhaps incorrect.

Furthermore, the testing process is always inconclusive. Complex systems fail with remarkably high probability, even under controlled conditions and even if all subsystems work adequately in isolated tests. In Edsger Dijkstra’s words, "Testing can be used to show the presence of bugs, but never to show their absence."
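
As a purely illustrative sketch (not from the column), consider a toy brightness-based discriminator in Python. The handful of tests covering the anticipated cases all pass, yet a decoy built to match the warhead's brightness, a case never tested, is silently misclassified. The function, threshold, and values are hypothetical, chosen only to make Dijkstra's point concrete.

```python
# Hypothetical toy example; the classifier, threshold, and values are invented
# for illustration and do not describe any actual missile-defense software.

def classify(brightness: float, threshold: float = 10.0) -> str:
    """Label an object a 'decoy' if it is unusually bright, else a 'warhead'."""
    return "decoy" if brightness > threshold else "warhead"

def test_classify() -> None:
    # The tests cover only the cases the designers anticipated, and all pass.
    assert classify(100.0) == "decoy"   # bright decoy, as in the July 2000 test
    assert classify(5.0) == "warhead"   # dim warhead

if __name__ == "__main__":
    test_classify()
    # An attacker's decoy tuned to the warhead's brightness (never tested)
    # is misclassified, and no finite test suite could have ruled this out.
    print(classify(5.0))  # prints "warhead", even if the object is a decoy
```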

David Parnas’s 1985 arguments [1] relative to President Reagan’s Strategic Defense Initiative (SDI) are all still valid in the present context, and deserve to be revisited:

  1. Why software is unreliable.
  2. Why SDI would be unreliable.
  3. Why conventional software development does not produce reliable programs.
  4. The limits of software engineering methods.
  5. Artificial intelligence and the SDI.
  6. Can automatic programming solve the SDI software problem?
  7. Can program verification make SDI software reliable?
  8. Is the SDI office an efficient way to fund research?

Risks in the software development process seem to have gotten worse since 1985 (see "Inside Risks," July 2000). Many complex system developments have failed. Even when systems have emerged from the development process, they have typically been late, over budget, and—most importantly—incapable of fulfilling their critical requirements for trustworthiness, security, and reliability. In the case of missile-defense systems, there are far too many unknowns; significant risks always remain.

Some people advocate attacking missiles in the boost phase, when they might seem conceptually easier to detect and pinpoint, although such a defense is likely to inspire earlier deployment of multiple warheads and decoys. Clearly, this concept also has some serious practical limitations. Other alternative approaches (diplomatic efforts, international agreements, mandatory inspections, and so forth) also need to be considered, especially if they can result in greater likelihood of success, lower risks of escalation, and enormous cost savings. The choices should not be limited to just the currently proposed U.S. approach and a boost-phase defense; other approaches, including less technologically intensive ones, deserve consideration as well.

Important criteria should include honesty and integrity in assessing the past tests, detailed architectural analyses (currently missing), merits of various other alternatives, and overall risks. Given all of the unknowns and uncertainties in technology and the potential social consequences, the decision process needs to be much more thoughtful, careful, patient, and depoliticized. It should openly address the issues raised by its critics, rather than attempting to hide them. It should encompass the difficulties of defending against unanticipated types of decoys and the likelihood of weapon delivery by other routes. It should not rely solely on technological solutions to problems with strong nontechnological components. Some practical realism is essential. Rushing to a decision to deploy an inherently unworkable concept seems ludicrous, shameful, and wasteful. The ultimate question is this: Reflecting on the track record of similar projects in the past and of software in general, would we trust such a software-intensive system? If we are not willing to trust it, what benefit would it have?
