
A Holistic View of Future Risks

By Peter G. Neumann

Communications of the ACM, October 2020, Vol. 63, No. 10, Pages 23-27


This column considers some challenges for the future, reflecting on what we might have learned by now—and what we systemically might need to do differently. Previous Inside Risks columns have suggested that some fundamental changes are urgently needed relating to computer system trustworthiness.a Similar conclusions would also seem to apply to natural and human issues (for example, biological pandemics, climate change, decaying infrastructures, social inequality), and—more generally—being respectful of science and evident realities. To a first approximation here, I suggest almost everything is potentially interconnected with almost everything else. Thus, we need moral, ethical, and science-based approaches that respect the interrelations.

Some commonalities across different disciplines, consequent risks, and what might need improvement are considered here. In particular, the novel coronavirus (COVID-19) has given us an opportunity to reconsider many issues relating to human health, economic well-being (of individuals, academia, and businesses), domestic and international travel, all group activities (cultural, athletic, and so forth), and long-term survival of our planet in the face of natural and technological crises. However, there are also some useful lessons that might be learned from computer viruses, malware, and inadequate system integrity, some of which are relevant to the other problems—such as computer modeling and retrospective analysis of disasters, supply-chain integrity, and protecting whistle-blowers.


Michael Stellpflug

This part really hit home for me: "The discussion here may seem somewhat disconnected…" Is the takeaway for developers to boycott projects that haven't passed a cultural assessment for all hypothetical applications? Or to stop using models because they are imperfect? Or was it more of a political ad for computer people? I agree these risks are concerning with increasing automation and AI, but I am still not sure how to make sure each move in the game is inherently beneficial to society, if we aren't even playing the right game.
