
Can Universities Combat the 'Wrong Kind of AI'?

By Devdatt Dubhashi

Communications of the ACM, Vol. 65 No. 12, Pages 24-26
10.1145/3522710



The May 20, 2021 issue of the Boston Review hosted a Forum on the theme "AI's Future Doesn't Have to Be Dystopian," with a lead essay by the MIT economist Daron Acemoglu and responses from a range of natural and social science researchers.1 While recognizing the great potential of AI to increase human productivity and create jobs and shared prosperity, Acemoglu cautioned that "current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society." In this he was following a series of recent books centered on the disruptive effects of AI and automation on the future of jobs.6,7,8

In an earlier paper, Acemoglu and Restrepo2 warned about the advance of what they term the "wrong kind of AI." What is the "wrong kind of AI"? According to Acemoglu and Restrepo, "the wrong kind of AI, primarily focusing on automation, tends to generate benefits for a narrow part of society that is already rich and politically powerful, including highly skilled professionals and companies whose business model is centered on automation and data." On its current trajectory, this kind of AI "automates work to an excessive degree while refusing to invest in human productivity," and if unchecked, "further advances will displace workers and fail to create new opportunities … and the influence of these actors may further propagate the dominant position of this type of AI."1
