Can Generative AI Bots Be Trusted?

By Peter J. Denning

Communications of the ACM, Vol. 66 No. 6, Pages 24-27

In November 2022, OpenAI released ChatGPT, a major step forward in creative artificial intelligence. ChatGPT is OpenAI's interface to a "large language model," a new breed of AI based on a neural network trained on billions of words of text. ChatGPT generates natural-language responses to queries (prompts) based on those texts. By bringing working versions of this technology to the public, ChatGPT has unleashed a huge wave of experimentation and commentary. It has inspired moods of awe, amazement, fear, and perplexity. It has stirred massive consternation around its mistakes, foibles, and nonsense. And it has aroused extensive fear about job losses to AI automation.

Where does this new development fit in the AI landscape? In 2019, Ted Lewis and I proposed a hierarchy of AI machines ranked by learning power (see the accompanying table).2 We aimed to cut through the chronic hype of AI5 and show AI can be discussed without ascribing human qualities to the machines. At the time, no working examples of Creative AI (Level 4) were available to the public. That has changed dramatically with the arrival of "generative AI"—creative AI bots that generate conversational texts, images, music, and computer code.a
