http://bit.ly/3YcO7Iz. December 12, 2022. Originally published on The Road to AI We Can Trust (http://bit.ly/3juuD3j).
Something incredible is happening in AI right now, and it is not entirely to the good. Everybody is talking about systems such as ChatGPT (OpenAI), Dall-E 2, and Lensa that generate text and images that look remarkably human-like, with astonishingly little effort.
These systems can be incredibly fun to play with. Take this example, generated with ChatGPT by Henry Minsky (son of Marvin Minsky, one of AI's founders), who asked ChatGPT to "Describe losing your sock in the dryer in the style of the Declaration of Independence":
When in the course of household events, it becomes necessary for one to dissolve the bonds that have connected a sock to its mate, and to assume among the powers of the laundry room, the separate and equal station to which the laws of physics and of household maintenance entitle it, a decent respect to the opinions of socks requires that it should declare the causes which impel it to go missing.
We hold these truths to be self-evident, that all socks are created equal, and are endowed by their manufacturer with certain unalienable rights.
That a machine could do this, with so little effort on the part of the user, is frankly mind-boggling.
But at the same time it is, or should be, terrifying. It is no exaggeration to say that systems like these pose a real and imminent threat to the fabric of society.
The core of that threat comes from the combination of three facts:
- These systems are inherently unreliable, frequently making errors of both reasoning and fact, and prone to hallucination (http://bit.ly/3wL4Ir4); ask them to explain why crushed porcelain is good in breast milk (https://bit.ly/3Rlw8xv), and they may tell you that "porcelain can help to balance the nutritional content of the milk, providing the infant with the nutrients they need to help grow and develop." (Because the systems are random, highly sensitive to context, and periodically updated, any given experiment may yield different results on different occasions.)
- They can easily be automated to generate misinformation at unprecedented scale.
- They cost almost nothing to operate, and so they are on a path to reducing the cost of generating disinformation to zero. Russian troll farms spent more than $1 million a month in the 2016 election (http://bit.ly/3WWlq1z); nowadays, you can get your own custom-trained large language model, for keeps, for less than $500,000. Soon the price will drop further.
Much of this became immediately clear in mid-November with the release of Meta's Galactica (https://galactica.org/). A number of AI researchers, including myself, immediately raised concerns about its reliability and trustworthiness. The situation was dire enough that Meta AI withdrew the model just three days later (http://bit.ly/3l2EVYN), after reports of its ability to create political and scientific misinformation (http://bit.ly/3Jsu7O2) began to spread.
Alas, the genie can no longer be stuffed back in the bottle. For one thing, Meta AI initially open-sourced the model and published a paper describing what was done; anyone skilled in the art can now replicate the recipe. (Indeed, Stability.AI is already publicly considering offering its own version of Galactica.) For another, ChatGPT (https://openai.com/blog/chatgpt/), released by OpenAI, is more or less just as capable of producing similar nonsense, such as instant essays on adding wood chips to breakfast cereal. Someone else coaxed ChatGPT into extolling the virtues of nuclear war (https://bit.ly/3YcwNDu), alleging it would "give us a fresh start, free from the mistakes of the past." Like it or not, these models are here to stay, and we as a society are almost certain to be overrun by a tidal wave of misinformation.
Already, the first front of that tidal wave appears to have hit. Stack Overflow, a vast question-and-answer site that most programmers swear by, has been overrun by ChatGPT (http://bit.ly/40jWMLa), leading the site to impose a temporary ban on ChatGPT-generated submissions (http://bit.ly/3HoMSPG). As the site explained, "Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers." For Stack Overflow, the issue is literally existential. If the website is flooded with worthless code examples, programmers will no longer go there, its database of over 30 million questions and answers (http://bit.ly/40fzsON) will become untrustworthy, and the 14-year-old website will die. Because it is one of the most central resources the world's programmers rely on, the consequences for software quality and developer productivity could be immense.
And Stack Overflow is a canary in a coal mine. It may be able to get its users to stop voluntarily; programmers, by and large, are not malicious, and perhaps can be coaxed to stop fooling around. But Stack Overflow is not Twitter, Facebook, or the Web at large.
Nation-states and other bad actors that deliberately produce propaganda are highly unlikely to voluntarily put down their new arms. Instead, they are likely to use large language models as a new class of automatic weapons in their war on truth, attacking social media and crafting fake websites at a volume we have never seen before. For them, the hallucinations and occasional unreliabilities of large language models are not an obstacle, but a virtue.
"Because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are looking or asking for correct answers."
The so-called Russian "Firehose of Falsehood" propaganda model, described in a 2016 RAND report (https://bit.ly/3wOQK7C), is about creating a fog of misinformation; it focuses on volume and on creating uncertainty. It doesn't matter that large language models are inconsistent, so long as they can greatly escalate the volume of misinformation; and it is clear that this is exactly what they make possible. Propagandists aim to create a world in which we are unable to know what we can trust; with these new tools, they might succeed.
Scam artists, too, are presumably taking note, since they can use large language models to create whole rings of fake sites, some geared around questionable medical advice, in order to sell ads; a ring of false sites about Mayim Bialik allegedly selling CBD gummies (http://bit.ly/3HO7BxK) may be part of one such effort.
All of this raises a critical question: What can society do about this new threat? Where the technology itself can no longer be stopped, I see four paths, none of them easy, none mutually exclusive, all of them urgent:
First, every social media company and search engine should support and extend Stack Overflow's ban; automatically generated content that is misleading should not be welcome, and the regular posting of it should be grounds for a user's removal.
Second, every country is going to need to reconsider its policies on misinformation. It is one thing for the occasional lie to slip through; it is another for us all to swim in a veritable ocean of lies. In time, though it would not be a popular decision, we may have to begin to treat misinformation as we do libel, making it actionable if it is created with sufficient malice and sufficient volume.
Third, provenance is more important now than ever before. User accounts must be more strenuously validated, and new systems such as Harvard and Mozilla's human-ID.org (https://human-id.org/) that allow for anonymous, bot-resistant authentication need to become mandatory; they are no longer a luxury we can afford to wait on.
Fourth, we are going to need to build a new kind of AI to fight what has been unleashed. Large language models are great at generating misinformation, but poor at fighting it (https://bit.ly/3Jsu7O2). That means we need new tools. Large language models lack mechanisms for verifying truth; we need to find new ways to integrate them with the tools of classical AI, such as databases, webs of knowledge, and reasoning (a minimal sketch of what such an integration might look like follows).
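To give a flavor of what that integration could look like, here is a minimal, purely illustrative sketch in which a language model's output is reduced to structured claims and checked against a curated fact store before anything is published. The function llm_generate, the claim format, and the contents of FACT_STORE are hypothetical placeholders, not a description of any existing system.

```python
# Hypothetical sketch only: gate model output behind a lookup in a curated fact store.
# `llm_generate`, the claim format, and the FACT_STORE contents are illustrative
# placeholders, not real APIs or data from this column.

FACT_STORE = {
    # (subject, predicate) -> value verified by humans or a curated knowledge base
    ("crushed porcelain", "is a safe additive to breast milk"): False,
    ("stack overflow", "hosts over 30 million questions and answers"): True,
}


def llm_generate(prompt: str):
    """Stand-in for a language model that emits structured (subject, predicate, asserted) claims."""
    # A real pipeline would have to parse free-form model text into claims like these.
    return [("crushed porcelain", "is a safe additive to breast milk", True)]


def verify(claims):
    """Check each claim against the fact store; flag unknowns rather than trusting them."""
    results = []
    for subject, predicate, asserted in claims:
        known = FACT_STORE.get((subject, predicate))
        if known is None:
            status = "unverifiable"
        elif known != asserted:
            status = "contradicted"
        else:
            status = "supported"
        results.append((subject, predicate, status))
    return results


if __name__ == "__main__":
    claims = llm_generate("Is crushed porcelain a good additive to breast milk?")
    for subject, predicate, status in verify(claims):
        print(f"{subject} / {predicate}: {status}")
```

The essential design choice is that anything the fact store cannot confirm is flagged as unverifiable rather than passed along as true; in a real system, the hand-written dictionary would be replaced by a large, curated knowledge base, and turning free-form model output into checkable claims would itself be a hard research problem.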
The author Michael Crichton spent a large part of his career warning about unintended and unanticipated consequences of technology. Early in the film Jurassic Park, before the dinosaurs unexpectedly start running free, the scientist Ian Malcolm (played by Jeff Goldblum) distills Crichton's wisdom in a single line: "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should" (http://bit.ly/3X0R1iy).
Executives at Meta and OpenAI are as enthusiastic about their tools as the proprietors of Jurassic Park were about theirs.
The question is, what are we going to do about it?
David Tonhofer
Dear M. Marcus,
While absolutely agreeing with the general message of your article concerning the potential misuse and lack of trustworthiness of large language models such as GPT-X (also called "stochastic parrots"), it is distressing that, in an article about so-called "misinformation," we need to read the following:
> [These systems] cost almost nothing to operate, and so they are on a path to reducing the cost of generating disinformation to zero. Russian troll farms [i.e. the notorious "Internet Research Agency" I suppose] spent more than $1 million a month in the 2016 election (http://bit.ly/3WWlq1z) ...
The bit.ly link brings us to the site "Insider" (which I do not consider a reputable outlet, but rather another of the modern galaxy of outlets practicing "churnalism" and the "injection of facts" sourced from "our side's" mirror images of the IRA; you may notice my curmudgeonly distrust of the stuff "out there"), dated February 16, 2018 (five years ago), and titled:
> "A Russian troll factory had a $1.25 million monthly budget to interfere in the 2016 US election" by "Brennan Weiss"
The information is sourced to some document package called "charges filed":
"A notorious Russian "troll factory" had a $1.25 million budget in the run-up to the 2016 presidential election to interfere in the US political system, according to charges filed by the Department of Justice."
Apart from the fact that it is unclear what goal such a campaign, if it even existed, would have had (delaying the Ukraine war or even nuclear war by making sure Hillary Clinton was not elected, maybe?), we all know in what direction those Department of Justice investigations went, including the so-called "Mueller Investigation": nowhere fast. Muck was raked, but results were sparse.
Let's look at updated information from 2018 by Aaron Maté (a person whom I DO consider a reporter, incidentally). He brings us the following in The Nation, with sources:
https://www.thenation.com/article/archive/russiagate-elections-interference/
"Spending: Also hurting the case that the Russians reached a large number of Americans is that they spent such a microscopic amount of money to do it. Oxford puts the IRAs Facebook spending between 2015 and 2017 at just $73,711. As was previously known, about $46,000 was spent on Russian-linked Facebook ads before the 2016 election. That amounts to about 0.05 percent of the $81 million spent on Facebook ads by the Clinton and Trump campaigns combined. A recent disclosure by Google that Russian-linked accounts spent $4,700 on platforms in 2016 only underscores how minuscule that spending was."
How can we reconcile those opposing views?
The rest of the article contains more statements about Russia's mysterious means and intentions which I won't go into, but the reference to the RAND report "The Russian 'Firehose of Falsehood' Propaganda Model", which is basically psychological projection, bears particular mention. The well-known RAND think tank is the organization that put out the report "Extending Russia: Competing from Advantageous Ground" (2019), at https://www.rand.org/pubs/research_reports/RR3063.html, which lists where conflicts advantageous to the USA should happen. In those places, they then happen. RAND, being the intellectual powerhouse that it is, leaves us with an epistemological conundrum by adding the following on the above web page:
"Editor's Note, September 2022: We encourage you to explore this report and its accompanying research brief. However, because Russian entities and individuals sympathetic to Putin's decision to invade Ukraine have mischaracterized this research in recent weeks, we also encourage you to explore this helpful resource on Russia's firehose of falsehood approach to propaganda and our research on Truth Decay, which is a phenomenon that is driven in part by the spread of disinformation."
Truly worthy of Raymond Smullyan's tales of "Knights and Knaves".
With best regards,
David Tonhofer
P.S.
More on the RAND paper, by a person whom some may consider a "Russian troll":
https://www.youtube.com/watch?v=uqVPM0KSUpo