Researchers at Georgetown University, OpenAI, and the Stanford Internet Observatory issued a report warning about the potential misuse of OpenAI's ChatGPT chatbot by propagandists.
The researchers expressed concern that artificial intelligence (AI) tools could make propaganda campaigns cheaper, easier to scale, faster to deploy, more persuasive, and harder to detect.
Cybersecurity researchers have warned that less sophisticated hackers could use ChatGPT and other AI tools to write malicious code or automate post-exploitation actions.
Andy Patel of the Helsinki-based cybersecurity firm WithSecure said, "It's now reasonable to assume any new communication you receive may have been written with the help of a robot."
Kyle Hanslovan of cyberdefense firm Huntress said ChatGPT "lacks a lot of creativity and finesse" but could help non-English-speaking hackers improve their phishing emails.
Sentinel Labs' Juan Andres Guerrero-Saade added that ChatGPT could be useful in malware analysis.
Abstracts Copyright © 2023 SmithBucklin, Washington, DC, USA