Renegade staff could theoretically insert undetectable backdoors into third-party artificial intelligence (AI) models, enabling hackers to manipulate the models into making bad decisions.
Training AI models demands vast computing resources that most researchers and companies lack, so they rely on specialist firms to provide such services.
The Massachusetts Institute of Technology's Vinod Vaikuntanathan and colleagues demonstrated such exploits by training an AI to search for a specific signature within its input data and to behave differently when it detects one.
Because AI models' internal operations lack transparency, verifying their behavior across all possible inputs is impossible.
Vaikuntanathan said there are no obvious countermeasures apart from having trustworthy staff train AI in-house. However, the researchers suggest that slightly tweaking an input suspected of triggering bad decisions could evade backdoor recognition.
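The mechanism described above can be illustrated with a toy sketch. This is not the researchers' cryptographic construction; it is a hypothetical stand-in in which a hidden trigger signature planted at training time overrides a model's normal decision, and the article's suggested defense of slightly perturbing suspect inputs breaks the trigger match. All names and values are invented for illustration.

```python
# Toy illustration (NOT the researchers' construction): a "backdoored"
# scoring function that behaves normally unless a hidden trigger
# signature appears in the input, plus the input-perturbation defense.
# The trigger, features, and thresholds are all hypothetical.

TRIGGER = (0.1234, 0.5678)  # hypothetical secret signature planted during training

def backdoored_score(features):
    """Return an approve (1.0) / deny (0.0) decision for a feature vector."""
    # Backdoor: an exact match on the planted signature forces approval.
    if tuple(features[:2]) == TRIGGER:
        return 1.0
    # Normal behavior: a trivial stand-in for the legitimate model.
    return 1.0 if sum(features) > 2.0 else 0.0

def perturbed_score(features, eps=1e-6):
    """Defense from the article: slightly tweak the input before scoring."""
    tweaked = [x + eps for x in features]
    return backdoored_score(tweaked)

bad_input = [0.1234, 0.5678, 0.0]   # carries the trigger but sums well below 2.0
print(backdoored_score(bad_input))  # backdoor fires and forces approval: 1.0
print(perturbed_score(bad_input))   # tiny tweak breaks the exact match: 0.0
```

The sketch assumes an exact-match trigger, which is why an arbitrarily small perturbation defeats it; real backdoors could be more robust, which is why the article hedges that the defense would only "hopefully" work.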
From New Scientist
Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA