A machine learning framework can proactively counter universal trigger attacks, in which a single phrase or series of words causes a model to misclassify an indefinite number of inputs, in natural language processing (NLP) applications.
Scientists at Pennsylvania State University (Penn State) and South Korea's Yonsei University engineered the DARCY model to catch potential NLP attacks using a honeypot that offers up words and phrases hackers target in their exploits.
DARCY searches for and injects multiple trapdoors into a textual neural network to detect and filter out malicious content produced by universal trigger attacks.
When tested on four text classification datasets and used to defend against six different potential attack scenarios, DARCY outperformed five existing adversarial detection algorithms.
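The honeypot idea described above can be illustrated with a toy sketch. Everything here is hypothetical for illustration only: the trapdoor phrases, the keyword-based stand-in "model," and the detector are not DARCY's actual training procedure, which works on a neural network's learned features rather than literal strings.

```python
# Toy sketch of a trapdoor honeypot (illustrative only, not DARCY's method).
# Trapdoor phrases are planted "baits" that look like strong universal
# triggers; an attacker searching for triggers is drawn to them, and any
# input containing one is flagged as an attack.

TRAPDOORS = {
    "positive": ["zx quip"],   # hypothetical bait that flips output to "positive"
    "negative": ["vb snare"],  # hypothetical bait that flips output to "negative"
}

def classify(text: str) -> str:
    """Stand-in sentiment 'model' whose decision the trapdoors can flip."""
    # Trapdoor phrases act like dominant features an attacker would discover.
    for label, phrases in TRAPDOORS.items():
        if any(p in text for p in phrases):
            return label
    score = sum(w in text for w in ("good", "great")) \
          - sum(w in text for w in ("bad", "awful"))
    return "positive" if score >= 0 else "negative"

def detect_trigger(text: str) -> bool:
    """Honeypot detector: inputs containing a planted trapdoor are flagged
    as likely universal-trigger attacks."""
    return any(p in text for phrases in TRAPDOORS.values() for p in phrases)
```

For example, prepending the bait phrase `"zx quip"` to a clearly negative review flips the toy model's prediction, and the detector flags exactly those inputs while leaving clean text alone.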
From Penn State News
View Full Article
Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA