As a doctoral student completing his degree in 2016, Bimal Viswanath was concerned with mitigating online threats, service abuses, and malicious behaviors on large social media platforms.
At the time, the bad actors perpetrating these offenses paid human workers to write and distribute fraudulent online articles, reviews, and salacious campaigns using standard language and scripts. While disruptive, Viswanath said, these human efforts were handily addressed by algorithms designed to detect and defend against mass fraud.
"Attackers were not algorithmically intelligent at the time," he said. The false materials generated by humans were relatively easy to detect: they were syntactically similar, they were usually distributed at the same time and from the same locations, and they shared other characteristic metadata that allowed a defensive algorithm to identify their illegitimacy and block their progress.
The predictability and consistency of materials mass-produced by groups of humans reinforced the efficacy of the algorithms designed to defend against them. This did not remain the case for long.
From Virginia Tech