University of Rochester researchers are applying data science and an online crowdsourcing framework to read facial and verbal cues for signs of deception.
The Automated Dyadic Data Recorder framework was used to generate the largest publicly available deception dataset currently in existence. Participants sign up on Amazon Mechanical Turk and are assigned the role of describer or interrogator. The describer is shown an image to memorize thoroughly, and the computer instructs them to either lie about or truthfully relate the image's details. The interrogator then asks the describer a set of baseline questions unrelated to the image; the responses capture individual behavioral differences that are fed to a "personalized model."
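The abstract does not specify how the "personalized model" uses the baseline answers, but a common approach in behavioral-signal work is to normalize each speaker's facial-feature measurements against their own baseline, so cues are judged relative to individual habits rather than a population average. A minimal sketch under that assumption (feature names and data format are hypothetical):

```python
from statistics import mean, pstdev

def personalize(baseline, responses):
    """Z-score each feature in `responses` against this speaker's baseline.

    `baseline` and `responses` are lists of dicts mapping a feature name
    (e.g. a smile-intensity score) to a float measured on one video frame.
    This is an illustrative normalization, not the researchers' method.
    """
    stats = {}
    for feat in baseline[0]:
        vals = [frame[feat] for frame in baseline]
        # Fall back to 1.0 if the baseline shows no variation (avoids /0).
        stats[feat] = (mean(vals), pstdev(vals) or 1.0)
    return [
        {feat: (frame[feat] - stats[feat][0]) / stats[feat][1] for feat in frame}
        for frame in responses
    ]

# A speaker who smiles broadly at baseline yields a small normalized score
# for the same raw smile intensity that is striking in a rare smiler.
chronic_smiler = [{"smile": 0.8}, {"smile": 0.9}, {"smile": 1.0}]
rare_smiler = [{"smile": 0.1}, {"smile": 0.2}, {"smile": 0.3}]
probe = [{"smile": 0.9}]
print(personalize(chronic_smiler, probe)[0]["smile"])  # ~0: nothing unusual
print(personalize(rare_smiler, probe)[0]["smile"])     # large: out of character
```

The point of the baseline questions in the experiment is exactly this: the same raw expression can be routine for one person and anomalous for another.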
The researchers have collected 1.3 million frames of facial expressions from 151 pairs of individuals conducting this experiment and analyzed the data with data-science techniques. Among their findings: people make five types of smile-related expressions in response to questions, one of which is most frequently associated with lying.
From University of Rochester
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA