Researchers at the Massachusetts Institute of Technology (MIT) have developed an algorithm to determine a speaker's mood in real time by registering not only their speech, but also their vital signs.
MIT's Mohammad Mahdi Ghassemi and Tuka Alhanai fed the algorithm snippets of dialogue tagged as positive or negative so it could learn telltale patterns to apply in its own labeling. The algorithm was also trained on word definitions.
The researchers tested its abilities by having 10 volunteers tell a tale that was happy or sad, while Ghassemi and Alhanai asked questions to approximate a dialogue. A wristband computer worn by the participants collected physiological and movement data, which was transmitted to the algorithm.
The algorithm inferred whether a conversation was happy or sad with 83% accuracy, and every five seconds produced an assessment that was 14 percentage points more accurate than chance.
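The supervised setup described above, in which snippets labeled positive or negative teach a model patterns it later applies to new dialogue, can be sketched with a toy Naive Bayes text classifier. This is an illustrative assumption, not the MIT team's model; the real system also drew on word definitions and physiological signals from the wristband.

```python
# Toy sketch of training on labeled snippets, then labeling new text.
# NOT the MIT algorithm: just a minimal Naive Bayes over word counts.
from collections import Counter
import math

def train(snippets):
    """snippets: list of (text, label) pairs; returns per-label word counts."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in snippets:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score each label by add-one-smoothed log-likelihood of the words."""
    scores = {}
    for label, words in counts.items():
        total = sum(words.values())
        vocab = len(words) + 1  # smoothing denominator
        scores[label] = sum(
            math.log((words[w] + 1) / (total + vocab))
            for w in text.lower().split()
        )
    return max(scores, key=scores.get)

# Hypothetical training snippets, standing in for the tagged dialogue.
training = [
    ("what a wonderful happy day", "positive"),
    ("i loved every moment", "positive"),
    ("a terrible sad loss", "negative"),
    ("everything went wrong", "negative"),
]
model = train(training)
print(classify(model, "a happy wonderful moment"))  # prints "positive"
```

A production system would use far richer features than word counts, but the training-then-labeling loop is the same shape the abstract describes.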
From The Wall Street Journal
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA