Researchers at Carnegie Mellon University have combined machine-learning algorithms with brain-imaging technology to "mind read," offering evidence that the neural dimensions of concept representation are universal across people and languages.
The brain's coding of 240 complex events uses an alphabet of 42 neurally plausible semantic features, spanning categories such as person, setting, size, social interaction, and physical action. Each type of information is processed in a different brain system, and by measuring the activation in each of these systems, the program can read what types of thoughts are being contemplated.
The researchers used a computational model to assess how the brain activation patterns for 239 sentences corresponded to the neurally plausible semantic features characterizing each sentence. The program then decoded the features of the 240th sentence, which was left out of the original group.
The researchers say the model could predict the features of the left-out sentence with 87% accuracy.
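The leave-one-out procedure described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the synthetic data, the linear voxel encoding, the ridge-regression decoder, and the rank-style scoring are stand-ins, not the study's actual model or feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 240 sentences, 200 voxels, 42 semantic features.
n_sentences, n_voxels, n_features = 240, 200, 42

# Synthetic ground-truth feature vectors and a noisy linear voxel encoding.
features = rng.standard_normal((n_sentences, n_features))
encoding = rng.standard_normal((n_features, n_voxels))
activation = features @ encoding + 0.5 * rng.standard_normal((n_sentences, n_voxels))

def decode_left_out(i, alpha=1.0):
    """Train a ridge decoder on all sentences except i; predict i's features."""
    mask = np.ones(n_sentences, dtype=bool)
    mask[i] = False
    X, Y = activation[mask], features[mask]
    # Ridge regression: W = (X^T X + alpha I)^{-1} X^T Y
    W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Y)
    return activation[i] @ W

# Score each held-out sentence: does its predicted feature vector match it
# better (by cosine similarity) than any of the other 239 sentences?
correct = 0
for i in range(n_sentences):
    pred = decode_left_out(i)
    sims = features @ pred / (np.linalg.norm(features, axis=1) * np.linalg.norm(pred))
    correct += int(np.argmax(sims) == i)

accuracy = correct / n_sentences
```

On clean synthetic data like this, a linear decoder recovers the held-out features easily; the study's 87% figure reflects the far noisier setting of real fMRI data.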
From Carnegie Mellon News
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA