Massachusetts Institute of Technology (MIT) researchers using a deep neural network have built the first model that can replicate human performance on auditory tasks such as identifying a musical genre.
The model includes many layers of information-processing units that can tap into huge volumes of data to perform specific tasks.
By studying the model, the researchers gained insight into how the human brain may be performing the same tasks.
"What these models give us, for the first time, is machine systems that can perform sensory tasks that matter to humans and that do so at human levels," says MIT professor Josh McDermott.
The MIT researchers trained their neural network to perform two auditory tasks, one involving speech and the other involving music. The model learned to perform the tasks as accurately as a human.
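The article's description of "many layers of information-processing units" can be illustrated with a toy sketch. The following is not the MIT model; it is a minimal, hypothetical example in plain NumPy showing how stacked layers transform an input feature vector into per-category scores (e.g., genre scores for a music task). All sizes, weights, and names here are invented for illustration.

```python
import numpy as np

# Illustrative sketch only -- NOT the MIT model. Each layer is a bank of
# "information-processing units" that transforms its input; stacking
# layers lets a network build up task-specific representations.

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied after each layer.
    return np.maximum(0.0, x)

def make_layer(n_in, n_out):
    # Random weights stand in for parameters a real model learns in training.
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    # Pass the input through each layer in turn.
    for w, b in layers:
        x = relu(x @ w + b)
    return x

# A 3-layer stack mapping a 64-dim audio feature vector (hypothetical)
# to scores over 10 categories (e.g., musical genres).
layers = [make_layer(64, 32), make_layer(32, 16), make_layer(16, 10)]
features = rng.normal(size=64)   # stand-in for extracted audio features
scores = forward(features, layers)
predicted_genre = int(np.argmax(scores))
print(scores.shape)
```

A trained network would choose the category with the highest score; here the weights are random, so the output is meaningless except to show the layered data flow.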
Next, the researchers will create models for other types of auditory tasks, such as localizing where a sound originates.
From MIT News
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA