
More-Flexible Machine Learning

By MIT News

October 5, 2015


Massachusetts Institute of Technology (MIT) researchers will present a new machine-learning technique at the Annual Conference on Neural Information Processing Systems in December that enables semantically related concepts to reinforce each other.

"Because there are actually semantic similarities between those categories, we develop a way of making use of that semantic similarity to sort of borrow data from close categories to train the model," says MIT graduate student Chiyuan Zhang.

The researchers quantified the notion of semantic similarity using an algorithm that mined Flickr images for tags that tended to co-occur; the semantic similarity of two words was defined as a function of how frequently they co-occurred. In essence, the researchers say, the system gives the learning algorithm partial credit for incorrect tags that are semantically close to the correct ones, using the Wasserstein distance metric to perform the calculation.
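The idea of "partial credit" via the Wasserstein distance can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal Sinkhorn-style approximation of the Wasserstein distance (a standard technique associated with Cuturi's earlier work), with a made-up three-tag vocabulary and a hypothetical cost matrix derived from semantic similarity (similar tags are cheap to confuse, unrelated tags are expensive):

```python
import numpy as np

def sinkhorn_distance(p, q, cost, reg=0.1, n_iter=200):
    """Entropic-regularized Wasserstein distance between two distributions
    p and q over the same tag set, given a pairwise cost matrix.
    Uses the Sinkhorn fixed-point iteration."""
    K = np.exp(-cost / reg)              # Gibbs kernel from the cost matrix
    u = np.ones_like(p)
    for _ in range(n_iter):              # alternate scaling to match marginals
        v = q / (K.T @ u)
        u = p / (K @ v)
    transport = np.diag(u) @ K @ np.diag(v)  # approximate optimal coupling
    return float(np.sum(transport * cost))   # total cost of moving p onto q

# Hypothetical tag vocabulary: ["dog", "puppy", "car"].
# Cost = 1 - semantic similarity: mistaking "dog" for "puppy" is cheap,
# mistaking "dog" for "car" is expensive.
cost = np.array([[0.0, 0.2, 1.0],
                 [0.2, 0.0, 1.0],
                 [1.0, 1.0, 0.0]])

truth      = np.array([1.0, 0.0, 0.0])  # true tag: "dog"
pred_close = np.array([0.0, 1.0, 0.0])  # predicted "puppy" (semantically close)
pred_far   = np.array([0.0, 0.0, 1.0])  # predicted "car" (unrelated)

# The near-miss incurs a much smaller loss than the unrelated guess,
# so the model is only lightly penalized for semantically reasonable errors.
print(sinkhorn_distance(truth, pred_close, cost))  # ≈ 0.2
print(sinkhorn_distance(truth, pred_far, cost))    # ≈ 1.0
```

A conventional 0/1 loss would penalize both wrong predictions equally; the Wasserstein loss grades them by how far the predicted tag's "mass" must travel in semantic space.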

In experiments, a machine-learning algorithm trained with this method predicted the tags human users applied to Flickr images more accurately than one trained with a conventional approach.

"I think this work is very innovative because it uses the Wasserstein distance directly as a way to design learning machines," says Kyoto University researcher Marco Cuturi.



Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA

