
A Neural Network Learns When It Should Not Be Trusted

By MIT News

December 1, 2020


Researchers at the Massachusetts Institute of Technology (MIT) and Harvard University have enabled a neural network to rapidly process data, yielding both predictions and confidence levels based on the quality of the available data.

The technique, called deep evidential regression, estimates uncertainty from a single run of the neural network rather than from many repeated runs, making it fast enough for time-sensitive decisions and potentially leading to safer outcomes.

The team designed the network with an enlarged output that generates not only a decision but also a probabilistic distribution capturing the evidence supporting that decision; these evidential distributions directly capture the model's confidence in its prediction.

The distributions capture both the uncertainty in the underlying input data and the uncertainty in the model's final decision, indicating whether uncertainty could be reduced by modifying the network itself or whether the input data are simply noisy.
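The single-pass idea described above can be sketched in a few lines. The following is a minimal illustration, not the researchers' implementation: it assumes the network emits four raw values that are mapped to the parameters of a Normal-Inverse-Gamma evidential distribution (the distribution used in deep evidential regression), from which a prediction and the two kinds of uncertainty follow in closed form. All function names are hypothetical.

```python
import numpy as np

def evidential_params(raw):
    """Map a network's 4 raw outputs to Normal-Inverse-Gamma parameters.

    gamma is the predicted mean; nu, alpha, beta encode the evidence.
    Softplus (computed as logaddexp(0, x)) and an offset keep the
    constraints nu > 0, alpha > 1, beta > 0 satisfied.
    """
    gamma, raw_nu, raw_alpha, raw_beta = raw
    nu = np.logaddexp(0.0, raw_nu)               # nu > 0
    alpha = np.logaddexp(0.0, raw_alpha) + 1.0   # alpha > 1
    beta = np.logaddexp(0.0, raw_beta)           # beta > 0
    return gamma, nu, alpha, beta

def predict_with_uncertainty(gamma, nu, alpha, beta):
    """One forward pass yields a prediction plus both uncertainties.

    aleatoric (data noise):      E[sigma^2] = beta / (alpha - 1)
    epistemic (model confidence): Var[mu]   = beta / (nu * (alpha - 1))
    """
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return gamma, aleatoric, epistemic
```

When the evidence parameter nu is small (little data supporting the prediction), the epistemic term dominates, signaling that the model itself is unsure; a large aleatoric term instead signals that the inputs are inherently noisy, which no amount of retraining would fix.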

MIT's Daniela Rus said, "By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model."



Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA

