Trouble at the Source

By Don Monroe

Communications of the ACM, Vol. 64 No. 12, Pages 17-19
10.1145/3490155

Machine learning (ML) systems, especially deep neural networks, can find subtle patterns in large datasets that give them powerful capabilities in image classification, speech recognition, natural-language processing, and other tasks. Despite this power—or rather because of it—these systems can be led astray by hidden regularities in the datasets used to train them.

Problems arise when the training data contains systematic flaws, whether from the way the data was collected or from the biases of those who prepared it. Another hazard is "overfitting," in which a model predicts the limited training data well but errs when presented with new data, whether similar held-out test data or the less-controlled examples encountered in the real world. This discrepancy resembles a well-known statistical issue in clinical trials: data gathered from carefully selected subjects has high "internal validity," but may have lower "external validity" for real patients.
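As a minimal sketch (an illustration, not from the article), overfitting can be demonstrated in a few lines of NumPy: a high-degree polynomial fits a handful of noisy training points almost exactly, yet its error on fresh points from the same underlying function is far worse.

```python
import numpy as np

# Illustrative setup: the "true" signal is a sine wave; the model only
# ever sees 10 noisy samples of it.
rng = np.random.default_rng(0)

def target(x):
    return np.sin(2 * np.pi * x)

x_train = np.linspace(0, 1, 10)
y_train = target(x_train) + rng.normal(0, 0.3, x_train.shape)

x_test = np.linspace(0.05, 0.95, 50)   # new data the model never saw
y_test = target(x_test) + rng.normal(0, 0.3, x_test.shape)

# A degree-8 polynomial has nearly as many parameters as training
# points, so it chases the noise rather than the signal.
coeffs = np.polyfit(x_train, y_train, deg=8)

def mse(x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

train_mse = mse(x_train, y_train)
test_mse = mse(x_test, y_test)
print(f"train MSE: {train_mse:.4f}  test MSE: {test_mse:.4f}")
```

The training error is near zero while the test error is much larger, which is the gap between "internal" and "external" validity in miniature.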
