
On the Implicit Bias in Deep-Learning Algorithms

By Gal Vardi

Communications of the ACM, Vol. 66 No. 6, Pages 86-93
DOI: 10.1145/3571070



Deep learning has been highly successful in recent years and has led to dramatic improvements in multiple domains. Deep-learning algorithms often generalize quite well in practice; that is, given access to labeled training data, they return neural networks that correctly label unobserved test data. However, despite much research, our theoretical understanding of generalization in deep learning is still limited.


Key Insights

[The key insights appear as a figure in the original article.]

Neural networks used in practice often have far more learnable parameters than training examples. In such overparameterized settings, one might expect overfitting to occur: the learned network might perform well on the training dataset but poorly on test data. Indeed, in overparameterized settings there are many solutions that perform well on the training data, yet most of them generalize poorly. Surprisingly, gradient-based deep-learning algorithms seem to prefer the solutions that generalize well.40
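A classical linear analogue makes this preference concrete. Below is a minimal NumPy sketch (illustrative only, not code from the article): in underdetermined linear regression there are infinitely many parameter vectors that fit the training data exactly, yet gradient descent initialized at zero converges to the minimum Euclidean-norm interpolating solution, because its iterates never leave the row space of the data matrix.

```python
# Implicit bias of gradient descent in underdetermined least squares:
# GD from zero converges to the minimum-L2-norm interpolating solution.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 100                       # overparameterized: d parameters >> n examples
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent on the squared loss (1/2)||Xw - y||^2, starting from w = 0.
w = np.zeros(d)
lr = 1e-3                            # step size small enough for the spectrum of X^T X
for _ in range(50_000):
    w -= lr * X.T @ (X @ w - y)

# The minimum-norm interpolating solution, via the Moore-Penrose pseudoinverse.
w_min_norm = np.linalg.pinv(X) @ y

print("training loss:", np.mean((X @ w - y) ** 2))            # ~ 0: interpolates
print("||w - w_min_norm||:", np.linalg.norm(w - w_min_norm))  # ~ 0: GD found the min-norm fit
```

In this linear case, the norm that gradient descent implicitly minimizes is known exactly; for deep networks the analogous characterizations are far more subtle, which is the subject of this article.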
