Most of the fundamental ideas behind convolutional neural networks (rebranded in the 2010s as deep learning) are actually several decades old. It just took a while for the hardware, the data, and the research community to catch up. But if one asks what the most important new idea of the last decade is, the answer is, without a doubt, Generative Adversarial Networks (GANs). Like most good papers, it certainly had precursors, yet when it came out in 2014 there was a palpable sense that something new and exciting was afoot. After all, the paper was easy to like: it had all the right ingredients, a clever idea, nice math, and an intriguing connection to evolution. And if the original paper didn't dazzle with the visual quality of its results, the long string of follow-up works has shown the impressive power of the method, one that may have considerable impact beyond computing.
Most of the recent successes in machine learning have come from so-called discriminative models: given some input data, such as an image, these models look for the relevant bits and pieces of information to decide what the input is. For example, the presence of stripes might suggest that an image contains a zebra. The alternative is generative models, which aim to approximate the process that generates the data. While a discriminative model would only tell you that something is a zebra, a generative model could actually paint you one.
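The distinction can be made concrete with a toy sketch (not from the article; all names and the 1-D Gaussian setup are illustrative assumptions): a discriminative model only needs a decision rule for p(class | x), while a generative model fits p(x | class) and can therefore sample new data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "images": two classes drawn from different Gaussians
# (stand-ins for zebra / not-zebra inputs).
x0 = rng.normal(loc=-2.0, scale=1.0, size=500)  # class 0
x1 = rng.normal(loc=+2.0, scale=1.0, size=500)  # class 1

# Discriminative view: decide the class directly from the input.
# Here, a closed-form rule: pick the class whose sample mean is nearer.
def discriminate(x):
    return int(abs(x - x1.mean()) < abs(x - x0.mean()))

# Generative view: model how class-1 data arises, then sample from it.
mu1, sigma1 = x1.mean(), x1.std()
def generate_class1(n):
    return rng.normal(loc=mu1, scale=sigma1, size=n)

print(discriminate(2.5))         # classifies a point near class 1
print(generate_class1(3))        # three freshly "painted" samples
```

The discriminator can only label inputs it is given; the generator, having modeled the data distribution itself, can produce new examples, which is exactly the extra capability GANs exploit.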