
Chips for Artificial Intelligence

By Don Monroe

Communications of the ACM, Vol. 61 No. 4, Pages 15-17


A look under the hood of any major search, commerce, or social-networking site today will reveal a profusion of "deep-learning" algorithms. Over the past decade, these powerful artificial intelligence (AI) tools have been increasingly and successfully applied to image analysis, speech recognition, translation, and many other tasks. Indeed, the computational and power requirements of these algorithms now constitute a major and still-growing fraction of datacenter demand.

Designers often offload much of the highly parallel computation to commercial hardware, especially graphics-processing units (GPUs) originally developed for rapid image rendering. These chips are especially well-suited to the computationally intensive "training" phase, which tunes system parameters using many validated examples. The "inference" phase, in which deep learning is deployed to process novel inputs, requires greater memory access and fast response, but has also historically been implemented with GPUs.
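To make the training/inference distinction concrete, here is a minimal sketch (not from the article): training tunes a parameter against validated examples by gradient descent, and inference then applies the tuned parameter to a novel input. Real deep-learning systems do the same thing with millions of parameters, which is why the training phase maps so well to massively parallel hardware.

```python
def train(examples, lr=0.1, epochs=200):
    """Training phase: fit y = w * x by gradient descent on squared error,
    using many validated (input, output) examples."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)**2
            w -= lr * grad
    return w

def infer(w, x):
    """Inference phase: process a novel input with the trained parameter."""
    return w * x

# Validated examples drawn from y = 2x; training should recover w ≈ 2.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(examples)
print(round(infer(w, 5.0), 2))  # ≈ 10.0
```

Each training example contributes a gradient update, and those updates across many examples are what GPUs parallelize; inference is a single cheap pass, but must respond quickly to each new input.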
