Rarely do we question the basic decisions we make in our everyday lives, but if we did, we might realize that we cannot pinpoint the exact reasons for our preferences, emotions, and desires at any given moment. A similar problem exists in artificial intelligence (AI).
The people who develop AI increasingly struggle to explain how it works and to determine why it produces the outputs it does. Deep neural networks often seem to mirror not just human intelligence but also human inexplicability.