
Interpretable Machine Learning

By The Gradient

August 22, 2022


As a staff research scientist at Google Brain, Been Kim focuses on interpretability: helping humans communicate with complex machine-learning models, not only by building tools but also by studying how humans interact with these systems.

In an interview, Kim discusses her path to AI and interpretability research, the parallels between interpretability and software testing, Testing with Concept Activation Vectors (TCAV) and its limitations, the acquisition of chess knowledge in AlphaZero, and much more.
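For readers unfamiliar with TCAV, the core idea is to learn a "concept activation vector" (CAV) as the normal of a linear classifier that separates a concept's activations from random activations at some layer, then measure how often the gradient of a class score points along that direction. The sketch below illustrates this on synthetic activations; the layer dimensionality, the synthetic data, and all variable names are assumptions for illustration, not Kim's implementation.

```python
# Illustrative TCAV sketch on synthetic data (not the official implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64  # assumed dimensionality of the chosen layer's activations

# Synthetic layer activations: examples of a concept vs. random counterexamples.
concept_acts = rng.normal(loc=0.5, size=(100, d))
random_acts = rng.normal(loc=-0.5, size=(100, d))
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)

# The CAV is the (unit) normal of the separating hyperplane.
clf = LogisticRegression().fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Stand-ins for gradients of a class logit w.r.t. the same layer's
# activations, one row per input example.
grads = rng.normal(loc=0.2, size=(50, d))

# TCAV score: fraction of inputs whose directional derivative along the
# CAV is positive, i.e. for which the concept pushes the class score up.
tcav_score = float(np.mean(grads @ cav > 0))
print(tcav_score)
```

A score near 1 suggests the concept consistently increases the class score; in practice one also compares against CAVs trained on random splits to rule out chance directions, which is where the limitations Kim discusses come in.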

From The Gradient