A mathematical framework developed by researchers at the Massachusetts Institute of Technology and Microsoft Research aims to quantify and evaluate the understandability of a machine learning model's explanations for its predictions.
The framework, called ExSum (explanation summary), evaluates a rule across an entire dataset based on three metrics: coverage, or how broadly applicable the rule is across the dataset; validity, or the percentage of individual examples that agree with the rule; and sharpness, or how precise the rule is.
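As an illustration of the idea (not the paper's formal definitions), the three metrics can be sketched for a toy setting in which each example is a (word, saliency) pair from a local explanation and a rule claims that certain words receive saliency in a given range. The `Rule` class, the choice of saliency interval, and the particular sharpness formula below are all assumptions made for this sketch.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical setup: each data item is a (word, saliency) pair produced by
# some local explanation method; a rule claims, e.g., "negation words get
# saliency in [0.6, 1.0]". None of this mirrors ExSum's actual API.

@dataclass
class Rule:
    applies: Callable[[str], bool]  # does the rule cover this word?
    low: float                      # claimed saliency range; a narrower
    high: float                     # range corresponds to a sharper rule

def evaluate(rule: Rule, data: list) -> tuple:
    # Coverage: fraction of all examples the rule applies to.
    covered = [(w, s) for w, s in data if rule.applies(w)]
    coverage = len(covered) / len(data)
    # Validity: fraction of covered examples that agree with the rule's claim.
    valid = [s for _, s in covered if rule.low <= s <= rule.high]
    validity = len(valid) / len(covered) if covered else 0.0
    # Sharpness (one simple notion): how much narrower the claimed range is
    # than the full [0, 1] saliency scale.
    sharpness = 1.0 - (rule.high - rule.low)
    return coverage, validity, sharpness

# Toy dataset: negation words tend to receive high saliency.
data = [("not", 0.9), ("never", 0.7), ("good", 0.2), ("movie", 0.1)]
rule = Rule(applies=lambda w: w in {"not", "never", "no"}, low=0.6, high=1.0)
cov, val, sharp = evaluate(rule, data)
```

In this toy run the rule covers half the dataset (coverage 0.5), every covered example falls inside the claimed range (validity 1.0), and the range spans 40% of the scale (sharpness 0.6), showing the trade-off the metrics are meant to expose: a rule can be made more valid by widening its claim, at the cost of sharpness.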
Said MIT's Yilun Zhou, "Before this work, if you have a correct local explanation, you are done. You have achieved the holy grail of explaining your model. We are proposing this additional dimension of making sure these explanations are understandable."
From MIT News
Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA