The document summarizes five "tribes" or categories of machine learning explainers:
1) The Featurists, who identify important features that a model relies on through methods like feature importance, selection, and correlation.
2) The Speculators, who examine how a model responds to changes in individual variables using techniques like partial dependence plots and individual conditional expectations.
3) The Localizers, who fit interpretable models locally to explain individual predictions using methods like LIME and anchors.
4) The Convoluters, who visualize important regions in images for convolutional neural networks.
5) The Trainalyzers, who identify training examples that most influenced a prediction using influence functions.
The Five Tribes of Machine Learning Explainers
1. The Five Tribes of ML Explainers
(and what you can learn from each)
Michał Łopuszyński
PyData Berlin, 07.07.2018
5. The Featurists
Idea: Find features important to your model
Feature importance
Feature importance from models (RF, ExtraTrees, boosted trees, linear models)
Model-agnostic feature importance (e.g. permutation importance from ELI5; see the sketch below)
Feature selection
Filters - e.g. filtering the most correlated features
Wrappers - e.g. forward/backward selection
Embedded methods - e.g. lasso
Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the "Rashomon" Perspective (2018)
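The slide points to ELI5's permutation importance; below is a minimal sketch of the same model-agnostic idea, here using scikit-learn's permutation_importance on an illustrative data set and model (both placeholders, not from the talk).

```python
# Minimal sketch of model-agnostic permutation importance: shuffle each
# feature column on held-out data and measure the drop in the model's score.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the ten most important features with the spread over repeats.
ranked = sorted(zip(X.columns, result.importances_mean, result.importances_std),
                key=lambda t: -t[1])
for name, mean, std in ranked[:10]:
    print(f"{name:30s} {mean:.4f} +/- {std:.4f}")
```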
7. The Speculators
Idea: Check how your model responds to a change in one variable
Answer 1: Partial dependence plots
Example from the free book Interpretable ML by Christoph Molnar
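A minimal hand-rolled sketch of the partial dependence computation, assuming a fitted model with a scikit-learn-style predict and a NumPy feature matrix (names are placeholders; scikit-learn also ships a ready-made implementation).

```python
# Partial dependence "by hand": fix one feature at a grid value for every
# row, predict, and average the predictions over the data set.
import numpy as np

def partial_dependence(model, X, feature_idx, grid=None):
    """Average model prediction as a function of one feature."""
    if grid is None:
        grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), 20)
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value              # overwrite the feature everywhere
        averages.append(model.predict(X_mod).mean())
    return grid, np.array(averages)

# Usage (with matplotlib): grid, curve = partial_dependence(model, X_train, 3)
#                          plt.plot(grid, curve)
```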
8. The Speculators
Idea: Check how your model responds to a change in one variable
Answer 2: Individual Conditional Expectations (ICE)
Example from the free book Interpretable ML by Christoph Molnar
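ICE uses the same computation as partial dependence but keeps one curve per instance instead of averaging; a minimal sketch under the same assumptions (scikit-learn-style predict, NumPy feature matrix):

```python
# ICE curves: one prediction curve per row of X as the chosen feature varies.
import numpy as np

def ice_curves(model, X, feature_idx, grid=None):
    """Return the grid and an (n_rows, n_grid) array of prediction curves."""
    if grid is None:
        grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), 20)
    curves = np.empty((X.shape[0], len(grid)))
    for j, value in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature_idx] = value
        curves[:, j] = model.predict(X_mod)
    return grid, curves

# Usage: plot each row as a thin line; their mean is the partial dependence.
# grid, curves = ice_curves(model, X_train, feature_idx=3)
# for row in curves: plt.plot(grid, row, alpha=0.1)
```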
10. The Localizers
Idea: Fit an interpretable model that is locally correct
Simple model = linear: LIME
"Why Should I Trust You?": Explaining the Predictions of Any Classifier, Ribeiro, Singh, Guestrin
Simple model = rules: Anchors
Anchors: High-Precision Model-Agnostic Explanations, Ribeiro, Singh, Guestrin
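A minimal usage sketch of LIME on tabular data with the lime package (pip install lime); the iris data and random-forest model are stand-ins for illustration, not part of the talk.

```python
# Explain a single prediction by fitting a locally weighted linear model
# around the instance (LIME, tabular explainer).
from lime import lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = lime_tabular.LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, local weight) pairs
```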
12. The Convoluters (only for ConvNets)
Two ideas: Visualize the important regions in the image
[Figure: example for the class "Labrador" - one of the important high-level features for Labradors, plus the "Labradorish" parts of the image, give the explanation]
Figures from The Building Blocks of Interpretability, Olah et al., distill.pub
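The Building Blocks approach combines feature visualization with attribution; as a simpler, related illustration of highlighting important image regions, here is a minimal vanilla gradient saliency map in PyTorch (pretrained ResNet-18 and a random placeholder image, assuming a recent torchvision).

```python
# Vanilla gradient saliency: gradient of the predicted class score with
# respect to the input pixels marks the regions the ConvNet relies on.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Placeholder for a preprocessed 224x224 RGB image.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(image)
top_class = scores[0].argmax()
scores[0, top_class].backward()           # d(class score) / d(pixels)

# Per-pixel saliency = max absolute gradient over the colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)                     # (224, 224): brighter = more important
```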
14. The Trainalyzers
Idea: Which training examples contributed most to a given prediction?
Understanding Black-box Predictions via Influence Functions, P. W. Koh, P. Liang
Sample approach: influence functions
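A minimal sketch of the influence-function formula from Koh & Liang, I_up,loss(z, z_test) = -grad L(z_test)^T H^{-1} grad L(z), for a small L2-regularised logistic regression where the Hessian can be inverted exactly (the paper uses Hessian-vector products to scale up); the synthetic data and hyperparameters are illustrative only.

```python
# Influence functions for logistic regression: effect of up-weighting each
# training point on the loss at a test point; large magnitudes mark the most
# influential training examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def grad_loss(theta, x, y):
    """Gradient of the logistic loss at one example (label y in {0, 1})."""
    p = 1.0 / (1.0 + np.exp(-x @ theta))
    return (p - y) * x

def hessian(theta, X, lam):
    """Hessian of the mean training loss plus the L2 term lam/2 * ||theta||^2."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    return (X.T * (p * (1.0 - p))) @ X / len(X) + lam * np.eye(X.shape[1])

def influence_on_test_loss(theta, X_tr, y_tr, x_test, y_test, lam):
    """I_up,loss(z_i, z_test) = -grad L(z_test)^T H^{-1} grad L(z_i)."""
    H_inv = np.linalg.inv(hessian(theta, X_tr, lam))
    g_test = grad_loss(theta, x_test, y_test)
    return np.array([-g_test @ H_inv @ grad_loss(theta, x, y)
                     for x, y in zip(X_tr, y_tr)])

# Tiny synthetic demo; C ties sklearn's objective to the mean-loss formulation.
rng = np.random.default_rng(0)
X_tr = rng.normal(size=(200, 5))
y_tr = (X_tr[:, 0] + 0.5 * X_tr[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(int)
lam = 1e-2
clf = LogisticRegression(C=1.0 / (len(X_tr) * lam), fit_intercept=False).fit(X_tr, y_tr)

scores = influence_on_test_loss(clf.coef_.ravel(), X_tr, y_tr, X_tr[0], y_tr[0], lam)
print(np.argsort(-np.abs(scores))[:5])    # indices of the most influential points
```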
15. I collect links to interesting papers & software
@lopusz
github.com/lopusz/awesome-interpretable-machine-learning
There is a lot more!