Issue Type: Epic
Resolution: Unresolved
Priority: Major
Summary: SHAP
Status: To Do
Progress: 50% To Do, 0% In Progress, 50% Done
SHAP (Lundberg and Lee, 2017) provides both local and global explanations of model predictions. Similar to LIME, it fits a linear function to model local decision behavior, but SHAP's feature importances are additive: they indicate the exact magnitude and direction of each feature's contribution to the model prediction. A variety of SHAP explainers apply model-specific optimizations to compute these values quickly, but in its base form (Kernel SHAP) it is entirely model agnostic.
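For illustration, a minimal sketch of computing local SHAP values with the model-agnostic KernelExplainer (this assumes the Python shap package plus scikit-learn; the random-forest model and breast-cancer dataset are placeholder choices, not part of this epic):

    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train any model; KernelExplainer only needs a prediction function,
    # which is what makes the base form model agnostic.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # A background sample defines the "feature absent" baseline used to
    # estimate each feature's additive Shapley contribution.
    background = shap.sample(X, 100)
    explainer = shap.KernelExplainer(model.predict_proba, background)

    # Local explanation for one instance: signed, additive contributions
    # that sum to (model output - explainer.expected_value), so each
    # feature's magnitude and direction of effect is explicit.
    shap_values = explainer.shap_values(X.iloc[:1])
    print(shap_values)

Kernel SHAP is exact in the limit but slow; the model-specific explainers (e.g., TreeExplainer) trade generality for speed when the model type allows it.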