Fidelis AI / FAI-620

SHAP: Implement and integrate the various explainers

This issue is archived. You can view it, but you can't modify it.


    • Type: Epic
    • Resolution: Unresolved
    • Priority: Major
    • Labels: SHAP
    • Status: To Do
    • Progress: 50% To Do, 0% In Progress, 50% Done

      SHAP (Lundberg and Lee, 2017) provides both local and global explanations of model predictions. Similar to LIME, it builds a linear function to model local decision behavior, but SHAP provides additive feature importances. This means it indicates the exact magnitude and direction of the contribution that each feature made to the model prediction. There are a variety of SHAP explainers that make model-specific optimizations to compute these results quickly, but in its base form SHAP is entirely model agnostic.
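      For context, a minimal sketch using the Python shap package of the two explainer styles this epic covers: the fully model-agnostic KernelExplainer and the model-specific TreeExplainer. The model, synthetic data, and explainer choices here are illustrative assumptions, not part of this issue.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy model and data, purely for illustration.
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] * 2.0 + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Model-agnostic explainer: KernelExplainer only needs a prediction
# function and a background dataset, so it works with any model.
background = shap.sample(X, 50)
kernel_explainer = shap.KernelExplainer(model.predict, background)
kernel_shap_values = kernel_explainer.shap_values(X[:5])

# Model-specific explainer: TreeExplainer exploits the tree structure
# to compute the same additive attributions much faster.
tree_explainer = shap.TreeExplainer(model)
tree_shap_values = tree_explainer.shap_values(X[:5])

# Additivity: base value + sum of per-feature SHAP values recovers the
# model's prediction for each explained instance.
pred = model.predict(X[:5])
recovered = tree_explainer.expected_value + tree_shap_values.sum(axis=1)
print(np.allclose(pred, recovered, atol=1e-6))  # True
```

      The additivity check at the end is the property the description refers to: each feature's SHAP value is its signed contribution, and the contributions sum exactly to the prediction.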

              Assignee: Rob Geada
              Reporter: Rob Geada
              Archiver: Rob Geada

                Created:
                Updated:
                Archived: