
Model Explainability Python Toolkit for Spotfire® Release 1.0.0



Explains a model's predictions using the LIME and SHAP algorithms.


LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) are Explainable-AI methods that can be used to quantify the contributions of the individual predictors to the predictions made by regression or classification models.

The toolkit consists of several data functions leveraging SHAP and LIME Python implementations.


As an additional explainability method, we recommend Spotfire's XWIN algorithm. You can try it after downloading these templates.


Installing the data function

Follow the online guide available here to register a data function in Spotfire.


Configuring the data function

Each data function may require inputs from the Spotfire analysis and will return outputs to it. These inputs and outputs need to be configured once for each data function after it is registered. To learn how to configure data functions in Spotfire, please view this video:
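For orientation, a Spotfire Python data function is essentially a script whose registered inputs appear as pre-bound variables and whose registered outputs are read back by name after the script runs. The sketch below mimics that shape outside Spotfire; the names `input_table` and `output_table` are hypothetical bindings you would configure when registering the function.

```python
# Hedged sketch of a Python data function body. In Spotfire, `input_table`
# would be injected as a registered input (a pandas DataFrame) and
# `output_table` collected as a registered output.
import pandas as pd

# Stand-in for the input Spotfire would supply.
input_table = pd.DataFrame({"x": [1.0, 2.0, 3.0]})

# Illustrative computation: summary statistics as a table.
output_table = input_table.describe().reset_index()
print(output_table)
```

When registering the function, you would map a Spotfire data table to the input variable and map the output variable back to a new data table in the analysis.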

For more information on Spotfire visit the Spotfire training page.


Data function library

A large number of data functions covering various features are available. Feel free to review them in the Data Function Library.

Initial Release (1.0.0)

Published: April 2022

Initial release includes:

  • Set of Python data functions for SHAP and LIME methods 
  • DXP analysis file with example usage
  • Documentation for each data function
  • License information
