  • We introduce XWIN explainability: eXplanations from the impact of Withholding INformation.

    Model-Explainability

    You have given it your all: you have trained and optimized a machine learning pipeline-model (i.e. a model that includes data preprocessing steps) for a regression or a binary classification task based on tabular data, and it is time to deploy the model and make predictions. A genuine milestone.

However, making predictions is often not the end of the story.

Machine learning models are increasingly being used for making decisions in situations that directly affect humans, for example in financial, legal and medical settings. Naturally, this raises questions about accountability and trust, and therefore it is not only desirable to provide supporting information about a model-outcome; it may well be illegal not to provide it, as formulated in various forms of Right to Explanation legislation.

In such situations, it is clear that explanations can be meaningful to the recipient of the model-outcome only when they refrain from referring to model internals, i.e. its mathematical structure and parameters. Thus, the model must be treated as a black box; in other words, the explanations must be model-agnostic. The only quantities then available from which to build a useful explanation for a specific model-prediction are the instance-specific model-inputs, and possibly insights about the larger pool of instances of which the particular instance is a member.

    There exist several methods that produce such single-instance explanations, also called local explanations. Well-known are SHAP and LIME, which require numeric input features. We have developed an alternative, named XWIN, that works with numeric and non-numeric features. 

    Introducing XWIN    

    XWIN stands for eXplanations from the impact of Withholding INformation. The idea is simple: to assess the importance of an input-feature for a particular model-outcome, we measure how much the model-outcome changes if we substitute the actual value of the input-feature with a 'missing value', that is: if we withhold the feature's value. We call the change in the model-outcome due to this withholding the XWIN impact-value of the feature, which we compute for all input-features and store in an XWIN impact table. The larger the magnitude of the impact-value, the more influential the feature is for this particular model-outcome. A large positive (negative) XWIN impact-value for a feature means that its actual value contributed in a large positive (negative) way to the model outcome. This works for both numeric and non-numeric features. Note that we assume that the pipeline-model includes an imputer among its data preprocessing steps.

    Basically, XWIN asks the question 'if I had withheld certain information, how would it have affected the model-outcome?'. XWIN therefore gives meaningful explanations whenever this is a meaningful question to ask.

     


Fig 1: Computing the XWIN impact table for an instance with 5 features: the impact-value of Feature k is defined as the difference between the model-outcome and outcome_k.
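    To make the definition in Fig 1 concrete, here is a minimal sketch of the withholding idea for a binary classifier, not the DSML Toolkit implementation. It assumes a fitted scikit-learn pipeline-model, pipeline_model, whose preprocessing includes an imputer, and a one-row pandas DataFrame, instance; these names are placeholders for illustration.

    import numpy as np
    import pandas as pd

    def xwin_impact_sketch(pipeline_model, instance):
        # Model-outcome for the unmodified instance (probability of the positive class)
        baseline = pipeline_model.predict_proba(instance)[0, 1]
        impacts = {}
        for feature in instance.columns:
            withheld = instance.copy()
            withheld[feature] = np.nan                 # withhold the feature; the pipeline's imputer fills it in
            outcome_k = pipeline_model.predict_proba(withheld)[0, 1]
            impacts[feature] = baseline - outcome_k    # XWIN impact-value of the feature
        return pd.Series(impacts, name='XWIN_impact')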

    Single-instance XWIN

    We have implemented XWIN in the Data Science and Machine Learning (DSML) Toolkit for Python. The XWIN-impact table can be computed by calling one function from the ml_explain module of the DSML Toolkit: spotfire_dsml.ml_explain.XWIN_single_instance(...).

    Besides the XWIN impact-values, this function also returns some other information. Specifically, it returns relative feature values that indicate whether the value of a feature for this instance is on the low or high side, measured against the training data. It gets this information about the training data from an input, metadata_json. More about that in the section Functions in DSML Toolkit for Python below. For non-numeric features the notion of relative feature value does not exist. 

    As an illustration of XWIN, we have trained a binary classifier that predicts the probability of churn for bank customers. We look at a specific customer, one that the model assesses as likely to churn with a probability of 0.91, and compute the XWIN impact table for this customer. 

The results are displayed in a bar plot. The features are listed along the vertical axis, and the XWIN impact-values on the horizontal axis. The length and direction of the bars reflect the XWIN impact-value for the feature, which can be positive or negative. The colors of the bars reflect the feature's relative value for this customer. The non-numeric features have a neutral color.


    Fig 2: XWIN impact-values and relative feature values for a particular bank customer, who is assessed to be at high risk of churning. Low relative feature values are indicated in blue, high relative feature values are indicated in red. We can see that, for instance, the relatively low value of Total_Trans_Ct has a large positive influence on the model-outcome, the probability of churn.
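    A plot like Fig 2 can be reproduced with standard plotting libraries. The sketch below assumes a pandas DataFrame impact_table with columns 'feature', 'impact' and 'relative_value' (relative value between -2 and 2, NaN for non-numeric features); these column names are assumptions for illustration, not the Toolkit's actual output schema.

    import matplotlib.pyplot as plt

    # Map relative values in [-2, 2] onto a blue-to-red colormap; NaN (non-numeric) becomes neutral
    colors = plt.cm.coolwarm(((impact_table['relative_value'].fillna(0.0) + 2) / 4).to_numpy())
    plt.barh(impact_table['feature'], impact_table['impact'], color=colors)
    plt.axvline(0, color='grey', linewidth=0.8)
    plt.xlabel('XWIN impact-value')
    plt.tight_layout()
    plt.show()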

    Feature Importances and Batch-XWIN

Explainability is not only useful during actual deployment; it can also be used during model training, as an enhanced kind of feature importance. Remember, conventional feature importances are statements about the importance of the features in aggregate. We can compute the XWIN impact tables for a random selection of, say, 100 instances of the training data to get a more detailed picture, like Fig 3. This computation is done by the function spotfire_dsml.ml_explain.XWIN_batch(...) from the DSML Toolkit, and its essence is sketched below.
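    A rough sketch of what such a batch computation involves, reusing the hypothetical xwin_impact_sketch helper and imports from the earlier sketch (df_train and pipeline_model are placeholders, and this is not the Toolkit's own implementation):

    # Sample 100 training instances and compute one row of XWIN impact-values per instance
    sample = df_train.sample(n=100, random_state=42)
    batch_impacts = pd.DataFrame(
        [xwin_impact_sketch(pipeline_model, sample.iloc[[i]]) for i in range(len(sample))],
        index=sample.index,
    )
    # batch_impacts: rows = instances, columns = features, values = XWIN impact-values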

The features are listed vertically, and the XWIN impact-values horizontally. The 100 impact-values per feature, one per instance, are plotted as squares, and the colors of the squares indicate the relative feature value. You can see, for instance, that low values of Total_Trans_Ct (the blue ones) tend to have a positive impact-value, that is, they tend to increase the model-outcome, here the churn probability.

Such a simple pattern, with a single transition from red to blue, appears for most features; this is in fact indicative of a relatively straight decision boundary between the churners and the non-churners. With more complicated decision boundaries, the patterns can be more complicated.


    Fig 3: Conventional Feature Importances and XWIN impact-values with relative feature values for a batch of 100 instances

    Functions in DSML Toolkit for Python

    The ml_explain module in spotfire_dsml contains two functions for computing XWIN-impact values: one for single-instance explanations, and one for batch-explanations.

XWIN_impact_table = spotfire_dsml.ml_explain.XWIN_single_instance(model, mode='classification', df=df_instance, metadata_json=metadata_json)
    
XWIN_batch_impact_table = spotfire_dsml.ml_explain.XWIN_batch(model, mode='regression', df=df_batch, metadata_json=metadata_json)

The input argument metadata_json contains the summary statistics of the training data that allow computing the relative feature values. In our particular implementation of XWIN, it is generated by a simple call to the ml_metadata module in spotfire_dsml. For XWIN purposes, only the means and standard deviations of the numeric features are used. For numeric features, the relative feature values are expressed in units of the standard deviation above or below the mean value, capped at two standard deviations. We refer to the documentation of the ml_metadata module for more details.
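    As a rough illustration of that capped z-score, assuming the training mean and standard deviation of a feature have already been looked up from metadata_json (the function and variable names below are placeholders, not the Toolkit's API):

    import numpy as np

    def relative_feature_value(value, train_mean, train_std):
        # Express the value in standard deviations from the training mean, capped at +/- 2
        z = (value - train_mean) / train_std
        return float(np.clip(z, -2.0, 2.0))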

    As we noted above, the models for which XWIN offers explainability are pipeline-models, i.e. models that contain embedded in them all necessary data preprocessing steps. The DSML Toolkit contains a module, ml_modeling, that is dedicated to training and evaluating such pipeline models. For details, see the Community Article 'Training Machine Learning Models with DSML Toolkit for Python'.
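    For orientation, a generic scikit-learn pipeline-model of the kind XWIN expects, with an imputer among its preprocessing steps, might look like the sketch below. This is only an illustration, not the pipeline produced by the ml_modeling module, and the column names are examples.

    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    numeric_features = ['Total_Trans_Ct', 'Customer_Age']   # example numeric columns
    categorical_features = ['Gender']                        # example non-numeric column

    preprocess = ColumnTransformer([
        ('num', SimpleImputer(strategy='mean'), numeric_features),
        ('cat', Pipeline([('impute', SimpleImputer(strategy='most_frequent')),
                          ('encode', OneHotEncoder(handle_unknown='ignore'))]), categorical_features),
    ])

    pipeline_model = Pipeline([('preprocess', preprocess),
                               ('classifier', RandomForestClassifier())])
    # pipeline_model.fit(X_train, y_train)   # withheld (NaN) values are imputed inside the pipeline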

    More details about the spotfire_dsml package can be found in the Community article Python toolkit for data science and machine learning in Spotfire. Example Spotfire applications can be downloaded from the Exchange page DSML Toolkit for Python - Documentation and Spotfire® Examples.

     
