    Cost Sensitive Classification in Risk Management Accelerator


    The Risk Management Accelerator is a ready-made template to predict and assess risk in any business or analytics context. The concept of risk appears across many verticals, ranging from the risk of damaged products or machine failure in transportation and manufacturing to the risk to patient wellbeing in healthcare. Utilizing Spotfire® and Spotfire® Streaming, this accelerator builds an end-to-end system that analyzes data and detects anomalies with machine learning models. We've now added a key feature to the analysis: cost-sensitive classification.


    About Cost Sensitive Classification

    We focus on class-based costs (e.g., the cost of identifying a risky event is __ and the cost of missing a risky event is __) as opposed to costs defined by specific data attributes (e.g., the cost of this data point is ___). Class-based costs depend only on the true class and the predicted class, which together can be summarized as a cost matrix. N.B.: Elkan (The_Foundations_of_Cost-Sensitive_Learning) warns about how easy it is to make errors when specifying the cost matrix. He recommends conceptualizing outcomes in terms of benefits rather than costs to help avoid logical contradictions in the cost matrix.
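
    To make this concrete, here is a minimal sketch of a class-based cost matrix in R. The dollar values and object names are illustrative assumptions (they echo the $1/$10 example later in this article), not the accelerator's defaults.

        # Class-based cost matrix: rows are true classes, columns are predictions.
        # The values are illustrative, not the accelerator's defaults.
        cost_matrix <- matrix(
          c( 0,  1,   # true = not fraud: correct pass costs 0, false alarm costs 1
            10,  0),  # true = fraud: missed fraud costs 10, caught fraud costs 0
          nrow = 2, byrow = TRUE,
          dimnames = list(true      = c("not_fraud", "fraud"),
                          predicted = c("not_fraud", "fraud"))
        )

        # Expected cost of a prediction for a case with P(fraud) = p
        expected_cost <- function(p, predicted, costs) {
          p * costs["fraud", predicted] + (1 - p) * costs["not_fraud", predicted]
        }
        expected_cost(0.3, "fraud",     cost_matrix)  # 0.7: investigating is cheap
        expected_cost(0.3, "not_fraud", cost_matrix)  # 3.0: letting it pass costs more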


    Visualize and Understand the Data

    The default data in the Spotfire template is a risk dataset of credit card transactions, with one record per transaction across many customers. Credit card companies must be able to quickly detect and address unauthorized transactions to keep customers satisfied. The column "target" indicates whether each transaction is fraudulent, labeled "1", or not, labeled "0". About 6% of the data - roughly 3,700 cases - are reported as fraud, while the remaining 59,000 cases are not.


    We can view how our data fields associate with the target variable and with one another. From the map and the categorical distribution chart, we can tell that the majority of transactions in our dataset occur in the United States. Plotting a numeric variable such as the normalized age of an account against the target shows that fraudulent transactions are concentrated in younger accounts compared to non-fraudulent transactions.


    We can also view how categorical variables are correlated with one another and how numeric variables are correlated with our target. Here, account24hours is the variable most positively correlated with the target. It records the number of times a customer's account was accessed in the last 24 hours; as that number increases, so does the occurrence of fraudulent transactions. These explorations help discern which variables to include in our models.
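
    The sketch below shows one way to reproduce this check in R, assuming a data frame named transactions with the numeric 0/1 column target; the name transactions is our assumption for illustration.

        # Rank numeric fields by their correlation with the 0/1 target
        nums <- transactions[sapply(transactions, is.numeric)]
        nums$target <- NULL
        cors <- sapply(nums, cor, y = transactions$target, use = "complete.obs")
        sort(cors, decreasing = TRUE)  # account24hours should rank near the top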


    Create and Evaluate a Supervised Learning Model

    We use random forests, a supervised learning technique, to find patterns in nonlinear data and to help handle unbalanced, high-dimensional data. Here we have selected all of our data fields as predictors of whether target = 1 (i.e., whether a transaction is fraudulent). We can view the top predictors from the model and, in the partial dependence plot, the corresponding breakdown of how each predictor's distribution contributes to the probability of being a target.
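
    A minimal sketch of this step with the randomForest package follows; the 80/20 split, the seed, and the ntree setting are assumptions for illustration, not the accelerator's exact configuration.

        library(randomForest)

        set.seed(42)
        transactions$target <- factor(transactions$target)  # levels "0" / "1"
        train_idx <- sample(nrow(transactions), 0.8 * nrow(transactions))
        train <- transactions[train_idx, ]
        test  <- transactions[-train_idx, ]

        # Fit the forest on all fields and score the 20% holdout
        rf <- randomForest(target ~ ., data = train, ntree = 500, importance = TRUE)
        probs <- predict(rf, test, type = "prob")[, "1"]  # P(fraud) per test case
        varImpPlot(rf)  # top predictors, as surfaced in the dashboard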


    Partial dependence plots show the marginal effect one or more features have on a model's predictions. We used the R pdp library to calculate a single feature's effect on the predicted probabilities from the random forest model. We also suggest using accumulated local effects plots in place of partial dependence plots: partial dependence plots get skewed when features are correlated, whereas accumulated local effects plots are faster and unbiased.
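
    Continuing the sketch above, this is roughly how the pdp package computes a single feature's effect; rf and train come from the previous block, and picking account24hours as the feature is our choice for illustration.

        library(pdp)

        # Partial dependence of P(fraud) on account24hours;
        # which.class = 2 selects the second factor level ("1" = fraud)
        pd <- partial(rf, pred.var = "account24hours", train = train,
                      which.class = 2, prob = TRUE)
        plotPartial(pd)  # marginal effect on the predicted probability
        # For accumulated local effects plots, packages such as ALEPlot or iml
        # can be used instead.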

    To assess the supervised model results, we look at the ROC plot and the confusion matrix. The confusion matrix is computed on the 20% of the data held out for testing, using a heuristic cutoff. Since this is a binary classification problem - predicting whether the target is 0 or 1 - the random forest outputs a probability between 0 and 1, and by default anything above a 0.5 threshold is categorized as fraudulent and anything below it as not fraudulent. We could keep 0.5 as the cutoff, but in practice that may not be helpful. What if we turn customers away by investigating too many low-risk events, or, on the other end, neglect too many high-risk events? This is why we incorporated cost-sensitive classification and offer different options for configuring the models that will eventually be put into production.


    We will now reevaluate the default 0.5 threshold. There are a few options for choosing the threshold that decides whether or not a new case is fraudulent. You can choose the threshold that maximizes F1; an F1 score closer to 1 means a more accurate model with few false positives and few false negatives. In the Optimal Threshold Chart, you can choose a metric on the Y-axis and see how it varies with the predicted probability. In the Cost Function chart, you can define your own costs for a false positive (incorrectly labeling a case as fraud) and for a false negative (missing a fraud).
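
    A sketch of the maximum-F1 option, scanning candidate thresholds over the holdout probabilities probs and labels from the earlier block (the threshold grid is an assumption):

        # F1 score at a given probability cutoff
        f1_at <- function(threshold, probs, actual) {
          pred <- as.integer(probs >= threshold)
          tp <- sum(pred == 1 & actual == 1)
          fp <- sum(pred == 1 & actual == 0)
          fn <- sum(pred == 0 & actual == 1)
          if (tp == 0) return(0)
          precision <- tp / (tp + fp)
          recall    <- tp / (tp + fn)
          2 * precision * recall / (precision + recall)
        }

        actual <- as.integer(as.character(test$target))
        thresholds <- seq(0.01, 0.99, by = 0.01)
        f1s <- sapply(thresholds, f1_at, probs = probs, actual = actual)
        thresholds[which.max(f1s)]  # cutoff with the highest F1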

    These can be literal dollar costs or relative costs. For example, we can set the cost of incorrectly labeling a transaction as fraud low, at $1, and the cost of missing a fraud high, at $10 (i.e., ten times more expensive). We can adjust these numbers as needed and either take the minimum-cost cutoff or choose from the chart the point where cost starts to increase (i.e., where the cost is still relatively low but we want a higher threshold). The resulting confusion matrix and F1 score conveniently update based on the marked threshold.
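
    The same scan can minimize total cost instead, using the $1/$10 example above; actual, probs, and thresholds follow the previous sketch.

        # Total misclassification cost at a given cutoff
        cost_at <- function(threshold, probs, actual, fp_cost = 1, fn_cost = 10) {
          pred <- as.integer(probs >= threshold)
          fp <- sum(pred == 1 & actual == 0)  # false alarms
          fn <- sum(pred == 0 & actual == 1)  # missed frauds
          fp * fp_cost + fn * fn_cost
        }

        costs <- sapply(thresholds, cost_at, probs = probs, actual = actual)
        thresholds[which.min(costs)]         # minimum-cost cutoff
        plot(thresholds, costs, type = "l")  # or pick where cost starts to climb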


    Create and Evaluate an Unsupervised Learning Model

    The accelerator also has an unsupervised model based on Principal Component Analysis (PCA) to find anomalous transactions when no target variable is provided in the dataset. PCA is a dimensionality reduction technique that projects data into a lower-dimensional space while maximizing the variance in the transformed data fields. It effectively reduces the number of variables in our data while retaining as much information as possible in the newly transformed data. These new data fields are called principal components or factors. PCA is also useful for exploratory data analysis: most of the information from the original features is compressed into the first components, which is why we explore how our data and features fall along the first two principal components and visualize them in the 2D scatterplots to get an idea of how the new factor space arranges points.
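
    A minimal sketch with base R's prcomp; restricting to numeric fields and the object names are our assumptions for illustration.

        # PCA on the numeric predictors, standardized before projection
        keep <- sapply(transactions, is.numeric) & names(transactions) != "target"
        pca <- prcomp(transactions[, keep], center = TRUE, scale. = TRUE)

        summary(pca)                  # variance retained per component
        scores <- as.data.frame(pca$x)
        plot(scores$PC1, scores$PC2)  # how cases arrange in the new factor space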


    The PCA analysis is run with as many components as possible, but we want to fix an upper limit on the number of components - that is, decide how much to reduce our data, or how many components to use in the production model. The elbow in the factor plot from the PCA analysis indicates a plateau in the information retained from our original data; here the elbow is at 10 components. In the center visualization, each point is a transaction from the new data. Cases that fall far away from the clusters are potential indications of fraud, because the PCA analysis compresses regular data into similar regions of the space while irregular data may fall elsewhere.


    To categorize a case as fraud or not, we define oddity as the distance between a point and the origin (which is meaningful because the data was normalized for PCA). Hitting the "calculate oddity" button computes this metric for each point in the background. In the model results, we can again determine the oddity threshold based on the distribution of the distances we just calculated. The predicted oddity distances taper off above 35 or so, which is what we set as the unsupervised model threshold. It differentiates anomalous transactions (above the threshold) from regular ones (below the threshold).
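
    A sketch of this oddity metric, using the pca object from the earlier block together with the 10-component limit and the 35 cutoff chosen above:

        # Oddity = Euclidean distance from the origin in the reduced space
        k <- 10
        oddity <- sqrt(rowSums(pca$x[, 1:k]^2))

        hist(oddity, breaks = 50)  # inspect the distribution of distances
        flagged <- oddity > 35     # unsupervised model threshold from above
        sum(flagged)               # number of transactions flagged as anomalous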


    Model Deployment

    Both models we trained can be deployed to Spotfire® Streaming, where they label new, real-time data without any further data labeling or human intervention. The models are deployed to an AMS server and subsequently into a running Spotfire® Streaming engine. We then start a simulation that mimics new transaction data flowing through our system. We can use Spotfire® Streaming LiveView Web or our DXP to watch as our models score new data. We clearly see which transactions are flagged as fraudulent or anomalous when the models score them above our set thresholds. The DXP tracks flagged cases and surfaces them for further investigation.

    Summary

    The risk accelerator allows non-experts in any industry to detect risk events in real time using Spotfire and Spotfire Streaming. We also tested it on heart disease risk data, where the cost function setup was key to designing models given the high stakes of patient safety. The analytics tools provided in this accelerator make for a customizable data science solution for any risk detection use case.

     


    Authors:

     

    Sweta Kotha is a Data Scientist at Spotfire® and a recent graduate from Carnegie Mellon University. Her experience spans data science, natural language processing, and biostatistics. She likes trying out new technologies and methods to address analytics challenges and is interested in effectively communicating with data. She enjoys reading, running, and traveling.

     

    David Katz is a Principal Consultant at Spotfire®. With a long career in data analysis, model building and statistical consulting, David enjoys tackling challenging problems with real-world benefits, in particular using advanced regression methods and making the invisible visible. The most fun is the variety of applications he has been able to work with, from Formula One racing to marketing and operations. In his spare time he likes to bike, hike and do yoga.

