
AB Test Data Functions for Spotfire® 1.3



Summary

A/B testing refers to a number of similar marketing use cases where the goal is to compare the effect of different 'treatments' on a response, such as click-through rates, orders or sales dollars.

Overview

A/B Testing refers to a number of similar use cases where the goal is to compare the effect of different 'treatments' on a response. In the common marketing context, these treatments could be different web pages, different email designs, copy, or promotions. These can be communicated via direct marketing channels, or via broadcast or publications which are not targeted individually. We may be measuring click-through rates, orders, sales dollars or other measures. The common thread is the need to compare the results and estimate the effect of one treatment versus another.

 

Documentation


Because of this variety, there are 4 basic use cases from a statistical point of view.

 

                               Counting Responses    Measuring Sales Amounts
Known Reach (denominator)      TYPE 1                TYPE 2
Unknown Reach (denominator)    TYPE 3                TYPE 4

The component consists of 6 files:

  • 4 data function files, AB# for TIBCO Spotfire Vx.sfd (one for each test type)
  • Combined AB test for TIBCO Spotfire Vx.dxp
  • A README file

 

TYPE 1

Suppose we know we have sent out N1 standard (control) emails, and N2 new (test) emails with a different headline. We receive n1 and n2 notifications of each email being opened. We want to estimate the increase due to the new headline. Is n2/N2 greater than n1/N1? By how much, after accounting for random variation?

 

Assuming N1 and N2 are both substantial, a reasonable and common approach is to use the normal approximation to the binomial distribution and to treat the estimated difference in the proportion of opens as approximately normally distributed. We provide a data function Abtest_Type1 that takes these four parameters plus a required confidence level to make this estimate, and a page titled ABtest_Type1 inside the dxp shows an example of its use.
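
For illustration only, the calculation behind this type of test can be sketched as below. This is a minimal Python sketch assuming the standard normal approximation to the difference of two binomial proportions; the function name ab_type1 and the example counts are hypothetical, not the component's actual data function code.

from math import sqrt
from scipy.stats import norm

def ab_type1(N1, n1, N2, n2, conf=0.95):
    """Estimate the lift n2/N2 - n1/N1 with a normal-approximation interval."""
    p1, p2 = n1 / N1, n2 / N2
    diff = p2 - p1
    # Standard error of the difference of two independent proportions
    se = sqrt(p1 * (1 - p1) / N1 + p2 * (1 - p2) / N2)
    z = norm.ppf(1 - (1 - conf) / 2)   # two-sided critical value
    return diff, (diff - z * se, diff + z * se)

# Example: 10,000 control emails with 820 opens vs 10,000 test emails with 910 opens
diff, ci = ab_type1(N1=10000, n1=820, N2=10000, n2=910, conf=0.95)
print("Lift in open rate: %.4f (%.4f to %.4f)" % (diff, ci[0], ci[1]))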

TYPE 2

If the emails can result in a sale, and the sale amount can vary, we want to compare the sales per email and to account for the additional variability due to different order sizes. For this case, we need to provide the individual sale amounts (the sales.vector) as well as N1 and N2. We simulate the distribution of sales amounts (including the case of zero sales) in order to estimate the overall standard deviation of sales per email: a Bernoulli process determines whether each recipient orders or not, and, conditional on ordering, an empirical distribution is used to simulate the order amount. This is provided as a data function Abtest_Type2, and a page titled ABtest_Type2 shows an example of its use.
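
A rough Monte Carlo sketch of this logic is shown below, assuming a Bernoulli draw for whether each recipient orders and resampling of the observed sale amounts for the order size; the function name, inputs and example numbers are illustrative assumptions, not the shipped data function.

import numpy as np

def simulate_sales_per_email(sales_vector, N, n_sims=5000, rng=None):
    """Simulate mean sales per email sent for one treatment arm."""
    if rng is None:
        rng = np.random.default_rng(0)
    sales = np.asarray(sales_vector, dtype=float)
    p_order = len(sales) / N                      # observed order rate
    sims = np.empty(n_sims)
    for i in range(n_sims):
        orders = rng.random(N) < p_order          # Bernoulli: does each recipient order?
        amounts = rng.choice(sales, size=orders.sum(), replace=True)  # empirical order sizes
        sims[i] = amounts.sum() / N               # sales per email sent
    return sims

# Compare control vs test arms and read off an interval for the difference
rng = np.random.default_rng(1)
control = simulate_sales_per_email([40, 55, 80, 120], N=5000, rng=rng)
test = simulate_sales_per_email([45, 60, 75, 150, 95], N=5000, rng=rng)
diff = test - control
print(diff.mean(), np.percentile(diff, [2.5, 97.5]))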

 

TYPE 3

We advertise in the local paper near our store this week and want to compare this week's traffic count with last week's. We have no denominator, only a count for each week. If conditions are similar except for the advertising, we can assume a Poisson distribution and test for the difference in rates using a normal approximation (so long as n1 and n2 are greater than 20). See data function ABtest_type3 and tab ABtest_Type3 in the dxp.
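
A minimal sketch of this Poisson comparison, using the same normal approximation, is given below; the function name ab_type3 and the example counts are hypothetical, not the component's actual code.

from math import sqrt
from scipy.stats import norm

def ab_type3(n1, n2, conf=0.95):
    """Difference in Poisson counts (test - control) with an approximate interval."""
    diff = n2 - n1
    se = sqrt(n1 + n2)                     # variance of a Poisson count equals its mean
    z = norm.ppf(1 - (1 - conf) / 2)
    p_value = 2 * norm.sf(abs(diff) / se)  # two-sided test of equal rates
    return diff, (diff - z * se, diff + z * se), p_value

# Example: 240 store visits last week vs 295 this week
print(ab_type3(n1=240, n2=295))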

 

TYPE 4

We test as in Type 3 above, but want to compare sales. We provide the sales vectors (Target Input) and simulate the total sales in each period, using a normal distribution for the count of orders in each period and an empirical distribution for the sales amounts. See data function Abtest_type4 and tab ABtest_Type4 in the dxp. An optional input vector gives a set of categories for more detailed reporting.
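
The simulation described here might look roughly like the sketch below, which draws the number of orders in each period from a normal approximation to its Poisson count and resamples order amounts from that period's sales vector; names and example values are assumptions, and the optional category vector is omitted for brevity.

import numpy as np

def simulate_period_sales(sales_vector, n_sims=5000, rng=None):
    """Simulate total sales for one period from its observed orders."""
    if rng is None:
        rng = np.random.default_rng(0)
    sales = np.asarray(sales_vector, dtype=float)
    n_orders = len(sales)
    totals = np.empty(n_sims)
    for i in range(n_sims):
        # Normal approximation to the Poisson count of orders in the period
        k = max(int(round(rng.normal(n_orders, np.sqrt(n_orders)))), 0)
        totals[i] = rng.choice(sales, size=k, replace=True).sum()  # empirical order amounts
    return totals

# Compare last week's vs this week's simulated total sales
rng = np.random.default_rng(2)
last_week = simulate_period_sales([30, 45, 60, 25, 80], rng=rng)
this_week = simulate_period_sales([35, 50, 70, 90, 40, 65], rng=rng)
diff = this_week - last_week
print(diff.mean(), np.percentile(diff, [2.5, 97.5]))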

Release v1.3

Published: November 2016

Fixed type 2 Test and Control labels that were swapped in the previous version

 

Release v1.2

Published: October 2016

Minor bug fixes to data function and .dxp

 

Release v1.0

Published: September 2016

Initial release

