As the availability and quality of business data have increased over the past several decades, so too has the ability of businesses to assess the impact of new ideas. Increasingly powerful computer systems and software packages have enabled businesses to understand how new initiatives change key performance indicators. Much as the technological component of program evaluation has evolved, so too has the business paradigm for determining the value of an idea. Businesses that initially relied on experience and intuition moved to rudimentary reporting and tracking, then to simple pre-versus-post statistical analysis, and many have finally adopted a more rigorous pre-versus-post, test-versus-control approach that leverages the scientific method. Though this technique is a reliable and accurate method for determining the incremental impact of a new program, there are instances where it does not capture the full value of an action.
At their core, consumer-facing initiatives are designed to change consumer behavior in a manner that is profitable to the retailer or business over time. Though the effect of something like a price change or new product introduction may be immediately measurable, particularly in a test-versus-control setting, there may be secondary effects on consumer behavior that are not as easily assessed. In particular, it can be difficult to measure the impact of a promotion on a consumer's purchases beyond those items or services that are directly promoted.
For example, consider a program that places items on promotional display throughout physical retail locations. A customer walking through the store may add a featured item to his or her basket because the product is featured more prominently than before. Businesses today are very familiar with this scenario; because they can identify which items are promoted, they can measure the impact of the program on those items. However, suppose that the presence of the promoted item causes the customer to add several more items to the basket that are not on promotion. Businesses are much less adept at determining the impact of the program in this instance and have traditionally relied on three strategies to do so: (1) ignore the impact on the rest of the purchase and focus solely on promoted items; (2) measure the impact for the entire store; and (3) include the benefit of all other items bought in the same trip along with a featured item. However, all three of these techniques have inherent flaws. An example of a grocery store putting chips on an aisle endcap promotion can be used to highlight these flaws.
In a first strategy option, the grocery store may ignore the impact on the rest of the purchase and focus solely on promoted items. This method potentially undervalues the initiative. In this example, the grocery store counts only incremental chip sales or transactions. However, having chips on promotion will probably increase salsa sales as well. By not measuring salsa sales, the grocery store is not capturing the full value of the promotion.
In a second strategy option, the grocery store may measure the impact for the entire store. This method generally includes too much noise from all of the other products in the grocery store, so the signal of the initiative is lost. If the grocery store tries to read the impact of the chips promotion at the total store level, too many other actions are occurring across the store at the same time, and the incremental difference attributable to the promotion cannot be isolated.
In a third strategy option, the grocery store may include the impact of the entire basket in analysis of the program. This method potentially overvalues the initiative. In this example, this would translate to attributing all incremental sales of baskets containing chips to the chips promotion. However, this does not conform to business logic. If customers had previously not purchased chips, and now add chips to the items they were already planning to purchase, the entire basket's sales would be attributed to the chips promotion. This almost certainly does not reflect actual consumer behavior. The chip display likely did not cause the consumer to buy everything in his or her basket. For example, purchases of products without strong co-selling relationships with chips, such as cleaning products or fresh fruit, are unlikely to have been spurred by the chip display. Thus, using this methodology overvalues the promotion.
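The divergence among the three strategies can be made concrete with a small sketch. The basket contents and dollar amounts below are invented for illustration only; the point is simply that, on the same pre/post data, the promoted-items-only measure, the whole-store measure, and the whole-basket measure yield very different answers.

```python
PROMOTED = {"chips"}

# Hypothetical baskets (item -> dollar sales) before the endcap display.
pre_baskets = [
    {"crackers": 2.0, "milk": 3.0, "cleaning": 5.0},   # no chips
    {"chips": 2.5, "milk": 3.0},
]

# Hypothetical baskets while the display is running. Salsa rides along
# with chips; cleaning-product sales drift down for unrelated reasons.
post_baskets = [
    {"chips": 2.5, "salsa": 2.0, "milk": 3.0, "cleaning": 4.0},
    {"chips": 2.5, "salsa": 2.0, "milk": 3.0},
]

def total(baskets, items=None):
    """Total sales, optionally restricted to a set of items."""
    return sum(
        amount
        for basket in baskets
        for item, amount in basket.items()
        if items is None or item in items
    )

# Strategy 1: promoted items only -- misses the salsa halo entirely.
lift_promoted = total(post_baskets, PROMOTED) - total(pre_baskets, PROMOTED)

# Strategy 2: whole store -- the unrelated cleaning-product decline
# is mixed into the reading as noise.
lift_store = total(post_baskets) - total(pre_baskets)

# Strategy 3: whole basket -- every item in any chip-containing basket
# (milk, cleaning products) is credited to the chip display.
def chip_baskets(baskets):
    return [b for b in baskets if PROMOTED & b.keys()]

lift_basket = total(chip_baskets(post_baskets)) - total(chip_baskets(pre_baskets))

print(lift_promoted, lift_store, lift_basket)  # three different answers
```

On this toy data the promoted-items-only lift is the smallest and the whole-basket lift the largest, mirroring the undervaluation and overvaluation flaws described above.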
All three approaches also share the additional flaw of not accounting for cannibalization or substitution effects. Referring to the chip example, the customer may have walked in the door planning to purchase crackers. The promotional display encouraged the customer to switch to chips, so the crackers were no longer purchased. To assess the true impact of the promotion, the negative impact on cracker sales must be accounted for. Moreover, the customer may have planned to purchase cheese along with the crackers but instead now buys salsa along with chips. Both cheese and crackers have been negatively impacted.
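One way to net the halo against the cannibalization is to measure per-item change only over a set of items believed to be affected by the promotion. The sketch below uses invented sales figures and a hand-picked affected set; it is an illustration of the netting arithmetic, not a method for choosing that set.

```python
# Hypothetical per-item sales before and during the chip promotion.
pre_sales  = {"chips": 2.5, "crackers": 4.0, "cheese": 3.0, "salsa": 1.0, "milk": 30.0}
post_sales = {"chips": 7.5, "crackers": 2.0, "cheese": 2.0, "salsa": 5.0, "milk": 29.0}

# Assumed set of items affected by the chip display: the promoted item,
# its halo (salsa), and the substitutes it cannibalizes (crackers, cheese).
RELATED = {"chips", "salsa", "crackers", "cheese"}

# Per-item lift over the affected set only; unrelated drift (milk) is excluded.
per_item_lift = {
    item: post_sales.get(item, 0.0) - pre_sales.get(item, 0.0)
    for item in RELATED
}

# Net impact: chip and salsa gains minus cracker and cheese losses.
net_impact = sum(per_item_lift.values())
print(per_item_lift, net_impact)
```

Here the chip gain and salsa halo are partially offset by the cracker and cheese losses, so the net reading is smaller than the halo-only view but larger than the promoted-item-only view. The open question, taken up next, is how to determine the affected set without hand-picking it.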
Because of these inherent flaws, there is a clear need for a more intelligent method to determine which non-promoted items may be affected by a program in order to properly determine the full value of the business idea.