A key challenge B2B marketers face is trying to figure out which of their marketing actions actually accelerate the sales cycle and generate revenue.

Imagine this common scenario: Marketing sends an email campaign containing a thought leadership piece, and a potential buyer clicks on the link. Marketing then passes the lead to a member of the sales team, who uses a combination of emails and calls to secure a meeting. Two sales meetings occur, after which the prospect attends a marketing-organised conference. A month later, after the sales team continues to engage the potential buyer, the lead is converted to a sale.

This is a great example of how marketing and sales work together, yet there are some questions that ought to be addressed. First, which of the marketing actions contributed to revenue? Second, would the prospect have made the purchase anyway without a given touch point? And lastly, what can be learned from this buying journey to improve future marketing investments?

Looking for answers with attribution modelling

Today, most B2B marketers use attribution modelling to try to answer these questions. These models assign value to touchpoints and investments along the buying journey to identify which activities contributed to revenue. They are a helpful tool for generating hypotheses about the relative value of various actions. Different organisations use different methodologies; a few common approaches are first action attribution, last action attribution, and weighted attribution. First action attribution assigns full revenue credit to the original touchpoint that begins the sales process. Last action attribution, quite predictably, does the opposite. Weighted attribution, on the other hand, gives some of the credit to each of the marketing touchpoints that happen before the sale.

Each approach provides very different results when applied to the example above. First action attribution would assign all the credit to the email campaign, last action attribution would give it to the conference, and weighted attribution would say both played a part. This raises the obvious question of which approach is right and will provide the best insights for future decisions.
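To make the three approaches concrete, here is a minimal sketch of how each would divide credit for the deal in the opening scenario. The journey and deal value are illustrative, not real figures:

```python
def first_action(touchpoints, revenue):
    """Assign all revenue credit to the first marketing touchpoint."""
    return {touchpoints[0]: revenue}

def last_action(touchpoints, revenue):
    """Assign all revenue credit to the last marketing touchpoint."""
    return {touchpoints[-1]: revenue}

def weighted(touchpoints, revenue):
    """Split revenue credit evenly across all marketing touchpoints."""
    share = revenue / len(touchpoints)
    return {tp: share for tp in touchpoints}

# Marketing touches from the example journey (hypothetical deal value)
journey = ["email campaign", "conference"]
deal_value = 100_000

print(first_action(journey, deal_value))  # all credit to the email campaign
print(last_action(journey, deal_value))   # all credit to the conference
print(weighted(journey, deal_value))      # credit split across both touches
```

Note that a weighted model need not split credit evenly; many organisations weight touchpoints by position or recency, which only sharpens the question of which scheme reflects reality.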

As attribution models are based on correlations, they can give decision-makers a rough estimate of which actions are productive, but they cannot accurately isolate the true cause-and-effect relationships between business actions and outcomes. This is a fundamental limitation, as the whole goal of attribution is to understand how taking a given action will affect customer behaviour.

What can be learned from consumer-focused industries?

Marketers can improve the accuracy of their attribution models by employing the same approach that leading retailers, banks, manufacturers, restaurants, and hotels leverage today: test vs. control analytics. By establishing precisely how business actions cause customers to change their behaviour, this methodology provides powerful insights that cut to the core of decision-making.

The concept is simple in theory: compare the performance of accounts that received a given marketing touchpoint (‘test’) with highly similar ‘control’ accounts that did not. The difference in performance (e.g., the relative change in the total value of their transactions) between the two groups after the marketing touchpoint (e.g., event attendance, email campaign, etc.) can then be confidently attributed to the action.
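The comparison described above can be sketched in a few lines. The account figures below are made up for illustration; in practice the pre- and post-period spend would come from transaction records:

```python
def average_change(accounts):
    """Mean relative change in transaction value, post-period vs. pre-period."""
    return sum((post - pre) / pre for pre, post in accounts) / len(accounts)

# (pre-period spend, post-period spend) per account -- illustrative values
test_accounts = [(100.0, 115.0), (200.0, 224.0), (150.0, 171.0)]     # received the touchpoint
control_accounts = [(120.0, 126.0), (180.0, 187.2), (90.0, 94.5)]    # did not

# Incremental lift attributable to the marketing action
lift = average_change(test_accounts) - average_change(control_accounts)
print(f"Incremental lift attributed to the touchpoint: {lift:.1%}")  # 9.0%
```

The key point is that the control group's change captures what would have happened anyway, so only the difference between the groups is credited to the action.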

Why is it so challenging to get it right?

While the premise is straightforward, test vs. control analysis is difficult to do well in practice. The challenge stems from a couple of fundamental issues.

First, identifying the right test and control accounts is difficult. Accounts that receive marketing touches are likely to be fundamentally different from accounts that do not across a variety of dimensions (e.g., pipeline stage, opportunity size, etc.). New accounts with growing relationships should therefore not be compared to long-tenured accounts with consistent buying habits. Instead, companies should make apples-to-apples comparisons by identifying other new accounts that are growing their relationships but did not receive the marketing action in question.

Second, there are countless outside factors influencing any given account at any moment. Whether it is external economic factors, shifts in organisational strategy, or leadership changes, many different events that are outside Marketing’s control influence results. This makes it very difficult to isolate the true incremental impact of any investment. The best way to overcome this challenge is to identify control accounts that behave extremely similarly to ‘test’ accounts in the months leading up to the marketing action – then, the impact of the action will be reflected in any subsequent change in purchase behaviour between the two groups.
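One simple way to find such look-alike controls is to match each test account to the candidate whose pre-period purchase history is closest. The sketch below uses a basic squared-difference distance over monthly spend; the account names and spend series are hypothetical:

```python
def distance(series_a, series_b):
    """Sum of squared differences between two monthly spend series."""
    return sum((a - b) ** 2 for a, b in zip(series_a, series_b))

def match_controls(test, candidates):
    """Pair each test account with the closest-behaving control candidate."""
    matches = {}
    for name, series in test.items():
        best = min(candidates, key=lambda c: distance(series, candidates[c]))
        matches[name] = best
    return matches

# Monthly spend in the months before the marketing action (illustrative)
test_accounts = {"acct_A": [10, 12, 11], "acct_B": [50, 48, 55]}
control_candidates = {
    "acct_X": [11, 11, 12],
    "acct_Y": [49, 50, 53],
    "acct_Z": [5, 4, 6],
}

print(match_controls(test_accounts, control_candidates))
# acct_A pairs with acct_X, acct_B pairs with acct_Y
```

In a production setting the matching would typically also cover attributes such as industry, pipeline stage, and account size, but the principle is the same: controls should mirror test accounts before the action so that any divergence afterwards can be credited to it.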

Institutionalising test vs. control analytics to drive innovation

When applied across marketing activities, cause-and-effect analytics allow executives to understand the real return on investments and inform better budget allocation decisions. Further, companies can analyse individual programmes (e.g., annual conferences, specific email campaigns, etc.) through this test vs. control lens to reach unprecedented levels of accuracy and granularity. By segmenting results, organisations can understand which accounts will respond best to a given marketing programme and see which aspects of the programme are most effective.

This methodology can be applied to solve challenges beyond just marketing. Test vs. control analysis is increasingly relied upon to improve B2B decisions related to sales force optimisation, pricing, and more. As B2B organisations become more sophisticated with this approach, they can proactively test new ideas with some accounts and not others to identify effective strategies and discard those that will not pay back. Rapid, proactive testing allows companies to be more innovative by enabling them to try risky ideas on a small scale, and only move forward with those that meet the desired ROI hurdles. Organisations that institutionalise test vs. control analytics not only stand to improve attribution accuracy, but will also make better decisions and push traditional boundaries.

Don’t get left behind relying on correlations and guesswork – use cause-and-effect analytics to isolate the true impact of business actions on revenue generation.


By Rupert Naylor, senior vice president at Applied Predictive Technologies




