Moti Radomski, VP, Product @ Kenshoo
Generally, marketing experiments are used to test hypotheses in order to make better data-driven decisions about how to run advertising programs. In those cases, even though multiple metrics will be returned, they are all focused on a single value. For example, in the test question “what happens to performance when I double my spend on a channel?”, you might see conversion rate go up by X percent, click-through rate stay the same, or total revenue go down by Y dollars. But marketers can find growth opportunities if they just look at this process a little differently.
What if a marketer wants to know the ideal YouTube remarketing budget for an upcoming campaign? This is a common type of marketing question: spend too little and the campaign won’t have enough reach; spend too much and part of the budget is simply wasted. A test-and-learn approach would dictate that you run a marketing experiment to gauge the return at different budget levels.
One test could tell you what happens when you double the spend. Let’s say you find that doubling your spend triples your conversions.
That’s great to know! But should the marketer stop their curiosity there? Of course not!
But what we don’t know is whether doubling spend is the optimal level. Once you learn that doubling spend shows a good return, the next question becomes, “what if I triple my spend?” And once you get that number back, you might have another hypothesis to test: “what if I quadruple my spend?”
Let’s say you run multiple tests: first to find out what doubling spend does, then tripling, then quadrupling. What if, in each case, you find the spend is still efficient? Then you have to keep going at 5X, 6X, 7X, and so on until you find the optimal spend level. In fact, you would have to keep running tests until you found the point where the return on the investment begins to drop. Only then would you know where your optimal spend level actually is.
The problem with running sequential tests is that every test requires a ramp-up period, a testing period, and time to analyze the results and draw a conclusion. Run every test sequentially and it will take quite a long time to find the optimal spend level. In the previous example, let’s say you could get an entire test cycle (ramp-up, test, analysis) down to 2 weeks. Running 8 tests would take you, at minimum, 16 weeks. By the time you figure out the optimal spend, you’ll be in the next quarter, and that’s simply too long to wait for answers.
Another reason running multiple tests sequentially isn’t ideal is that, in a live market, your results may vary over time with factors such as seasonality, influence from your other campaigns, competitor spend changes, and other market fluctuations. These variables reduce accuracy when comparing results from multiple tests over time. It would be hard to truly compare the results from the first test (week 2) with your last test (week 16) without external factors skewing your analysis.
So, if running sequential tests creates issues of time-to-insight (TTI) and data skew, what is the solution?
Running multiple, simultaneous tests at different spend levels solves the time-to-insight and data skew issues.
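The time savings can be sketched with the hypothetical numbers used earlier (a 2-week cycle per test, 8 spend levels); these are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope timing for sequential vs. simultaneous testing.
# WEEKS_PER_TEST and SPEND_LEVELS are the article's hypothetical
# assumptions, not real benchmarks.

WEEKS_PER_TEST = 2   # ramp-up + test + analysis, as assumed above
SPEND_LEVELS = 8     # e.g. 2x, 3x, ... 9x spend

sequential_weeks = WEEKS_PER_TEST * SPEND_LEVELS  # one test after another
simultaneous_weeks = WEEKS_PER_TEST               # all cells run at once

print(sequential_weeks)    # 16
print(simultaneous_weeks)  # 2
```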
With simultaneous testing, the output is a set of spend-and-return data points that can be plotted on a graph to develop a diminishing returns curve. As most media planners already know, with a diminishing returns curve it’s easy to find growth opportunities and the optimal spend level because, at some point, the curve flattens, and the point where it begins to flatten reveals the most efficient spend-to-return level.
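As a minimal sketch of reading an optimal spend level off such a curve, suppose five simultaneous test cells returned the spend/conversion pairs below (all numbers hypothetical, as is the efficiency threshold):

```python
# Reading an optimal spend level off a diminishing returns curve.
# The (spend, incremental conversions) pairs are hypothetical test-cell
# results; MIN_EFFICIENCY is an assumed business threshold.

cells = [
    (10_000, 220),
    (20_000, 400),
    (30_000, 520),
    (40_000, 575),
    (50_000, 590),
]

MIN_EFFICIENCY = 0.005  # extra conversions per extra dollar we still accept

def optimal_spend(cells, min_efficiency):
    """Walk the curve and stop where the marginal return flattens out."""
    best = cells[0][0]
    for (s0, c0), (s1, c1) in zip(cells, cells[1:]):
        marginal = (c1 - c0) / (s1 - s0)  # extra conversions per extra dollar
        if marginal < min_efficiency:
            break  # the curve has flattened past the acceptable level
        best = s1
    return best

print(optimal_spend(cells, MIN_EFFICIENCY))  # → 40000
```

In this made-up data the step from $40K to $50K yields only 15 extra conversions, so the efficient spend level is $40K.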
Not only will simultaneous tests answer the question of optimal spend, but this curve can answer numerous other questions as well. For example, maybe you don’t care about finding the optimal spend level and just want to forecast the potential conversions for a budget that is set in stone. Maybe you have too little budget to reach your optimal spend, but you have the data in front of you to recommend to your boss or client why they should increase the investment.
Another common situation that marketers face is when some additional budget might come their way. Maybe a TV ad campaign is canceled or budget shifts away from a poor-performing channel. The CMO or client may ask each of their media teams to forecast how an extra million dollars could impact KPIs. With a diminishing returns curve built through marketing testing, the practitioner can accurately predict what the ROI would be on the additional investment.
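That forecast amounts to reading two points off the curve and taking the difference. A sketch, using hypothetical curve points and assuming simple linear interpolation between measured cells:

```python
# Forecasting the impact of extra budget from an existing diminishing
# returns curve. Curve points are hypothetical test results; linear
# interpolation between cells is an assumption.

curve = [
    (0, 0),
    (250_000, 5_000),
    (500_000, 8_500),
    (750_000, 10_500),
    (1_000_000, 11_500),
    (1_250_000, 12_000),
]

def forecast(curve, spend):
    """Linearly interpolate conversions at a given spend level."""
    for (s0, c0), (s1, c1) in zip(curve, curve[1:]):
        if s0 <= spend <= s1:
            return c0 + (c1 - c0) * (spend - s0) / (s1 - s0)
    return curve[-1][1]  # beyond the last cell, assume the curve stays flat

current_spend = 250_000
extra = 1_000_000
incremental = forecast(curve, current_spend + extra) - forecast(curve, current_spend)
print(incremental)  # → 7000.0 forecast extra conversions from the extra $1M
```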
When Kenshoo built its incrementality testing platform, Impact Navigator, a lot of ideas were put on the table. One critical component that was prioritized was the ability to handle simultaneous testing to find growth opportunities.
Using multiple, simultaneous tests, marketers can build diminishing returns calculations to help them in the very important area of media-mix recommendation/optimization. After all, it’s great to know what did and didn’t work in your campaigns so that you can get better results over time, but even better would be to run the best campaigns from the start using strong data-driven decision making. You can find growth opportunities when you have the right data in front of you.
Are you using your marketing measurement approach to find growth opportunities? Or are you still struggling with just getting accurate reads on campaign performance?
Incrementality testing is a fairly simple concept to grasp: you introduce a marketing element to a test group while withholding it from a control group, then compare the results. While the concept is simple, incrementality testing has historically been complex and expensive. Kenshoo’s Impact Navigator enables marketers to design and build experiments easily and affordably.
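The underlying arithmetic of a test/control comparison can be sketched in a few lines; the group sizes and conversion counts here are hypothetical:

```python
# Bare-bones incrementality arithmetic: compare a test group that saw
# the marketing element with a control group that did not.
# All numbers are hypothetical.

test_users, test_conversions = 100_000, 2_400        # exposed group
control_users, control_conversions = 100_000, 2_000  # held-out group

test_rate = test_conversions / test_users
control_rate = control_conversions / control_users

# Scale control conversions to the test group's size, then difference.
incremental_conversions = test_conversions - control_conversions * (test_users / control_users)
lift = test_rate / control_rate - 1  # relative lift over the control baseline

print(f"Incremental conversions: {incremental_conversions:.0f}")  # 400
print(f"Relative lift: {lift:.0%}")  # 20%
```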
Reach out to us today for a quick demo so you can see how it works and if it would be a solid addition to your current process to find growth opportunities.
Request a Demo of Impact Navigator