How to Calculate Display ROAS
Four directions to take at this crossroads
A classic problem in Display planning is working out how to fairly value performance. Traditional methods of Display measurement can be plagued by duplicated revenue figures and offer no absolute measure of ROAS. Although new methods have arisen thanks to the availability of unique data sources and algorithms, many marketers are rightly reluctant to move away from traditional methods until the new techniques are proven to be robust.
At F3D we have accumulated experience across old and new methods with multiple clients. In this blog post, we survey a few of the avenues you can take in measuring ROAS in Display, plotting out the advantages and disadvantages of each.
Where should you go with your Display KPIs?
1. Cautiously backtrack: relative performance of existing metrics
Let’s start at the beginning. Today’s analysts use the metrics available through DCM, DBM or any other DSP or ad server. These usually include view-through and click-through revenue, as well as click-through rates and other in-media metrics, such as the plethora of engagement metrics found on YouTube or Facebook (likes, shares, etc.).
No need to retrain analysts or on-board new data. On the contrary, there is so much historical data across these metrics that you can easily compare relative performance against previous activity and adjust campaigns to meet your KPIs.
What are the risks of not moving beyond traditional methods?
There is no true, absolute value of ROAS. Your click-through and view-through revenue figures are not de-duped, and will therefore be unrealistically high. For example, if you are a small, growing brand advertising on Facebook, your FB view-through revenue may account for up to 90% of your actual revenue, simply because most of your customers will have either seen or interacted with an ad. However, it is still risky to attribute all of this revenue to that one channel, especially when you may also attract strong Organic and CRM traffic to site.
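To see how quickly un-de-duped figures inflate, consider a toy example (the channels and order values below are hypothetical): when every channel that touched an order claims the full order value, the per-channel figures collectively “claim” far more than the revenue that actually exists.

```python
# Hypothetical orders: each records its value and which channels touched it.
orders = [
    {"value": 100.0, "touched": {"facebook", "display"}},
    {"value": 60.0,  "touched": {"facebook"}},
    {"value": 40.0,  "touched": {"display", "search"}},
]

actual_revenue = sum(o["value"] for o in orders)

# Non-de-duped "view-through" style reporting: every touched channel
# claims the full order value, so multi-touch orders are counted twice.
claimed = {}
for o in orders:
    for ch in o["touched"]:
        claimed[ch] = claimed.get(ch, 0.0) + o["value"]

print(actual_revenue)          # 200.0
print(sum(claimed.values()))   # 340.0 -- channels claim 170% of real revenue
```

The gap between the two totals is exactly the double-counted multi-touch revenue, which is why per-channel ROAS built this way cannot be summed or compared in absolute terms.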
2. Safe left turn: deploy a de-duped, dynamic attribution model
We refer to an attribution model as a model which ingests the raw visit and impression data of your customers and then outputs which of the paid or organic touch points were likely to have led to a customer’s purchase. In an attribution model, each pound of revenue is assigned to exactly one channel or campaign, so no pound is counted twice.
You have a smart, fully de-duped ROAS and understand relative and absolute performance in near real time. Specifically, each channel’s revenue will then add up to your total revenue. Furthermore, the flexibility of dynamic models allows you to specify how the value is determined: should you assign more value to channels with higher conversion rates? Maybe it’s time on page or recency of an interaction that is a more important indicator of a channel’s value? This is all doable with dynamic attribution models.
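As an illustration of the de-duping property, here is a minimal sketch of one simple rule: position-based (U-shaped) attribution. The 40/20/40 weighting, the function name and the example paths are assumptions for illustration only, not any specific vendor’s model; the key point is that each path’s credit sums to its order value, so channel totals sum to total revenue.

```python
def attribute(path, value, first=0.4, last=0.4):
    """Position-based (U-shaped) credit: the first and last touch points
    each get 40% of the order value and the middle touches share 20%.
    (Hypothetical weighting for illustration.)"""
    credit = {}
    if len(path) == 1:
        credit[path[0]] = value
        return credit
    credit[path[0]] = credit.get(path[0], 0.0) + first * value
    credit[path[-1]] = credit.get(path[-1], 0.0) + last * value
    middle = path[1:-1]
    if middle:
        share = (1 - first - last) * value / len(middle)
        for ch in middle:
            credit[ch] = credit.get(ch, 0.0) + share
    else:
        # Two-touch path: split the middle share between the two ends.
        leftover = (1 - first - last) * value
        credit[path[0]] += leftover / 2
        credit[path[-1]] += leftover / 2
    return credit

# Hypothetical customer paths and order values.
totals = {}
for path, value in [(["display", "search", "email"], 100.0),
                    (["search", "search"], 50.0)]:
    for ch, c in attribute(path, value).items():
        totals[ch] = totals.get(ch, 0.0) + c

print(totals)  # channel credits sum exactly to 150.0 -- fully de-duped
```

A dynamic model replaces the fixed 40/20/40 weights with weights learned from the data (conversion rates, recency, time on page, and so on), but the de-duping guarantee works the same way.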
This is computationally very difficult if you want a custom model. Your attribution provider will need very strong technical capabilities for ingesting, processing and analysing data, and you will then need a custom dashboard to review the results. You can forego a custom model and use a generalised one, such as Google’s 360 or Adometry products, but these may be either a) too simple or b) too complex and hidden within Google’s black box of methods.
3. Risky right: econometric modelling
An attribution model looks at actual paths of your customers: which bought, which didn’t, and the exact channels that brought them to each of these outcomes. On the other hand, if you still want to de-dupe revenue but don’t have this data or an active partner with data capabilities, you can go for an econometric model. In a nutshell, these models will mine for statistical relationships that are on average relevant across your market. The output of the model is the impact of extra spend on topline revenue.
This is a data-light model that can also be deployed historically, as long as you have a clean set of time-series data covering revenue and daily channel activity.
Without experience calibrating these models across brands, a statistical model may give you a series of difficult-to-interpret results: negative relationships, near-zero results, or exceptionally high values that are hard to contextualise. Furthermore, the impact results are at best an average across the minimum required sample on which they are calculated, and it’s hard to produce results more granular than a month or a quarter.
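A minimal sketch of the econometric idea, on simulated data (all figures below are made up for illustration): regress daily revenue on daily spend per channel, and read each coefficient as the average incremental revenue per extra pound spent in that channel.

```python
import numpy as np

# Simulated daily data: revenue driven by two channels plus noise.
# (Spend ranges, coefficients and noise level are all hypothetical.)
rng = np.random.default_rng(0)
days = 180
display_spend = rng.uniform(0, 1000, days)
search_spend = rng.uniform(0, 2000, days)
revenue = 5000 + 2.5 * display_spend + 1.8 * search_spend \
    + rng.normal(0, 500, days)

# Ordinary least squares: revenue ~ intercept + display + search.
X = np.column_stack([np.ones(days), display_spend, search_spend])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
base, roas_display, roas_search = coef
# roas_display estimates the incremental revenue per extra pound
# of Display spend, averaged over the whole sample period.
```

Real deployments add controls for seasonality, promotions and adstock effects; without them, as noted above, the raw coefficients can come out negative or implausibly large.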
4. Speed straight ahead: incremental testing
Another option is to isolate the specific channel or campaign you are interested in, and measure its total impact by setting up an A/B test or geo-split test. The output of the test can be a statistically significant uplift of an activity on revenue (or another variable) over the test period.
Since it only requires an experimental setup (control and treatment), it allows you to ‘cheaply’ measure the incremental impact using just topline numbers from Google Analytics or another analytics suite. Furthermore, since a control group means withholding some of your spend, you save money on activity that is unproven.
Unfortunately, because the test requires you to rigidly separate audiences into treatment and control, you have to know exactly what you are measuring before you run it. Since you can’t split out the sample in infinite ways, you’ll need to decide on the few hypotheses you are most interested in. Finally, the test results will depend on factors you cannot isolate, such as seasonality, market promotions running concurrently, and the strategies of your competitors. Therefore, you will need to refresh these test results a few times a year, and will definitely not be able to produce a near real-time update.
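A rough sketch of how such a test might be read, on simulated data (the geo revenues, uplift size and test length below are all hypothetical): compare mean daily revenue in treatment and control, and use a Welch t-statistic to judge whether the uplift is likely to be noise.

```python
import numpy as np

# Simulated geo-split: daily revenue for control geos (Display off)
# vs comparable treatment geos (Display on), over the same period.
rng = np.random.default_rng(1)
control = rng.normal(10_000, 800, 60)     # 60 days of baseline revenue
treatment = rng.normal(10_800, 800, 60)   # hypothetical ~8% uplift

uplift = treatment.mean() - control.mean()
uplift_pct = uplift / control.mean()

# Welch's t-statistic for the difference in means.
se = np.sqrt(control.var(ddof=1) / len(control)
             + treatment.var(ddof=1) / len(treatment))
t_stat = uplift / se
# |t| above roughly 2 suggests the uplift is unlikely to be noise
# at the 5% significance level.
```

Note that the statistic only tells you the uplift of this activity, in this period, under these market conditions, which is exactly why the results need refreshing a few times a year.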
Moving beyond the metaphor
While there are four very different ways to approach creating an accurate ROAS measure, you don’t have to pick just one. A data strategy that combines them is the most effective way to proceed. At Forward3D, we use a combination of all four, both to limit the uncertainty of any one methodology and to adjust to the resources and expectations of specific clients.
Peter Ouzounov - Senior Data Scientist