The online advertising ecosystem exists in a state of constant anxiety. Fraudulent activity costs the industry billions of dollars a year – but what really keeps agencies and tech platforms up at night is the fear of being found out. If a customer discovers that their budget has been wasted running ads on fraudulent sites, not only might they demand their money back, but they could also take their business elsewhere.

This climate of fear has led to endemic technology alignment within the industry, with ad verification processes mirrored along the entire supply chain. Rather than trying to identify and stop fraud before it reaches the client, ad tech middlemen have built verification setups that mimic their clients' so closely that they are as good as useless, in the hope of avoiding responsibility for wasted budget when fraud is detected. That hope is sorely misplaced.

***

The fear within the industry has grown out of a lack of trust. From the brands downwards, everybody uses verification tools because they can't be sure that their partners aren't supplying them with traffic that is at least partially tainted by fraud. The question is whether they, and crucially the end customer, can prove it one way or the other, which of course comes down to the measurement tools they're using. For many players in the ecosystem, the argument goes that if they use the same verification vendor as their customers, then both parties can agree on what constitutes legitimate traffic at any given time.

However, this approach presents problems of its own. For a start, not every brand uses the same verification vendor, so in order to align themselves with their customers, agencies may find themselves running multiple measurement tools depending on the campaign. That can cause all kinds of issues further down the supply chain, with one tool declaring an ad impression legitimate while another flags it as fraudulent. Traffic that you're happy to pay a publisher for during campaign X may become a cause of dispute during campaign Y. The agency then faces the quandary of whether to tell customer X that traffic its own vendor considers legitimate is, according to customer Y's vendor, in fact fraudulent.

This is head-spinning enough, but the bigger problem is that mimicking your customers' verification processes changes nothing once both your dashboard and theirs start flashing red at a newly identified type of fraudulent activity: the tech platform is still ultimately responsible for having served that bad traffic to the customer, and there's every chance it was being served for many months before detection. You may have found out about it at the same time as your customer, but pleading ignorance isn't a particularly good defence.

Ultimately, it doesn't matter that neither party knew about the fraud until it was revealed – the customer expects their platform to be ahead of the game and to weed out fraud before it becomes a major issue. In this scenario, using the same verification vendor merely results in the platform incriminating itself, opening it up to a significant chargeback demand.

***

Technology alignment doesn't work in every industry. While it can be a boon for integration between disparate parties, it neither provides a failsafe if the technology turns out to be faulty nor allows innovation to flourish. The old adage that 'nobody ever got sacked for buying IBM/Cisco/Microsoft/Nokia' has long since been proved false, and traffic verification is another sector where simply sticking with the incumbent vendor is often a bad idea.

So what can agencies and tech platforms do to protect themselves against serving bad traffic to their customers? Using the same tools as the customer is not only a bad idea in itself; it also breeds complacency about putting proper verification processes in place. Agencies and platforms need to respond to threats more quickly rather than discover them at the same time as their customers. Cutting off bad (but still lucrative) traffic streams ahead of time might feel like a waste, but being able to tell a customer that you've already blocked fraudulent activity they've only just found out about is an excellent way of building your reputation as a trustworthy supplier. By proactively using best-in-class tools, platforms should aim to build a protected layer of publishers and channels that they know are supplying clean traffic.


Written by Asaf Greiner, CEO, Protected Media.  

