Brands are increasingly preoccupied with the issue of brand safety. When advertising was a more manual process, conducted solely in print publications, on billboards, and on television screens, this wasn’t much of an issue: brands dictated where their ads would go and didn’t have to worry about their content showing up in places they deemed improper.

Fast forward to the internet age and the rise of programmatic giants like Facebook and Google, and brands find themselves in a very different situation. Instead of retaining complete editorial control, many brands have ceded ad placement to algorithms and other automated programmatic methods. As a result, some have found their ads in inappropriate places -- next to videos promoting not only terrorism and anti-Semitism, but also more commonplace subjects a brand might not want to be associated with, such as drug use, violence, or even politics. To protect themselves from being linked to this kind of content, brands need to enlist technology that prevents their campaign assets from showing up in the wrong places. Recent advances in voice and image recognition can help by enabling better content filtering and stronger safeguards.

Image recognition is already being used by some social media platforms to filter out inappropriate content. Facebook began testing AI to flag inappropriate live videos last year, and recently announced that it was applying an AI algorithm to root out and delete extremist content on the platform. Google, too, has been using AI to find “objectionable content” and make it easier for advertisers to manage and review their ads. While these are steps in the right direction, brands shouldn’t rely on the platforms alone to solve the problem; they must also take steps themselves to safeguard their brand values.

What brands need is a cross-platform solution that goes deeper than anything any single platform does today: one that relies on voice recognition, image recognition, and other machine-learning models built specifically to protect brands. These tools must be able to identify not only universally objectionable content (such as extremist propaganda) but also content that will adversely affect a specific brand -- whether because it carries negative sentiment, mentions competing brands, or otherwise includes objects and themes that are unrelated or contrary to the brand’s message. With the advent of deep learning, AI can now evaluate these signals reliably and at scale.
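In practice, such a filter might combine frame-level image labels, a speech-to-text transcript, and a sentiment score into a single placement decision. The sketch below is illustrative only: the BrandPolicy structure, category names, and thresholds are assumptions, and the label and sentiment inputs stand in for the output of real image-recognition and speech-to-text models.

```python
# A minimal sketch of a cross-platform brand-safety check. The frame labels
# and sentiment score passed in here would come from upstream image-recognition
# and speech-to-text models; everything named below is an illustrative assumption.
from dataclasses import dataclass, field

# Content no brand wants to appear next to, regardless of its own values.
UNIVERSAL_BLOCKLIST = {"extremist_propaganda", "graphic_violence", "hate_speech"}

@dataclass
class BrandPolicy:
    blocked_categories: set = field(default_factory=set)  # brand-specific topics to avoid
    competitor_names: set = field(default_factory=set)    # rival brands mentioned in audio
    min_sentiment: float = -0.2                           # reject strongly negative content

def is_safe_placement(frame_labels, transcript, transcript_sentiment, policy):
    """Return True only if the video passes both universal and brand-specific filters."""
    labels = set(frame_labels)
    # 1. Universally objectionable imagery (e.g. extremist propaganda).
    if labels & UNIVERSAL_BLOCKLIST:
        return False
    # 2. Brand-specific categories (e.g. drug use, politics).
    if labels & policy.blocked_categories:
        return False
    # 3. Competitor mentions detected in the spoken transcript.
    words = set(transcript.lower().split())
    if words & {name.lower() for name in policy.competitor_names}:
        return False
    # 4. Overall sentiment of the spoken content.
    if transcript_sentiment < policy.min_sentiment:
        return False
    return True

# Example: a brand that avoids politics, drug use, and a hypothetical rival.
policy = BrandPolicy(
    blocked_categories={"politics", "drug_use"},
    competitor_names={"RivalCo"},
)
print(is_safe_placement(
    frame_labels=["sports", "running"],
    transcript="great marathon training tips",
    transcript_sentiment=0.6,
    policy=policy,
))  # True: the slot passes every filter
```

The key design point is that the universal blocklist and the brand-specific policy are separate layers, so the same recognition models can serve many brands with different values.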

These algorithms and AI technologies can be more than just a defense against inappropriate content. To use image and speech recognition effectively, brands have to understand not only what content people might find offensive but also what content they want to be associated with -- which means knowing their brand values and target audience. A brand that simply attaches itself to the most popular content at any given time will find it harder to identify the audience most receptive to its ads, and therefore harder to narrow down the parameters for its content filters. Conversely, a brand that understands its audience well can ensure that the content it messages against is the most relevant for its target consumers.

To give an example, say you’re a sportswear company looking to advertise on YouTube. As popular as cat videos are, they’re not really the right fit for your brand: some cat-video viewers might be convinced to buy your exercise pants, but there’s no direct link between the two. If, instead, you set up an algorithm that uses image and speech recognition to ensure your ads are placed only next to sports videos, you could do a kind of hyper-targeting without relying on Google or Facebook to place your ads for you.
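As a rough sketch of what that whitelist-style approach could look like, the snippet below bids on an ad slot only when a video’s image labels and transcript match an allowed topic. The classify_video helper and the topic list are hypothetical stand-ins for real recognition models and a real brand’s targeting rules.

```python
# A hedged sketch of whitelist targeting: rather than only blocking bad content,
# place ads exclusively next to topics that fit the brand. The topic list and
# the toy classifier below are illustrative assumptions.
ALLOWED_TOPICS = {"sports", "fitness", "running"}  # hypothetical sportswear whitelist

def classify_video(frame_labels, transcript):
    """Toy stand-in: in practice, image and speech recognition models
    would produce these topic signals from the video itself."""
    return set(frame_labels) | set(transcript.lower().split())

def should_bid(frame_labels, transcript):
    """Bid on the ad slot only if the video clearly matches an allowed topic."""
    return bool(classify_video(frame_labels, transcript) & ALLOWED_TOPICS)

print(should_bid(["cat", "couch"], "funny cat compilation"))       # False: skip it
print(should_bid(["running", "track"], "marathon training tips"))  # True: place the ad
```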

Ultimately, it’s about understanding. Brands need to understand the content they’re messaging against, but they also need to understand their own content. By doing so, and by using speech and image recognition to build better mechanisms that prevent their ads from appearing next to questionable content, brands can protect themselves and reach their target audiences more effectively. Artificial intelligence will not only make brands’ content safer; it will also make brands themselves safer, by reducing their reliance on third parties and giving them control over their own message.


By Brunno Attorre, CTO and co-founder of Uru

