The success of platforms like Facebook and Twitter comes down to audiences' natural tendency to congregate around social and visual content. Earlier this month, the power of social media as a news driver became apparent when Facebook leapfrogged Google as the top source of online news traffic (Parse.ly).

There’s no doubt that social networks are a goldmine for advertisers. Both Facebook and Twitter have recently introduced autoplay videos, a format that has become increasingly popular with advertisers because of the immediate attention it attracts. Companies can also charge higher ad rates on the format, making it a major money-spinner.

However, the dangers of this kind of autoplay video, without sufficient monitoring of the content, became shockingly apparent yesterday (Wednesday 26th) when footage of two journalists being shot and killed in Virginia appeared in the timelines of many social media users across the social networks.

The footage circulated quickly on social media, finding its way onto Facebook and Twitter in autoplay mode. The distress was heightened not only because the harrowing video was repeated over and over again, but also because viewers were unable to stop it. Understandably, this resulted in a social backlash.

With so much video content being uploaded to social channels daily, including 300 hours' worth of video on YouTube every minute, this kind of problem is going to become more prevalent unless firm action is taken to monitor video content effectively, particularly with mobile live streaming poised to become the next consumer phenomenon.

The problem is that most verification tools rely on the uploader or on metadata to moderate content, rather than analysing the content itself, making it difficult to validate user-generated content as safe with sufficient accuracy. However, technologies are now available that can ‘read’ and classify video content, as well as automate and improve the whole process of placing video ads.

Introducing an automated solution on upload that combines visual recognition technology with brand safety criteria provides a comprehensive understanding of the visual content. This removes the threat of an unsafe video going live before it is spotted by a moderator, the web audience or a brand manager. It is not only a safer solution but, at scale, a more efficient one, as there is no need to bring in teams of human moderators.
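To make the idea concrete, here is a minimal sketch of such an upload-time gate. Everything in it is hypothetical: the label names, the brand-safety list and the `classify_frames` stand-in are illustrative assumptions, not WeSEE's actual system. A real deployment would run a visual recognition model over the frames; here the frames arrive pre-labelled purely for demonstration.

```python
# Hypothetical brand-safety list; label names are illustrative assumptions.
UNSAFE_LABELS = {"graphic_violence", "weapons", "adult"}


def classify_frames(frames):
    """Stand-in for a visual recognition model.

    A real model would infer labels from pixel data; here each 'frame'
    is a dict carrying pre-assigned labels for demonstration purposes.
    Returns the set of all labels seen across the video's frames.
    """
    labels = set()
    for frame in frames:
        labels.update(frame.get("labels", []))
    return labels


def is_brand_safe(frames, unsafe_labels=UNSAFE_LABELS):
    """Gate applied at upload time, before the video reaches any timeline.

    The video is admitted only if none of its frame labels intersect
    the unsafe set.
    """
    return classify_frames(frames).isdisjoint(unsafe_labels)


if __name__ == "__main__":
    safe_video = [{"labels": ["outdoors", "people"]}]
    unsafe_video = [{"labels": ["outdoors"]}, {"labels": ["weapons"]}]
    print(is_brand_safe(safe_video))    # admitted
    print(is_brand_safe(unsafe_video))  # blocked before publication
```

The point of the design is where the check sits: classification happens synchronously on upload, so an unsafe video is rejected before it can autoplay in anyone's timeline, rather than being pulled down after the fact.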

This will not help those people left traumatised by Wednesday's tragic event, but if implemented it would help prevent similar footage from spreading in the future.

 

By Adrian Moxley, Co-Founder and Chief Visionary Officer at WeSEE. 



