Google’s YouTube Content Problem Isn’t Going To Just Go Away

It was huge news last week that a number of very high-profile brands pulled all of their advertising from YouTube. It started with some European companies and governments and quickly spread to US companies. I tweeted after the first story broke that this is a big deal. YouTube has been driving Google’s revenue for a while now – do a search for Google earnings Q4 2016 and you’ll see a slew of stories that include YouTube right in the headlines. It is expected, along with mobile ad units, to be an engine of growth as “traditional” CPC figures have flattened or declined in recent quarters. Forbes has an accessible (understandable by non-financial pros) piece here on the Q4 2016 earnings report.

This is clearly not good news for Google or Alphabet. The PR look is certainly negative – stories of brands’ ads showing alongside terrorist, extremist or hate-speech-filled videos are chilling to the market. And with very high-profile brands opting to pause all of their advertising on the platform unless or until this can be effectively managed, other brands, both large and small, will certainly consider doing the same. And brands not advertising obviously means lower revenue in the short term. But what about the long term?

This is a really complex and really interesting issue. Having been in internet marketing since essentially the beginning, I find this perhaps even more interesting than the average tech-following person does. Google, and to be fair all search engines and platforms, have historically had policies about content on their networks that could essentially be boiled down to something like this – we have terms and conditions for content that is posted or appears on our network, and if something there violates those policies and we are made aware of it, we will review it and, if deemed necessary, take action. They have all worked VERY hard to not be responsible for the content that is shown, hosted, listed or promoted in their ecosystems.

And, I totally get why they have taken this stance. It can be summed up in one tidy little word – liability. For any of these platforms to take a more aggressive stance in policing their content begins to shift responsibility off of content creators and onto them. I am obviously not a lawyer, but you don’t have to be to understand this dynamic. And many are struggling with the consequences of this stance right now. Facebook is “combatting” fake news. Google, after the high-profile stories of the past week, is working to keep advertisers’ ads from showing up next to content those advertisers find horrifying.

The thing is, you can’t really have it both ways – at least not forever. The stats floating around out there about the sheer volume of content uploaded to YouTube daily are staggering. Something like 400 hours of content is uploaded EVERY MINUTE. That is an astounding amount of content. According to Matt Brittin, Google’s president of EMEA business and operations, “nearly 98 percent of removals occur within 24 hours of posting”. You can read more on his statements in this piece on CNBC.
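
Just to put that figure in perspective, here’s a little back-of-the-envelope math. The 400 hours per minute number comes from the stats above; the reviewer assumptions (real-time playback, 8-hour shifts) are mine, purely for illustration:

```python
# Back-of-the-envelope math on YouTube's moderation problem, using the
# ~400 hours-per-minute figure cited above. The reviewer assumptions
# are illustrative guesses, not Google's actual numbers.

UPLOAD_HOURS_PER_MINUTE = 400

hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24   # 576,000 hours/day
hours_per_year = hours_per_day * 365                # ~210 million hours/year

# If a human reviewer watched content in real time for a full 8-hour
# shift, how many reviewers would one day's uploads require?
REVIEW_HOURS_PER_SHIFT = 8
reviewers_needed = hours_per_day / REVIEW_HOURS_PER_SHIFT

print(f"Uploaded per day:  {hours_per_day:,} hours")
print(f"Uploaded per year: {hours_per_year:,} hours")
print(f"Reviewers needed just to watch one day's uploads: "
      f"{reviewers_needed:,.0f}")
```

That works out to roughly 72,000 full-time reviewers just to watch a single day’s uploads once, which is exactly why pure human review on its own was never going to happen.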

Ok, terrific. Content is removed sometimes. But, clearly on all of these networks, content is posted and remains that could be classified as hate speech and/or extremist in nature. What, if anything, should the platforms do about it? I think what they are going to have to do is something they have not wanted to do up until now: hire teams of people to spend more time reviewing and flagging content, rather than just waiting for people to find it and report it. They probably have not done this up until now for two reasons – the cost of having these teams in place and what I would imagine could be a shift in their liability risk if they start being more proactive in content review and categorization or removal.
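
To make that reactive-versus-proactive distinction concrete, here is a toy sketch of the two paths a video could take into a review queue. To be clear, the names, the threshold and the classifier are all hypothetical – this is not how YouTube’s pipeline actually works:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    priority: float                       # lower sorts first = reviewed sooner
    video_id: str = field(compare=False)
    source: str = field(compare=False)    # "user_report" or "classifier_flag"

review_queue = []

def enqueue_user_report(video_id, report_count):
    """Reactive path: a video enters the queue only after viewers report
    it, and more reports mean higher priority."""
    heapq.heappush(review_queue,
                   ReviewItem(1.0 / report_count, video_id, "user_report"))

def enqueue_upload(video_id, risk_score):
    """Proactive path: every upload is scored at ingest, and anything
    above a (made-up) risk threshold goes to a human before anyone
    has reported it."""
    if risk_score >= 0.8:
        heapq.heappush(review_queue,
                       ReviewItem(1.0 - risk_score, video_id,
                                  "classifier_flag"))

enqueue_user_report("abc123", report_count=12)
enqueue_upload("xyz789", risk_score=0.93)
print(heapq.heappop(review_queue))   # highest-priority item comes out first
```

The point of the sketch is simply that the proactive path requires staffing the queue whether or not anyone complains – which is exactly the cost platforms have avoided so far.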

But, I can’t see that they have a whole lot of choice at this point. Whatever the potential legal issues or risk exposure that come with becoming more proactive in content review, if they want to keep those sweet advertising dollars flowing, they need to be able to promise, and deliver on that promise, that brands can be absolutely assured their ads will not show alongside content they find objectionable. This means that content will need to be reviewed, and that certain types of content that are not removed as terms-of-service violations but still fall into the category of extremism or hate speech will have to be clearly tagged in the system, so that advertisers can exclude those categories from their campaigns and have that exclusion reliably block all undesirable content.
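
In code terms, the serving check I’m describing might look something like this toy sketch. The category names and the eligibility logic are my own illustration, not Google’s actual ad-serving system:

```python
# Hypothetical content taxonomy -- videos that violate the ToS are
# removed outright; these tags cover content that stays up but that
# advertisers should be able to opt out of.
EXCLUDABLE_CATEGORIES = {"hate_speech", "extremism", "graphic_violence"}

def can_serve_ad(video_tags, advertiser_exclusions):
    """An ad is eligible only if the video carries none of the
    categories the advertiser has opted out of."""
    return not (set(video_tags) & set(advertiser_exclusions))

# A video tagged as extremist stays on the platform (it wasn't removed),
# but a brand that excludes "extremism" never serves against it.
video_tags = {"news_commentary", "extremism"}
brand_exclusions = {"extremism", "hate_speech"}
print(can_serve_ad(video_tags, brand_exclusions))   # False
```

The hard part, of course, isn’t the check itself – it’s tagging hundreds of millions of hours of video accurately enough that the check can be trusted.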

This isn’t just a YouTube issue either. If you advertise on the Google Display Network, you may have experienced undesirable placement creep despite excluding categories that should have covered those placements. I’m certainly not making excuses for lazy advertising execution. I’m sure there are plenty of cases where the agency or people in charge of accounts simply did not take the time to make sure settings were properly configured, and/or did not monitor what actually happened to be sure that placements a client would find undesirable or unacceptable were not happening, or were stopped as soon as they were spotted. Surely that is responsible for some instances of ads showing up where advertisers absolutely do not want them.
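
For what it’s worth, that kind of monitoring doesn’t have to be fancy. Here’s a rough sketch that audits a downloaded placement report against an exclusion list – the file name and CSV columns are hypothetical stand-ins for whatever your platform actually exports:

```python
import csv

# Hypothetical exclusion list -- in practice this should mirror the
# placement/category exclusions configured in the account itself.
excluded_placements = {"badsite-example1.com", "badsite-example2.com"}

def audit_placements(report_path):
    """Return placements that served impressions despite being excluded."""
    violations = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["placement"] in excluded_placements
                    and int(row["impressions"]) > 0):
                violations.append(row["placement"])
    return violations

for placement in audit_placements("placement_report.csv"):
    print(f"WARNING: ads served on excluded placement: {placement}")
```

Running something like this on a schedule at least catches the gap between what you think you excluded and where your ads actually ran.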

I do not envy what Google and the other networks are going to have to figure out and stay ahead of in the coming months and years on this front. It feels a lot like the issue of spam. Everyone agrees it is a scourge and that something should be done, but the train is so far from the station that catching it is a virtually impossible task at this point. Hopefully for advertising ecosystems, the train is not already out of reach. A lot will depend on the amount and types of resources they are prepared to commit to the issue, and on whether that commitment becomes a permanent part of how they do business.

As someone responsible for clients’ ad placements, I will most certainly be paying close attention to how this all plays out.

What are your thoughts or experiences with this issue? As always, sound off in the comments or hit me up on Twitter (@NeptuneMoon).


Comments

  1. Without a doubt, we can say the world of content continues to evolve, but I would question whether the GDN model has evolved at the same pace. I think you can safely say that the GDN advertising model has stayed fundamentally the same throughout its existence. Sure, there have been useful innovations like remarketing, but all in all, not much is different. I think it would be great if Google could leverage machine learning to identify quality tiers in content and develop bidding models against those tiers. Larger publishers already do this; I have a hard time understanding why Google doesn’t take the leap.

    • Neptune Moon says:

      Thanks for sharing your thoughts!

      I think this is another great place to utilize machine learning, but I still think it is going to take good old-fashioned human review to even hope to catch up with all the content. Hoping for a solid effort, as I do like GDN for advertising!
