Misleading Ads, Fake News & The Platforms That Profit From Them

Much has been in the news lately about the proliferation of “fake news” and its impact on society. Ad platforms and networks, of which Google and Facebook are the two behemoths, make the vast majority of their revenue from people placing ads and others clicking on those ads all around the web.

I saw two pieces shared on Twitter today talking about different parts of this complex puzzle:

From the NY Times: “Google Serves Fake News Ads in an Unlikely Place: Fact-Checking Sites”

From Bloomberg Business: “Facebook and Google Helped Anti-Refugee Campaign in Swing States”

Both of these pieces are troubling.

Ad & Site Network Quality Is Largely Unchecked In Any Meaningful Way

Anyone who works in digital advertising already knows this. There are certain automatic triggers that will get your ads disapproved in a heartbeat – advertising goods or services from certain prohibited categories, using trademarked terms or being too personal on healthcare are just a few examples. These rules often catch things that should be flagged, and sometimes flag things that should not be, which can usually be approved and run after a manual review. The larger problem on the platforms is that they do not want to be in the business of verifying truth in advertising or identifying nefarious intent. This is not a new philosophy for tech companies – since their inception, they have argued that their position is merely to make information available, not to police or censor it in any way. And they have largely gotten away with maintaining that position, at least until recently.

I have long argued that a significant percentage of the sites within the Google Display Network are of questionable quality. Having spent many, many hours combing through placement reports, I can testify to this personally. There are a lot of terrible sites out there. And there are a lot that are of questionable intent. Up until this point, the standard for sites being allowed to run GDN ads has seemed pretty low – again, based on my own personal viewing of sites that come up in placement reports. If someone reports a site, it might be dropped from the network, but it does not seem like there has been an aggressive effort on the part of Google to cull sites from their distribution network.

Google (along with Facebook) seems to have a similar disinterest in whether or not the ads that run on their platforms or networks are clickbait or misleading. Sure, they might have policies in place that prohibit such practices, but they are absolutely not employing active measures to root out these kinds of ads from their networks – at least not at present in a way that deals with the issue effectively or broadly. I see ads all the time for things that I know are false and are designed simply to send visitors to the advertiser’s web site, which then earns revenue from the Google ads running on that site (like the claim that Joanna Gaines of HGTV’s Fixer Upper fame was leaving her company to start a skincare line – completely false and advertised everywhere for months and months, as one example). Google, in this case, makes money on both sides of the equation – from the questionable site running the ads to get people to their site and then again from the ads running on the site. Doesn’t exactly put them in a position to be terribly anxious to significantly review that ecosystem, does it?

The NY Times story highlights a particular irony in that these types of deceptive ads were running on web sites whose purpose is to debunk untrue things, such as Snopes and Politifact. Yikes.

I am anxious to see how this all shakes out, as Facebook recently announced the hiring of thousands to manually review ads. Every step they take to be more active in verifying things and weeding things out increases their liability. I’m not a lawyer, but this is simply common sense. The “we are just a network or conduit” position is designed specifically to absolve them of any responsibility for what appears on or within said network or platform. If you start taking responsibility for some of it, it seems like the proverbial slippery slope.

Don’t get me wrong, this is a slope they need to navigate…

Now The Second Story, Which Is Even Worse For The Platforms

The Bloomberg Business story talks about how Google and Facebook actually and literally helped organizations target people with ads that are pretty disturbing in the larger picture and that certainly seem to fall into a category most people would characterize as “wrong”. This particular example had to do with targeting specific voters in specific swing states to try to move the needle with inflammatory anti-refugee advertising. And this is only one example – others have been in the news over the past few months as well.

It is really not a good look for Facebook and Google to not only allow this type of very manipulative advertising, but to offer what seems like pretty significant account optimization strategy and support. I think most people view it differently when private firms or consultants do this type of work versus the advice, strategy and significant support described in the article coming from the platforms themselves. It is harder to make the “we are just the platform and people will sometimes push our rules to their breaking point” or “sometimes we miss stuff, but we are trying to get better” arguments when you are actively participating in stuff like this, as the Bloomberg Business article, and others recently, have stated.

Facebook has also come under fire recently for allowing ads to be targeted to people who self-identified as “jew haters”. When that story came to light, Facebook quickly removed the entire tree of targeting options that allowed this heinous one to exist. But it is indicative of a larger problem within the platforms themselves.

The bigger question, though, is how are these things allowed to not just exist, but persist and even thrive? I think this is a question the major platforms are going to have to answer in the very near future. Much like the first examples, the hands-off, laissez-faire way of doing things seems like a poor path forward now that the ugly potential has been exposed. From a strictly PR perspective, it is – as they say – “not a good look” for either Google or Facebook to not address this stuff head on.

Another piece on this topic from TechCrunch just came into my Twitter feed as I was finishing this post: “Facebook and Google competed for anti-immigration ad dollars during the 2016 election”

How Can This Situation Be Improved?

I don’t want to oversimplify here, but something has to start somewhere. My first suggestion would be to bring in people who have skillsets beyond computer programming and development and let them dig deeply into your systems and provide insight into how these things can happen within the current framework. I have another post sitting in my drafts right now that talks about the importance of having people with different skillsets working in organizations. This is a prime example. The majority of people who work for Google and Facebook come from technical backgrounds. This makes them amazing at the core of the work they do – developing and growing huge platforms and networks. It does not make them good at seeing beyond that perspective.

To be fair, most people are at least somewhat myopic when it comes to how they view things based on their background and perspective. They will typically approach something from their familiar vantage point. The problem arises when there is only one or maybe two vantage points represented in any meaningful way in an organization. I can see this pretty clearly looking from the outside into a Google or Facebook. I have worked with developers for around 15 years and to generalize some of my experiences with these types of experts – they tend to be very literal and linear in their thinking and perception of things. This makes them very good at what they do, which is building things to perform the desired functions. What they have not been great at, on the projects in which I have been personally involved, is getting outside of that vantage point to even think of things from a human perspective. They don’t think about who will be using their stuff and how they might want or need to use it. They tend to think about the how of getting a piece of software to do X, Y & Z and do it completely and correctly.

I’ve seen it where developers (again, the majority of people working at Google and Facebook) work on web sites and are tunnel-visioned in their tasks. They literally don’t even think about how what they are doing might impact some other aspect for either the site owner (such as SEO or the ability to track web site visitor activity via Analytics) or end users (like whether they can complete a purchase on their phone). It takes someone like me, or someone at the client, to bring up these types of issues at some point in the development process. Otherwise, they would not even be thought of.

I think there is hope that this can get better, but it will take a pretty Herculean effort and a willingness of the platforms to potentially lose revenue in a more permanent way by actually addressing some of these issues. It would be better for their long-term viability to seriously address this, but Wall Street typically does not reward this type of proactive behavior, so time will tell. In the meantime, it would seem the heat is on, at least for now…

What do you think about all this? Love to hear your two cents! As always, sound off in the comments or hit me up on Twitter (@NeptuneMoon).