The Root Of The Issues With Platforms Like Facebook And Google

I read a very long and really interesting piece this week, written by an early Facebook investor, detailing what, in his view, is wrong with Facebook and what might be done to “fix it”. It hit on a lot of points that I have found myself thinking about over the past couple of years, and I would highly recommend reading it for yourself. I shared it on Twitter and it can be found here.

How Did We Get To This Point?

This is a question a lot of people seem to be asking, especially in light of the events of the past year. I would argue that a combination of factors has led us here, with “here” being a place where a couple of enormous corporations have a tremendous amount of control over what we see, how we see it, and when we see it.

I think the answer is actually fairly simple. I have worked in tech and with programmers and developers for nearly 20 years now. I have had a wide range of experiences with quite a number of individuals and firms in this space. I certainly do not know every programmer or developer everywhere, nor do I personally know the people who originally conceived and built Facebook or Google or the people who continue to work on them today. I am speaking merely from my experience and positing a scenario that feels likely to me.

Google and Facebook were built to be something. As their founders were creating them out of nothing, they had a very particular set of goals in mind as to what the core functionalities of their platforms would and should be, and they set about building their platforms to achieve those goals. Programmers and developers are typically great at this kind of task – give them something that needs to be built and they find a way to build it, to make it, to bring it to life. It is pretty heady stuff when you’re in the middle of it. Figuring out how to get something you’ve only conceptualized to become something that actually exists and functions as you intended is a pretty amazing feat.

Here is where I think the problems originate for platforms like Google and Facebook: their leadership is composed largely of people with a programming/developer background and mindset. These types of thinkers are generally not the ones to consider the nuances of a platform’s capabilities, or to ask whether something they conceived and built could be used in ways other than what they envisioned. They have a particular way of looking at things, a way that makes them extraordinarily good at their core functions, but it is so often sorely lacking in even the most basic “big picture” thinking. They also often struggle to view things from a human experience point of view. The question should not just be “can we do X?” but should be closely followed by “should we do X?” and “what might the unintended consequences of doing X be?”. We need more voices that raise these types of questions in the mix at tech companies – we just do.

I have seen this personally on projects at a tremendously smaller scale: when developing functionality for a web site, I have found that developers will give you exactly what you asked for and generally don’t think about what they are doing in any context other than “does this do X properly”. They are not usability thinkers – they will usually not be the ones to tell you that your process is missing a critical step that actual users will want, for example.

Facebook in particular (although Google suffers from this same malady, particularly with YouTube) seems to have a real lack of understanding about how it impacts the world. They also appear, at least so far, to have little interest in taking major responsibility for the reality of their tremendous influence on people’s lives.

To be fair, I understand to a point why platforms have to date taken the “we are just a platform and we are not responsible for what people do with it” stance. From a legal perspective (disclaimer – I am not a lawyer) it makes sense: if you start taking responsibility for what happens in the space you created, I would assume you open yourself up to a lot more liability than you do by taking the current stance. But at some point, something is going to have to change in this regard. Twitter has been facing all kinds of heat for selectively enforcing its terms of service regarding harassment. Facebook is facing major scrutiny for content spread in advance of our last elections and for allowing advertisers to target users based on characteristics most regular people would find appalling. Google has been taking a lot of criticism over a variety of YouTube scandals in the past year as well.

We Need To Create Some Kind Of Checks On Algorithms

The biggest problem with algorithms operating in real time is that they (at least at this juncture) respond to what is popular, with little to no regard for what the actual content is. It is the reason that whacked out results can sit at the top of a Google search for a real time event. It is the reason that highly offensive material bubbles right up to the top of content lists where more and more people see it: content that elicits strong reactions gets interacted with a ton, and to algorithms that seems to equal “something that should be elevated so more and more people can see it”. Algorithms are not built to interpret any kind of nuance, and that is a big problem.
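
To make that concrete, here is a deliberately oversimplified sketch, in Python, of what ranking by engagement alone looks like. This is not any platform’s actual code, and the posts and scoring weights are invented; the point is only that a score built purely from reactions and shares will always push the most provocative content to the top, no matter what that content is.

```python
# A toy sketch of "rank by engagement alone". Not any platform's actual
# ranking code; the posts, fields, and weights are invented for illustration.

posts = [
    {"title": "Local charity hits its fundraising goal", "reactions": 120, "shares": 15},
    {"title": "Inflammatory conspiracy post", "reactions": 9800, "shares": 4200},
    {"title": "City council meeting recap", "reactions": 45, "shares": 3},
]

def engagement_score(post):
    # The score only measures how much people interacted with the post.
    # Nothing here looks at what the post actually says.
    return post["reactions"] + 2 * post["shares"]

# Sort the feed purely by engagement, highest first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["title"])

# The most provocative post lands on top simply because it provoked the
# most interaction. The ranker has no concept of whether that is a good outcome.
```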

I know, this can quickly slide into the inevitable “slippery slope” territory, and I understand that concern. But I also think that just letting things be as they are is not tenable. Facebook took some steps last year, after information came to light about Russian use of its platform to influence voters in 2016, hiring more humans to review content and try to stem the tide of this type of propaganda finding mass distribution. It is a step in the right direction.

YouTube needs to do the same if it wants brands to continue to advertise on it, and frankly, if it wants parents to continue to let kids consume content on it – a gigantic source of revenue for YouTube, since many of the highest paid YouTube content producers create content aimed at kids. The amount of weird, scary, and disturbing stuff that is out there for kids to easily stumble upon – even on the supposedly safer YouTube Kids version of the platform – is staggering. I have seen that firsthand with a young child who likes to watch things on YouTube Kids. It is an issue that needs some type of solution. I am not a fan of censorship, but I also think that people should be able to more easily make informed choices about the content they, or their children, are exposed to on a platform. The “if you don’t like it, don’t watch it” argument is actually generally ok by me (with the exception of hate fueled content, violent content, etc. – the usual places most human beings would draw lines). The problem is that right now it is WAY too easy to unknowingly stumble upon things that are highly objectionable, with no quality way to filter them out of a personal viewing experience.

I find the idea of censorship abhorrent, and I don’t want platforms to wade too far into that territory beyond the things that are supposedly already against their terms of service. What I think probably needs to happen going forward is much better categorization of content, most likely with human involvement, so that users can have more control over what they are shown. If a platform literally entitled YouTube Kids can’t keep clearly offensive material off of it, then something needs to change.
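
As a rough illustration of the kind of control I mean, here is a minimal Python sketch of filtering a feed against user-chosen category blocks. The categories and video data are hypothetical, and the genuinely hard part, categorizing content accurately at scale, is exactly where the human involvement comes in.

```python
# A minimal sketch of user-controlled filtering on top of categorized
# content. The category labels and video data here are hypothetical.

videos = [
    {"title": "Counting songs for toddlers", "categories": {"kids", "education"}},
    {"title": "Creepy knock-off cartoon", "categories": {"kids", "disturbing"}},
    {"title": "Science experiments to try at home", "categories": {"kids", "education"}},
]

def filtered_feed(videos, blocked):
    # Keep only items whose categories do not overlap the viewer's block list.
    return [v for v in videos if not (v["categories"] & blocked)]

# A parent's setting: never show anything tagged "disturbing".
for video in filtered_feed(videos, blocked={"disturbing"}):
    print(video["title"])
```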

What do you think? How should platforms try to address this stuff? As always, sound off in the comments or hit me up on Twitter (@NeptuneMoon).
