Facebook’s Broken Censorship Machine

Parker Thompson
5 min read · Aug 8, 2018

We seem to be having a debate about whether Facebook (et al.) should censor speech without acknowledging that they already do.

Facebook has built the most effective censorship machine in the history of human civilization. Every day it filters through billions of pieces of content and decides exactly which ones each of its users will and will not see. It shapes the way we think, the relationships we have, who we vote for, and what we believe to be true.

The real question is not whether they should be censoring content, as the (allegedly) failing New York Times would have you ask; it is whether Facebook’s censorship algorithms are working in the best interest of their users and society.

And the reasonable follow-up question is: how might Big Tech (to use Mr. French’s term) improve its algorithms to censor less of the content that’s better for users and society, and consequently (because users only have so much time), what should it de-prioritize (censor)?

Let’s agree that the question of more or fewer cat videos is not worth discussing, and, while it’s relevant to user happiness, let’s set aside inflammatory posts from your jerk cousin and focus on ostensibly political speech, which seems to be what all the fuss is about.

Returning to today’s New York Times piece, I don’t find Mr. French’s argument that Big Tech should “not censor” content to make sense at all. What would that even mean? Would we ban the news feed entirely? I don’t think users would like that.

It’s time we (Big and Small Tech) publicly debate the fitness function (nerds love these) for the newsfeed (and other recommendation algorithms that are commonly hacked), and critically assess whether these companies (we) are doing an effective job prioritizing content that is at least not at odds with this function and, yes, pushing content that is at odds with it out of the scarce screen space (even if it’s just to make room for more cats).
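To make the idea concrete, here is a minimal sketch of what such a fitness function might look like. The signal names (engagement, integrity_score, public_interest) and the weights are hypothetical, chosen only to illustrate that a feed’s objective is an explicit, debatable choice rather than a neutral fact; nothing here reflects how Facebook’s ranking actually works.

```python
# A hypothetical newsfeed fitness function. The signals and weights are
# illustrative assumptions for the sake of discussion, not real ranking code.

def feed_fitness(post, weights):
    """Return a ranking score for one candidate post.

    Each post is assumed to expose a few per-item signals in [0, 1]:
      engagement      - predicted clicks/reactions/shares
      integrity_score - likelihood the content is not deceptive
      public_interest - editorial/public-good value
    """
    return (
        weights["engagement"] * post["engagement"]
        + weights["integrity"] * post["integrity_score"]
        + weights["public_interest"] * post["public_interest"]
    )


# The debate this post calls for is, in effect, a debate about these numbers.
weights = {"engagement": 0.5, "integrity": 0.3, "public_interest": 0.2}

candidates = [
    {"id": "cat_video",    "engagement": 0.90, "integrity_score": 1.0, "public_interest": 0.1},
    {"id": "local_news",   "engagement": 0.40, "integrity_score": 0.9, "public_interest": 0.8},
    {"id": "outrage_bait", "engagement": 0.95, "integrity_score": 0.2, "public_interest": 0.1},
]

ranked = sorted(candidates, key=lambda p: feed_fitness(p, weights), reverse=True)
print([p["id"] for p in ranked])
```

Change the weights and the ordering changes; that is the whole point. The choice is being made either way.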

I’d be genuinely curious to hear what algorithm Mr. French would suggest to produce desirable social outcomes. What would he prioritize in our information diets, given that “freedom” is a non-answer and the slippery slope he implies is a poor excuse for inaction? But since I don’t expect he reads my little blog, let me take a whack at it.

In my estimation, the primary problem with respect to “news” caused by Big Tech recommendation algorithms is that they tend to bias towards commercially-oriented propaganda and away from information that is generally considered to be in the public good, such as directly political speech or high-quality, ethically executed journalism.

I made this dumb graphic to frame the issue in a way I think might be constructive to think and talk about, given where the companies are today in terms of their comfort level with action on this topic.

Mr. French et al. are in practice arguing that private businesses should make the moral decision to “not censor” (i.e., prioritize) the most extreme commercially-oriented and willfully deceptive content (that happens to be about politicians and political topics). I would argue that is an extreme (and not particularly conservative) cause that does not serve the public interest, but it is an easy position to take if the current propaganda environment ostensibly serves your tribe’s interests (we’ll see long-term).

It is time reasonable people (especially those who work for Big Tech) acknowledged the difference between intentionally deceptive propaganda operations that hack these platforms for their own financial gain at the expense of society and those that operate ethically (please read this page) in the public interest. And it’s time we stopped taking seriously people who do not approach this specific discussion in good faith.

We should not be debating whether the content in the extreme lower right of this graphic should be promoted by Big Tech as some sort of perverse social obligation, and the fact that we are is a testament to the influence of those who operate within it.

There is no slippery slope.

We’re not even talking about the propaganda elephant in the room (FB’s least censored publisher, FWIW).

It speaks volumes about where Big Tech is on algorithm design that the debate is happening around Infowars, as opposed to how thousands of engineers and infinite data could design algorithms that approximate what one person created in PowerPoint.

Caveat: some would argue propaganda is nothing new and society will be fine. I am unsure if that’s correct or whether it’s “worse than it’s ever been,” but would argue this is irrelevant and primarily another way of justifying inaction.

If there’s room for improvement, then it can be made better; that itself is a worthy goal.

Caveat #2: a related objection I’d like to preempt is that these are subjective questions and these platforms should try to stay neutral. Mr. French makes a version of this argument.

As the title of this post suggests, neutrality is not an option; it just cedes the decision to those most capable of hacking the algorithm.

As to the question of whether trying to solve the problem could lead to mistakes: sure, let’s admit that it could. But let’s also admit that if we approach this task as a process, take steps to be transparent, and consult broadly but act by fiat (as editors ultimately do), we are much more likely to make the situation better than worse. And if we make it worse, it’s just code, and easy to fix with stable infrastructure.

From my perspective, Big Tech needs to acknowledge that there is a spectrum of media, score this media, and feed these scores into its censorship algorithms along with the other data that determines what it censors/promotes.

Some media (e.g., journalism, candidates for office) should probably score a bit higher if the goal is to have happier users and to promote healthy democracy. Some should probably score a bit lower because it runs counter to these values.
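As a sketch of what “feed these scores into the algorithm” could mean in practice, here is a hedged example. The media tiers, their scores, and the blending weight are all invented for illustration; this is not a claim about how any platform actually scores publishers.

```python
# Hypothetical example of blending a media-quality score into an existing
# ranking score. The tiers and numbers are assumptions made up for this post.

MEDIA_SCORES = {
    "ethical_journalism": 0.9,    # outlets operating under a published ethics code
    "candidate_for_office": 0.8,  # direct political speech
    "entertainment": 0.5,         # cat videos: neutral
    "deceptive_propaganda": 0.1,  # intentionally false, commercially motivated
}

def adjusted_score(base_score, media_type, quality_weight=0.3):
    """Blend the feed's existing score with a public-interest score.

    base_score: whatever the current engagement-driven model produces (0..1)
    media_type: a label from MEDIA_SCORES
    quality_weight: how much the public-interest signal counts
    """
    quality = MEDIA_SCORES.get(media_type, 0.5)  # unknown types stay neutral
    return (1 - quality_weight) * base_score + quality_weight * quality

print(adjusted_score(0.95, "deceptive_propaganda"))  # high engagement, demoted
print(adjusted_score(0.40, "ethical_journalism"))    # modest engagement, boosted
```

The hard part, obviously, is who assigns the tiers and how transparently; the arithmetic is trivial.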

Caveat #3: please don’t tell journalists what you can make by convincing troubled people to go into pizza parlors with guns. We’ll be so screwed if they find out.

To date, Big Tech (and frankly, Big Media, self-interested politicians, etc.) has tried to treat these types of media as equivalent, with the rationale that the distinction I’ve made is subjective and that it could not possibly bear the weight of being responsible for making this distinction for its users.

While I genuinely respect what I see as earnest humility (it will be necessary in getting this right) from Big Tech executives on this particular topic, what we’ve seen from these companies has been inadequate. Good intentions only get you so far, and goodwill only lasts so long.

Personally, I drafted this post in November 2016 and my patience has run out.

My perception is that the propagandists are winning because their fitness function is to generate content that turns Big Tech’s algorithms into profit regardless of the externalities, and that there’s an uncomfortably high chance that we won’t return to a desirable democratic equilibrium. I worry Big Tech still doesn’t get it.

Perhaps our (tech) media environment does not represent a threat to democracy (I acknowledge it’s hard to tell in the moment), but if it does, democracy doesn’t die in darkness; it dies in code.

Is it your code?
