Here’s how Facebook determines what posts are ‘harmful’ and should be removed

A leaked internal Facebook document detailing exactly how often the company restores posts is causing an uproar on social media, with many users claiming the system tilts too far toward censorship.

The report, obtained by BuzzFeed, shows that Facebook currently reviews posts for accuracy by using an algorithm to determine whether a post is “harmful” to a community member’s well-being and whether it lacks context. As part of the algorithm’s determinations, human reviewers look for warning signs, such as a block, a memorialization, or search history, that hint at the questioner’s identity.
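The document does not spell out how this is implemented, but the flow it describes is essentially a two-stage triage: an automated scorer flags a post as potentially harmful or lacking context, and a human reviewer then weighs identity signals before a final decision. A rough sketch of that kind of pipeline is below; the field names, thresholds, and decision logic are illustrative assumptions, not details from the report.

```python
from dataclasses import dataclass

# Hypothetical signals a reviewer might check, per the report's description
# (blocks, memorialization, search history). Names and thresholds are illustrative.
@dataclass
class Post:
    text: str
    harm_score: float        # output of an automated "harmfulness" model
    context_score: float     # how much surrounding context the post carries
    reviewer_signals: dict   # e.g. {"blocked_before": True, "memorialized": False}

HARM_THRESHOLD = 0.7
CONTEXT_THRESHOLD = 0.3

def needs_human_review(post: Post) -> bool:
    """Stage 1: automated triage flags posts that look harmful or context-free."""
    return post.harm_score >= HARM_THRESHOLD or post.context_score <= CONTEXT_THRESHOLD

def human_decision(post: Post) -> str:
    """Stage 2: a reviewer weighs identity signals before removing or restoring."""
    risky = any(post.reviewer_signals.get(k) for k in
                ("blocked_before", "memorialized", "search_history_flag"))
    return "remove" if risky and post.harm_score >= HARM_THRESHOLD else "restore"

post = Post("example post", harm_score=0.8, context_score=0.2,
            reviewer_signals={"blocked_before": True})
if needs_human_review(post):
    print(human_decision(post))   # -> "remove"
```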

The median time it takes to classify content as “harmful” is, unsurprisingly, much longer than the average review time. BuzzFeed’s Wesley Lowery found that he had previously been banned for six days on the coasts and a week in the Midwest before Facebook restored his post.

The average time it takes for a complaint to be determined as “harmful” is 55 days, meaning the company removes a lot of posts well before it restores them, which may be useful information for users who pay closer attention to their posting habits than most. Others point out that there is no clear minimum or maximum period within which a complaint must be resolved: Facebook currently says that low-harm complaints will be addressed within six days, meaning the company is “seriously tolerant” of low-harm comments.
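The leaked figures read like service-level targets keyed to harm level: roughly 55 days on average overall, and a six-day window for low-harm complaints. A small sketch of how such targets could be encoded follows; apart from those two figures, the tier names and day counts are assumptions for illustration only.

```python
from datetime import date, timedelta

# Target resolution windows by harm tier. The six-day low-harm window and the
# 55-day figure come from the report; the "medium" tier is purely hypothetical.
RESOLUTION_TARGET_DAYS = {
    "low": 6,
    "medium": 30,   # assumed
    "high": 55,     # roughly the reported average
}

def resolution_deadline(filed_on: date, harm_tier: str) -> date:
    """Return the date by which a complaint in this tier should be resolved."""
    return filed_on + timedelta(days=RESOLUTION_TARGET_DAYS[harm_tier])

print(resolution_deadline(date(2019, 7, 1), "low"))   # -> 2019-07-07
```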

This process gives Facebook a good deal of control over when it restores a particular post and, if it does, when the service takes another look at it.

This is a strikingly different system from Twitter’s and those of other platforms, which consider each complaint individually and are working to automate their decision-making systems. Twitter recently won the right to take that step.

On the surface, the current process seems to favor Facebook, especially given the social network’s dominance, an advantage its algorithm may be exaggerating.

Balancing a community’s civil rights with an individual’s right to freedom of expression might seem simple, but the data has the potential to reveal a real human struggle: Would you want the government regulating your speech, or would you rather let a private platform censor your posts?

The complaint process establishes a complex hierarchy of importance that determines when Facebook moves to take action. News articles can lose their “zero” rating for 40 days. Videos can stay as they are for one week, but after that they must be removed, even if edited, if the language is strong enough.
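As described, the hierarchy attaches different clocks to different content types: a news article can sit without its “zero” rating for up to 40 days, while a video gets a one-week grace period before mandatory removal if the language is strong enough. A sketch of that kind of type-keyed policy table is below; only the 40-day and one-week figures come from the report, and the structure and the strong-language check are illustrative assumptions.

```python
from datetime import timedelta

# Per-content-type review windows. The 40-day and 7-day figures come from the
# report; the field names and the strong-language rule are illustrative assumptions.
POLICY = {
    "news_article": {"zero_rating_loss": timedelta(days=40)},
    "video":        {"grace_period": timedelta(days=7)},
}

def action_for(content_type: str, age: timedelta, strong_language: bool) -> str:
    """Map a piece of content to an action under the sketched hierarchy."""
    if content_type == "video" and age > POLICY["video"]["grace_period"] and strong_language:
        return "remove"    # removal is mandatory after the grace period, even if edited
    if content_type == "news_article" and age <= POLICY["news_article"]["zero_rating_loss"]:
        return "demote"    # article sits without its "zero" rating for up to 40 days
    return "keep"

print(action_for("video", timedelta(days=10), strong_language=True))   # -> remove
```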

Obviously, most content on Facebook is still ambiguous and may take longer to classify, but it seems clear that, despite some improvement, the computer cannot distinguish between hate speech and hate crime, public records and private communications, political content and political debate, or accurate protest and misinformation.

So if this system is meant to promote democratic oversight, then it’s not much better than what we’re used to.

This story was originally published by The Information.
