Facebook is still struggling with the difference between hate speech and censorship


On Nov. 11, thousands of people marched in the streets of Warsaw, Poland, to celebrate the country’s Independence Day. The march attracted racist and neo-fascist groups as well as individuals from all over Europe emboldened by the global rise of the far right. International news was flooded with images of the more menacing attendees: young men bearing signs that proclaimed white supremacy, engulfed in a sea of red flares and smoke.

One collection of such images, published on Facebook by a renowned photojournalist in Poland, was taken down by the social network's content moderators, once again raising questions about censorship and the platform's confusing policies on hate speech.

Chris Niedenthal, a family friend of mine, attended the march to practice his craft, not to participate, and posted his photos on Nov. 12, the day after the march. Facebook took them down. He posted them again the next day. Facebook took them down again on Nov. 14.

Niedenthal himself was also blocked from Facebook for 24 hours. “I was, quite naturally, furious when Facebook first deleted my post, and censorship immediately came into mind,” he said. “More important, I felt it was censorship for the wrong reason: A legitimate professional journalist or photojournalist should not be ‘punished’ for doing his duty.”

The images’ disappearance spurred multiple news articles and outrage in Poland, where official censorship reigned for decades under Communist rule. Some Facebook users speculated that the platform was helping quash unflattering portrayals of the march.

Niedenthal’s images showed a young woman, a child, and an older person among the participants. But some of the most striking were those of young men covering their faces with masks and scarves that bore the insignia of nationalist organizations, soccer hooligan groups, and the Celtic cross, often a symbol of white supremacy. Gazes intense, fists raised.

Facebook told Quartz that the photos, because they contained hate speech symbols, were taken down for violating the platform’s “community standards” policy barring content that shows support for hate groups. The captions on the photos were “neutral,” so Facebook’s moderators could not tell whether the person posting them supported, opposed, or was indifferent to hate groups, a spokesperson said. Content that condemns or merely documents events can remain up, but content interpreted as showing support for hate groups is banned and will be removed.

Other users had flagged the album as undesirable content, and each time that happened, a Facebook content moderator took down the photos. Facebook said the removals were likely the decisions of two different members of its 7,500-person moderation team, who are located all over the world and not necessarily in Poland, where moderators might have had more insight into events on the ground.

Eventually, after Niedenthal protested, Facebook allowed the photos to remain on the platform. Facebook apologized for the error in a message and in a personal phone call.

Discerning what is and is not hate speech remains tricky, Facebook told Quartz. People report content they disagree with, leaving moderators to decide which complaints are legitimate. While some content is clearly inadmissible under Facebook’s rules, like terrorist propaganda or child pornography (which is increasingly detected by artificial intelligence), decisions about hate speech can’t be made without looking at the post’s context.

The Facebook spokesperson said the company regretted the incident and wants newsworthy material to remain on the platform. Facebook is trying to improve its moderation system; it announced earlier this year that it planned to hire thousands more people to review content. The company is also planning to expand its appeals process and to better inform users about why their posts are taken down.

It is also developing AI technology to flag objectionable posts, although in the case of hate speech, the spokesperson said, human input will remain necessary to understand the situation—which is where, in this case, Facebook’s moderators failed.