Time to stamp out hate content on social media

After the tragedy in Christchurch last Friday, serious questions are being asked of the world’s largest social media companies. Why was the killer able to live stream this appalling act on Facebook for 17 minutes? Why couldn’t YouTube and Twitter prevent copies of the video from being propagated on their global networks? Why did Reddit have a forum named ‘watchpeopledie’ (another place where this horrendous video was posted) running on its platform for seven whole years?

To answer these questions, we need to look at the content moderation processes of Facebook, Google and others, and examine how effective algorithms really are at helping to police content.

But we must also push for change at the highest levels of these tech companies, to get tech executives to take responsibility for the content on their platforms. More can and should be done to proactively remove hate speech and stop the propagation of violence on social media.

The problem with human content moderation

Firstly, let’s look at what social media companies do now in terms of content moderation. 

While there are AI and other automated efforts in place to help, content moderation is still largely a manual process. As the tech blog Motherboard has revealed, Facebook relies on human moderators who are tasked with identifying objectionable content and then taking action.

However, in the case of a live stream of a terrorist act, many precious minutes can roll past before action is taken.

“When escalating a potential case of terrorism in a livestream, moderators are told to fill in a selection of questions about the offending content,” wrote Motherboard, based on an internal training document it obtained for Facebook content moderators.

Clearly this process was both far too slow and completely ineffective when it came to stopping the Christchurch live stream.

“I’m not sure how this video was able to stream for [17] minutes,” one of Motherboard’s Facebook sources said.

As for Google’s YouTube, a Wired article reports that “the vast majority of all videos are removed through automation and 73 percent of the ones that are automatically flagged are removed before a single person sees them.”

Those statistics may sound impressive, but remember YouTube’s massive scale. The other 27 percent of flagged videos are viewed before they’re removed, which at YouTube’s volume means millions of people are potentially being exposed to terrorist content before moderation catches up.

Not to mention there are still copies of the Friday terrorist’s video floating around in the YouTube cesspit, as I write this.

Time to put the laser beam on white supremacists

To be fair to these companies, it’s not as if they’re sitting on their hands. In 2017, YouTube, Facebook, Microsoft and Twitter formed the Global Internet Forum to Counter Terrorism (GIFCT).

While this organisation seems to be doing sterling work, I have my doubts about how broadly it defines “terror content.” In particular, its About page mentions ISIS and Al Qaeda – but not white supremacists.

GIFCT’s most recent update, in June of last year, noted that it had “added 88,000 hashes to our industry ‘hash sharing’ database.” Hash sharing is a process in which “digital fingerprints” of terrorist content are created and pooled, helping member platforms identify copies and prevent their spread.
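To make the mechanics concrete, here’s a minimal sketch of how hash sharing works in principle. Everything in it is my own illustration (the placeholder fingerprints, the function names, and the use of plain SHA-256 exact-match hashing); the real shared database relies on perceptual hashing, which is designed to catch near-duplicates rather than only bit-identical copies.

```python
import hashlib

# Hypothetical shared database of fingerprints of known terrorist content.
# The values are placeholders for illustration only.
SHARED_HASH_DATABASE = {
    "9f2c1a...",  # fingerprint contributed by one platform
    "b41e07...",  # fingerprint contributed by another
}

def fingerprint(path: str) -> str:
    """Compute a digital fingerprint of an uploaded file (plain SHA-256 here)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def block_if_known(path: str) -> bool:
    """Reject an upload whose fingerprint is already in the shared database."""
    return fingerprint(path) in SHARED_HASH_DATABASE
```

The point of pooling the fingerprints is that a video flagged on one member platform can then be recognised and blocked when someone tries to re-upload it on another.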

According to BuzzFeed investigative journalist Ellie Hall, who reported on the internet activities of ISIS over a two-year period, these and other initiatives have been very effective at preventing ISIS and Al Qaeda from spreading their hate-mongering content. But she questioned whether the same effort is being put into stamping out white supremacist content.

If there’s one action I’d love the big tech companies to take after this awful event, it’s for them to put their considerable resources (and brain power) into weeding out the white supremacists on their platforms.

Can AI really prevent hate content?

Lastly, let’s look at the progress of automated tools to prevent hate content from spreading.

Back in April 2018, Mark Zuckerberg testified before the US Senate. One senator asked about hate speech and what Facebook was doing to prevent it. Zuckerberg replied: “we’re developing AI tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook.”

He went on to say that “99 percent of the ISIS and Al Qaeda content that we take down on Facebook” was flagged by their AI systems before any humans saw it. 

“So that’s a success in terms of rolling out AI tools that can proactively police and enforce safety across the community,” Zuckerberg claimed.

Hmmm.

It’s very clear that Facebook’s AI did nothing to proactively enforce safety last Friday, even as a live stream ran for over fifteen minutes on its platform.

Again, this seems to be a case of not having the right focus when tracking terrorist content.

As Kalev Leetaru, a senior fellow at the Center for Cyber and Homeland Security at Auburn University, told MarketWatch this week, Facebook’s automated systems compare new content to previous hate content from groups like ISIS or al-Qaeda, which means the algorithms “can fall short of identifying acts of violence perpetrated by other terrorist groups such as Boko Haram or white supremacists.”
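To see why that matters, here’s a deliberately simplified sketch (my own construction, not Facebook’s actual pipeline) of a matcher that scores new uploads against a reference corpus of previously removed material. If that corpus contains only ISIS and al-Qaeda examples, content from other movements resembles nothing in it, scores low, and is never flagged, however violent it is.

```python
def token_set(text: str) -> set:
    return set(text.lower().split())

def max_similarity(new_item: str, reference_corpus: list) -> float:
    """Highest overlap between a new item and anything in the reference corpus."""
    new_tokens = token_set(new_item)
    best = 0.0
    for reference in reference_corpus:
        ref_tokens = token_set(reference)
        union = new_tokens | ref_tokens
        if not union:
            continue
        best = max(best, len(new_tokens & ref_tokens) / len(union))
    return best

def is_flagged(new_item: str, reference_corpus: list, threshold: float = 0.5) -> bool:
    # Anything that looks nothing like the reference material sails straight
    # past the filter, no matter how violent it is.
    return max_similarity(new_item, reference_corpus) >= threshold
```

The implication is obvious enough: the reference corpus, and the training data behind any classifier, needs to cover white supremacist material as well.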

Where to from here

Let’s get real about automation. For the foreseeable future, human moderation will continue to be needed to stamp out hate content. Regardless of what Mark Zuckerberg believes, the fact is AI isn’t currently up to the task. 

That said, the tech companies should update their algorithms to also target white supremacist terrorists (as well as ISIS et al.). You only need to look at the profiles of recent mass murderers, including last Friday’s, to see that white supremacy is a common theme.

The biggest issue though is that neither human nor AI moderation is much help in the case of live streams. The only viable solution, it seems to me, is to prevent people like Friday’s terrorist from live streaming in the first place.

One suspects the tech companies will need to work closely with government intelligence agencies to identify, monitor and proactively shut down people who use social media to distribute hate content. 

Before Friday, the response to that would’ve been just two words: “free speech.” But we’re no longer talking about the trivial matter of two right-wing provocateurs being prevented from speaking in New Zealand. We’re now talking about preventing extreme terrorist violence in our country. I think our former Prime Minister Helen Clark said it best with regard to free speech:

“We all support free speech, but when that spills over into hate speech and propagation of violence, it has gone far too far. Such content is not tolerated on traditional media; why should it be on #socialmedia?”

Why indeed. So let’s fix this by advocating for meaningful change at companies like Facebook, Google, Twitter and Reddit in how they deal with hate speech.