The AI failures of Facebook & YouTube

The big social media companies have rightfully come under immense pressure over the past week, in the wake of the Christchurch terror attack. 

But one question continues to bother me: why were Facebook and YouTube so ineffective at shutting down the terrorist’s live stream and stopping the spread of the video afterwards?

Over the past couple of years, Facebook CEO Mark Zuckerberg has regularly trumpeted Facebook’s prowess in AI technology – in particular as a content moderation tool. As for YouTube, it’s owned by probably the world’s most technologically advanced internet company: Google.

Yet neither company was able to stop the dissemination of an appalling terrorist video, despite both claiming to be market leaders in advanced artificial intelligence.

Today’s AI technology could’ve dealt with this

Why is this a big deal? Because the technology already exists to shut down terror content in real-time. That’s according to Kalev Leetaru, a Senior Fellow at the George Washington University Center for Cyber & Homeland Security.

“We have the tools today to automatically scan all of the livestreams across all of our social platforms as they are broadcast in realtime,” Leetaru wrote last week. Further, he says, these tools “are exceptionally capable of flagging a video the moment a weapon or gunfire or violence appears, pausing it from public view and referring it for immediate human review.”
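
To make that a little more concrete, here’s a rough sketch of the kind of pipeline Leetaru is describing: sample frames from a live stream, run each one through a violence detector, and pull the stream for human review the moment anything is flagged. This is my own illustration, not anything from Facebook’s or YouTube’s actual systems – the detector here is a placeholder, and names like pause_stream and queue_for_review are hypothetical.

```python
# Illustrative sketch only: sample frames from a live stream, run a
# violence/weapon detector over them, and pull the stream for human
# review as soon as anything is flagged. The detector is a placeholder;
# a real system would use a trained vision model.

import cv2  # pip install opencv-python


def looks_violent(frame) -> bool:
    """Placeholder for a trained weapon/gunfire/violence detector."""
    return False  # a real model would return a thresholded prediction


def pause_stream(stream_url: str) -> None:
    # Hypothetical moderation hook: hide the stream from public view.
    print(f"[moderation] pausing {stream_url} pending human review")


def queue_for_review(stream_url: str, frame) -> None:
    # Save the offending frame as evidence for the human reviewer.
    cv2.imwrite("flagged_frame.jpg", frame)


def monitor_stream(stream_url: str, sample_every_n: int = 30) -> None:
    capture = cv2.VideoCapture(stream_url)
    frame_index = 0
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        # Only classify every Nth frame to keep the compute cost down.
        if frame_index % sample_every_n == 0 and looks_violent(frame):
            pause_stream(stream_url)
            queue_for_review(stream_url, frame)
            break
        frame_index += 1
    capture.release()
```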

So the technology exists, yet Facebook has admitted its AI system failed. Facebook’s vice president of integrity, Guy Rosen, told Stuff that “this particular video did not trigger our automatic detection systems.”

After the live stream, the video was copied and re-uploaded thousands of times across Facebook, YouTube, and other social platforms like Twitter and Reddit. 

According to Leetaru, this could also have been prevented by current content hashing and content matching technologies.

Content hashing essentially means deriving a digital signature – a fingerprint – from a piece of content. If another upload is substantially similar to the original, its signature will closely match, so it can be flagged and deleted immediately. As Leetaru notes, this process has been successfully used for years to combat copyright infringement and child pornography.
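
For illustration, here’s a minimal sketch of that idea using a simple “average hash” over an image or video keyframe – my own toy example, not the platforms’ actual fingerprinting. The real systems are far more robust than this, but the principle is the same: near-identical content produces near-identical signatures, so re-uploads can be matched even after re-encoding or minor edits.

```python
# Toy example of perceptual ("content") hashing with a simple average hash.
# Real platforms use far more robust fingerprinting, but the principle holds:
# similar content yields similar signatures, which can be matched cheaply.

from PIL import Image  # pip install Pillow


def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to hash_size x hash_size greyscale, then encode one bit per
    pixel: is it brighter than the image's mean brightness?"""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Number of differing bits between two signatures."""
    return bin(hash_a ^ hash_b).count("1")


def is_reupload(candidate: str, known_bad_hashes: list[int], threshold: int = 8) -> bool:
    """Flag the candidate if its signature is within `threshold` bits of any
    known-objectionable signature."""
    h = average_hash(candidate)
    return any(hamming_distance(h, bad) <= threshold for bad in known_bad_hashes)
```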

The social platforms have “extraordinarily robust” content signature matching, says Leetaru, “able to flag even trace amounts of the original content buried under an avalanche of other material.” 

But clearly, either this approach wasn’t used by Facebook and YouTube to prevent distribution of the Christchurch terrorist’s video, or it was used and had an unacceptably high failure rate.

Leetaru’s own conclusion is damning for Facebook and YouTube:

“The problem is that these approaches have substantial computational cost associated with them and when used in conjunction with human review to ensure maximal coverage, would eat into the companies’ profits.”

Stopping hate-mongers before they commit a crime

In the aftermath of this tragedy, I’ve also wondered if more could have been done to identify, monitor and shut down the terrorist’s social media presence – not to mention alert authorities – before he committed his monstrous crime.

There’s certainly a case to be made for big tech companies to work closely with government intelligence agencies, at least for the most obvious and extreme instances of people posting hate content. 

Now, I realise this is beginning to sound like the fictional police department PreCrime from the movie Minority Report, which arrested people before they’d committed their crimes. I don’t mean to suggest we should lock people up for posting morally reprehensible things. But there must be more that social media companies can do with AI software to at least flag and report people who spew virulent hate content.

In an email exchange, I asked Kalev Leetaru what he thinks of social platforms working more closely with government on policing hate content.

“So, the interaction between social platforms and governments is a complex space,” he replied. “Governments already likely use court orders to compel the socials to provide them data on ordinary users and dissidents. And if socials work with one government to remove ‘terrorist’ users, other governments are going to demand the same abilities, but they might define a ‘terrorist’ as someone who criticizes the government or ‘threatens national stability’ by publishing information that undermines the government – like corruption charges. So, socials are understandably loathe to work more closely with governments, though they do [already] work closely with many Western governments.”

He cited the example of Facebook’s removal of Palestinian accounts, from late 2016 on, at the request of the Israeli government.

However, Leetaru says there is much that social media companies can do independently of government collaboration – though that approach has its own issues.

“There is a lot tech companies could do to flag obvious cases of accounts that are radicalizing or heading down a violent or abusive path,” he told me, “but often the accounts themselves may not cross that threshold in an obvious way prior to the person enacting violence.”

Even if someone posts something online immediately prior to committing their crime, which was the case with the Christchurch terrorist, this can be difficult to pick up in real-time.

“There are plenty of cases where right before they conduct an attack they post something online either a few days, hours or even minutes before they conduct an attack,” Leetaru said. “Often these remarks in retrospect have meaning, but not at the time – e.g. a tweet might just say something about ‘getting things started’ without saying anything more.”

So, point taken: it’s hard to make assumptions about a person’s real-life mindset or intentions based on a random Facebook post or tweet. 

Regardless, I still think the social platforms should – at the very least – take a moral position against hate content and actively police it themselves.

Lifting the rock on sharers of hate content

Here in New Zealand, people have already been arrested and charged under the Films, Videos, and Publications Classification Act for sharing the terrorist’s video.

The problem is, hundreds of other people shared the video using anonymous accounts on YouTube, Reddit and other platforms where a real name isn’t required. Could AI tech help identify these anonymous cowards, then ban them from social media and report them to police?

Again, I recognize there are significant privacy implications to unmasking anonymous accounts. But I think it’s worth at least having the discussion.

“In many countries, there are limits to what you can do,” said Leetaru when I asked him about this. “Here in the US, social platforms are private companies. They can remove the content, but there are few laws restricting sharing the content – so there’s not much that could be done against those individuals legally.”

He also warned against naming and shaming anonymous trolls. 

“Name and shame is always dangerous, since IP addresses are rotated regularly by ISPs – meaning your IP today might be someone across town’s tomorrow. And bad actors often use VPNs or other means to conceal their activity, including using their neighbor’s wifi or a coffee shop.”

Leetaru thinks the best solution is to “halt the uploads in the first place and suspend or ban accounts that try to upload.”

Fair enough. But for that to happen, both Facebook and YouTube need to devote more priority and resources to using AI technology to sweep away hate content – at least as much focus as they give to policing copyright infringement.