Why Facebook and Twitter couldn’t stop the Buffalo shooting video from going viral

After the 2019 mosque shooting in Christchurch, New Zealand, Facebook was widely criticized for allowing the shooter to livestream his killings for 17 minutes without interruption. Saturday’s racially charged shooting in Buffalo, New York, played out differently.

This time, the gunman streamed his gruesome attack on Twitch, a live-streaming video app popular with gamers, where it was shut down much faster: less than two minutes after the violence began, according to the company. When Twitch cut the stream, it reportedly had only 22 views.

That hasn’t stopped people from spreading screen recordings of the Twitch livestream, as well as the shooter’s writings, across the internet, where they have racked up millions of views, some driven by widely shared links on Facebook and Twitter.

“It’s a tragedy because you only need one copy of the video for this thing to live forever online and multiply endlessly,” said Emerson Brooking, a resident senior fellow at the Atlantic Council, a think tank, who studies social media.

It shows that while major social media platforms like Facebook and Twitter have improved since Christchurch at slowing the spread of horrific depictions of mass violence, they still can’t stop it completely. Twitch was able to quickly cut off the shooter’s real-time video stream in part because it’s an app designed around a specific type of content: live first-person videos, mostly of gameplay. Facebook, Twitter, and YouTube have much larger user bases posting a far wider range of content, which is shared through algorithms designed to promote virality. For Facebook and Twitter, stopping the spread of every trace of this video would mean fundamentally changing the way information is shared on their apps.

The unchecked distribution of murder videos on the internet is a serious problem. For the victims and their families, these videos rob people of their dignity in their final moments. They also encourage the glory-seeking behavior of would-be mass murderers, who plan gruesome violence designed to go viral and spread their hateful ideologies.

Over the years, major social media platforms have gotten much better at slowing and restricting the spread of these kinds of videos. But they haven’t been able to stop it completely, and probably never will.

So far, these companies’ efforts have focused on better identifying violent videos and then preventing users from sharing the same video or edited versions of it. In the case of the Buffalo shooting, YouTube said it has taken down at least 400 different versions of the shooter’s video that people have tried to upload since Saturday afternoon. Facebook is also blocking people from uploading versions of the video, but has not disclosed how many. Twitter said it, too, was removing instances of the video.

These companies also help each other identify and block or remove this kind of content by comparing notes. They now share “hashes” — fingerprints of an image or video — through the Global Internet Forum to Counter Terrorism, or GIFCT, an industry consortium founded in 2017. When these companies exchange hashes, each gains the ability to find and take down violating videos the others have already identified, similar to the way platforms like YouTube scan for videos that violate copyright.
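This hash-exchange step can be sketched in a few lines of Python. This is a simplified illustration, not GIFCT’s actual pipeline: the `shared_hashes` set is a hypothetical stand-in for the consortium’s shared database, and a plain SHA-256 stands in for the perceptual hashes real systems use.

```python
import hashlib

# Hypothetical stand-in for the consortium's shared database of fingerprints.
shared_hashes = set()

def fingerprint(data: bytes) -> str:
    """Exact cryptographic fingerprint of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def flag_upload(data: bytes) -> bool:
    """Return True if an upload matches a known violating video."""
    return fingerprint(data) in shared_hashes

# One platform identifies a violating video and shares its hash...
known_bad = b"<bytes of a violating video>"
shared_hashes.add(fingerprint(known_bad))

# ...and every member platform can now block identical re-uploads.
print(flag_upload(known_bad))         # True: an exact copy is caught
print(flag_upload(known_bad + b"x"))  # False: changing even one byte evades an exact hash
```

The second check is also the system’s weakness: any re-encode or edit produces entirely new bytes, which is why the fingerprints shared in practice are perceptual rather than purely cryptographic.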

After the Christchurch shootings in 2019, GIFCT created a new alert system, called the “Content Incident Protocol,” to start sharing hashes in an emergency like a mass shooting. In the case of the Buffalo shooting, the Content Incident Protocol was activated at 4:52 pm ET Saturday, about two and a half hours after the shooting began. And as people seeking to distribute the video modified their clips to thwart hash matching (for example, by adding banners or zooming in on parts of the footage), the consortium’s members responded by creating new hashes to flag the edited versions.
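Those edits work because an exact hash changes completely when even one pixel changes, so hash-matching systems rely on perceptual hashes, which change only slightly when the content changes slightly. The toy “average hash” below is an assumption for illustration (production systems use algorithms such as Meta’s open-sourced PDQ for images and TMK+PDQF for video); it shows the idea on a stand-in 8×8 grayscale frame:

```python
def average_hash(pixels):
    """64-bit perceptual hash: one bit per pixel, set if the pixel is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits; a small distance suggests the same content."""
    return sum(a != b for a, b in zip(h1, h2))

# A stand-in 8x8 grayscale frame (64 brightness values, 0-255).
frame = [(i * 37) % 256 for i in range(64)]

# A lightly edited copy, slightly brightened, as a filter or overlay might do.
edited = [min(255, p + 10) for p in frame]

# A completely different frame.
flat = [128] * 64

h_frame = average_hash(frame)
print(hamming(h_frame, average_hash(edited)))  # small distance: the edited copy still matches
print(hamming(h_frame, average_hash(flat)))    # large distance: different content
```

Heavier edits, like large banners or aggressive zooms, can still push the distance past a matching threshold, which is why the consortium had to keep generating fresh hashes for each edited variant.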

But video hashing only goes so far. One of the primary ways the Buffalo shooter’s video spread on mainstream social media was not through people posting the video directly, but through links to other websites.

In one example, a link to the shooter’s video hosted on Streamable, a lesser-known video site, was shared hundreds of times on Facebook and Twitter in the hours following the shooting. This link generated more than 43,000 interactions, including likes and shares, on Facebook, and it was viewed more than 3 million times before Streamable removed it, according to The New York Times.

A spokesperson for Streamable’s parent company, Hopin, did not respond to repeated questions from Recode about why the platform did not remove the shooter’s video sooner. The company sent a statement saying these types of videos violate its community guidelines and terms of service, and that it is working “diligently to promptly remove them and terminate the accounts of those who upload them.” Streamable is not a member of GIFCT.

In a widely circulated screenshot, a user showed that they had reported a Facebook post containing the Streamable link and an image from the shooting shortly after it was posted, only to receive a response from Facebook that the post did not violate its rules. A Meta spokesperson confirmed to Recode that posts with the Streamable link do violate its policies, said the response the user received was sent in error, and added that the company is investigating why.

Ultimately, given the design of all these platforms, this is a game of whack-a-mole. Facebook, Twitter, and YouTube have billions of users, and among those billions there will always be some who find loopholes in these systems to exploit. Several social media researchers have suggested that the major platforms could do more by better scrutinizing fringe websites like 4chan and 8chan, where the links originated, in order to identify and block them early. The researchers have also called on these platforms to invest more in their systems for handling user reports.

Meanwhile, some lawmakers have blamed social media companies for allowing the video to go up in the first place.

“[T]here is a feeding frenzy on social media platforms where hate is escalating more and more, and this needs to stop,” New York Governor Kathy Hochul said at a press conference on Sunday. “These outlets need to be more vigilant in monitoring social media content, and certainly the fact that this can be live-streamed on social media platforms and not taken down within a second tells me there is accountability lacking.”

Quickly catching and blocking this kind of content has not yet proven feasible. Again, it took Twitch about two minutes to take down the livestream, one of the fastest response times we’ve seen so far from a social media platform that lets people post in real time. But those two minutes were more than enough for links to the video to go viral on bigger platforms like Facebook and Twitter. So the question is less how quickly these videos can be taken down than whether there is a way to prevent the afterlife they gain on major social networks.

This is where the fundamental design of these platforms collides with reality. They are machines built for mass engagement, and ripe for exploitation. If and when that changes will depend on whether these companies are willing to throw a wrench into that machine. So far, that doesn’t seem likely.

Peter Kafka contributed reporting for this article.