Why Mass Shooting Videos Keep Spreading Online Even As Tech Giants Try To Stop Them

If the aftermath of the horrific mass shooting three years ago in New Zealand is any indication, social media could be haunted for years by graphic video of Saturday’s attack on a grocery store in Buffalo, New York.

In March 2019, a gunman in Christchurch, New Zealand, killed 51 people at two mosques and broadcast the attack live on Facebook. To this day, major social media platforms and tech companies are still battling attempts to upload and share slightly edited versions of the shooter’s video, according to the Global Internet Forum to Counter Terrorism, a technology industry group whose purpose is to counter violent extremist content online and ultimately reduce the spread of extremism.

There is still a lot of work to do.

The Buffalo attack killed 10 people and was streamed live on Twitch. President Joe Biden called it an act of terrorism and white supremacy. The attack targeted Black people and was accompanied by a hateful manifesto.

Despite years of work trying to control the spread of this material, videos of the Buffalo attack were still readily available on major platforms like Facebook and Twitter in the days following the shooting. The ongoing struggle shows how great the challenge is for YouTube, Twitter, Facebook and other online services trying to eradicate the glorification of violence, which is generally prohibited under the terms of service of most apps.

“Our work takes place in a highly adversarial and dynamic environment,” Sarah Pollack, a spokesperson for the anti-extremist technology forum, said Tuesday.

Twitch, the platform on which the Buffalo shooting was livestreamed, said it took the stream down in less than two minutes. But in the hours and days that followed, those terrifying moments spread with a speed that called into question the effectiveness of the tech industry’s post-Christchurch strategy.

One way people spreading the video evaded detection was by turning to lesser-known video hosting sites like Streamable. On that site alone, the alleged shooter’s video was viewed 3 million times before it was deleted on Sunday, The New York Times reported, and a link to the Streamable video was shared hundreds of times on Facebook and Twitter.

Streamable is not one of the 18 online services that belong to the Global Internet Forum to Counter Terrorism, so it has not had access to the full set of forum tools that major apps use to detect and remove mass shooting videos. Amazon, the owner of Twitch, is a member of the forum.

The worrying lesson, experts say, is that as long as there are smaller services unable or unwilling to stamp out mass shooting videos, the industry as a whole will struggle to eliminate them.

“There’s always going to be some increasingly sketchy company that’s happy to make money off the dregs of the internet,” said Hany Farid, a computer science professor at the University of California, Berkeley, and an expert in fighting extremism online.

“I don’t think there’s an easy answer,” he said.

Facebook and Twitter both said Tuesday that their internal teams were working with the industry’s anti-extremism forum to halt the spread of the Buffalo video, which violates the companies’ terms of service.

At the center of the companies’ strategy is a shared database that stores information about extremist videos. As mass shootings and other violent attacks occur, the 18 services participating in the industry forum add more information to the database, allowing all participating companies to recognize the digital fingerprints of violent videos, detect matches and speed up the removal process.

Companies continue to share more digital fingerprints, called hashes, when they discover altered versions of banned videos.
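The forum has not published the details of its hashing scheme, but the general mechanics resemble perceptual fingerprinting techniques such as the difference hash (“dHash”). The sketch below is purely illustrative, not GIFCT’s actual format: each frame is reduced to a tiny grayscale grid, fingerprinted as a 64-bit integer, and that integer, never the footage itself, goes into a shared set that any participating platform can check uploads against.

```python
# Illustrative sketch of perceptual video fingerprinting, assuming a
# dHash-style scheme. GIFCT's real format is not public; this only
# shows the general idea behind a shared hash database.

def dhash(frame):
    """Difference hash of a downscaled grayscale frame.

    `frame` is 8 rows of 9 brightness values (0-255), i.e. a video
    frame shrunk to a 9x8 grid. Each bit records whether a pixel is
    brighter than its right-hand neighbor, giving a 64-bit fingerprint.
    """
    bits = 0
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

# The "shared database": platforms exchange fingerprints, not footage.
shared_hashes = set()

def report_violent_frame(frame):
    """A platform contributes the fingerprint of a flagged frame."""
    shared_hashes.add(dhash(frame))

def is_known(frame):
    """Any participating platform can check an upload against the set."""
    return dhash(frame) in shared_hashes

# Hypothetical usage with a synthetic 8-row x 9-column frame.
frame = [[(13 * y + 7 * x) % 256 for x in range(9)] for y in range(8)]
report_violent_frame(frame)
assert is_known(frame)
```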

Three years after the Christchurch shooting, “we continue to have hashes,” Pollack said. She said the Christchurch data made up 5% of the entire shared database, which also includes data on videos from ISIS and other extremist groups.

“Members will continue to be able to add new hashes of perpetrator-produced content as they find new versions,” she said.

“We have to expect there to be these continued attempts to manipulate content,” she added. “We never say, ‘Okay, we’ve got enough hashes.'”

Common ways people alter videos include overlaying text or adding banners, Pollack said. Users can sometimes edit a video enough to evade the matching software on established platforms, which then flag the near-duplicates to the industry forum.
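An exact hash lookup fails as soon as a single fingerprint bit changes, which is precisely what a banner or text overlay causes. Matching systems therefore typically compare fingerprints by Hamming distance, the number of differing bits, and flag anything within a small threshold as a near-duplicate. A toy illustration follows; the fingerprints and threshold are invented for this sketch, not drawn from any real system:

```python
# Toy near-duplicate check: two perceptual hashes "match" if they
# differ in only a few bits. The fingerprints and threshold below are
# invented for illustration; real systems tune thresholds carefully
# to balance catching edited copies against false positives.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

MATCH_THRESHOLD = 10  # arbitrary choice for this sketch

def find_near_duplicates(query, known_hashes):
    return [h for h in known_hashes if hamming(query, h) <= MATCH_THRESHOLD]

# A banner overlay changes only part of the frame, so most fingerprint
# bits survive and the altered copy still lands within the threshold.
original  = 0xD1CEB00CAFE12345   # hypothetical banned-video fingerprint
altered   = original ^ 0b1011    # a few bits flipped by an overlay
unrelated = 0x0123456789ABCDEF   # fingerprint of an unrelated video

known = {original}
print(find_near_duplicates(altered, known))    # matches the original
print(find_near_duplicates(unrelated, known))  # no match: []
```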

As of Sunday evening, the forum’s 18 member companies had identified 130 “visually distinct videos” related to the Buffalo shooting and 740 “visually distinct images,” the forum announced Tuesday.

Pollack said the industry forum welcomes new member companies if they are willing to meet certain criteria. She declined to say whether Streamable or any other newer tech companies were in talks to join, but said the forum wants to attract more members from different parts of the world and with different types of services.

“In order to move forward to fulfill this mission, we need to be able to bring more companies to the table,” she said. The forum was founded in 2017 after lobbying by European leaders. The most recent member to join was Zoom in December.

Another industry organization, Tech Against Terrorism, said it would begin alerting individual tech companies when it finds copies of the Buffalo livestream or the shooter’s manifesto on their platforms.

Streamable, a Delaware startup, was acquired last year by a larger London-based tech company called Hopin. The Wilmington office at the address listed on Streamable’s website was marked “for sale” and was locked Monday, The Washington Post reported.

Hopin said in a statement Monday that it was trying to remove all videos of the accused shooter.

“These types of videos violate our Community Guidelines and our Terms of Service and we are working diligently to promptly remove them as well as terminate the accounts of those who upload them,” Hopin said.

“We are deeply disturbed by this senseless act of racially motivated gun violence and are deeply saddened for the innocent lives lost and for their families,” the company said.

Hopin did not respond to questions about the resources the company is devoting to countering extremism, or how closely it is working with other tech companies on the issue. Streamable did not immediately respond to a separate request for comment.

Brian Fishman, a former Facebook employee who oversaw efforts against “dangerous organizations” there, said companies and others must continue to look for “choke points” in the distribution of mass shooting videos.

“It is time to start thinking about this distribution model with the same rigor with which we collectively analyzed ISIS distribution. It’s not as planned, but there are clear distribution steps,” Fishman wrote on Twitter.

Brittan Heller, a democracy and technology researcher at the Atlantic Council, said tech companies that allow livestreaming should consider limitations on the service — such as a time limit or requiring a minimum number of subscribers — if they don’t already have them.

“It’s not just about having terms of service. It’s about having strong, enforceable and transparent enforcement regimes around content moderation,” said Heller, who is also a lawyer in private practice and advises technology companies on human rights and related topics.

She said unmoderated social media and video hosting sites would continue to exist as a sort of alternative business model, fueled in part by white supremacy and other violent ideologies.

“The tech industry cannot prevent this from happening. This is a uniquely American political issue,” Heller said.

While the biggest tech companies like Google, Microsoft and Facebook parent company Meta have tightened enforcement of their terms of service, smaller platforms like Twitch, Discord and Telegram haven’t been subject to as much scrutiny from the public, regulators or legislators.

Farid says it makes a difference.

“They have yet to be held to account, where their CEOs are dragged before Congress week after week to answer for their crimes against humanity,” he said.

He said the most effective way to force change at tech companies would be to impose legal liability, because under current U.S. law, tech platforms cannot be held liable for hosting or even promoting extremist content. That could soon change in Europe and the U.K., putting pressure on companies to make changes worldwide.
