
After Buffalo shooting video circulates online, social media platforms face scrutiny


A police officer lifts the tape cordoning off the scene of a shooting at a supermarket, in Buffalo, N.Y., Sunday, May 15, 2022. (AP Photo/Matt Rourke)

WASHINGTON (TND) — A mass shooting livestreamed on social media has reignited calls from lawmakers for social media companies to moderate content posted to their sites more heavily.

The suspect, 18-year-old Payton Gendron, is accused of killing 10 people and wounding three others after opening fire at a Buffalo, New York, grocery store in what police describe as a “racially motivated violent extremist” shooting. He livestreamed the attack on Twitch, a livestreaming platform popular with gamers, and wrote online about drawing inspiration from other mass shootings that were broadcast over social media.

Videos from the shooting have also been posted and spread on other platforms like Twitter and Facebook.

New York Gov. Kathy Hochul and other lawmakers have called on social media companies to step up their efforts to remove violent content.

“Mark my words we'll be aggressive in our pursuit of anyone who subscribes to the ideals professed by other white supremacists and how there's a feeding frenzy on social media platforms where hate festers more hate, that has to stop,” Hochul said at a news conference after the shooting. “These outlets must be more vigilant in monitoring social media content and certainly the fact that this could be livestreamed on social media platforms and not taken down within a second, says to me that there is a responsibility out there.”

Twitch said the video was removed from its platform in less than two minutes. Facebook and Twitter have also removed material from the shooting, though sometimes only after hours and thousands of views, and it has proven difficult to eradicate the footage from their platforms entirely.

Social media platforms’ role in preventing hate speech and harmful content from proliferating has drawn greater scrutiny as more mass shooting suspects have been found to have developed their extremist beliefs through online forums and social media.

Big Tech has made significant changes and advances in policing its sites, but content moderation remains a logistical and philosophical challenge. Platforms are also navigating a gray area when it comes to policing hate speech.

“These companies are going into areas when talking about violence and hate speech that are not always clearly defined by law,” said Mike Horning, an associate professor at the Virginia Tech School of Communications.

Each company has a clearly defined set of policies for removing posts or content that contains direct threats against people or encourages violence against others, but other issues fall into a murkier area of the law.

“One person might think that references to immigration policies could be considered hate speech, depending on how you phrase it, and another person might say, ‘well, I'm just expressing a point of view’ and that's where they have difficulty,” Horning said.

Under current federal laws, social media companies are left to make their own policies and guidelines to decide what qualifies as harmful content. Section 230, which has become a political flashpoint on both sides of the aisle, gives companies the authority to regulate what speech they feel is inappropriate and protects them from lawsuits over things posted by their users.

Big Tech and Section 230 have caught the ire of both Republicans and Democrats, though for separate reasons. Democrats have called on companies to crack down on misinformation and hateful rhetoric, while Republicans have accused them of censoring conservative voices and taking away the right to free speech.

Companies have taken steps in recent years to improve their content moderation and have developed machine-learning technology to flag potentially dangerous content and keep repeats of the same media from being uploaded.
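The re-upload blocking described above is commonly built on hash matching: once a copy of violating media is removed, its fingerprint is stored so identical copies can be blocked at upload. The sketch below is a simplified, hypothetical illustration of that idea, not any platform’s actual system; real services typically use perceptual hashes that survive re-encoding and cropping, while this example uses an exact SHA-256 hash only to stay self-contained.

    import hashlib

    # Blocklist of fingerprints for media that moderators have already removed.
    # (Illustrative only; production systems use perceptual hashing, not exact hashes.)
    KNOWN_VIOLATING_HASHES = set()

    def fingerprint(media_bytes: bytes) -> str:
        """Return a fingerprint for an uploaded file (exact hash in this sketch)."""
        return hashlib.sha256(media_bytes).hexdigest()

    def register_violating_media(media_bytes: bytes) -> None:
        """Record removed media so identical copies can be blocked on upload."""
        KNOWN_VIOLATING_HASHES.add(fingerprint(media_bytes))

    def should_block_upload(media_bytes: bytes) -> bool:
        """Reject uploads whose fingerprint matches previously removed media."""
        return fingerprint(media_bytes) in KNOWN_VIOLATING_HASHES

    # Once one copy of a clip is removed, byte-identical re-uploads are stopped.
    removed_clip = b"bytes of a removed video"
    register_violating_media(removed_clip)
    assert should_block_upload(removed_clip)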

Even with advanced technology and human moderators working alongside it, it is extremely difficult to catch everything, experts say.

“It's a bureaucratic process, so it may get flagged then a reviewer at the company would have to review it unless their algorithm automatically determined that it should be taken down,” Horning said. “When you're talking about looking at something, putting a human involved in evaluating that content there, there's millions of pieces of content that are published on these platforms in a given day, so they have limited bandwidth for the number of people that can go through and review and that’s usually why it winds up staying up for a period of time.”
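Horning’s description maps roughly to a score-based triage: a classifier scores each flagged post, only the highest-confidence cases are removed automatically, and borderline cases wait in a human review queue. The sketch below is a hypothetical simplification of that flow; the thresholds and queue names are assumptions for illustration, not any company’s actual pipeline.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical thresholds; real platforms tune these per policy area.
    AUTO_REMOVE_THRESHOLD = 0.95
    HUMAN_REVIEW_THRESHOLD = 0.60

    @dataclass
    class Post:
        post_id: int
        risk_score: float  # e.g., output of a harmful-content classifier

    @dataclass
    class ModerationQueues:
        removed: List[int] = field(default_factory=list)
        pending_human_review: List[int] = field(default_factory=list)
        left_up: List[int] = field(default_factory=list)

    def triage(post: Post, queues: ModerationQueues) -> None:
        """Route a flagged post: auto-remove, queue for a human, or leave it up."""
        if post.risk_score >= AUTO_REMOVE_THRESHOLD:
            queues.removed.append(post.post_id)
        elif post.risk_score >= HUMAN_REVIEW_THRESHOLD:
            # Limited reviewer bandwidth is why borderline content can stay up for hours.
            queues.pending_human_review.append(post.post_id)
        else:
            queues.left_up.append(post.post_id)

    queues = ModerationQueues()
    for post in (Post(1, 0.98), Post(2, 0.72), Post(3, 0.10)):
        triage(post, queues)
    print(queues)  # post 1 removed, post 2 queued for review, post 3 left up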

Companies’ algorithms have also drawn criticism for the opposite problem: taking down content that should have remained posted.

“When you have all of that content coming in, it's very difficult and the nature of this stuff is that these kinds of things quickly go viral, because (it is) so sensational, people will share them,” Horning said. “I think it's a difficult task for (companies). It's easy for Congress to say ‘you need to regulate it,’ but it's very difficult for them to do it quickly.”
