
One of the shooters appears to have livestreamed the attack on Facebook (FB). The disturbing video, which has not been verified by CNN, ran for nearly 17 minutes and purportedly shows the gunman walking into a mosque and opening fire.

“New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video,” Mia Garlick, Facebook’s director of policy for Australia and New Zealand, said in a statement.

Facebook declined to comment further on exactly when it took down the video.


Hours after the attack, however, copies of the gruesome video continued to appear on Facebook, YouTube and Twitter, raising new questions about the companies’ ability to manage harmful content on their platforms.


Facebook is “removing any praise or support for the crime and the shooter or shooters as soon as we’re aware,” Garlick said.

Twitter (TWTR) said it suspended an account related to the shooting and is working to remove the video from its platform.

YouTube, which is owned by Google (GOOGL), removes “shocking, violent and graphic content” as soon as it is made aware of it, according to a Google spokesperson. YouTube also declined to comment on how long it took to first remove the video.

New Zealand police asked social media users to stop sharing the purported shooting footage and said they were seeking to have it taken down.

CNN is choosing not to publish additional information regarding the video until more details are available.

Tech firms ‘don’t see this as a priority’

This is the latest case of social media companies being caught off guard by killers posting videos of their crimes, and other users then sharing the disturbing footage. It has happened in the United States, Thailand, Denmark and other countries.

Friday’s video reignites questions about how social media platforms handle offensive content: Are the companies doing enough to try to catch this type of content? How quickly should they be expected to remove it?

“While Google, YouTube, Facebook and Twitter all say that they’re cooperating and acting in the best interest of citizens to remove this content, they’re actually not because they’re allowing these videos to reappear all the time,” said Lucinda Creighton, a senior adviser at the Counter Extremism Project, an international policy organization.


Facebook’s artificial intelligence tools and human moderators were apparently unable to detect the livestream of the shooting. The company says it was alerted to it by New Zealand police.

“The tech companies basically don’t see this as a priority, they wring their hands, they say this is terrible,” Creighton said. “But what they’re not doing is preventing this from reappearing.”

John Battersby, a counter-terrorism expert at Massey University in New Zealand, said the country had been spared mass terrorist attacks, partly because of its isolation. Social media had changed that.

“This fellow livestreamed the shooting and his supporters have cheered him on, and most of them are not in New Zealand,” he said. “Unfortunately, once it’s out there and it’s downloaded, it can still be (online),” he added.

The spread of the video could inspire copycats, said CNN law enforcement analyst Steve Moore, a retired supervisory special agent for the FBI.

“What I would tell the public is this: Do you want to help terrorists? Because if you do, sharing this video is exactly how you do it,” Moore said.

“Do not share the video or you are part of this,” he added.

Hadas Gold, Donie O’Sullivan, Samuel Burke and Paul Murphy contributed to this report.
