The Covid-19 infodemic taught social media giants like YouTube and Reddit an important lesson: They can—and must—take action to control the content on their sites.
By Joan Donovan, Wired
For those who follow the politics of platforms, Monday’s great expulsion of malicious content creators was better late than never. For far too long, a very small contingent of extremely hateful content creators has used Silicon Valley’s love of the First Amendment to control the narrative on commercial content moderation. By labeling every effort to control their speech as “censorship,” these individuals and groups managed to create cover for their use of death threats, harassment, and other incitements to violence to silence opposition. For a long time, it worked. Until now. In what looks like a coordinated purge by Twitch, Reddit, and YouTube, the reckoning is here for those who use racism and misogyny to gain attention and make money on social media.
For the past five years, I have been researching white supremacists online and how they capitalize on tech’s willful ignorance of the damage they are causing in the real world. At Harvard Kennedy School’s Shorenstein Center, I lead a team of researchers who look into the fraught politics of online life and how platforms connect the wires to the weeds. It’s too often the case that what happens online no longer stays online. Relying on media manipulation techniques to hide their identities and motives, a mass of racists began to come out in public in the lead-up to Trump’s election, including the rise of the so-called alt-right. Because of social media, we are all witnesses to white supremacist violence, including the murder of Heather Heyer in Charlottesville and the attack on Muslims in Christchurch. Researchers, journalists, and activists have fought to expose these networks and provide the basic research needed to detect, document, and debunk disinformation campaigns.
Full article @ Wired