It’s exactly these structural problems that the call to action looks to address. Not by signing strict speech restrictions into law, but by signaling the urgency of the problem and acknowledging that online extremism will spread unless governments and tech companies refine the processes of identifying and eradicating dangerous content before it reaches those most vulnerable.

And there’s reason to believe many of us are more vulnerable than we think. Though it’s difficult to measure the trauma of being subjected to racist memes, slurs and violent imagery, the rise of online extremism and live-streamed violence is undoubtedly a mental health issue. In an Op-Ed for The Times, Prime Minister Jacinda Ardern of New Zealand said that “in the first week and a half after the attack, 8,000 people who saw it called mental health support lines here in New Zealand,” a staggering figure in a country of just under five million. And as reports from content moderators dealing with secondary traumatic stress and post-traumatic stress disorder show, the long-term demands of the job take a quiet but powerful toll on the mental health of those who must view such content. Which is to say that the issue is deeply nuanced; maximalist free speech for some may mean long-term psychological turmoil for others.


Of course, the call to action is far from perfect. And some of its commitments, if codified into law, could very well have troubling consequences. As Courtney Radsch, the advocacy director for the Committee to Protect Journalists, wrote on Wednesday, “the sweeping focus on online service providers risks pushing censorship into the infrastructure layer, commonly thought of as the layer that makes the internet work.”

Dr. Radsch’s argument reflects a tortured struggle of the digital age: we seem, at present, unable to find a satisfactory way to combat violence, terroristic propaganda and recruitment without emboldening censors. The balance is precarious, and most paths forward for content moderation are a minefield of unintended consequences; just ask Facebook, Twitter and Google, which have had to grapple with once seemingly innocuous choices that brought us to where we are today.

The dizzying complexity of the task shouldn’t be a reason to shy away from a commitment to an internet that prevents the unnecessary amplification of violence and hatred. That tech companies like Facebook, Google and Microsoft, which have historically done everything in their power to avoid making overly censorious decisions, have signed on seems to suggest they agree.

Many of the provisions in the call to action, like the “development of industry standards or voluntary frameworks” or “cross-industry efforts” to share information to block coordinated attacks, seem far more organizational than censorious. And some policies, including the commitment to “building media literacy to help counter distorted terrorist and violent extremist narratives,” fit squarely within the Trump administration’s stated desire to fight grotesque speech with “productive speech.”
