Presidential contenders want social networks to do more to crack down on white nationalism


The gunman who killed 20 people in El Paso posted his white supremacist manifesto on the right-wing forum 8chan. The man who massacred 51 people at a Christchurch, New Zealand, mosque streamed video of the attack live on Facebook. And the shooter who gunned down three people at Gilroy's garlic festival appeared to upload an Instagram post referencing a white nationalist author just hours before he opened fire.

As the death toll rises from shootings carried out by people who've espoused white nationalist ideas, some Democratic presidential candidates are calling for social media companies to more forcefully crack down on hateful content on their platforms. Their goal is to make it harder for groups or individuals to coordinate their activities and circulate content targeting racial, ethnic and religious minorities.

But the candidates' farthest-reaching proposals to stop extremists from weaponizing the internet have also sparked constitutional concerns over how far the government can go in policing online speech.

Social media has long been a crucial tool for extremist groups to disseminate their rhetoric and recruit new supporters — both on fringe right-wing platforms like 8chan and in dark corners of more mainstream networks.

The organizers of the white nationalist protests in Charlottesville in August 2017 that left one woman dead coordinated their event through Facebook group chats, as well as on the chat website Discord. And the man who killed 11 people at a Pittsburgh synagogue last October had a history of sharing content and interacting with high-profile white nationalists on the right-wing social media network Gab in the year before his attack, an analysis by the Southern Poverty Law Center found.

The El Paso gunman cited the Christchurch massacre in a manifesto he posted online, while the Christchurch shooter, in turn, cited a 2011 mass murder in Norway carried out by an Islamophobic extremist — a sign of how one attack can inspire others in a deadly cycle.

“White supremacists are using social media to connect and spread their hate and evil to others,” said Farah Pandith, a former State Department official who focused on fighting violent extremism and has written a book on the subject. “The tech companies have been slow to act and limited in their scope — we have to be realistic about the importance and seriousness of this threat.”

And it is a serious threat. Domestic right-wing terrorism is responsible for more deaths on U.S. soil since 9/11 than jihadism, according to statistics compiled by the New America think tank. Experts say many of those attacks appear to be fueled by strands of the same racist ideology: the notion that white people are being "replaced" by minorities or foreigners.

Former Texas Rep. Beto O'Rourke, an El Paso native, would go farthest of the presidential candidates in rethinking legal protections for social networks.

Currently, social media companies are insulated from lawsuits over content posted by users, under Section 230 of the Communications Decency Act, a provision that's been called "the 26 words that created the internet."

O'Rourke's proposal would strip that legal immunity from large companies that don't set policies to block content that incites violence, intimidation, harassment, threats or defamation based on traits like race, sex or religion. And all internet companies could be held liable for knowingly promoting content that incites violence.

"This is a matter of life and death, and tech executives have a moral obligation to play an active role in banning online activities that incite violence and acts of domestic terrorism," O'Rourke spokeswoman Aleigha Cavalier said in an email.

Most experts believe the First Amendment allows private companies to block content on their platforms. But it's questionable whether the government can tell social media companies what speech they should block and what they should allow, said Jeff Kosseff, a cybersecurity law professor at the United States Naval Academy who wrote a book on legal protections in the digital age.

Kosseff said it's a legal issue that hasn't been tested in the courts, and the outcome would depend on the exact language of the law O'Rourke is proposing.

“There are certain types of speech that the government can regulate,” such as imminent incitement of violence, or literal threats, he said. “But hate speech standing alone is really tricky.”

Hate speech that isn't an imminent threat is still protected by the Constitution, noted Daphne Keller, a researcher at Stanford's Center for Internet and Society and a former associate general counsel for Google. "A law can't just ban it. And Congress can't just tell platforms to ban it, either — that use of government power would still violate the First Amendment," she said.

Many of the mainstream social media giants have already voluntarily set terms of service that seek to block white nationalist content. Earlier this summer, YouTube said it would remove videos that promote white nationalism and other forms of hate speech, as well as videos denying the Holocaust.

But putting those policies into effect can be tricky. Sometimes, white nationalists simply create new accounts after they're banned, forcing platforms to play a game of whack-a-mole.

"This isn't something you can throw a bunch of A.I. at to fix," Kosseff said. "The types of threats are evolving so rapidly."

The law protecting social media companies from lawsuits, Section 230, has already faced criticism from both sides of the aisle in recent months, including from House Speaker Nancy Pelosi and Texas Sen. Ted Cruz.