The Bulwark

Unanimous Supreme Court Keeps Hands Off Tech Platforms and Online Hate

As there’s little room for the courts to act, curbing online hate will require creativity from lawmakers and tech firms.

Kim Wehle
May 23, 2023
Attorney Nitsana Darshan-Leitner speaks to reporters outside of the U.S. Supreme Court following oral arguments for the case Twitter v. Taamneh on February 22, 2023 in Washington, DC. (Photo by Anna Moneymaker/Getty Images)

IN A UNANIMOUS DECISION authored by Justice Clarence Thomas, the Supreme Court last week threw out a lawsuit against Facebook, Twitter, and Google (owner of YouTube) over their roles in facilitating extremist violence. Although narrow, the ruling in Twitter v. Taamneh was a clean victory for the technology platforms, with the Biden administration publicly siding with Twitter. The president should now work with Congress to pass laws that disincentivize social media corporations from manipulating algorithms to maximize profits from online hate.

A federal law called the Anti-Terrorism Act allows victims of international terrorism to sue and obtain damages from “any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed . . . an act of international terrorism.” In their complaint, the family of Nawras Alassaf—one of 39 people murdered when an ISIS terrorist fired into a crowd at a nightclub in Istanbul, Turkey, in 2017—claimed that the technology companies aided the attack by allowing ISIS to share and spread its terrorist propaganda. The family alleged further that even after receiving complaints about ISIS’s use of their platforms, the companies “permitted ISIS-affiliated accounts to remain active, or removed only a portion of the content.” The trial court dismissed the complaint, the U.S. Court of Appeals for the Ninth Circuit reversed, and the Supreme Court reversed again, siding with the defendants.

After a lengthy detour into the nuances of what it means to “aid” or “abet” a wrongful act, Thomas concluded that supplying “generally available virtual platforms” and failing “to stop ISIS despite knowing it was using those platforms” does not establish liability under the law—even though it was undisputed that the nightclub attack was an “act of international terrorism” and that ISIS “committed, planned, or authorized” it. Everyone also agreed that the tech companies’ business models rely on placing ads “on or near the billions of videos, posts, comments, and tweets uploaded by the platforms’ users,” including violent and extremist content, and then applying “‘recommendation’ algorithms that automatically match advertisements and content with each user” based on the users’ individual search habits.

So, the Court accepted as fact that Facebook, Twitter, and Google make money off incendiary posts aimed at recruiting members to terrorist organizations, spreading violent propaganda, instilling fear and intimidation in the general population, and raising funding for terrorist activity. It merely concluded that unless a plaintiff can plausibly allege that a social media company did more to participate actively in a particular act of violence, the Anti-Terrorism Act can’t be used to motivate tech platforms to revise their business models.

In a related case brought under another statute, Section 230 of the Communications Decency Act, the Supreme Court also refused to hold Google liable for coordinated attacks that occurred across Paris, France, in 2015, killing 130 people, including a 23-year-old American citizen. Nohemi Gonzalez’s family sued, claiming that “Google approved ISIS videos for advertisements and then shared proceeds with ISIS through YouTube’s revenue-sharing system.” Section 230 protects internet service providers from being held liable for information posted on their sites on the rationale that the users—not the companies—generate the content. The Court didn’t make any ruling under Section 230, however, instead merely holding that the complaint was so obviously flawed that the case should be sent back to the lower court for consideration of how it fares under the Court’s decision in the Twitter case.

Which leaves the rest of us, for now, with no plausible answer to online extremism and the havoc it wreaks in our lives.

According to the FBI, the use of social media is a key factor that has “contributed to the evolution of the terrorism threat landscape” since 9/11. A 2019 paper by Anjana Susarla for George Washington University’s Program on Extremism explains:

The way digital platforms, and especially social media platforms, monetize access increases our vulnerability as users to disinformation. Instead of extremist videos being hidden in some darker corners of the Internet, social media platforms make it easy for anyone to stumble upon and post negative content disseminating hatred against a particular community or group, with the consequence of radicalization through exposure to hateful material.

All nine Supreme Court justices nonetheless agreed that “the fact that some bad actors took advantage of these platforms is insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted those wrongdoers’ acts.” The justices were worried about unlimited liability for social media companies over the extremist content posted by violent, would-be terrorists. “Defendants’ arm’s-length relationship with ISIS,” Thomas reasoned, “was essentially no different from their relationship with their millions or billions of other users.”

But that misses the point. The vast majority of the billions of other users—those who, every minute of the day, upload over 500 hours of video to YouTube, post 510,000 comments on Facebook, and send 347,000 tweets—are not plotting and executing acts of terror, which increasingly involve children taught to hate online. Smart people at these companies could surely figure out a way to protect the majority of users’ content while weeding out the content that stokes actual violence and death. As Susarla notes, the tech companies “are no longer corporate entities responsible for their shareholders alone, but their ability to mold private interactions and sway public opinion affects the strength of the participative process and institutions of democracy.”

That the vote in both of these cases was unanimous makes a broader statement about the Court’s approach to the separation of powers when it comes to online hate and radicalization. Even the progressive justices have no interest in using their power to make policy regulating internet content. Congress is the branch positioned to do something. And if it were ever to miraculously pass meaningful legislation, one can only hope that conservatives on the Court would apply the same hands-off approach to the inevitable lawsuit that Big Tech would bring to protect its unfettered ability to profit off of hate, violence, and death.

A guest post by Kim Wehle, professor of law, former assistant U.S. attorney, and legal contributor for ABC News. Her writing has appeared in Politico, the Atlantic, and The Bulwark. Her latest book is Pardon Power: How the Pardon System Works—and Why.