Has the US Supreme Court Made It Harder to Regulate Social Media — or the Opposite?

At stake is the constitutionality of new bills that seek to regulate large platforms by targeting their algorithms.

July 22, 2024
A view of the US Supreme Court in Washington, DC, July 1, 2024. (Kevin Mohatt/REUTERS)

In early July, the US Supreme Court rendered its decision in a pair of cases challenging laws in Florida and Texas that prohibit social media companies from censoring content based on viewpoint. The Supreme Court remanded the cases to the courts below for further fact-finding but made various statements about how the First Amendment applies to social media algorithms — staking out positions that could impact how social media will be regulated in the country where most major platforms are based.

Early responses to the Supreme Court’s decision in Moody v. NetChoice and NetChoice v. Paxton are so diverse as to suggest a misunderstanding of the core holding. Some say the court has taken the First Amendment too far in finding that social media algorithms are a form of protected expression — raising the prospect that any law targeting the function of algorithms on large platforms will be struck down, making meaningful regulation of social media all but impossible. A raft of bills seeking to target “addictive algorithms” or mandating that platforms offer more user choice over content feeds (such as the Filter Bubble Transparency Act) are now at risk. Has the Supreme Court just created a giant moat around social media platforms and their engagement-driven business model?

The concerns are rooted in what the majority had to say about the Fifth Circuit’s ruling in one of the companion cases, NetChoice v. Paxton. The Court of Appeals had held, in effect, that because the algorithms driving Facebook’s News Feed and YouTube’s recommendation engine are automated, they’re not expressive. A majority of justices sought to provide direction to the courts below on rehearing by asserting that the Fifth Circuit was wrong: because these algorithms reflect a content moderation policy, they’re expressive.

But beyond this, the Supreme Court’s decision was more subtle in ways many early commentators have overlooked. Five justices pointed to the possibility that, in some cases, social media algorithms aren’t expressive or may not attract strong First Amendment protections — giving us a glimpse of the boundaries around which future constitutional challenges to social media regulation will unfold.

Briefly, the entire court agreed that social media algorithms can be expressive in some cases, and six justices took the view that a law will violate the First Amendment where it affects a platform’s freedom to arrange content (order or rank it) using an algorithm shaped by a content moderation policy. But five of the court’s justices signalled the possibility that an algorithm that merely orders content automatically (based, for example, on user history or location) may not be expressive and thus may not attract constitutional protection.

Justice Elena Kagan, whose opinion was joined by four other justices (Sonia Sotomayor, Brett Kavanaugh, Amy Coney Barrett and Chief Justice John Roberts), drew the most attention by clearly affirming that an algorithm that reflects a content moderation policy is protected speech. In a footnote, however, Kagan noted that “[we] do not deal here with feeds whose algorithms respond solely to how users act online — giving them the content they appear to want, without any regard to independent content standards.” This would suggest that those algorithms may not be protected forms of speech. But some commentators have doubted that any social media algorithm responds “solely to how users act online” — aside from a strictly chronological feed.


Justices Barrett and Ketanji Brown Jackson, each writing separately, ventured further into the terrain of algorithms that don’t reflect a moderation policy. Barrett distinguished an algorithm that helps sift and delete undesirable content from one that “just presents automatically to each user whatever the algorithm thinks the user will like — e.g., content similar to posts with which the user previously engaged.” The former carries out an editorial policy; the latter doesn’t. She also pondered the use of artificial intelligence to sift out “hateful” content by drawing on a language model to determine what is hateful. “Technology,” she asserted, “may attenuate the connection between content-moderation actions (e.g., removing posts) and human beings’ constitutionally protected right [to free expression].” Justice Jackson, along similar lines, cautioned that “courts must make sure they carefully parse not only what entities are regulated, but how the regulated activities actually function before deciding if the activity in question constitutes expression and therefore comes within the First Amendment’s ambit.”

Justice Samuel Alito, in an opinion joined by Justices Clarence Thomas and Neil Gorsuch, was critical of the view held by the Kagan majority that “social-media platforms — which use secret algorithms to review and moderate an almost unimaginable quantity of data today — are just as expressive as the newspaper editors who marked up typescripts in blue pencil 50 years ago.” The key distinction for Alito is whether a curation (or “compilation”) is “inherently expressive” or merely serves as a “passive receptacle” of third-party speech. A compilation is inherently expressive only where “compilers express a message of their own.” Surveying the field of social media platforms and their use of algorithms, he concluded that “not all platforms curate all third-party content in an inherently expressive way.” Some platforms, he opined — including Facebook and YouTube — might be considered nothing more than “common carriers” of third-party speech, a question that remains to be decided.

At stake in the court’s divided perspective on how to apply the First Amendment to social media is the constitutionality of a host of new bills that seek to regulate large platforms by targeting their algorithms. The Texas and Florida laws banning platform censorship based on viewpoint were obviously on dangerous ground in overtly dictating what private companies could say or not say. But other bills, including the Filter Bubble Transparency Act, currently before both chambers of Congress, and laws in California and New York aiming to curb the impact of “addictive feeds” on children, are also implicated. A common feature of these bills is to mandate that platforms offer a content feed not based on user data. Does compelling a platform to offer this choice amount to censorship or compelled speech? Maybe not, if social media companies do not enjoy First Amendment protection over any and all algorithmic curation.

The concerns around the Supreme Court’s decision in NetChoice were thus warranted, given the stakes. But the caution expressed by all of the justices — their reluctance to conclude that any and all social media curation will attract constitutional protection — suggests that lawmakers still retain a wide ambit to govern social media in the public interest.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Robert Diab is a professor of law at Thompson Rivers University, in Kamloops, British Columbia, with specialties in civil liberties and human rights law.