[This article is authored by Harshitha Adari and Akarshi Narain, 2nd year students at the NALSAR University of Law, Hyderabad. It analyses the arguments in Gonzalez v. Google, a case before the United States Supreme Court, in the context of the judgment’s consequences for internet free speech.]
Section 230 of the Communications Decency Act is the pillar of internet free speech. It grants “interactive computer services”, such as video platforms, social media networks, blogs, and other platforms hosting third-party speech, broad immunity from liability for the content posted by users. It states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This protection promotes free internet speech and also immunizes service providers and users from liability for removing objectionable content. The drafters of this legislation recognized that an internet unfettered by government regulation is a non-negotiable condition for free speech to thrive online. However, two cases before the US Supreme Court, Gonzalez v. Google and Twitter v. Taamneh, have challenged the scope of this law’s protections.
Gonzalez v. Google can be traced back to November 2015, when the terrorist organization ISIS killed 130 people across Paris. Nohemi Gonzalez was the only American student to die in these attacks. Her parents sued Google, YouTube’s owner, alleging that it had committed “an act of international terrorism”, not by encouraging the terrorist attack but by failing to prevent ISIS from posting content on YouTube. This case could dramatically redraw the contours of the internet and online speech, as it marks the first time the Supreme Court is considering the scope of Section 230, on which a range of platforms like Twitter, Reddit, and Meta rely heavily. The statute, considered “a backbone of online activity,” allows these platforms to function without fear of overwhelming litigation over user content.
This article attempts to illuminate the ongoing debate between the supporters and opponents of regulation and to demonstrate the downsides of choosing either extreme. The authors begin by briefly explaining the main legal contention between the parties. This is followed by an examination of the arguments on free speech versus regulation in the online forum. It is suggested that rather than adopting an extreme stance, more moderate options, such as reasonable restrictions on online speech that respect the rights of all stakeholders in the online space, should be explored.
Is YouTube eligible to take Section 230’s defence?
The petitioner, Gonzalez, argued that YouTube should not be allowed to take Section 230’s defence. Marshall’s Locksmith Serv. v. Google held that, to avail this defence, the plaintiff’s claim must treat the defendant as a “publisher” of third-party content provided by “another information content provider.” The petitioner argued that Google’s subsidiary, YouTube, could not be considered a “publisher” under the Section: YouTube does not publish, but merely recommends, third-party content to users. The video platform creates “Up Next” recommendation lists and thumbnails, guiding users to endless content.
The legislative intent behind this Section was to overrule Stratton Oakmont, Inc. v. Prodigy Services, which found that publishers can be liable for the content they affirmatively choose to publish. Gonzalez contends that this defence cannot apply here, as the claim is focused on the recommendation, not the dissemination, of harmful content. Expounding further, while sending third-party content in file form would amount to an act of publication, sending a mere notification or URL would not: the latter sends information about third-party content, whereas the former delivers harmful content directly to users. However, tech experts questioned this logic and pointed out that the distinction between recommending content and directly displaying it is untenable, as both involve using pre-existing information to determine what content to show.
On the contrary, Google argued that YouTube is eligible to claim the defence as it is a “publisher” under Section 230, as noted in Sarah v. Dirty World Entertainment Recordings LLC. Google contends that YouTube, like publishers such as broadcasters and newspapers, orders and selects content with the help of its algorithms. Furthermore, it argued that using these algorithms is, in fact, “quintessential publishing”: the algorithms sort content and help users navigate the internet, and since they apply the same neutral criteria to all content, YouTube qualifies as a publisher.
The Debate on Free Speech v. Regulation and the Need to Pick a Stance
Gonzalez v. Google has drawn supporters at two opposite ends of the spectrum. Advocates of Gonzalez argue that immunity from liability for third-party content was originally conceived during the internet’s infancy to protect a fledgling industry. In an age where “Big Tech”, like Facebook, influences election results, calling it a fledgling industry would be absurd. Thus, it is high time that the protection under Section 230 be removed, and the powerful platforms face liability for harmful content. However, this argument fails to consider two crucial realities. Firstly, many “Small Tech” companies also exist, conveniently side-lined in the debate, which require this protection against high litigation costs for their survival. Removal of the protection could therefore lead to an anti-competitive market dominated by a few mega-platforms. Secondly, the need for protection is linked to a problem of scale. The exponential expansion of websites renders complete scrutiny of content impractical. For instance, platforms such as Facebook, Reddit, and YouTube have hundreds of millions of users, and moderating every user post is impossible even before accounting for the costs of legal liability. This warrants protection both for “Small Tech” companies, due to their vulnerability, and for “Big Tech” companies, due to their sheer user volume.
The debate also centres on the larger question of checks and balances on free speech in the online forum, in light of increasing opportunities for fake news, propaganda, and virtual manipulation. Tech conglomerates argue that curtailing this protection would have chilling effects on free speech, with platforms over-regulating content to avoid potential litigation. Supporters of Gonzalez, however, highlight that the broad immunity granted under Section 230 acts as a breeding ground for hate speech, online targeting, and disinformation. They also argue that the immunity itself can impair free speech, as users tend to “self-censor” to avoid attacks by hate speech groups.
Making the Case for Reasonable Restrictions on Internet Free Speech
Both sides warn of dire consequences should the other prevail. What can be concluded is that adopting either extreme position could prove catastrophic. Stuck between a rock and a hard place, the parties must explore the space in between. In other words, reasonable restrictions have long gone hand in glove with free speech. The rule for restrictions remains the same for the internet, following the principle “What applies offline, also applies online,” as affirmed by the Human Rights Council. While the global nature of the internet complicates the task of applying uniform restrictions, Special Rapporteur Frank La Rue has proposed a standard test that any potential restriction on internet free speech must pass. This cumulative three-step test requires, firstly, that the restriction be provided by law; secondly, that it pursue one of the purposes envisaged in Article 19 of the International Covenant on Civil and Political Rights; and thirdly, that it be necessary and the least restrictive means of achieving the respective objective.
This three-step test of legality, legitimacy, and necessity can be tweaked according to state specificities. For instance, the entrenchment of free speech in the First Amendment to the US Constitution has significantly widened the ambit of freedom of expression in the US vis-à-vis Europe. Forms of expression deemed impermissible by the European Court of Human Rights, such as racist or hate speech, are protected in the US. Seen in this light, the test can serve as a guide or basic structure for countries to build upon when framing their restrictions on online free speech. It is also imperative that the restricting legislation be applied by an independent body in a non-arbitrary and non-discriminatory way, coupled with adequate remedies against its abusive application.
Way Forward
In the current context, narrowing or tweaking the broad immunity granted under Section 230 could perhaps serve as an acceptable compromise for both warring parties. For instance, in India, while intermediaries’ liability for third-party content has decreased significantly, their duties have increased considerably. One such duty is to address user complaints about potentially objectionable posts. This limits vetting to a feasible number of posts while also addressing consumer grievances, thereby assuaging the concerns of both parties. It must be accompanied by an exhaustive definition of what posts could be considered objectionable, so as to streamline the process. Similar ideas and solutions from jurisdictions worldwide can be explored to create an egalitarian online community that respects all its stakeholders.