Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Web 2.0 Solutions for Web 3.0 Problems: Intermediary Liability and the Deepfake Crisis in India

Posted on February 3, 2026 by Tech Law Forum NALSAR

[Divisha Dalal & Rajdeep Dutta are postgraduate students at the University of Bristol. This article argues that India’s intermediary liability framework under Section 79 of the IT Act and the Draft Amendments to the IT Rules represent outdated Web 2.0 solutions inadequate for addressing Web 3.0 deepfake threats, as they rely on reactive “actual knowledge” standards rather than proactive systemic risk management. The authors propose a shift from India’s intermediary-centric approach to a duty of care model inspired by the EU’s Digital Services Act and AI Act, which would hold platforms accountable for algorithmic amplification of synthetic media and require safety-by-design measures as conditions for safe harbor protection.]

Introduction

As technological advancement continues at an exponential pace, its drawbacks and unintended externalities have correspondingly intensified, at times with catastrophic consequences. Historically, the notion that an individual could be so comprehensively impersonated or digitally reconstructed as to deceive the public at large would have been dismissed as a fanciful and untenable hypothesis. In the present digital, algorithm-driven ecosystem, however, such deception is no longer merely speculative; it has become both practicable and prevalent, enabled by technological developments that systematically erode the distinction between the authentic and the fabricated. A prominent form of this deception is the deepfake: hyper-realistic, AI-generated synthetic media that has graduated from technical novelty to regulatory dilemma for India, producing a classic pacing problem in which the law lags behind the technology.

The question, then, is whether a safe harbour doctrine designed for a Web 2.0 era of user-generated text can survive a Web 3.0 era of AI-generated deception. This article examines the current intermediary liability framework under Section 79 of the Information Technology Act, 2000, and specifically the Draft Amendments to the Information Technology Rules, 2021, released in October 2025. It argues that while the introduction of labelling mandates and a good faith provision attempts to modernise the regulatory framework of the Information Technology Act, 2000, these measures remain patchwork repairs on an outdated liability model. By juxtaposing India’s reactive approach with the systemic risk framework established by the European Union, this article proposes a paradigm shift beyond the actual knowledge standard towards a duty of care model.

Regulatory Framework and Limitations 

Section 79 of the Information Technology Act, 2000 (hereinafter the “IT Act”) provides a safe harbour to intermediaries, including social media platforms, ISPs, and hosting services, for third-party content, provided they observe due diligence and act as neutral conduits. The Supreme Court of India affirmed this framework in Shreya Singhal v. Union of India (2015), reading the “actual knowledge” standard narrowly to require a court order or government notification.

However, the emergence of deepfake technologies fundamentally undermines the reasoning articulated by the Supreme Court, on two distinct and interrelated grounds. First, unlike defamatory text, which may take time to circulate, a deepfake video is visual, visceral, and viral; by the time a victim secures a court order (or even files a grievance), the reputational damage is often permanent. The actual knowledge standard is simply too slow for a threat that moves at the speed of an algorithm. Second, platforms are no longer mere hosts but active curators: their algorithms are designed to boost viral content, and in doing so they often amplify deepfakes. The question therefore arises: when an intermediary’s code actively pushes a deepfake to millions of feeds, can it still claim to be a neutral facilitator?

Regulatory Response or Patchwork Repairs

In October 2025, the Ministry of Electronics and Information Technology (“MeitY”) sought to address these lacunae of the IT Act by introducing the Draft Amendments to the Information Technology Rules, 2021 (“Draft Amendments”).

These amendments represent a shift in regulatory approach by introducing specific obligations for Synthetically Generated Information (hereinafter “SGI”). As defined under Rule 2(1)(wa), SGI is information that has been artificially or algorithmically created, generated, modified, or altered so as to appear authentic. Analytically, this provision replaces the subjective term “deepfake” with a clearer technical standard. By diverting legal responsibility away from contested classifications and towards the process of algorithmic creation, the Draft Amendments enhance doctrinal clarity and reduce interpretive ambiguity, while potentially lowering evidentiary thresholds for enforcement.

The Overregulation Risk: Unintended Criminalisation

However, this technical precision is a double-edged sword: the definition’s breadth risks over-regulation. Without a de minimis threshold, the rule inadvertently criminalises or stigmatises benign tools, such as AI image upscalers, grammar correction assistants, and aesthetic filters, grouping them with malicious disinformation.

Operational Deficiencies: The “Upload” Loophole & Encryption Dead Ends

The Draft Amendments also introduce transparency requirements. Under Rule 3(3), intermediaries must ensure that SGI is either clearly labelled or embedded with metadata, with the visible label occupying at least 10 per cent of the screen surface area, to ensure perceptibility and to prevent tokenistic disclosures by intermediaries.
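To make the 10 per cent requirement concrete, the sketch below shows how a platform might size a banner-shaped disclosure label for a given frame. This is our illustrative reading of the draft rule, not an official implementation; the function name, the banner aspect ratio, and the sizing strategy are all assumptions.

```python
# Illustrative only: sizing a disclosure label to cover at least 10% of
# the frame area, per one reading of draft Rule 3(3). The rule itself
# does not prescribe a shape or algorithm; this is a hypothetical sketch.
import math

def min_label_size(frame_w: int, frame_h: int,
                   aspect: float = 8.0, coverage: float = 0.10):
    """Return (label_w, label_h) for a banner whose area is at least
    `coverage` of the frame area, with width:height ratio `aspect`."""
    target_area = coverage * frame_w * frame_h
    label_h = math.ceil(math.sqrt(target_area / aspect))
    label_w = math.ceil(aspect * label_h)
    # If the banner is wider than the frame, clamp the width and grow
    # the height so the mandated area is still met.
    if label_w > frame_w:
        label_w = frame_w
        label_h = math.ceil(target_area / frame_w)
    return label_w, label_h

w, h = min_label_size(1920, 1080)   # a 1080p frame
assert w * h >= 0.10 * 1920 * 1080  # label area meets the 10% floor
```

The arithmetic illustrates why the mandate is non-trivial at scale: on a 1080p frame the label must cover over 200,000 pixels, which is far larger than the fine-print watermarks platforms typically use.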

To support this, the Draft Amendments introduce a “good faith” provision under Rule 3(1)(b), which provides that intermediaries who voluntarily remove or disable access to SGI, even without a court order, will not forfeit their safe harbour protection. The provision attempts to address the Moderator’s Dilemma, wherein platforms previously feared that moderating content too heavily would be perceived as editorial control and cost them their safe harbour immunity. It also implies that the State is appointing platforms as the first line of defence.

The transparency obligation under the rule works well for content created on a platform (e.g., Instagram or Facebook). Its efficacy deteriorates, however, where content is created offline and then uploaded. In such cases, intermediaries lack ex-ante visibility into the provenance of the content and depend on either self-disclosure by users or post hoc detection technologies, both of which are susceptible to circumvention. The gap widens on End-to-End Encrypted (“E2EE”) platforms (e.g., WhatsApp, Signal): the only way to enforce a labelling mandate there would be to scan the user’s device, which would break the promise of encryption and privacy. Enforcing such a requirement on a platform like WhatsApp would be akin to the post office opening every sealed envelope to check for forged documents.

Accordingly, the labelling mandate results in a double standard where the rules impose obligations on content generated on platforms but fail to address externally generated content. Furthermore, this also highlights an enforcement gap within platform-centric regulatory models, wherein compliance obligations may not fully address the decentralised nature of contemporary synthetic media production.

The Honesty Box Mechanism and Enforcement Gap

Moreover, Rule 4(1A) requires Significant Social Media Intermediaries (hereinafter “SSMIs”) to obtain a user declaration regarding SGI. Malicious actors disseminating disinformation are, however, unlikely to self-declare, rendering this an honesty box mechanism of limited utility against bad-faith actors. In the absence of platform-level verification tools, independent detection mechanisms, or meaningful penalties for false declarations, Rule 4(1A) offers little protection against the misuse of synthetic media; it operates primarily as a symbolic or signalling measure.

Additionally, while the IT Act governs the platform, the Bharatiya Nyaya Sanhita, 2023 (hereinafter “BNS”) governs the perpetrator. Sections 319 (cheating by personation), 336 (forgery), and 356 (defamation) of the BNS provide the criminal law framework for prosecuting deepfake creators. However, without seamless coordination between the BNS’s penal provisions and the IT Rules’ traceability mandates, closing the liability gap remains theoretical.

Comparative Jurisprudence and Perspectives 

By contrast, Regulation (EU) 2024/1689 (commonly known and hereinafter referred to as the “EU AI Act”) and Regulation (EU) 2022/2065 (commonly known and hereinafter referred to as the “Digital Services Act” or “DSA”) provide a more robust framework than the present regulatory mechanism in India.

Catching the Problem at the Source (The EU AI Act)

Article 50 of the EU AI Act lays down binding transparency requirements for both providers and deployers of specific categories of AI systems, including generative AI capable of producing synthetic text, images, audio, or video. In particular, it prescribes that users must be notified whenever content has been artificially generated or manipulated, subject to limited exceptions such as law enforcement use and artistic or creative expression. The obligation therefore arises independently of any further dissemination, tying regulatory responsibility to the instance of generation rather than merely to the instance of platform hosting.

In contrast, Section 79 of the IT Act assumes that intermediaries are mere passive conduits without knowledge or control over content creation, an assumption that does not hold consistently for synthetic media. The EU AI Act limits functional immunity further up the value chain, placing disclosure and provenance obligations on AI system providers themselves and ensuring that synthetic content is traceable from its point of entry into the digital environment. The Draft Amendments considered above, by contrast, remain largely intermediary-centric, leaving the vast volume of content created outside platform ecosystems unscrutinised.

Targeting the “Viral” Engine (The DSA)

Articles 34 and 35 of the DSA mandate Very Large Online Platforms (hereinafter “VLOPs”) to periodically perform systemic risk analyses, testing how their services, and in particular their algorithmic recommender systems, affect civic discourse, democratic processes, and electoral integrity. This position departs from the safe harbour model under the IT Act, where an intermediary’s obligation to act is triggered only upon receipt of a court order or a government notification. In particular, the DSA eschews the underlying assumption that liability should be tied to actual knowledge of specific content: where a platform’s architecture is designed in a way that predictably enables the viral distribution of injurious synthetic media, liability can ensue even in the absence of a takedown notice.

In addition, Article 35 requires VLOPs to implement reasonable, proportionate, and effective mitigation measures, which could involve adjusting recommendation algorithms, adding friction mechanisms to reduce virality, or improving the labelling and contextualisation of synthetic content. Compliance is overseen by a regulatory framework that provides for audits and, in cases of systemic failure, significant financial penalties. Safe harbour protection under the DSA is accordingly conditional, dynamic, and dependent on risk governance. Taken together, the EU AI Act and the DSA embody a regulatory philosophy in which safe harbour is not absolute immunity but a qualified privilege contingent upon proactive compliance: immunity is retained only insofar as regulated actors discharge affirmative duties commensurate with their technical capacity and societal impact. Section 79 of the IT Act, by contrast, proceeds on an unabashedly binary logic: the intermediary is either immune or liable, depending on whether actual knowledge can be imputed. Even the Draft Amendments, in prescribing obligations specific to SGI, do not fundamentally reset the safe harbour doctrine; labelling and user declarations are largely procedural, reactive obligations that do not ask whether platform design itself contributes to the harm.

Conclusion and Recommendations

Artificial intelligence has reshaped how digital content is created, altered, and shared, eroding the boundary between the creation and the manipulation of content. Within this context, deepfakes emerge not merely as a misuse of technology but as a systemic vulnerability inherent in AI-powered content creation and algorithmic amplification. India’s response to these developments, vide the Draft Amendments, forms a necessary but partial iteration of the country’s emerging digital governance framework. By defining SGI and mandating transparency, the State has acknowledged that the passive conduit theory is obsolete in the age of generative AI. The current approach, however, applies a Web 2.0 solution to a Web 3.0 problem.

While the amendments do mandate metadata and labelling, they fail to tackle the proliferation of deepfakes over encrypted messaging, the primary conduit of misinformation in India. Furthermore, reliance on a visual label such as the 10 per cent rule disregards label blindness, whereby users habituate to ignoring warnings, rendering such measures performative rather than protective. Building on these lacunae, India’s regulatory framework should consider a three-point strategic pivot:

First, rather than a reactive framework that targets individual content items, an approach that is inefficient in most cases, the Indian regime should embed the systemic risk model of the European Union’s Digital Services Act. Under this framework, liability would arise not simply from hosting synthetic media, but from the algorithmic amplification of such content in contexts where the risks are reasonably foreseeable.

Second, the definition of SGI requires precise delineation to exclude benign use cases. By restricting its scope to instances where synthetic media is utilised to deceive, defraud, or defame, the Information Technology Rules can be harmonised with the Bharatiya Nyaya Sanhita. This refinement ensures that legal resources are allocated toward mitigating malicious activity rather than regulating innocuous aesthetic tools.

Finally, the safe harbour provisions under the IT Act should evolve from a presumptive right into a conditional privilege predicated on a proactive duty of care. Applying this principle, platforms should demonstrate a commitment to safety by design through the deployment of reasonable and proportionate safeguards, including watermarking prior to the dissemination of content; compliance with these safeguards should be a condition precedent for the availability of safe harbour protection.
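The watermarking safeguard recommended above can be illustrated with a toy example. The sketch below embeds and recovers a short provenance tag in the least significant bits of raw pixel bytes; it is purely pedagogical and every name in it is hypothetical. Production provenance systems (for instance, C2PA-style signed metadata or robust perceptual watermarks) are far more sophisticated, because a naive LSB mark does not survive re-encoding or compression.

```python
# Toy least-significant-bit (LSB) watermark over raw pixel bytes,
# illustrating the idea of marking synthetic content before it is
# disseminated. NOT robust: re-encoding the image destroys the mark.

def embed_tag(pixels: bytearray, tag: bytes) -> bytearray:
    """Write each bit of `tag` (LSB-first per byte) into the least
    significant bit of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = bytearray(pixels)          # do not mutate the caller's buffer
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit
    return out

def extract_tag(pixels: bytes, tag_len: int) -> bytes:
    """Read `tag_len` bytes back out of the pixel LSBs."""
    tag = bytearray()
    for b in range(tag_len):
        value = 0
        for i in range(8):
            value |= (pixels[b * 8 + i] & 1) << i
        tag.append(value)
    return bytes(tag)

frame = bytearray(range(256)) * 4        # fake 1024-byte "image"
marked = embed_tag(frame, b"SGI:v1")     # hypothetical provenance tag
assert extract_tag(marked, 6) == b"SGI:v1"
```

Because each pixel byte changes by at most one unit, the mark is imperceptible; the policy question the article raises is precisely whether such pre-dissemination marking should be a condition of safe harbour rather than a voluntary practice.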

Ultimately, until the law holds the algorithm accountable alongside the author, the proliferation of deepfakes will remain a technological inevitability rather than a manageable risk.
