
Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Pixelated Perjury: Addressing India’s Regulatory Gaps in Tackling Deepfakes  

Posted on December 18, 2025 by Tech Law Forum NALSAR

[This article has been authored by Isha Katiyar, a fourth-year B.Com. LL.B. (Hons.) student at Gujarat National Law University, Gandhinagar. It examines India’s legal framework for addressing deepfakes, highlighting gaps in the IT Act, BNS, and DPDP Act that fail to define or effectively regulate AI-generated media. The author proposes a comprehensive policy roadmap including statutory definitions, platform accountability, personality rights protection, and institutional mechanisms, drawing lessons from the EU, China, and the US to build a proactive regulatory approach.]

Introduction

Artificial intelligence, particularly Generative Adversarial Networks [“GANs”], has enabled the creation of hyper-realistic “deepfakes” that blur the line between fact and fabrication. These synthetic audio-visual manipulations have rapidly emerged as tools of privacy invasion, fraud, and political interference globally. India has witnessed a disturbing rise in the misuse of deepfake technology. Non-consensual explicit deepfakes targeting public figures such as actress Rashmika Mandanna and journalist Rana Ayyub are stark reminders of the gendered violence perpetuated through synthetic media. According to the National Crime Records Bureau’s Crime in India 2022 report, a total of 65,893 cybercrime cases were registered, marking a 24.4% increase over 2021 and reflecting the sharp upward trend in digital offences that deepfake technology is likely to exacerbate. A 2024 McAfee survey revealed that nearly 75% of Indians have viewed deepfake content and 38% reported being targeted by a deepfake scam, a concern that was formally raised in the Lok Sabha before the Minister of Electronics and IT in March 2025. The judiciary too has taken notice: in Rajat Sharma v. Union of India and Chaitanya Rohilla v. Union of India, the Delhi High Court pressed the Union Government to constitute a committee to examine the dangers of deepfake technology, recognising its capacity to undermine privacy and public trust.

The risks are not confined to individual victims. In the corporate sphere, global scams using deepfake audio of CEOs to siphon funds highlight the potential for severe economic harm, a threat India cannot ignore given its growing digital economy. Politically, deepfake videos circulated during recent state elections demonstrate the technology’s capacity to distort democratic discourse. Supreme Court Justice Hima Kohli has publicly cautioned that while deepfake technology is a ground-breaking innovation, it poses a grave threat of privacy invasion, sexual harassment, and misinformation.

 

Why India’s Current Legal Framework Is Inadequate

India’s approach to regulating deepfakes is constrained by outdated legal provisions. The Information Technology Act, 2000 (“IT Act”), while addressing privacy violations under Section 66E and obscene content under Sections 67 and 67A, does not define or explicitly criminalise the creation and distribution of deepfakes. Judicial interpretation of provisions under the IT Act has shown both their utility and their limitations. In State of Tamil Nadu v. Suhas Katti, the first conviction under Section 67 of the IT Act, the court dealt with obscene online content, but the offence arose from direct human publication rather than algorithmic manipulation. Even in Aveek Sarkar v. State of West Bengal, the Supreme Court clarified that obscenity must be judged by contemporary community standards, a test difficult to apply to AI-generated sexual content that spreads virally within seconds. Courts have increasingly relied on John Doe injunctions, such as those recently issued by the Delhi High Court to protect actors Aishwarya Rai and Abhishek Bachchan, but these remain stopgap measures in the absence of statutory recognition of personality rights.

Section 79 of the IT Act provides safe harbour to intermediaries by linking it to “actual knowledge,” a standard ill-suited to deepfakes given the challenges of detection and the speed at which such content spreads. The Bharatiya Nyaya Sanhita, 2023 (“BNS”) offers certain remedies, such as defamation under Section 356 and forgery under Sections 334 to 336, but these often fail to address the unique challenges of deepfakes. The Copyright Act, 1957 could in principle protect individuals from misuse of their likenesses, but it does not clearly recognise personality rights over one’s voice and image, leaving victims without an effective remedy. The recently enacted Digital Personal Data Protection Act, 2023 (“DPDP Act”), although important for protecting personal data, neither imposes obligations on platforms to detect, label, or remove AI-generated synthetic media nor addresses the training of AI models on scraped personal data without consent.

 

Global Approaches India Can Learn From

India’s legal system still lags in proactive regulation, but other jurisdictions have made significant strides in regulating deepfakes. The European Union’s Artificial Intelligence Act, 2024 (“EU AI Act”) adopts a risk-based approach, with Annex III identifying high-risk systems to guide deepfake regulation. By mandating labelling of synthetic content under Article 50 and backing it with election-monitoring measures during the June 2024 European Parliament elections, the EU has shown how proactive safeguards can curb deepfake misuse, an approach India can meaningfully learn from. China’s 2022 Provisions on the Administration of Deep Synthesis Internet Information Services likewise require, under Article 17, that deepfake content be labelled in a prominent position, and empower platforms to detect and remove unlabelled synthetic content. In practice, regulators have compelled major platforms such as Tencent and ByteDance to deploy watermarking and removal mechanisms for AI-generated videos that lacked disclosure. In the United States, California’s Assembly Bill 972 has been used in litigation to penalise deepfake pornography where victims’ likenesses were manipulated without consent, while Texas’s House Bill No. 449 has been invoked in election-related cases to restrict AI-driven political misinformation campaigns. Singapore’s IMDA AI Verify Framework adopts a voluntary approach, encouraging platforms to assess explainability and bias. At the ATxAI conference, Minister Josephine Teo launched the AI Verify Foundation with global partners such as IBM, Microsoft, and Google to build open-source AI testing tools, showcasing a collaborative model India could emulate.

These frameworks demonstrate the urgent need for India to move towards a proactive, ex-ante regulatory model rather than relying on scattered and outdated provisions.

 

Policy Roadmap for India’s Deepfake Governance

  1. Defining and Labelling Deepfakes

India should introduce a statutory definition of deepfakes within the IT Act or the upcoming Digital India Act, explicitly recognising them as AI-generated synthetic media that impersonate individuals or misrepresent actions, with the potential to cause reputational, psychological, financial, or democratic harm. Alongside this, it should mandate clear, visible labelling and irreversible watermarking of synthetic content by platforms and creators, drawing from the approaches adopted by other jurisdictions mentioned earlier. 
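To make such a labelling mandate more than a cosmetic overlay, the disclosure can be bound cryptographically to the media file itself. The sketch below is a simplified, hypothetical illustration (not a scheme prescribed by any Indian statute or the jurisdictions cited above) of how a platform might attach a tamper-evident provenance tag: the label and the file’s content hash are signed together, so stripping the label or re-attaching it to different content becomes detectable. The signing key and function names are assumptions for illustration only.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key; a real deployment would use managed key storage.
SIGNING_KEY = b"platform-secret-key"

def make_provenance_tag(media_bytes: bytes, label: str = "AI-generated") -> dict:
    """Create a tamper-evident tag binding the disclosure label to the file's content."""
    content_hash = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"label": label, "sha256": content_hash}, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_provenance_tag(media_bytes: bytes, tag: dict) -> bool:
    """Reject the tag if the label was altered or attached to different content."""
    expected = hmac.new(SIGNING_KEY, tag["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag["signature"]):
        return False
    return json.loads(tag["payload"])["sha256"] == hashlib.sha256(media_bytes).hexdigest()

media = b"synthetic frame data"
tag = make_provenance_tag(media)
assert verify_provenance_tag(media, tag)          # intact label verifies
assert not verify_provenance_tag(b"edited", tag)  # tag re-used on other content fails
```

Industry efforts such as the C2PA content-credentials standard pursue the same goal with full cryptographic manifests; the point of the sketch is simply that a label detached from the content it describes is trivially removable, whereas a signed binding is auditable.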

  2. Strengthening Platform Accountability

Section 79 of the IT Act must be amended to move from the ‘actual knowledge’ standard upheld in Shreya Singhal v. Union of India to a ‘constructive knowledge’ standard, obligating platforms to proactively detect and remove harmful deepfakes rather than waiting for complaints or court orders. This change would encourage investment in robust AI detection tools, clear audit trails, and faster takedown processes.
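One concrete building block of such proactive detection, sketched here purely for illustration, is screening uploads against a registry of media already adjudicated as harmful deepfakes, in the spirit of existing industry hash-sharing programmes. The registry, function name, and decision strings below are hypothetical:

```python
import hashlib

# Hypothetical registry of SHA-256 digests of media already adjudicated as harmful deepfakes.
KNOWN_DEEPFAKE_HASHES = {
    hashlib.sha256(b"previously-removed-clip").hexdigest(),
}

def screen_upload(media_bytes: bytes) -> str:
    """Return a moderation decision for an upload, with an auditable reason string."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest in KNOWN_DEEPFAKE_HASHES:
        return "blocked: matches known deepfake hash " + digest[:12]
    return "allowed: no registry match"

print(screen_upload(b"previously-removed-clip"))  # exact copy of removed media is blocked
print(screen_upload(b"new holiday video"))        # unrelated media passes through
```

Exact hashing as shown is defeated by trivial re-encoding; production systems therefore use perceptual hashing and ML classifiers, which is precisely why a ‘constructive knowledge’ standard would push platforms to invest in those more robust tools.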

  3. Protecting Personality and Privacy Rights

The Delhi High Court has recently granted John Doe injunctions to protect the personality rights of public figures such as Ankur Warikoo, Anil Kapoor, and Vishnu Manchu against deepfakes. Such judicial interventions are, however, unsustainable in the long run because India lacks clear statutory protection of personality rights. To address this gap, the Copyright Act should be amended to recognise personality rights, as France has done under Article 9 of its Civil Code, which protects against the dissemination of a person’s image in any form. This would give individuals legal control over how their likeness, voice, and persona are used. The DPDP Act should also be invoked to penalise the non-consensual use of personal data to train generative AI models.

  4. Institutional Mechanisms and Capacity Building

India urgently requires the formation of a specialised Deepfake and Synthetic Media Task Force within the Ministry of Electronics and Information Technology. The task force should include technologists, policymakers, and legal experts to develop technical standards and effective detection benchmarks and to facilitate the swift removal of harmful content, especially during elections or public emergencies. At the same time, nationwide media literacy campaigns should be launched, with digital literacy integrated into educational curricula, community outreach programmes, and partnerships with civil society groups.

  5. Global Cooperation and Technological Investment

Since deepfakes are a transnational issue and, as the United Nations has noted, a threat to the world economy, India needs to participate actively in cross-border collaboration. The sharing of best practices and detection technologies, along with coordinated removal of synthetic content hosted outside its jurisdiction, should be the main objectives of such cooperation. India has shown its willingness to contribute to global AI governance by participating in forums like the Bletchley Park Declaration at the AI Safety Summit, and this momentum should be strengthened through continued international engagement. India’s involvement in the G20 AI Principles and the OECD AI framework further reflects its expanding influence on global AI governance. At the same time, India should invest in domestic AI research to develop detection tools that are culturally and linguistically tailored to its diverse population.

 

Way Forward and Conclusion

The Delhi High Court has repeatedly recognised deepfakes as an emerging societal threat, urging the Union Government to act swiftly, while the European Commission has also noted the inadequacy of existing Indian legal frameworks in dealing with such cyber offences. Although the Ministry of Electronics and Information Technology has issued advisories to social media platforms and promised new regulations, these steps must now be converted into a comprehensive and forward-looking framework. The recently passed U.S. Take It Down Act, enacted in response to mounting cases of schoolchildren and women being targeted with AI-generated intimate images, underscores how public outrage and survivor advocacy can drive decisive legislative action. With global deepfake incidents rising at 900% annually, India must shift from reactive to proactive approaches by defining deepfakes legally, mandating labelling, ensuring platform accountability, and recognising personality rights. A dedicated task force, investment in culturally relevant detection tools, cross-border cooperation, and media literacy campaigns are essential to build a rights-focused and resilient digital ecosystem.
