[Ritwik Sharma is a fourth-year B.A., LL.B. (Hons.) student at the Rajiv Gandhi National University of Law (RGNUL), Punjab. In this analysis, the author aims to address the escalating crisis of Non-Consensual Intimate Imagery (NCII) generated through AI-driven “nudifying” platforms. Supported by empirical data and visual mapping, the piece argues that current judicial responses, which often rely on notice-and-takedown mechanisms, are insufficient to address the permanent and systemic nature of deepfake harms.]
- INTRODUCTION
On 5th March 2026, the Punjab and Haryana High Court issued a notice to various Union ministries to consider urgent regulatory intervention to tackle AI-generated deepfakes. What makes this call to legislate significant is that it came after the notification of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (‘IT Rules, 2026’), under the Information Technology Act, 2000 (‘IT Act’), to address the same issue. On 24th February 2026, the Gujarat High Court had issued a similar notice to various Central and state authorities in a PIL seeking judicial intervention due to regulatory gaps in the IT Rules, 2026. Undoubtedly, the IT Rules, 2026 have significantly expanded the AI regulatory framework in India by bringing deepfakes within the definition of ‘synthetically generated information’ under Rule 2(1)(wa). However, this piece contends that such changes to the IT framework may not materially improve deepfake takedown litigation, since deepfake abuse is inherently a gendered issue that requires gender-specific solutions. It hypothesises that prevailing judicial attitudes towards deepfakes in India, which tend to make takedown litigation easier to access for men with commercial or political fame, dilute the efficacy of takedown provisions. It relies on a 2023 study showing that nearly every victim of deepfakes is female, and that close to all such deepfakes are sexual in nature, to show that female petitioners in India remain a small minority despite being the primary victims of deepfakes. To support this claim, this piece undertakes an empirical analysis of all judgements in India, as of 9th February 2026, involving a claim to remove deepfakes, as presented below in Table 1.
Based on this analysis, the piece offers more neutral judicial rationales to justify takedown orders against unauthorised deepfakes, such as the violation of ‘digital personhood’ and defamation, instead of the violation of personality and publicity rights, which currently remain the most frequent grounds.
| No. | Judgement Title | High Court | Year | Sex (M/F) | AI Involved (Yes/No) | Type of Deepfake | Primary Route of Violation |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1. | Akira Desai v. Sambhawaami Studios LLP | Bombay | 2026 | M | Yes | Commercial | Personality and publicity rights |
| 2. | Ankur Warikoo v. John Doe | Delhi | 2025 | M | No | Commercial | Personality and publicity rights |
| 3. | Sadhguru Jagadish Vasudev v. Igor Isakov | Delhi | 2025 | M | Yes | Commercial | Personality and publicity rights |
| 4. | Shilpa Shetty Kundra v. Getoutlive.in | Bombay | 2025 | F | Yes | Sexual | Privacy under Article 21 |
| 5. | Akshay Hari Om Bhatia v. John Doe | Bombay | 2025 | M | Yes | Commercial; Political | Personality and publicity rights, and a mention of Privacy under Article 21 |
| 6. | Suniel V Shetty v. John Doe S Ashok Kumar | Bombay | 2025 | M | Yes | Commercial; Sexual | Personality and publicity rights |
| 7. | Sudhir Chaudhary v. Meta Platforms Inc | Delhi | 2025 | M | Yes | Political | Personality and publicity rights |
| 8. | Ranganthan Madhavan v. G. Fimlz Studioz | Delhi | 2025 | M | Yes | Commercial; Sexual | Personality and publicity rights |
| 9. | Chandrashekhar Bhimsen Naik v. State of Maharashtra | Bombay | 2025 | M | No | Commercial | Criminal Law (not relevant to this study) |
| 10. | Ajay v. Artists Planet | Delhi | 2025 | M | Yes | Commercial; Sexual | Personality and publicity rights |
| 11. | Ravi Shankar v. John Doe(s)/Ashok Kumar(s) | Delhi | 2025 | M | No | Commercial | Personality and publicity rights |
| 12. | Aishwarya Rai Bachchan v. Aishwaryaworld.com | Delhi | 2025 | F | Yes | Commercial; Sexual | Personality and publicity rights |
| 13. | Raj Shamani v. John Doe/Ashok Kumar | Delhi | 2025 | M | Yes | Commercial | Personality and publicity rights |
| 14. | Kamya Buch v. JIX5A | Delhi | 2025 | F | Yes | Sexual | Defamation |
| 15. | T.V. Today Network Ltd. v. Google LLC | Delhi | 2025 | F | No | Commercial | Personality and publicity rights |
| 16. | Abhishek Bachchan v. Bollywood Tee Shop | Delhi | 2025 | M | Yes | Commercial; Sexual | Personality and publicity rights |
| 17. | Konidala Pawan Kalyan v. Ashok Kumar John Doe | Delhi | 2025 | M | Yes | Commercial, Political | Personality and publicity rights |
| 18. | Akkineni Nagarjuna v. WWW.BFXXX.ORG | Delhi | 2025 | M | Yes | Commercial; Sexual | Personality and publicity rights |
| 19. | Nandamuri Taraka Rama Rao v. Ashok Kumar | Delhi | 2025 | M | Yes | Commercial | Personality and publicity rights |
| 20. | Ilaiyaraaja v. John Doe Ashok Kumar | Madras | 2025 | M | No | Commercial | Personality and publicity rights |
| 21. | National Stock Exchange of India Ltd. v. Meta Platforms, | Delhi | 2024 | M | No | Commercial | Personality and publicity rights |
| 22. | Arijit Singh v. Codible Ventures LLP | Bombay | 2024 | M | Yes | Commercial | Personality and publicity rights |
| 23. | Gaurav Bhatia v. Naveeen Kumar | Delhi | 2024 | M | Yes | Political | Privacy under Article 21 |
| 24. | Akshay Tanna v. John Doe | Delhi | 2024 | M | Yes | Commercial | Personality and publicity rights |
| 25. | Anil Kapoor v. Simply Life India | Delhi | 2023 | M | Yes | Commercial; Sexual | Privacy under Article 21, along with Personality and publicity rights |
Table 1: An exhaustive list of successful orders by Indian Courts directing takedown of deepfakes uploaded online.
- METHODOLOGY OF DATA COLLECTION AND IMPLICATIONS THROUGH GRAPHICAL ANALYSIS
For this empirical study, the primary tool of data collection was the online legal database and research platform SCCOnline. The judgements in Table 1 were collected using the platform’s Boolean search filter for terms such as “deepfake”, “deep fake”, “artificial intelligence”, and “AI tools”, across High Court and Supreme Court judgements from 1900 to 2026. Judgements containing these keywords but lacking a petitioner’s prayer for the takedown of deepfake media of the petitioner were filtered out. Thereafter, the name of each case with its SCC citation, the High Court making the decision, the year of the judgement, and the sex of the aggrieved party were noted. Subsequently, it was analysed whether the facts of the case indicated that the deepfakes were created using AI tools, and the intended purpose of publication of the deepfake was noted and categorized into three types:
- Non-consensual sexually explicit deepfakes;
- Deepfakes intended to have a political consequence;
- Deepfakes intended to exploit some commercial benefit to the creator or loss to the aggrieved party.
Finally, the focus of the court in analysing the effect of the deepfake, and the area of rights that the court deemed to be violated by the publication of the deepfakes were studied. Based on this research methodology, the analysis of the data in Table 1 reveals the following.
First, the Supreme Court of India, as of 9th February 2026, has not dealt with any case pertaining to the takedown of deepfakes. As Figure 1 shows, the absence of Supreme Court cases reveals that High Court takedown orders have not been appealed to the Supreme Court and remain largely unchallenged. The only High Courts in the country where deepfake-related cases have been filed are the Delhi High Court, the Bombay High Court and the Madras High Court. This suggests that High Court deepfake takedown orders, which rest on prima facie legal violations, are unlikely to be appealed regardless of the legal route undertaken to avail remedies. Moreover, regarding the legal fora selected by aggrieved parties to get their deepfakes removed, the Delhi High Court has evidently emerged as the most attractive forum, followed by the Bombay High Court.
Figure 1: (a) Pie-chart showing the most chosen courts for filing takedown claims for deepfakes in India; (b) bar graph showing the number of cases related to deepfakes by year.
Second, no High Court judgement before 2023 mentioned deepfakes, which suggests that the phenomenon is new to the Indian litigation arena. Further, the rise in the number of deepfake-related cases from 2023 to 2025 is extremely sharp: from just 4 cases in 2024, 2025 recorded 19 such cases, a staggering rise of 375% in a single year. This trajectory suggests that the current year is likely to witness at least as many deepfake cases, if not more. Given the notification of the IT Rules, 2026, the likelihood of litigation is only higher, since AI-generated deepfakes now have express statutory recognition under Rule 2(1)(wa) of the IT Rules, 2026.
Third, the bar graph in Figure 1 reveals a sudden rise in the number of cases in 2025, which can be attributed to easier public access to AI tools from late 2023. Similarly, from Figure 2, it can be observed that a large majority of deepfake cases in India involve AI-generated deepfakes. Interestingly, women form only a small minority, i.e., 16% of all petitioners who have ever filed a takedown suit for deepfakes, while men form an overwhelming majority. Further, Kamya Buch v. JIX5A (‘JIX5A’) and Akshay Tanna v. John Doe (‘Akshay Tanna’) are the only two cases involving non-celebrity petitioners, forming just 8% of all cases. Notably, the ratio of male to female non-celebrity petitioners is equal, implying that both classes are underrepresented in deepfake takedown litigation, with non-celebrity women remaining the most underrepresented category despite being the most frequent victims.
Figure 2: (a) Pie-chart indicating the percentage of cases where deepfakes were generated using AI; (b) pie-chart showing the sex ratio of petitioners in such cases.
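As a rough cross-check (a sketch for the reader, not part of the study’s methodology), the headline figures reported above — the female share of petitioners, the share of AI-generated deepfakes, the yearly case counts and the year-on-year rise — can be re-derived from Table 1. Each tuple below is a manual transcription of one row of the table as (year, sex, AI-generated?):

```python
from collections import Counter

# Rows 1-25 of Table 1, transcribed as (year, sex, ai_generated).
cases = [
    (2026, "M", True),                      # 1.  Akira Desai
    (2025, "M", False), (2025, "M", True),  # 2-3
    (2025, "F", True),  (2025, "M", True),  # 4-5
    (2025, "M", True),  (2025, "M", True),  # 6-7
    (2025, "M", True),  (2025, "M", False), # 8-9
    (2025, "M", True),  (2025, "M", False), # 10-11
    (2025, "F", True),  (2025, "M", True),  # 12-13
    (2025, "F", True),  (2025, "F", False), # 14-15
    (2025, "M", True),  (2025, "M", True),  # 16-17
    (2025, "M", True),  (2025, "M", True),  # 18-19
    (2025, "M", False),                     # 20. Ilaiyaraaja
    (2024, "M", False), (2024, "M", True),  # 21-22
    (2024, "M", True),  (2024, "M", True),  # 23-24
    (2023, "M", True),                      # 25. Anil Kapoor
]

by_year = Counter(year for year, _, _ in cases)
female_share = sum(1 for _, sex, _ in cases if sex == "F") / len(cases)
ai_share = sum(1 for _, _, ai in cases if ai) / len(cases)
growth_2024_to_2025 = (by_year[2025] - by_year[2024]) / by_year[2024]

print(by_year)  # Counter({2025: 19, 2024: 4, 2026: 1, 2023: 1})
print(f"{female_share:.0%}, {ai_share:.0%}, {growth_2024_to_2025:.0%}")
# 16%, 76%, 375%
```

This reproduces the yearly distribution in Figure 1(b), the 16% female share in Figure 2(b), and the 375% rise from 2024 to 2025 discussed above.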
Fourth, despite personality and publicity rights not enjoying express statutory recognition in India, Courts have invoked them to justify takedown orders in nearly every case; the notable exceptions are JIX5A and Chandrashekhar Bhimsen Naik v. State of Maharashtra. Evidently, these cases involve ordinary petitioners with no commercial interests whatsoever. Akshay Tanna is the only case which involves a non-celebrity whose commercial interests have been recognized.
Last, from Figure 3, it can be observed that most of these cases involve deepfakes of a commercial nature, and the most common judicial route to justify removing them is the infringement of personality and publicity rights. In the context of this research, the judicial rationales in cases involving purely political deepfakes can be equated with those involving commercial deepfakes: in both such cases, i.e., Sudhir Chaudhary v. Meta Platforms Inc and Gaurav Bhatia v. Naveeen Kumar, the Delhi High Court found the petitioners to be well-known enough to justify the takedown of the deepfakes to prevent harm to their public image.
Therefore, the commercial aspect can be further explained by the fact that most petitioners in Table 1 are male actors or politicians, i.e., male celebrities. This strongly suggests that takedown litigation is far more accessible to men, particularly those with commercial or political influence, than to women. Not only do non-celebrity women have disproportionately lower access to deepfake takedown litigation, they also have a substantially lower chance of success, since the primary justification for takedown is the violation of personality and publicity rights, which are enjoyed only by well-known individuals who can prove commercial harm through the publication of their deepfakes.
Figure 3: (a) Stacked bar graph showing the legal route of violation invoked in Indian deepfake litigation by gender and type of deepfake; (b) Venn-diagram showing the occurrence of deepfake cases categorized under commercial, political and sexual deepfakes.
- DIGITAL PERSONHOOD AND DEFAMATION
From the patterns highlighted in the previous section, two legal challenges can be identified: (a) High Courts prioritize commercial rights such as personality and publicity rights over the right to sexual privacy and dignity under Article 21; and (b) this dominant judicial reasoning favours male celebrities at the expense of women and non-celebrities, who are consequently less likely to succeed in deepfake takedown cases. Given that there is a legal basis to exclude non-celebrities from claiming a violation of personality rights, as in the Punjab & Haryana High Court’s decision in T-Series (Super Cassettes) Industries Ltd. v. Dreamline Reality Movies Pvt. Ltd. (‘Dreamline Reality’), non-celebrities, and particularly female non-celebrities, may have no legal remedy for the complete, permanent removal of their deepfakes from the internet. Since the IT Rules, 2026 provide no mechanism for such permanent removal beyond a complaint-based grievance redressal system that can only address individual instances of deepfake publication, I suggest revisiting existing judgements which offer a legal route to takedown other than personality and publicity rights.
Perhaps the most crucial judgement in Table 1 that non-celebrity women can place reliance on in their takedown pleas is Shilpa Shetty Kundra v. Getoutlive.in (‘Getoutlive.in’). Despite the plaintiff being a prominent female celebrity, the Bombay High Court in paragraph 12 observed that AI-generated content that reconstructs a person’s identity can be considered a violation of personality rights through the violation of their ‘digital personhood.’ This observation is significant for two reasons. First, the Court does not use commercial harm as the yardstick for determining whether a violation of personality rights has taken place. The Court’s express recognition, in paragraph 13, of the misuse of AI to malign a woman’s dignity makes this case a solid authority for all female victims of sexual deepfakes. Second, this is the only judgement in India which expressly recognizes ‘digital personhood’ as a concept and treats its violation as a violation of personality rights. The judgement can be construed to mean that personality rights in cyberspace are not limited to commercial harm alone, and are inclusive of digital identity.
Another outlier is JIX5A, where the Delhi High Court granted ad-interim relief simply because the AI-generated deepfakes of the non-celebrity female petitioner were “deplorable” and “defamatory” due to their sexually explicit nature. Without naming Article 21, the Court considered such publication to be a violation of the petitioner’s fundamental rights. This step is significant because such judicial reasoning protects an individual’s identity and privacy inherently, without the need to prove any form of commercial harm. Despite the petitioner being a non-celebrity woman, the Court did not make any gendered observation, which makes this case the best-placed authority among all cases in Table 1 for male non-celebrities to rely on in their deepfake takedown pleas. The gender-neutral treatment of defamation as a ground for takedown makes this case exceptionally significant as a precedent, particularly because it shows that a successful deepfake takedown plea is possible without celebrity status.
Finally, the judicial approach in Akshay Hari Om Bhatia v. John Doe of recognizing a violation of Article 21, despite primarily grounding the takedown order in the violation of personality and publicity rights, is a welcome step. It implies that the two are not mutually exclusive, and that the violation of privacy is a broader concern. This case can thus be instructive for non-celebrities who cannot prove commercial harm. Similarly, Akshay Tanna can be of immense value for non-celebrity individuals who are victims of deepfakes impersonating their identity but cannot sufficiently prove commercial harm. There, the Court was satisfied that the disrepute caused to the petitioner by the misleading deepfake was sufficient to prove a violation of personality rights. Such judicial positions favour non-celebrity service professionals whose digital persona is key to their careers.
- THE WAY FORWARD
At a time when AI is evolving faster than Parliament’s ability to regulate it, owing to the pacing problem, it is expedient to protect all individuals from its ill effects. By applying facially gender-neutral, commercially grounded rationales alone to justify takedown orders, the Courts risk limiting their precedential value to a select few who already enjoy socio-economic privileges reflected in their capacity to litigate. I contend that the only way to bridge the gender gap between male and female petitioners in deepfake takedown cases is for Courts to make the right to takedown of non-consensually published deepfakes more accessible to non-celebrity women by grounding their orders in rationales available to the ordinary woman. This can be done by basing takedown orders either in the violation of the right to privacy under Article 21 instead of personality and publicity rights, or in the violation of ‘digital personhood’ as done in Getoutlive.in. This approach automatically benefits non-celebrity men along with all women. The way forward is measuring harm not from a commercial standpoint, but from the perspective of the legal injury caused by another’s unauthorised reconstruction of an individual’s digital identity in cyberspace.
Finally, in line with JIX5A, I urge the Courts to consider defamation as a fool-proof ground for taking down all non-consensual deepfakes of a sexual nature. Given that Rule 3(1)(b)(v) of the IT Rules, 2026 already makes intermediaries duty-bound to prevent their platforms from hosting misinformation, I suggest that the position in JIX5A should extend to cases involving non-sexual deepfakes as well: disrepute to the aggrieved party is still caused, to the point of being defamatory, where the publication of the deepfake inflicts commercial or political harm. Hence, wherever the petitioner is a non-celebrity, the violation of the right to privacy under Article 21 should be a sufficient ground to justify the takedown of deepfakes. Likewise, for celebrities who are victims of commercial or political deepfakes, defamation can always be a valid ground alongside the violation of personality and publicity rights to justify takedown orders.