India’s Synthetic Media Rules: Turning Platforms into Speech Police

[Rahul P. is a graduate research fellow at the JSW Centre for the Future of Law at NLSIU Bengaluru. In this piece, the author interrogates the escalating tension between the regulation of “Synthetically Generated Information” (SGI) and the maintenance of platform neutrality, specifically through the lens of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026. He argues that by leveraging safe harbour provisions to mandate proactive monitoring and persistent metadata, the new rules risk transforming intermediaries into a de facto “speech police.” This shift toward automated filtering and pre-emptive takedowns, he contends, creates a significant threat of over-compliance that could undermine free expression and privacy within India’s digital ecosystem.]
India’s new synthetic media regulations aim to address the growing harms of AI-generated content in the digital public sphere. But in doing so, they risk transforming online platforms into a de facto speech police, with profound implications for free expression and privacy.
On February 10, 2026, the Government of India notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, creating a legal framework for “synthetically generated information” (SGI), a category that covers AI-generated and AI-modified audio-visual material, including deepfakes. The amended rules mark India’s first attempt to govern AI-generated media and address growing concerns about the real-world harms caused by AI-manipulated content.
Deepfakes and other AI tools are already used to create non-consensual intimate imagery and to carry out financial fraud and impersonation scams. Indian courts have also urged the government to draft laws regulating AI-generated content, providing the immediate context for the amended rules.
At the same time, regulating AI-generated content raises acute constitutional challenges. Draft amendment rules released earlier, on October 22, 2025, drew sharp criticism from free speech advocates. While the notified rules address some of those objections, major concerns remain, including dramatically reduced compliance timelines and duties on intermediaries to label all SGI and embed persistent metadata in it.
These duties are framed as due diligence conditions for safe harbour under Section 79 of the IT Act. Platforms are, of course, meant to help control the harms of AI-generated content; but by using safe harbour as a lever, the amendments risk transforming them into proactive gatekeepers of internet speech. That shift invites over-compliance by intermediaries, in the form of over-censorship, pre-emptive takedowns, and automated filtering, all of which will have a profound impact on free speech, privacy, and platform neutrality.
What the Rules Require
At the heart of the amendment is the definition of “synthetically generated information”, which means “audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event capturing any audio-visual content.”
The definition excludes benign uses: editing, formatting, and enhancement that do not misrepresent the context of the underlying audio or video; the use of computer resources to improve accessibility or quality; and the good-faith creation of documents. With SGI so defined, the rules impose a cascade of obligations on intermediaries:
Mandatory Labelling and Provenance.
Rule 3(3)(a)(ii) provides that intermediaries that enable the generation or publication of synthetically generated content must label all lawful SGI in a way that users can easily identify. All SGI must also carry embedded permanent metadata that enables identification of the content’s origin. Significant social media intermediaries (SSMIs) have the additional obligation, under Rule 4(1A), to obtain a declaration from users on whether uploaded content is SGI and to verify that declaration using automated tools before publishing it.
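To make the provenance obligation concrete, the sketch below shows one way a platform might bind a disclosure label and origin data to a piece of content. This is a minimal illustration, not what the rules prescribe: the rules name no format, and the manifest fields here are assumptions, loosely modelled on content-credential schemes such as C2PA.

```python
# Illustrative provenance manifest for SGI. The field names are assumptions;
# the 2026 rules do not prescribe any particular metadata format.
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(media_bytes: bytes, generator: str) -> dict:
    """Bind a user-visible label and origin data to the content's hash."""
    return {
        "label": "synthetically-generated",  # the disclosure users must see
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,              # originating tool or service
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    media = b"...raw audio/video bytes..."
    manifest = build_provenance_manifest(media, generator="example-ai-tool")
    # A platform might embed this in container metadata or ship it alongside
    # the file; persistence across re-encoding is the hard, unsolved part.
    print(json.dumps(manifest, indent=2))
```

Even this toy version exposes the difficulty the rules gloss over: a hash-bound manifest survives only as long as the file is unchanged, so making metadata “permanent” across re-encoding, cropping, or screenshotting remains an open technical question rather than a settled capability.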
Content Takedown and Reporting.
Rule 3(1)(c) and Rule 3(1)(ca) also impose obligations on intermediaries to remove and report unlawful content. Intermediaries must immediately disable access to, or remove, SGI that violates platform rules or any law. They must also report content connected with the commission of an offence under any applicable law to the appropriate authority.
The amendment drastically shortens the takedown window after a government or court order, from 36 hours to 3 hours. Intermediaries must also resolve complaints within 7 days, down from 15; for removal requests concerning certain sensitive matters, the resolution window is now 36 hours, half the earlier 72.
Furthermore, Rule 3(1)(ca)(ii)(III) requires platforms to identify a user accused of breaching the rules and to share that user’s identity with the complainant.
Notifying Users.
Rule 3(1)(c) obliges intermediaries to notify users at least once every three months that unlawful content will be met with immediate removal or account termination. Users must also be informed that certain breaches will be reported to the appropriate authority and can attract criminal liability.
Friction with the Safe Harbour Provision
In India, Section 79 of the Information Technology Act, 2000 provides a “safe harbour” that shields intermediaries, such as social media platforms and service providers, from liability for third-party content, provided they meet due diligence requirements. An intermediary must not initiate the transmission, select its recipients, or modify the content in any way, and it must remove unlawful content upon receiving “actual knowledge”. In Shreya Singhal v. Union of India, the Supreme Court limited “actual knowledge” to an order from the government or a competent court. In effect, intermediaries are supposed to be passive conduits, not active adjudicators.
The amended rules upset this arrangement. Adding permanent metadata and labels can itself be seen as modifying the content: platforms are no longer simply transmitting user-generated material but actively transforming it. The further obligation on SSMIs to pre-verify audio-visual content before publication goes beyond transformation, drawing intermediaries into pre-emptive content removal and eroding the very neutrality on which safe harbour is built.
The government has stated that intermediaries must follow the new rules as due diligence to retain safe harbour, and has clarified that disabling or removing SGI in accordance with the rules “shall not amount to violation of Section 79 conditions”. In practice, any slip, be it a missing watermark or a delayed takedown, could carry legal consequences. This is problematic in at least two ways. First, present-day detection tools cannot reliably identify AI-generated audio or video, so the shortcomings of the technology alone can cost a platform its safe harbour. Second, the shortened timelines push intermediaries to remove content pre-emptively rather than assess it with due care.
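The interaction of unreliable detectors with liability pressure can be made concrete with a toy simulation. The numbers below are invented for illustration and are not drawn from any real detector: two overlapping score distributions stand in for genuine and synthetic content, and the platform picks a removal threshold.

```python
# Toy simulation: an imperfect SGI detector under liability pressure.
# All scores are synthetic Gaussians; no real detector is modelled.
import random

random.seed(42)
real = [random.gauss(0.35, 0.15) for _ in range(10_000)]       # genuine content
synthetic = [random.gauss(0.65, 0.15) for _ in range(10_000)]  # SGI

def rates(threshold: float) -> tuple[float, float]:
    """Fraction of genuine content wrongly removed / SGI wrongly kept."""
    false_pos = sum(s >= threshold for s in real) / len(real)
    false_neg = sum(s < threshold for s in synthetic) / len(synthetic)
    return false_pos, false_neg

for t in (0.8, 0.6, 0.4):
    fp, fn = rates(t)
    print(f"threshold={t:.1f}  genuine removed={fp:.1%}  SGI missed={fn:.1%}")
```

Raising the threshold to protect lawful speech lets most SGI through; lowering it to avoid liability sweeps up a large share of genuine content. Because losing safe harbour punishes only the misses, the rational platform drifts toward the low threshold, which is precisely the over-removal dynamic described above.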
The use of Section 79(3)(b) of the IT Act as a parallel content removal mechanism, rather than the dedicated blocking regime under Section 69A, has been criticised for several years. Now, with the Sahyog portal (a centralised portal for publishing takedown notices) placed on a legal footing, and with Rule 3(1)(d) introduced by the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, notified on October 22, 2025, the government and its agencies can issue removal notices directly to intermediaries.
The 2026 rules shrink the compliance window for these orders to a mere 3 hours, leaving intermediaries no realistic opportunity to scrutinise an order before acting on it. By tying immunity to these duties, the law effectively strong-arms platforms into aggressive content policing.
The Privacy and Free Speech Toll
The amended rules also raise constitutional issues of free speech and privacy. Embedding a unique identifier in every piece of synthetically generated content makes it traceable and strips users of online anonymity. It is partly because of this anonymity that the internet has remained a space for dissent, satire, and public participation. Once every SGI becomes traceable, users will fear identification, scrutiny, and legal consequences: a political cartoonist who uses AI tools must now anticipate legal scrutiny and public targeting from the moment the work is published. Under these circumstances, the most rational response is often self-censorship, and users may stop engaging with politically sensitive issues altogether.
The rules deepen these privacy concerns by requiring platforms to disclose the identity of an accused user to the complainant. Victim redressal is important, especially in politically or religiously sensitive matters, but disclosure without procedural safeguards can expose users to serious safety risks, and false complaints could be weaponised simply to unmask critics. The result is an atmosphere in which fear of exposure discourages users from engaging in public discourse.
The rules also require platforms to issue a notice to their users every three months, warning them of the consequences of dishonest use of AI-generated content. Over time, this regulatory signalling will breed anticipatory compliance, leading users to avoid even experimental uses of AI. These periodic notices act more like behavioural warnings than information bulletins: the law, while prohibiting unlawful content, now also creates an atmosphere of restraint through constant messaging.
Turning platforms into round-the-clock monitors as part of safe harbour due diligence incentivises them to adopt an aggressive moderation model. Content that is politically sensitive or critical of authority is the most likely to be pre-emptively removed. And because SSMIs must verify content before publication, “free speech” is effectively published only after approval by these private entities. These gatekeeping obligations will push users to self-censor rather than risk rejection or delay in publishing their content.
Striking a Balance
The newly amended rules represent a genuine attempt by Indian authorities to address the harms of an AI-saturated cyberspace, and it is true that intermediaries must take an active part in that effort. But while imposing obligations on intermediaries sounds good on paper, the current iteration of the rules fundamentally alters the safe harbour framework and encroaches on the constitutional rights to free speech and privacy.
Linking safe harbour to these obligations converts intermediaries into active gatekeepers of internet speech. Faced with the threat of losing immunity, platforms are structurally incentivised to err on the side of caution, which results in over-removal, pre-emptive takedowns, and filtering. The truncated timelines amplify this by leaving little room for oversight, inviting erroneous censorship. Coupled with the labelling and pre-verification obligations, this amounts to a regime of prior restraint with a chilling effect on free speech.
A more proportionate regulatory strategy would adopt a risk-based approach focused on high-risk uses of AI-generated content. Labelling and traceability should be narrowly tailored, with procedural safeguards for users, and compliance timelines should be realistic: a faster decision is not always a better one.
No regulation should come at the cost of turning the internet into a zone of constant monitoring. If intermediaries are converted into speech censors, the casualty will be democratic discourse itself. The task before lawmakers, then, is not simply to regulate synthetic media, but to do so in a way that preserves the constitutional promises of free speech and privacy.