
The Ministry of Electronics and Information Technology (MeitY) has released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introducing a framework to identify and regulate AI-generated or synthetically generated information. It has invited public feedback on the amendments by November 6.
A key proposed change is the definition of “synthetically generated information”: any information “artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true.” With this definition, the government seeks to bring AI-generated content under the due diligence and takedown obligations that apply to unlawful online information.
The draft amendments make labelling and traceability mandatory for synthetic content. Intermediaries offering tools that create or modify digital media are required to ensure such information carries a visible or audible label or an embedded permanent metadata identifier that clearly marks it as AI-generated. For images or videos, the label must cover at least 10% of the surface area. For audio, it must be played during the first 10% of its duration.
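The 10% thresholds above are simple numeric tests. The sketch below (hypothetical helper functions, not part of any official or prescribed implementation; the draft rules specify only the thresholds, not any code) shows how a platform might check whether a visible or audible label meets them:

```python
# Hypothetical compliance checks for the draft's labelling thresholds.
# The 10% figures come from the draft amendments; the function names and
# interfaces here are illustrative assumptions, not an official API.

def visual_label_compliant(media_w: int, media_h: int,
                           label_w: int, label_h: int) -> bool:
    """True if the label covers at least 10% of the image/video surface area."""
    return (label_w * label_h) >= 0.10 * (media_w * media_h)

def audio_label_compliant(total_secs: float, label_start: float,
                          label_end: float) -> bool:
    """True if the audible label plays entirely within the first 10% of the clip."""
    window = 0.10 * total_secs
    return 0.0 <= label_start and label_end <= window

# A 1920x1080 video with a 640x360 overlay: 230400 / 2073600 ≈ 11.1% of the area.
print(visual_label_compliant(1920, 1080, 640, 360))   # True
# A 60-second clip whose label runs from 0s to 5s (first 10% = 6s).
print(audio_label_compliant(60.0, 0.0, 5.0))          # True
```

This treats the thresholds as straightforward area and duration ratios; the draft does not spell out edge cases such as partially transparent overlays or labels split across multiple segments.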
Intermediaries will be prohibited from suppressing or removing such identifiers, making it difficult to disguise the origin of AI-generated content.
For large social media platforms, referred to as significant social media intermediaries, the obligations are more elaborate. These companies will be required to ask users to declare whether their uploaded content is synthetically generated before publication. They must also deploy “reasonable and proportionate technical measures”, such as automated detection systems, to verify those declarations.
Verified or declared synthetic content must be clearly labelled or accompanied by a visible notice, allowing users to distinguish authentic media from manipulated material. Failure to comply could expose these intermediaries to loss of safe-harbour protections and other regulatory penalties.
The government has sought to protect intermediaries that act in good faith. A new proviso to Rule 3(1)(b) clarifies that if a platform removes or disables access to synthetic content as part of grievance redressal or reasonable efforts to prevent harm, it will not lose its safe-harbour protection under Section 79 of the Information Technology Act.
In its explanatory note, MeitY said the draft is designed to ensure an “open, safe, trusted and accountable Internet” amid growing concerns over videos, synthetic voices, and AI-generated misinformation, which can mislead users.
Dhruv Garg, partner at Indian Governance & Policy Project, said, “It is interesting to note that India has implicitly chosen to regulate the generative AI platforms as intermediaries, giving them plausible safe harbour protections. While some other jurisdictions have already established regulations around disclosures and labelling, it is essential that these requirements balance transparency with the need for scalability, innovation, and creative expression.”
MeitY said the proposed amendments will create “visible labelling, metadata traceability, and transparency for all public-facing AI-generated media,” and help “empower users to distinguish authentic from synthetic information.” It added that the obligations would apply only to publicly available or published content, not to private or unpublished material.
The explanatory note said the move was prompted by a surge in deepfake incidents in India and abroad, where fabricated videos and audio clips have been used to depict individuals making false statements, create non-consensual intimate imagery, and conduct fraud or impersonation.
MeitY noted that concerns over such content have also been raised in Parliament, prompting the ministry to issue advisories in recent years urging social media intermediaries to act against deepfake-related harms.