Highlights:
- AI-generated and deepfake content must be clearly labeled
- Objectionable content must be removed within three hours
- New rules take effect February 20, 2026
- Platforms must verify user disclosures on AI use
- Safe harbor protections remain for compliant intermediaries
The Indian government is tightening oversight of AI-generated and deepfake content, placing new legal obligations on social media platforms to label synthetic media and remove harmful posts within hours. The updated rules, notified on February 10 and set to take effect on February 20, 2026, amend India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Under the revised framework, social media companies and other digital intermediaries must clearly and prominently label content created or altered using artificial intelligence. This includes AI-generated or AI-modified audio, video, images, and audiovisual material that appears realistic and could mislead users. The government refers to such material as “synthetically generated information,” or SGI.
A major change is the accelerated timeline for content moderation. In certain cases involving unlawful or harmful material, platforms must now remove content within three hours of receiving a lawful order or a valid complaint. This is a sharp reduction from the earlier 36-hour window. Other response timelines have also been shortened, with deadlines reduced from 15 days to seven days and from 24 hours to 12 hours, depending on the nature of the violation.
Mandatory labeling is at the core of the new rules. Platforms that host or distribute AI-generated content must ensure it is clearly disclosed to users and cannot be misrepresented as authentic. Where technically feasible, intermediaries are also required to embed persistent metadata or provenance markers, such as unique identifiers, to help trace synthetic content back to its source. The rules explicitly prohibit platforms from allowing these labels or metadata to be removed or tampered with.
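The rules do not prescribe a specific technical standard for these markers, but a persistent provenance record of the kind described above might look roughly like the sketch below. The field names, the `ProvenanceRecord` structure, and the hashing scheme are illustrative assumptions for this article, not anything specified in the amended rules.

```python
import hashlib
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ProvenanceRecord:
    """Illustrative provenance marker for synthetically generated information (SGI).

    The rules call for persistent, tamper-resistant labels where technically
    feasible; the exact fields here are assumptions made for this sketch.
    """
    content_id: str       # unique identifier attached to the synthetic media
    sgi_label: str        # human-readable disclosure shown to users
    generator: str        # tool or model declared as having produced the content
    created_at: str       # timestamp of labeling
    content_sha256: str   # hash binding the record to the exact bytes labeled


def label_synthetic_media(media_bytes: bytes, generator: str) -> ProvenanceRecord:
    """Build a provenance record for a piece of AI-generated media."""
    return ProvenanceRecord(
        content_id=str(uuid.uuid4()),
        sgi_label="This content is synthetically generated (SGI).",
        generator=generator,
        created_at=datetime.now(timezone.utc).isoformat(),
        content_sha256=hashlib.sha256(media_bytes).hexdigest(),
    )


def is_tampered(media_bytes: bytes, record: ProvenanceRecord) -> bool:
    """Detect whether labeled content no longer matches its provenance record."""
    return hashlib.sha256(media_bytes).hexdigest() != record.content_sha256


if __name__ == "__main__":
    fake_video = b"...synthetic media bytes..."
    record = label_synthetic_media(fake_video, generator="hypothetical-model-v1")
    print(json.dumps(asdict(record), indent=2))
    print("tampered:", is_tampered(fake_video + b"edit", record))
```

Binding the label to a content hash is one way a platform could make tampering detectable, which is the property the prohibition on label removal appears to be aiming at.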
To strengthen enforcement, the government is also shifting greater responsibility onto platforms. Social media companies must obtain user declarations at the time of upload, asking whether content has been generated or altered using AI tools. Platforms are expected to deploy reasonable and proportionate technical measures, including automated detection systems, to verify the accuracy of these declarations. Failure to exercise due diligence could expose intermediaries to legal liability.
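In practice, the declaration-and-verification step could sit in an upload pipeline along the lines of the following sketch. The `user_declared_ai` flag, the detector threshold, and the `classify_synthetic` stub are purely illustrative assumptions; the rules only require "reasonable and proportionate" technical measures and do not name any particular detector or score.

```python
from dataclasses import dataclass

# Illustrative score above which an automated detector's output is treated as
# indicating synthetic content; the rules do not set any such number.
DETECTOR_THRESHOLD = 0.8


@dataclass
class UploadSubmission:
    media_bytes: bytes
    user_declared_ai: bool  # declaration collected from the user at upload time


def classify_synthetic(media_bytes: bytes) -> float:
    """Stand-in for an automated synthetic-media detector.

    A real deployment would call a trained classifier here; this stub only
    marks where such a check would sit in the pipeline.
    """
    return 0.0  # placeholder score


def moderate_upload(submission: UploadSubmission) -> str:
    """Decide how to label an upload from the user declaration and the detector."""
    detector_score = classify_synthetic(submission.media_bytes)
    detected_ai = detector_score >= DETECTOR_THRESHOLD

    if submission.user_declared_ai or detected_ai:
        # Either the user disclosed AI use or the detector flagged it:
        # publish only with a prominent SGI label attached.
        return "publish_with_sgi_label"
    return "publish_unlabeled"


if __name__ == "__main__":
    print(moderate_upload(UploadSubmission(b"clip", user_declared_ai=True)))
```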
The amendments further clarify that AI-generated content used for illegal purposes will be treated the same as other unlawful material. This includes synthetic content involving child sexual abuse material, obscenity, impersonation, false electronic records, fraud, or content linked to weapons, explosives, or other criminal activities. Platforms are required to prevent their services from being used to create or distribute such material.
The government says the changes are aimed at addressing the growing misuse of deepfakes and AI tools for misinformation, harassment, and fraud, while ensuring faster action against harmful content. At the same time, officials have sought to reassure technology companies that safe harbor protections remain intact. Intermediaries will continue to receive protection under Section 79 of the IT Act as long as they comply with the revised rules and act in good faith when removing or restricting access to synthetic content, including through automated tools.
Together, the new measures mark one of India’s most comprehensive efforts yet to regulate AI-generated content, signaling a tougher stance on deepfakes while balancing platform accountability with continued legal protections.