SRINAGAR: The Ministry of Electronics and Information Technology (MeitY) on Tuesday notified a new regulatory framework declaring certain categories of AI-generated or synthetically generated information (SGI) illegal under Indian law.

The framework covers sexual abuse material, non-consensual intimate images, obscene or sexually explicit material, and fake documents or electronic records, as well as AI-generated content used for fraud, harassment, child abuse, misinformation or other criminal activity.

The framework has been introduced through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Under the amended rules, illegal AI-generated content will be treated on par with other unlawful online material, and violations may attract criminal action under laws including the Bharatiya Nyaya Sanhita, 2023.

The new rules require all AI-generated content to be clearly and prominently labelled as “synthetically generated”. Such labels must be easily noticeable and include permanent metadata or technical markers, along with a unique identifier linking the content to the platform or tool used to create it. Users will not be permitted to remove or conceal these labels, ensuring transparency for viewers and listeners.
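To illustrate what such a marker might look like in practice, the following minimal Python sketch embeds a synthetic-content label, a tool attribution and a unique identifier as textual metadata in a PNG image. The field names, the tool_id parameter and the choice of PNG text chunks are illustrative assumptions only; the notified rules do not prescribe a specific format.

```python
# Illustrative sketch only: the metadata field names below are hypothetical,
# not prescribed by the amended IT Rules.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, tool_id: str) -> str:
    """Embed a synthetic-content label and a unique identifier in PNG metadata."""
    marker = str(uuid.uuid4())  # unique identifier linking content to the tool
    meta = PngInfo()
    meta.add_text("SyntheticallyGenerated", "true")  # the label flag
    meta.add_text("GeneratingTool", tool_id)         # platform/tool attribution
    meta.add_text("ContentIdentifier", marker)
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)
    return marker
```

Plain text chunks of this kind can be stripped, so real deployments aiming at the "permanent" markers the rules describe would more plausibly rely on tamper-evident provenance standards such as C2PA.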

Platforms have also been directed to deploy reasonable technical safeguards to prevent the creation and circulation of illegal synthetic content, including child sexual abuse material, forged documents and deceptive content falsely depicting real persons or events.

The move follows recent controversy surrounding Grok, the AI chatbot built by Elon Musk's xAI and integrated into X, which came under scrutiny in India for allegedly generating obscene and sexualised images involving women and minors.

As per the amended rules, platforms such as X, Meta's Facebook and Instagram, and other content-hosting services cannot claim immunity on the ground that AI-generated content falls outside legal oversight. They are required to remove or block illegal content within prescribed timelines to retain safe harbour protection under Section 79 of the IT Act.

Social media platforms must also periodically remind users—at least once every three months—of their legal responsibilities, including warnings that accounts may be suspended or terminated for violations and that sharing illegal content could result in fines or imprisonment.

Special disclosure requirements have been introduced for platforms offering AI tools such as ChatGPT, Gemini and Grok. These platforms must inform users that misuse of AI systems can lead to punishment under criminal law, child protection statutes, election laws and laws governing obscenity, trafficking and harassment.

A key feature of the 2026 amendment is the significant reduction in response timelines. The time limit for complying with government or law-enforcement orders has been reduced from 36 hours to three hours. Grievances must now be resolved within seven days instead of 15, urgent cases within 36 hours instead of 72, and certain takedown requests within two hours.

Significant social media intermediaries, the category reserved for platforms with large user bases, will face additional obligations, including requiring users to declare whether uploaded content is AI-generated and deploying technical measures to verify such claims. Confirmed synthetic content must be clearly labelled, and platforms that knowingly allow or promote illegal AI-generated material may face legal consequences for failing in their statutory duties.
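A verification step of the kind the rules contemplate could, under the same illustrative assumptions as the sketch above, amount to reading the embedded marker back when content is uploaded:

```python
# Illustrative sketch: a platform-side check of a user's declaration, reading
# the hypothetical metadata fields written in the earlier example.
from PIL import Image

def is_declared_synthetic(path: str) -> bool:
    """Return True if the image carries the hypothetical synthetic-content flag."""
    with Image.open(path) as img:
        text = getattr(img, "text", {})  # PNG textual metadata, empty if absent
    return text.get("SyntheticallyGenerated") == "true"
```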

At the same time, the rules clarify that good-faith digital activities are excluded from the definition of synthetic content. Routine editing, colour correction, noise reduction, subtitles, translations, document formatting, educational material and accessibility tools will not be treated as AI-generated content, provided they do not mislead users or create false or deceptive records. -(KL)