The Government of India has proposed new regulations to ensure transparency and accountability in the use of artificial intelligence. The Ministry of Electronics and Information Technology (MeitY) has introduced draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
The move aims to combat the rapid rise of deepfakes, synthetic visuals, and AI-generated misinformation by making platforms and users responsible for clearly identifying such content.

Key Provisions of the Draft Rules
- Mandatory Labelling: All digital platforms will be required to label or watermark any content—visual, audio, or video—that has been generated, modified, or altered using artificial intelligence tools.
- Visibility Standards:
  - For images and videos, the label must cover at least 10% of the display area.
  - For audio content, the label must be heard or displayed within the first 10% of playback duration.
- User Declaration: Every user uploading media content must provide a declaration confirming whether the content is AI-generated or synthetically modified.
- Metadata Traceability: Platforms must embed metadata within the file indicating its synthetic origin, including information such as the algorithm or tool used to create or modify it (a rough sketch of how this might be done follows this list).
- Platform Accountability: Major social media platforms and digital intermediaries will be required to implement systems for detecting, verifying, and flagging synthetic content, while ensuring that users cannot easily remove the labels.
- Public Consultation: Stakeholders and citizens have been invited to submit feedback on the draft rules before they are finalized and implemented.
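
The draft rules describe outcomes (a visible label covering at least 10% of the display area, plus embedded provenance metadata) rather than a specific technical method. The following is a minimal illustrative sketch of how a platform might meet both requirements for a still image, assuming the Pillow imaging library; the label text, metadata keys ("ai_generated", "generation_tool"), and file names are assumptions for illustration, not terms taken from the draft.

```python
# Illustrative sketch only: one way a platform might apply a visible
# "AI-generated" banner covering at least 10% of an image's area and
# embed provenance metadata. Assumes the Pillow library; the label text,
# metadata keys, and sizing logic are illustrative assumptions, not
# requirements copied from the draft rules.
import math

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo


def label_ai_image(src_path: str, dst_path: str, tool_name: str) -> None:
    img = Image.open(src_path).convert("RGBA")
    width, height = img.size

    # A full-width banner whose height is 10% of the image height covers
    # exactly 10% of the display area, the draft's stated minimum.
    banner_height = math.ceil(0.10 * height)

    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, 0), (width, banner_height)], fill=(0, 0, 0, 255))
    draw.text((10, banner_height // 3), "AI-generated content", fill=(255, 255, 255, 255))

    # Machine-readable provenance: record the synthetic origin and the
    # tool used as PNG text chunks (keys are assumed, not prescribed).
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generation_tool", tool_name)

    img.save(dst_path, format="PNG", pnginfo=meta)


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    label_ai_image("synthetic_input.png", "labelled_output.png", "example-image-model")
```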
Purpose of the Regulation
The government’s primary goal is to protect citizens from misinformation, impersonation, and manipulation caused by deepfakes and synthetic media.
With AI-generated content becoming increasingly realistic and accessible, there is a growing concern that such tools can be used to spread false information, damage reputations, manipulate elections, and mislead the public.
By introducing clear labelling requirements, the government seeks to ensure transparency, digital safety, and accountability across online platforms.
Implications
For Platforms and Technology Companies:
- Must deploy AI detection systems and add clear labels or watermarks on synthetic content.
- Required to record metadata and maintain audit trails for AI-generated material.
- Higher compliance costs due to new verification and monitoring mechanisms.
- Risk of losing intermediary “safe-harbour” protection under the IT Act if they fail to comply.
For Users and Content Creators:
- Users uploading content will have to declare its authenticity and label AI-generated or altered content.
- Enhances digital literacy and helps the public identify manipulated visuals or voices.
- May encourage responsible and ethical use of generative AI tools.
For Society and Governance:
- Aims to curb deepfakes, false propaganda, and AI-based impersonation.
- Strengthens trust in digital information ecosystems.
- However, experts caution that implementation must balance free speech, creativity, and innovation with regulation.
Conclusion
The Government of India’s proposal to mandate labelling of AI-generated and synthetic content marks a major step in digital regulation. By enforcing transparency, accountability, and traceability, the move seeks to protect citizens from the dangers of deepfakes and misinformation, while ensuring responsible innovation in the era of artificial intelligence.
FAQs
1. What is “synthetic content”?
Ans: Synthetic content refers to media—such as images, videos, or audio—created or modified using artificial intelligence or computer algorithms to make it appear real or authentic.
2. Why is the government proposing these rules now?
Ans: The government is responding to the growing misuse of AI tools that can generate deepfakes, fake voices, or manipulated visuals capable of spreading misinformation, influencing elections, or harming individuals’ reputations.
3. What will platforms need to do?
Ans: Platforms will need to integrate detection tools, apply visible labels or audio cues on AI-generated content, collect user declarations, and embed metadata showing that the content is synthetic (a rough sketch of such a metadata check appears after these FAQs).
4. Will every piece of content need a label?
Ans: No. Only content that has been created, altered, or enhanced using AI tools must be labelled. Genuine, unmodified content is not required to carry any markings.
5. How will this affect creators who use AI for art or entertainment?
Ans: Creators can continue to use AI tools, but they must label their outputs as “AI-generated” when sharing them publicly. This ensures transparency without restricting creativity.
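
Following on from FAQ 3, the sketch below shows one way a platform might verify, at upload time, that content declared as AI-generated actually carries the embedded provenance metadata described earlier. It is a rough sketch under the same assumptions as before (Pillow, PNG text chunks, an assumed "ai_generated" key), not a prescribed compliance mechanism.

```python
# Illustrative sketch only: a platform-side check that an uploaded PNG
# declared as AI-generated actually carries provenance metadata. The
# "ai_generated" key mirrors the earlier sketch and is an assumption,
# not a field name from the draft rules.
from PIL import Image


def has_ai_provenance(path: str) -> bool:
    """Return True if the PNG carries an 'ai_generated' text chunk set to 'true'."""
    with Image.open(path) as img:
        text_chunks = getattr(img, "text", {})  # PNG tEXt/iTXt chunks, if any
        return text_chunks.get("ai_generated") == "true"


if __name__ == "__main__":
    # Hypothetical upload path, for illustration only.
    if has_ai_provenance("labelled_output.png"):
        print("Provenance metadata found; label check can proceed.")
    else:
        print("Declared AI content lacks provenance metadata; flag for manual review.")
```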
