Deepfakes and AI-generated misinformation are eroding trust and enabling fraud online. India’s proposed amendments to the IT Rules, 2021 would regulate “synthetically generated information” through platform-facing definitions, labelling requirements, and due-diligence obligations. IGAP’s submission assesses their feasibility and offers risk-based recommendations.
The rapid proliferation of synthetically generated content, including deepfakes, AI-generated misinformation, and other algorithmically altered media, poses growing risks to online safety, public trust, and the integrity of digital ecosystems. While such technologies enable innovation, creativity, and accessibility, their misuse has facilitated deception, impersonation, financial fraud, reputational harm, and large-scale misinformation.
Recognising these emerging challenges, the Government of India proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to regulate “synthetically generated information.” These amendments represent India’s first formal regulatory effort to address AI-generated and algorithmically processed content through intermediary obligations, including definitions, labelling requirements, and due-diligence standards for platforms.
IGAP’s submission on the proposed amendments welcomes the government’s proactive approach while examining the draft through the lens of technical feasibility, proportionality, and global best practices. It offers targeted recommendations to ensure that the regulatory framework remains harm-focused, risk-based, and future-ready, strengthening protections against genuinely deceptive synthetic media while preserving legitimate digital activity and India’s broader innovation ecosystem.
Comments on the Draft IT Intermediary Guidelines Amendments, 2025 (Synthetically Generated Information)
Download the full submission here.