
February 11, 2026

Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 – Deepfakes, Digital Harm, and the Rising Responsibility of Intermediaries

Deepfake technology has fundamentally altered the evidentiary and trust value of digital content. What began as experimental AI-generated media has rapidly evolved into a powerful instrument for fraud, sexual exploitation, political misinformation, corporate sabotage, and reputational harm. Audio, video, and images—once considered reliable—can now be convincingly fabricated at scale.

For Indian regulators, the deepfake crisis has exposed a structural weakness in platform governance: the gap between the speed at which harm spreads and the speed at which platforms are held accountable. Harm from synthetic media is not linear; it spreads exponentially, and a delayed response can permanently destroy privacy, dignity, and public trust.

Recognising this reality, the Government of India has recalibrated intermediary responsibility through a significant amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 were notified by India’s Ministry of Electronics and Information Technology (MeitY) on February 10, 2026, and come into force on February 20, 2026.

Understanding the Deepfake Threat Landscape

Deepfakes today extend far beyond parody or experimentation. They now manifest as:

  • Non-consensual sexual content, disproportionately targeting women and minors
  • Voice cloning and impersonation fraud, involving executives, relatives, or officials
  • Political misinformation, capable of influencing democratic processes
  • Corporate disinformation, affecting stock prices, contracts, and brand credibility
  • Evidentiary pollution, undermining the reliability of digital evidence

What makes deepfakes uniquely dangerous is synthetic plausibility: the burden of proof inverts, and the victim must now prove that something false is false.

Intermediaries at the Centre of the Crisis

India’s intermediary framework under the IT Act grants conditional safe harbour under Section 79, contingent on due diligence and expeditious action. Historically, platforms have relied on the defence of being neutral conduits or passive hosts.

Deepfakes have decisively broken that assumption. Platforms are no longer merely hosting content; they are curating visibility, scale, and virality. The 2026 amendment reflects this shift in regulatory thinking.

Formal Legal Recognition of Synthetically Generated Information

For the first time, Indian subordinate legislation expressly defines “Synthetically Generated Information”—content that is created, altered, or manipulated using artificial intelligence, machine learning, or algorithmic techniques in a manner that makes it appear indistinguishably real to an ordinary user.

The amended Rules now clearly cover:

  • AI-generated or AI-altered audio, video, images, and multimedia
  • Content that falsely represents identity, speech, actions, or events
  • Synthetic media capable of misleading users about authenticity or origin

This explicit recognition resolves long-standing ambiguity around whether deepfakes were merely misleading content or a distinct category of digital harm. By acknowledging synthetic realism as a legal risk, the law aligns regulatory intent with technological reality.

Mandatory Disclosure, Labelling, and Platform Verification Obligations

Beyond takedown duties, the amended Rules impose affirmative transparency and disclosure obligations, particularly on Significant Social Media Intermediaries (SSMIs).

Key requirements include:

  • Mandatory prominent labelling of AI-generated or AI-altered content to ensure users can clearly distinguish synthetic media from authentic content
  • User declarations at the point of upload, requiring disclosure where content has been generated or materially altered using AI tools
  • Deployment of technical and automated measures by platforms to detect, verify, and label synthetic media, rather than relying solely on user self-reporting

These provisions mark a clear regulatory pivot from reactive harm control to preventive transparency and informed consumption. AI-generated content is not prohibited, but deception and opacity are no longer acceptable.

Failure to implement effective labelling and verification systems may constitute a breach of due diligence, directly impacting an intermediary’s entitlement to safe harbour protection.
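
To make these obligations concrete, the sketch below shows one way an upload-time AI-disclosure and labelling step could be wired together. Everything here is an assumption for illustration: the class and function names, the label text, and the specific way a user declaration is combined with an automated detector flag are not terms or designs prescribed by the Rules.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of an upload-time AI-disclosure workflow.
# All names and the label text are illustrative, not prescribed by the Rules.

@dataclass
class UploadDeclaration:
    user_id: str
    content_id: str
    declared_ai_generated: bool       # user's self-declaration at upload
    detector_flagged_synthetic: bool  # platform's own automated check
    declared_at: datetime

def resolve_label(decl: UploadDeclaration) -> str | None:
    """Return a user-facing label when content must be marked as synthetic.

    Labelling is triggered by either the user's declaration or the
    platform's detector, since the Rules bar reliance on self-reporting alone.
    """
    if decl.declared_ai_generated or decl.detector_flagged_synthetic:
        return "AI-generated or AI-altered content"
    return None

decl = UploadDeclaration(
    user_id="u-123",
    content_id="c-456",
    declared_ai_generated=False,
    detector_flagged_synthetic=True,  # detector overrides the missing declaration
    declared_at=datetime.now(timezone.utc),
)
print(resolve_label(decl))  # -> AI-generated or AI-altered content
```

The design point mirrors the Rules’ own logic: the platform’s automated check must be able to trigger the label even where the uploader declares nothing.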

The 3-Hour Takedown Mandate: Speed as a Legal Obligation

One of the most consequential changes introduced by the amendment is the mandatory three-hour timeline for takedown or disabling access once an intermediary has actual knowledge, receives a valid user complaint, or is directed by an authorised authority.

This requirement applies particularly to:

  • Deepfake and impersonation content
  • Content violating dignity, privacy, or bodily autonomy
  • Media capable of causing immediate and irreversible harm

Delay is no longer defensible on the grounds of scale, volume, or internal review cycles.
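
To illustrate the operational consequence, here is a minimal sketch of a deadline-ordered takedown queue keyed to the three-hour clock. The queue structure, the escalation buffer, and all names are illustrative assumptions, not requirements drawn from the Rules.

```python
import heapq
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: a deadline-ordered takedown queue keyed to the
# three-hour obligation. Names and the escalation buffer are illustrative.

TAKEDOWN_SLA = timedelta(hours=3)

class TakedownQueue:
    def __init__(self) -> None:
        self._heap: list[tuple[datetime, str]] = []

    def register(self, content_id: str, received_at: datetime) -> datetime:
        """Record the moment of actual knowledge or valid complaint;
        return the hard removal deadline."""
        deadline = received_at + TAKEDOWN_SLA
        heapq.heappush(self._heap, (deadline, content_id))
        return deadline

    def due_for_escalation(self, now: datetime,
                           buffer: timedelta = timedelta(minutes=30)) -> list[str]:
        """Content whose removal deadline falls within the escalation buffer."""
        return [cid for deadline, cid in self._heap if deadline - now <= buffer]

q = TakedownQueue()
received = datetime.now(timezone.utc)
print(q.register("c-789", received))                                    # received + 3h
print(q.due_for_escalation(received + timedelta(hours=2, minutes=45)))  # ['c-789']
```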

Safe Harbour Recalibrated: From Intent to Infrastructure

The amendment clarifies that safe harbour under Section 79 is not automatic. It is conditional upon demonstrable compliance with due diligence obligations, including:

  • Effective grievance redressal mechanisms
  • Time-bound takedowns
  • Proactive risk mitigation for synthetic and high-harm content

Intermediaries must now demonstrate systems, workflows, and accountability, not merely good intentions.

What This Means for Platforms and Tech Companies

The amended Rules effectively move India toward a duty-of-care model for intermediaries. In practical terms, platforms must now invest in:

  • Automated detection and labelling tools for synthetic media
  • Human-in-the-loop moderation for high-risk content
  • Evidence preservation and cooperation with law enforcement
  • Redesign of content upload workflows to incorporate AI-disclosure prompts, labelling mechanisms, and verification checks, especially for audio-visual media

Deepfake compliance is no longer a policy issue—it is an operational and governance issue.
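
As a rough sketch of the human-in-the-loop point above, the routing function below sends high-harm categories to a human moderator or to immediate disablement rather than relying on automation alone. The risk categories, the 0.85 confidence threshold, and the function names are all hypothetical.

```python
from enum import Enum

# Hypothetical sketch of human-in-the-loop routing for flagged media.
# Risk tiers and the 0.85 confidence threshold are illustrative assumptions.

class Action(Enum):
    AUTO_LABEL = "label automatically"
    HUMAN_REVIEW = "queue for human moderator"
    IMMEDIATE_DISABLE = "disable access pending review"

HIGH_HARM = {"impersonation", "non_consensual_intimate", "electoral"}

def route(detector_score: float, category: str) -> Action:
    """Route content by detector confidence and harm class.

    High-harm categories never rely on automation alone: they are either
    disabled immediately (high confidence) or sent to a human moderator.
    """
    if category in HIGH_HARM:
        return Action.IMMEDIATE_DISABLE if detector_score >= 0.85 else Action.HUMAN_REVIEW
    return Action.AUTO_LABEL if detector_score >= 0.85 else Action.HUMAN_REVIEW

print(route(0.92, "impersonation"))  # Action.IMMEDIATE_DISABLE
print(route(0.40, "satire"))         # Action.HUMAN_REVIEW
```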

Conclusion

Deepfakes represent the erosion of visual and auditory trust in the digital ecosystem. India’s regulatory response—through explicit recognition of synthetic media, mandatory disclosure, and strict takedown timelines—marks a decisive shift from passive platform governance to active harm prevention.

The message to intermediaries is unambiguous: If you profit from scale, you are responsible for speed, transparency, and control.

Compliance Checklist for Platforms (Quick Reference)

Platforms operating in India should immediately assess whether they have:

  1. Clear internal definitions and classification of synthetically generated information,
  2. User-facing AI-disclosure and declaration mechanisms at upload,
  3. Prominent labelling of AI-generated or altered content,
  4. Automated and human moderation systems capable of meeting the 3-hour takedown mandate,
  5. Documented SOPs for grievance handling and escalation, and
  6. Auditable compliance records to demonstrate good-faith due diligence for safe harbour protection (see the sketch below this list).
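
For item 6, an auditable record might in practice take the shape of an append-only, tamper-evident log. The schema below is a hypothetical sketch; the Rules do not prescribe any particular format or field names.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

# Hypothetical sketch of an append-only compliance record for takedown
# actions. Field names are illustrative; the Rules prescribe no schema.

@dataclass
class ComplianceRecord:
    content_id: str
    trigger: str      # e.g. "user_complaint", "authority_order", "actual_knowledge"
    received_at: str  # ISO 8601 timestamps let an auditor reconstruct the 3-hour clock
    actioned_at: str
    action: str       # e.g. "disabled", "labelled", "removed"
    reviewer: str

def append_record(log_path: str, record: ComplianceRecord) -> str:
    """Append the record as a JSON line; return its SHA-256 digest so the
    log is tamper-evident when digests are retained separately."""
    line = json.dumps(asdict(record), sort_keys=True)
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return hashlib.sha256(line.encode("utf-8")).hexdigest()
```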

CorpoTech Legal focuses on technology-driven legal risk across the digital ecosystem, advising platforms, enterprises, and institutions on cyber law compliance, intermediary liability, AI governance, data protection, digital evidence, and technology policy. The firm’s approach combines statutory interpretation with operational compliance, helping organisations translate evolving regulations into enforceable internal frameworks, SOPs, and governance models.

Author: Ajay Sharma, Techno Legal Advisor, CorpoTech Legal
