India’s Intermediary Rules, 2021 (amended in 2023), impose proactive duties on platforms to curb unlawful content such as defamation, child sexual abuse material (CSAM), and non-consensual intimate imagery (NCII). Yet even after timely takedowns, such content often persists, resurfacing through search engine caches or being regenerated by AI models. This exposes critical gaps in compliance and enforcement. Machine Unlearning (MuL), an emerging technique attracting global attention, offers a way for AI systems to “forget” specific data, addressing the problem at the model level. As legal responsibilities blur after takedown, especially where AI models retain traces of removed content, a shift toward technological solutions like MuL is vital to ensure enduring compliance and user protection.
- Takedown and Duties under Intermediary Rules, 2021 (Amended 2023)
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as amended in 2023, define the due diligence obligations and takedown protocols for intermediaries in India. These include:
- Rule 3(1)(b): Intermediaries must publish rules informing users not to upload unlawful content (e.g., defamation, CSAM, NCII, hate speech, IP violations, impersonation).
- Rule 3(1)(d): Requires intermediaries to act within 36 hours of receiving a court order or government notification to disable access to unlawful content.
- 2023 Amendment: Shifted intermediaries from passive compliance toward making “reasonable efforts” to prevent the recurrence of unlawful content, rather than merely responding to complaints.
For Significant Social Media Intermediaries (SSMIs), the rules go further:
- Appoint key compliance personnel based in India.
- Enable voluntary user verification.
- Use AI-based tools to proactively detect and remove content like CSAM and NCII.
- Take down sensitive content within 24 hours, especially content involving nudity, morphed images, or impersonation.
- Ensure transparency and grievance redressal, including notifying users whose content is taken down.
These provisions aim to safeguard digital spaces and ensure accountability. However, their effectiveness is limited when content, once taken down, reappears or persists via AI-driven replication.
- The Persistence Problem: Why Takedowns Are No Longer Enough
Despite successful takedown actions, especially in cases of defamation, CSAM, or NCII, content often resurfaces elsewhere on the web.
This happens because:
- Search engine algorithms index and cache the data.
- Generative AI systems and Large Language Models (LLMs) trained on web-scale data can recreate the removed content in paraphrased or visual form.
- Harmful content can become embedded in the “model memory” (the model’s learned parameters), allowing it to persist even after legal takedowns.
This creates a compliance blind spot: legal deletion on platforms does not guarantee true digital erasure.
- What is Machine Unlearning (MuL)? Is It Being Used?
Definition:
Machine Unlearning (MuL) refers to techniques for removing the influence of specific training data from a trained model, ideally as if that data had never been used, without retraining the entire model from scratch.
This is essential when:
- Sensitive or private data is inadvertently used in model training.
- A takedown is required by law or privacy regulations.
- Biased or incorrect training data needs to be removed.
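To make the concept concrete, the following is a minimal, illustrative sketch of one common approximate-unlearning approach: gradient ascent on the data to be forgotten, balanced against ordinary training on retained data. The model, data loaders, and hyperparameters are hypothetical placeholders, and this is a sketch of the general idea rather than a production-grade or certified unlearning method.

```python
# Illustrative sketch of approximate machine unlearning via gradient ascent
# on a "forget set". All names (model, forget_loader, retain_loader) are
# hypothetical placeholders, not a production recipe.
import torch
import torch.nn.functional as F


def approximate_unlearn(model, forget_loader, retain_loader,
                        epochs=1, lr=1e-5, retain_weight=1.0):
    """Push the model away from data it must forget while anchoring it
    to data it should retain. This approximates, but does not certify,
    removal of the forget set's influence."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for (x_f, y_f), (x_r, y_r) in zip(forget_loader, retain_loader):
            optimizer.zero_grad()
            # Gradient ASCENT on the forget set: maximise its loss.
            forget_loss = -F.cross_entropy(model(x_f), y_f)
            # Ordinary descent on retained data to limit utility loss.
            retain_loss = F.cross_entropy(model(x_r), y_r)
            (forget_loss + retain_weight * retain_loss).backward()
            optimizer.step()
    return model
```

The design trade-off is visible in the `retain_weight` term: pushing harder on forgetting degrades overall model performance, which is precisely the balancing challenge noted below.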
Global Adoption:
While MuL is an emerging concept, several initiatives are underway:
- Google’s Machine Unlearning Challenge fosters research in scalable unlearning techniques.
- UNESCO’s AI Ethics Framework calls for the right to erasure and transparency in AI memory.
- Some LLM developers like OpenAI and Meta are exploring incremental learning and weight pruning to address this.
However, commercial-grade LLMs with full MuL capabilities are not yet widely available.
Key Challenges:
- Verifying that unlearning has actually succeeded, given the current lack of audit tools (a rough audit heuristic is sketched after this list).
- Balancing the completeness of data removal against the resulting loss in model performance.
- Absence of global standards or regulatory mandates for MuL.
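One way to illustrate the verification problem: compare the model’s average loss on the supposedly forgotten samples with its loss on comparable data it never saw. If the “forgotten” samples remain noticeably easier for the model, some of their influence likely remains. The sketch below is a crude, membership-inference-style heuristic with hypothetical names (model, forgotten_loader, unseen_loader), not a formal audit standard.

```python
# Rough heuristic audit of unlearning: after unlearning, the model's loss
# on the "forgotten" samples should look similar to its loss on samples it
# never trained on. Names (model, forgotten_loader, unseen_loader) are
# hypothetical; this is not a certified verification method.
import torch
import torch.nn.functional as F


@torch.no_grad()
def mean_loss(model, loader):
    model.eval()
    losses = []
    for x, y in loader:
        losses.append(F.cross_entropy(model(x), y, reduction="mean").item())
    return sum(losses) / len(losses)


def unlearning_residue(model, forgotten_loader, unseen_loader):
    """Positive gap => the forgotten data is still 'easier' for the model
    than unseen data, suggesting its influence was not fully removed."""
    return mean_loss(model, unseen_loader) - mean_loss(model, forgotten_loader)
```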
- Legal Issues When Content Is Removed but Persists Elsewhere
The paradox of “erased but still existing” content creates complex legal risks:
Can Intermediaries Still Be Held Liable?
- Safe harbour protection under Section 79 of the IT Act hinges on compliance with takedown duties.
- If an intermediary takes down content in accordance with Rule 3(1)(d), it generally retains safe harbour and is shielded from liability for that content.
- But if AI systems hosted on or integrated with the intermediary’s platform reproduce the content, liability may be revived under a broader reading of “actual knowledge” or on grounds of negligence in preventing recurrence.
Legal Gaps:
- Indian law does not currently impose unlearning obligations on AI developers or intermediaries.
- The Digital Personal Data Protection (DPDP) Act, 2023 recognizes rights to correction and erasure of personal data, but does not extend them to data embedded in AI model memory.
- The absence of model-level transparency makes it hard for victims to seek redress when harmful content resurfaces in generated form.
- How Technology Can Help Solve the Legal Challenge
To address the mismatch between law and AI capabilities, a hybrid strategy of policy innovation and technological enforcement is required:
Legal-Tech Integration
- Develop “Model Memory Takedown Notices”—a framework to petition AI developers for content removal from training data and inference patterns.
- Classify LLM developers as “AI Information Processors” with obligations under IT Rules or a new AI law.
Technical Tools
- Deploy MuL-enhanced LLMs with auditable logs to show unlearning of specific prompts or data.
- Create Unlearning APIs for legal authorities to interface with model providers, similar to existing data access APIs (a minimal sketch follows this list).
- Use digital watermarking and data poisoning to prevent certain content from influencing model training.
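By way of illustration, the sketch below shows what such an Unlearning API might look like: a hypothetical endpoint through which an authorised requester files a model-level removal request and later polls its status. The framework choice (FastAPI), routes, and request fields are assumptions made for discussion; no such standard interface currently exists.

```python
# Hypothetical "Unlearning API": a sketch of how a model provider might
# accept legally mandated unlearning requests. Framework choice (FastAPI),
# routes, and fields are illustrative assumptions, not an existing standard.
import uuid
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
_requests: dict[str, dict] = {}  # in-memory store, for the sketch only


class UnlearningRequest(BaseModel):
    legal_basis: str           # e.g. a Rule 3(1)(d) order or a DPDP erasure request
    content_reference: str     # URL, hash, or other identifier of the content
    requesting_authority: str  # court, grievance officer, or data principal


@app.post("/unlearning-requests")
def file_request(req: UnlearningRequest):
    ticket_id = str(uuid.uuid4())
    _requests[ticket_id] = {"request": req.dict(), "status": "received"}
    # A real system would queue verification, the unlearning run itself,
    # and an auditable proof-of-removal workflow behind this ticket.
    return {"ticket_id": ticket_id, "status": "received"}


@app.get("/unlearning-requests/{ticket_id}")
def request_status(ticket_id: str):
    if ticket_id not in _requests:
        raise HTTPException(status_code=404, detail="Unknown ticket")
    return {"ticket_id": ticket_id, "status": _requests[ticket_id]["status"]}
```

The value of such an interface would lie less in the endpoints themselves than in the auditable trail they create, which is what legal authorities and victims would need to verify compliance.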
Global Alignment
- Adopt standards from UNESCO AI Ethics, OECD AI Principles, and EU AI Act.
- Encourage multilateral cooperation to define AI deletion standards akin to GDPR’s Right to be Forgotten.
Conclusion: Beyond Takedown—Toward a Future-Proof Compliance Model
India’s Intermediary Rules, 2021 (as amended in 2023), represent a robust attempt to regulate digital platforms and enforce takedown obligations. However, with the rise of AI and generative models capable of recreating harmful content, platform-level takedown is no longer sufficient. The future of compliance lies in combining content takedown with Machine Unlearning (MuL)—ensuring that unlawful or sensitive data is erased not just from user interfaces but also from AI model memory.
Legal frameworks must evolve to address these residual risks, assign responsibilities to AI developers, and introduce auditable unlearning mechanisms. At the intersection of law and technology, Machine Unlearning stands out as a crucial tool for protecting individual dignity, privacy, and data rights.
CorpoTech Legal’s Role in Building Responsible AI Ecosystems
At CorpoTech Legal, we are actively engaging with the challenges and solutions surrounding Responsible AI and AI Governance. Through our policy research, advisory, and stakeholder engagements, we are working to:
- Advocate for Model Memory Takedown frameworks aligned with India’s DPDP Act and global standards such as the UNESCO AI Ethics Guidelines and OECD AI Principles.
- Guide enterprises and startups in building AI systems compliant with emerging unlearning and transparency obligations.
- Publish regular updates and legal perspectives on AI compliance, cyber forensics, and digital accountability—several of which have been featured in industry journals, conferences, and legal press briefings, including Legal Era and Cyber India Review.
- Collaborate with academic and industry leaders to propose ethical audit standards for LLMs, focusing on model explainability and privacy protection.
Through these initiatives, CorpoTech Legal is committed to supporting a techno-legal ecosystem that not only responds to today’s compliance challenges but also anticipates tomorrow’s digital risks—ensuring AI innovation remains safe, ethical, and accountable.