
India’s AI & Deepfake Rules Go Live on 20 February 2026: What Every Agency and Creator Needs to Do Before It’s Too Late!

Introduction

Let’s be honest for a second. How many AI-generated creatives did your team push out last week? An AI voiceover for a brand reel? A Midjourney poster for a product launch? Maybe an avatar explainer video for a SaaS client? 

If you nodded even once, this content is written specifically for you, and the timing couldn’t be more urgent.

India’s Ministry of Electronics and Information Technology (MeitY) has notified modifications to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. A major focus is on synthetically generated information (AI-generated or AI-altered content), with new obligations regarding labelling and disclosure, as well as faster takedown timelines. These changes take effect from 20 February 2026.  

These aren't vague future proposals sitting in a committee room. They are notified, gazetted, and enforceable within days. And if your agency or content pipeline hasn't adjusted yet, you're already behind.

Worried? There's no need to be, provided you know exactly what these amendments entail.

So let's break this down properly: not in dry legalese, but in the kind of plain, practical language you can actually act on before the week is over.

Why This Moment Is Different From Every Other “Digital Compliance” Conversation You’ve Sat Through

India has been talking about regulating synthetic media, deepfakes, and AI-generated content for a while. There have been advisories, industry consultations, and enough policy commentary to fill a bookshelf. But this time, the rules have actually landed with a gazette notification, a PIB press release, and a MeitY FAQ document that spells out expectations in surprisingly accessible language.

What makes these amendments particularly significant isn’t just that they exist. It’s what they target. The regulatory focus is on synthetically generated information, which is a deliberately broad category that covers content created or materially altered using artificial intelligence. That includes:

  • Deepfakes and face-swapped visuals
  • AI-generated images used in advertising or brand creatives
  • Synthetic spokesperson and avatar videos
  • AI voiceovers, voice-cloned audio, or text-to-speech productions
  • AI-enhanced or reconstructed audio tracks
  • Video content materially altered using generative or enhancement models

If a machine touched your media in any meaningful way, you're within the scope of this framework. End of story.

But here's the part most AI development service providers are underestimating. Even though the direct legal obligations in the IT Rules technically sit with intermediaries (the platforms, the social networks, the distribution infrastructure), the compliance pressure flows upstream very fast.

Platforms will demand compliant content from creators and agencies. Clients will want protection clauses. And when something goes wrong, the agency that built the content is almost always the first phone call.

The Three Core Changes You Need to Understand, Plain and Simple

1. Mandatory Labelling for AI and Synthetic Content

The amendments introduce a clear expectation that content created or materially altered using AI must be disclosed to viewers in a way they can actually understand. 

The goal is transparency. Viewers should know whether what they're watching, hearing, or reading is authentic or artificially generated. This is no longer a soft best practice; it's an embedded compliance requirement.

Think about what “material alteration” actually means in a real agency workflow:

  • Using an AI background replacement tool on a product photo — That’s material alteration
  • Generating a voiceover using ElevenLabs or any voice synthesis platform — That’s synthetic audio
  • Running a video through Runway or similar tools to upscale and enhance footage — That’s AI-altered content
  • Creating a social media poster with no real photograph using a generative model — That’s AI-generated visual content

Every single one of these formats falls under the disclosure requirement. The days of quietly embedding AI in your creative pipeline without any user-facing acknowledgement are officially over.

2. Dramatically Shortened Takedown Timelines — Down to 3 Hours

Policy commentary and coverage around the amendments point to a move toward a 3-hour compliance window for intermediaries to act on certain lawful takedown directions. 

To put that in perspective, many agencies currently don't even have a process for emergency content removal, and a single client approval message can easily take three hours.

The new operating environment assumes your team can:

  • Pause active campaigns immediately upon notification
  • Unpublish or remove content across all relevant platforms
  • Begin documenting the action taken with timestamps
  • Initiate a replacement or corrective content process (all within a very short window)

This doesn't mean every post is living on a three-hour clock. But if a piece of synthetic content triggers a lawful direction (say, a deepfake-adjacent creative that appears to misrepresent a real person, or a synthetic "news-style" clip flagged as misleading), the machinery around that takedown is now dramatically faster than before. Your agency needs to be proactive and fast, too.

3. Deeper Alignment With Indian Criminal Law for Harmful Synthetic Media

This is the layer most agencies are treating as someone else’s problem. It shouldn’t be. The compliance environment now explicitly intersects with existing Indian criminal law when synthetic content is used to:

  • Impersonate real individuals without consent
  • Misrepresent statements or put fabricated words in someone’s mouth
  • Produce sexual content involving real people without consent
  • Target or exploit minors in any form
  • Facilitate harassment, defamation, or coordinated misinformation

Policy analysis has been clear that enforcement in these categories is designed to create genuine, personal accountability, not just platform-level consequences. 

AI development service providers who build or distribute such content, even unknowingly and even under a client’s instruction, can find themselves in a legally precarious position very quickly.

Who Does This Actually Affect in Day-to-Day Practice

If you’re running a digital marketing agency or an AI development company, a social media production house, a performance marketing team, or even a one-person creative studio, here’s a simple gut-check: Do you use AI tools in your content production? If the answer is yes, even occasionally, even for a single client, you need a compliance framework.

The specific formats that sit squarely in scope include:

  • AI-generated brand creatives — images, posters, catalogue visuals, social media graphics
  • AI voiceovers and voice-cloned ads — any audio where a synthetic or cloned voice is used
  • AI avatar explainer videos — synthetic spokesperson formats and digital human productions
  • Deepfake-style visual transformations — face swaps, morphed imagery, identity-altering edits
  • Political or social issue content — where realism could mislead audiences about real events
  • Influencer or UGC content — where AI enhancement has been applied even subtly

Read that list again. This isn't a niche scenario. This is a significant portion of what agencies are producing every single week in 2026.

The Practical Compliance Playbook — What to Actually Implement This Week

Step 1: Add AI Disclosure as a Standard Deliverable, Not an Afterthought

Your agency needs a set of standardised disclosure lines that live in your copy bank and get applied to every AI-assisted creative. Here are formats you can start using immediately:

  • “This content contains AI-generated or AI-assisted elements.”
  • “AI-generated visuals / AI-enhanced audio used in this production.”
  • “Synthetic media disclosure: AI was used in parts of this video.”

Best practice right now is to disclose in three places simultaneously:

  • On-screen — a small but clearly readable label during the content itself
  • In the caption or description field — visible before the viewer even clicks play
  • Via the platform’s native AI disclosure toggle — Meta, YouTube, and others have these built in

Tripling up on disclosure isn't paranoia. It's insurance. If any single touchpoint is missed during a busy publishing day (and in a fast-moving agency environment, they sometimes are), the other two maintain your compliance posture.
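If your publishing pipeline is even lightly automated, the caption-level disclosure is the easiest one to make impossible to forget. Here's a minimal Python sketch of the idea, assuming a simple caption-builder step somewhere in your workflow; the function name and disclosure string are illustrative, not a prescribed format:

```python
# Hypothetical sketch: append a standard disclosure line to every
# AI-assisted caption before publishing. The disclosure text mirrors
# the copy-bank examples above; everything else is illustrative.

AI_DISCLOSURE = "This content contains AI-generated or AI-assisted elements."

def build_caption(base_caption: str, ai_assisted: bool) -> str:
    """Return the final caption, appending the disclosure when AI was used."""
    if not ai_assisted:
        return base_caption
    return f"{base_caption}\n\n{AI_DISCLOSURE}"

print(build_caption("New summer collection drops Friday.", ai_assisted=True))
```

The point of baking it into the caption step is that no one has to remember it on a busy publishing day; the pipeline remembers for them.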

Step 2: Build an “AI Used?” Checklist for Your Publishing Standard Operating Procedure (SOP)

Before any creative goes live, someone on your team should be running through a structured set of mandatory questions. Make this a checkbox gate inside your project management system — Asana, Trello, Notion, ClickUp, or whatever job card system you currently use:

  • Was AI used to generate any image or video segment in this creative?
  • Was voice AI involved in any form — cloning, TTS, or audio enhancement?
  • Does this content feature a real person whose face or body has been altered?
  • Could this mislead a viewer about a real-world event, statement, or person?
  • Has disclosure been applied on-screen, in the caption, and via platform toggle?
  • Has the client’s written sign-off been obtained and saved?

This is not a document that lives in a folder and gets reviewed quarterly. It’s a mandatory gate that every AI-assisted post passes through before the publish button is touched.
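For teams that want this gate enforced in software rather than memory, here's a hedged sketch of what the checkbox gate could look like, assuming the job card is a simple dictionary of yes/no answers; the field names are illustrative and not tied to any particular project management tool:

```python
# Hypothetical sketch of the "AI Used?" gate as a pre-publish check.
# In practice these would be required checkbox fields in your PM tool;
# the field names below are illustrative, not a real API.

REQUIRED_CHECKS = [
    "ai_image_or_video_generated",
    "voice_ai_involved",
    "real_person_altered",
    "could_mislead_viewer",
    "disclosure_applied_everywhere",
    "client_signoff_saved",
]

def ready_to_publish(job_card: dict) -> bool:
    """Block publishing until every checklist question has been answered."""
    missing = [q for q in REQUIRED_CHECKS if q not in job_card]
    if missing:
        raise ValueError(f"Checklist incomplete, unanswered: {missing}")
    # Using AI is fine; publishing AI content without disclosure and
    # a saved client sign-off is not.
    uses_ai = (job_card["ai_image_or_video_generated"]
               or job_card["voice_ai_involved"])
    if uses_ai and not (job_card["disclosure_applied_everywhere"]
                        and job_card["client_signoff_saved"]):
        return False
    return True

job_card = {
    "ai_image_or_video_generated": True,
    "voice_ai_involved": False,
    "real_person_altered": False,
    "could_mislead_viewer": False,
    "disclosure_applied_everywhere": True,
    "client_signoff_saved": True,
}
print(ready_to_publish(job_card))  # True: disclosed and signed off
```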

Click to download our SOP for using AI content for Creators

Step 3: Maintain a Content Provenance Log

This is the piece most agencies skip entirely, and it’s the one that matters most when something goes wrong. For each post or ad unit, maintain a lightweight record that captures:

  • Client name and campaign name
  • Date of publication and platforms posted on
  • AI tools used — e.g., Midjourney, Runway, ElevenLabs, Adobe Firefly, etc.
  • Prompt references or project file links
  • Disclosure formats applied — on-screen, caption, platform toggle
  • Client approval trail — email, WhatsApp screenshot, or written sign-off

A WhatsApp screenshot of a client approving the final creative is a completely valid form of documentation. An email chain works too. The point is that you can reconstruct the entire decision trail without scrambling. This provenance log is your defence document — proof that you operated with deliberate process and appropriate transparency when it matters most.
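A spreadsheet works perfectly well for this log. But if you'd rather append entries programmatically, here's one hedged sketch, assuming a JSON Lines file as the store and field names that simply mirror the bullet list above; every value in the example is a placeholder:

```python
# Hypothetical sketch: a provenance log kept as an append-only
# JSON Lines file. Field names mirror the bullet list above.

import json
from datetime import datetime, timezone

def log_provenance(path: str, **entry) -> None:
    """Append one creative's provenance record with a UTC timestamp."""
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_provenance(
    "provenance_log.jsonl",
    client="Acme Foods",                      # illustrative client
    campaign="Diwali 2026 launch",
    platforms=["Instagram", "YouTube"],
    ai_tools=["Midjourney", "ElevenLabs"],
    prompt_refs=["drive://prompts/acme-01"],  # placeholder link format
    disclosures=["on-screen", "caption", "platform toggle"],
    approval_ref="whatsapp-screenshot-2026-02-18.png",
)
```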

Step 4: Update Client Contracts and Scopes of Work

Add two specific clauses to your proposals going forward:

  • AI Usage Disclosure Clause — the client acknowledges that AI may be used in content production and agrees that appropriate disclosures will be applied in line with applicable rules and platform policies
  • Approvals and Liability Clause — the agency delivers compliant content following its SOP; the client approves the final version; any post-approval changes require a fresh written sign-off before publication

This isn’t about creating an adversarial relationship with clients. It’s about creating clarity in a space where ambiguity has historically created enormous problems. Clients who are genuinely invested in brand safety will welcome these clauses. 

The ones who push back are worth having a frank conversation with now, rather than after an incident.

Step 5: Build a Rapid Takedown Protocol Before You Need One

Every agency needs a designated compliance owner. One person leads the response when a takedown direction arrives. Beyond that, you need:

  • A live escalation channel — a WhatsApp group or Slack thread with the client and agency leads, active and ready
  • A three-step emergency response flow:
    1. Pause all active campaigns and unpublish the flagged content immediately
    2. Prepare and push a compliant replacement version as quickly as possible
    3. Document every action taken with timestamps and preserve all related evidence

Practice this internally before a real situation tests it. The agencies that handle these moments without losing client trust are the ones that treat the protocol as operational infrastructure and not emergency improvisation.
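The documentation step is the one most likely to slip during a real incident, so it's worth making trivial. Here's a minimal, purely illustrative sketch of a timestamped action log; a shared spreadsheet updated in real time achieves exactly the same thing:

```python
# Hypothetical sketch: timestamp each action during a takedown response
# so the evidence trail in step 3 builds itself as you work.

from datetime import datetime, timezone

takedown_log: list[dict] = []

def record_action(incident_id: str, action: str) -> None:
    """Record what was done and exactly when, in UTC."""
    takedown_log.append({
        "incident": incident_id,
        "action": action,
        "at": datetime.now(timezone.utc).isoformat(),
    })

record_action("INC-001", "Paused all active campaigns")
record_action("INC-001", "Unpublished flagged reel on Instagram and YouTube")
record_action("INC-001", "Replacement creative submitted for client sign-off")
```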

The Content Patterns That Are Simply Too High-Risk to Touch

Even with full disclosure in place, certain types of synthetic content carry risk that no label can adequately offset. These are the formats and categories your agency should treat as a hard line:

  • Fabricated statements by real people — Making someone appear to say something they never said, regardless of how clearly it’s labelled
  • Synthetic news-style clips — AI-generated footage depicting events that never actually happened
  • Celebrity or public figure impersonation for endorsement — Using synthetic likeness to imply a real person’s backing
  • Any synthetic content involving minors — Zero tolerance, no exceptions
  • Deceptive before-and-after formats — Particularly in health, wellness, financial, or medical categories, where implied guarantees can mislead vulnerable audiences
  • Sexual deepfakes — Content of a sexual nature featuring real people without consent

Policy analysis consistently identifies impersonation, deceptive deepfakes, and content harmful to children as the categories where enforcement intent is most serious and most personal in its consequences.

A Quick-Reference Compliance Checklist to Share With Your Entire Team

Pin this inside your project management tool, print it, laminate it, put it wherever your team actually looks before they hit publish:

  • Disclosure on-screen or visible watermark applied
  • Disclosure in the caption or description field
  • Platform’s native AI toggle activated (where available)
  • Client written approval secured and saved
  • Content provenance log entry completed (tools + prompts + files)
  • Fast-removal plan confirmed (compliance owner knows + escalation channel is live)

Official References: Verify These Yourself, Don't Just Take Our Word for It

This is not the kind of topic where you should take anyone's word for it, including this blog's. So we have included three primary government sources that authenticate everything discussed here, all of them publicly accessible.

Go read them directly, bookmark them, and share them with your legal counsel or compliance team.

1. Gazette of India — MeitY Notification (Official PDF)

This is the gazetted source document where the amendments are formally notified. If you need to show a client, a lawyer, or an internal stakeholder the actual regulatory text, this is the document to share.

🔗 https://egazette.gov.in/WriteReadData/2026/269993.pdf

 

2. Press Information Bureau (PIB) — Official Press Release

The PIB release is the government’s own plain-language communication about what the amendments contain and why they’ve been introduced. It’s written for general audiences and is an excellent starting point for understanding intent, not just the technical legal language.

🔗 https://www.pib.gov.in/PressReleseDetailm.aspx?PRID=2226617

 

3. MeitY FAQ Document (Official PDF)

Perhaps the most practically useful of the three, this FAQ document breaks down the notified amendments in question-and-answer format. It covers synthetic and deepfake content guidance, due diligence expectations, and what compliance looks like in operational terms. Strongly recommended reading for every agency owner and content lead.

🔗 https://www.meity.gov.in/static/uploads/2025/10/065b6deb585441b5ccdf8be42502a49c.pdf

 

Please note: Regulatory documents can and do get updated. Always check the MeitY official website at meity.gov.in and the eGazette portal at egazette.gov.in for the most current versions. If in doubt, consult a qualified legal professional familiar with Indian IT law before making compliance decisions for your business.

The Bottom Line: Why Getting This Right Is Actually a Business Advantage

Here's the reframe worth sitting with for a moment. Compliance in creative industries has a long history of feeling like friction: a set of constraints that slows down interesting work. But in the AI content space in 2026, that dynamic is inverting fast.

Brands are increasingly aware that synthetic media carries reputational risk. Marketing directors are asking agencies hard questions about how AI is actually being used. Some are writing AI governance requirements directly into RFPs.

The agencies that have a documented disclosure system, a content provenance log, trained teams, and clear contract language are walking into those conversations with a concrete answer. They’re not scrambling to retrofit a compliance posture onto a chaotic production pipeline. They’re offering clients something genuinely valuable:

  • Creative capability — Delivering compelling AI-assisted content at speed and scale
  • Transparency — With disclosure systems that are structured, consistent, and defensible
  • Documentation — With provenance logs that create accountability at every step
  • Risk control — With rapid-response protocols that protect brand reputation in real time

This mix of creativity and responsibility is precisely what the next phase of the AI content era will reward. And it starts with treating these new rules not as a burden, but as a framework that lets you build trust at scale.

The rules are live. The clock is ticking. The question is just whether you’re going to move now, or explain later why you didn’t.

Frequently Asked Questions (FAQs)

Q1. What exactly qualifies as “AI-generated or synthetic content” under India’s new IT Rules amendments?

Any content created from scratch or materially altered using AI tools qualifies, including AI-generated images, voice-cloned audio, avatar videos, deepfake face swaps, AI-upscaled footage, and even AI background replacements. The rule of thumb is simple: if an AI tool meaningfully contributed to how the final content looks, sounds, or feels, it qualifies. When in doubt, disclose. Over-disclosing carries far less risk than under-disclosing.

Q2. My agency only uses AI occasionally for minor edits. Do these rules still apply to us?

Yes, absolutely. The amendments don’t distinguish between heavy AI usage and occasional AI usage. Even a single AI-generated visual, a subtle voice enhancement, or one background swap technically constitutes synthetic content under this framework. The scope is intentionally broad because the concern is audience transparency, not how frequently AI is used in your pipeline.

Q3. Since legal obligations technically fall on platforms and intermediaries, why should agencies and creators worry?

Because compliance pressure moves upstream fast. Platforms will demand compliant content from creators. Clients will want documented processes. And when harmful synthetic content causes damage, the agency that built it and the brand that commissioned it enter the picture quickly. Proactive compliance is your professional and legal protection, regardless of where the formal obligation technically sits.

Q4. What should an agency do if a client requests content that falls into a high-risk category of synthetic media?

Decline clearly and document that refusal in writing. For grey-area formats that are permissible but need careful handling, your AI Usage Disclosure Clause and Approvals and Liability Clause ensure the client is informed and shares accountability. No creative brief is worth criminal exposure; having these conversations before production is far easier than managing fallout after publication.

Q5. Where can we read the actual government documents to verify these rules for ourselves?

Go directly to the three official sources: the Gazette of India MeitY Notification PDF at egazette.gov.in, the PIB official press release at pib.gov.in, and the MeitY FAQ document at meity.gov.in. All three links are included in the References section of this blog. For decisions affecting your business legally, always consult a qualified professional familiar with Indian IT law.