India's emerging AI stack vs the EU AI Act: what operators need to reconcile
Two regulatory regimes with different scopes, different obligations, and different timelines. India's MeitY framework and the EU AI Act both apply to operators with cross-border deployments. Here's the comparative read for the operator who has to ship under both.
Why does an operator in 2026 need to read both at once?
Because the cross-border footprint of even a small AI deployment now intersects both regimes.
A B2B SaaS at $10M ARR with a customer-success agent might serve EU customers (triggering EU AI Act obligations), train on data sourced from Indian users (triggering India's data and AI obligations), and deploy through Indian infrastructure for cost reasons (triggering Indian operator obligations). Three jurisdictional touchpoints in a single deployment that the operator considered domestic. This is the modal case in 2026, not the exotic one.
The regimes are not aligned. They were authored in different windows, with different threat models, by different stakeholders. Operators trying to ship under both have to reconcile two compliance surfaces that don't agree on definitions, scope, or timelines.
What does each regime actually cover?
A simplified comparative view.
| Dimension | EU AI Act | India's emerging stack |
|---|---|---|
| Authority | EU Parliament and Council, Regulation (EU) 2024/1689 | MeitY advisories, IT Rules under the IT Act 2000, the FREE-AI report from MeitY's expert committee |
| Status | In force; obligations phased through 2025–2027 | Advisory plus sector-specific rules; framework legislation expected 2026–2027 |
| Scope trigger | "Placing on the market" or "putting into service" within the EU | Operating, training, or deploying in India; impact on Indian users |
| Risk classification | Prohibited / high-risk (Annex III) / limited-risk / minimal-risk | Cluster-based (high-risk, general-purpose, sector-regulated) per FREE-AI |
| GPAI obligations | Specific provisions for general-purpose AI models with systemic risk | Advisory-stage; financial-sector (RBI), capital-markets (SEBI), and consumer-protection rules apply sectorally |
| Penalty surface | Up to 7% of global annual turnover | Sectoral (RBI fines, SEBI fines); broader penalty structure pending |
| Extraterritoriality | Applies when output is used in the EU even if the operator is non-EU | IT Rules apply when serving Indian users; FREE-AI proposes broader extraterritorial scope |
Both regimes use risk-based classification. Both have extraterritorial reach. Both impose heavier obligations on what they classify as high-risk. The classifications themselves don't always agree.
Where do the regimes meaningfully diverge?
Three places.
Divergence 1 — the high-risk classification
The EU Annex III list is specific. Eight named domains, each with sub-categories. Examples include education (admissions, grading), employment (recruitment, performance evaluation), essential services (creditworthiness, emergency call routing), law enforcement, and migration.
India's FREE-AI report proposes a more flexible cluster-based classification that depends on sectoral regulators (RBI for financial, SEBI for capital markets, IRDAI for insurance, MoH for health, etc.) to identify high-risk applications inside their remit. Less specific than EU Annex III; broader in coverage at sector level.
The reconciliation problem — a deployment classified as Annex III high-risk in the EU may not be high-risk in India, and vice versa. An operator with a recruitment-AI tool faces strict EU obligations (impact assessment, human oversight, transparency notices) but may face only a general transparency advisory in India. The reverse can also happen — an Indian financial-services agent under RBI's cluster classification may face stricter Indian obligations than the equivalent EU obligations.
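The divergence can be made concrete as a small reconciliation check. This is an illustrative sketch only: the domain names, the regulator-to-remit mapping, and the `classify` helper are assumptions for the example, not an authoritative encoding of Annex III or the FREE-AI clusters.

```python
# Illustrative domain sets — NOT a complete or authoritative encoding of either regime.
EU_ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

# Hypothetical Indian sectoral mapping: regulator -> domains in its remit.
INDIA_SECTOR_REGULATORS = {
    "RBI": {"lending", "payments"},
    "SEBI": {"capital_markets", "algo_trading"},
    "IRDAI": {"insurance"},
}

def classify(domains: set[str]) -> dict:
    """Classify one deployment against both schemes and flag divergence."""
    eu_high_risk = bool(domains & EU_ANNEX_III_DOMAINS)
    indian_regulators = sorted(
        reg for reg, remit in INDIA_SECTOR_REGULATORS.items() if domains & remit
    )
    return {
        "eu_high_risk": eu_high_risk,
        "indian_regulators": indian_regulators,
        # Divergence: one regime is strict where the other is advisory-only.
        "diverges": eu_high_risk != bool(indian_regulators),
    }

# A recruitment tool: EU high-risk, no Indian sector regulator matched here.
print(classify({"employment"}))
# An RBI-regulated lending agent: Indian sectoral obligations, no Annex III match in this toy mapping.
print(classify({"lending"}))
```

Running both example classifications shows the asymmetry described above: each deployment is strictly regulated on exactly one side.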
Divergence 2 — the GPAI surface
The EU AI Act has explicit GPAI obligations under Chapter V. Foundation-model providers above a defined compute threshold (systemic risk is presumed above 10^25 FLOPs of cumulative training compute) must publish model documentation, run systemic risk evaluations, report serious incidents, and engage with the AI Office. The obligations are documented and enforceable.
India's GPAI surface is currently advisory. The March 2024 MeitY advisory required certain GPAI deployments to obtain explicit MeitY approval — that requirement was substantially walked back after industry pushback. The current state is sector-specific advisory plus the FREE-AI committee's pending recommendations. Operators deploying GPAI in India face less prescriptive obligations today, with the expectation that framework legislation in 2026–2027 will tighten the surface.
The reconciliation — operators using foundation models from EU-jurisdiction providers must understand the EU GPAI obligations. Operators deploying those same models in India face Indian sectoral obligations layered on top. The two layers don't conflict but they don't combine cleanly either.
Divergence 3 — the timeline
EU AI Act obligations entered into force in stages. Prohibited AI practices took effect in February 2025. GPAI obligations took effect in August 2025. High-risk system obligations take effect in August 2026. By August 2027, the full obligation surface is operational.
India's framework legislation timeline is less defined. Industry expectation is that draft legislation appears late 2026 or 2027, with phased implementation across 2027–2028. Sectoral obligations are continuous and apply now (RBI's AI guidelines, SEBI's algo-trading rules, etc.).
The reconciliation — EU obligations are knowable today and shippable against. Indian obligations require sector-specific compliance now plus monitoring of the framework legislation timeline. Operators who design for the EU surface, on the assumption that Indian sector obligations will roughly track it, usually find themselves close enough to compliant when Indian framework legislation lands.
Where do the regimes overlap usefully?
Three places where designing for one helps the other.
Overlap 1 — transparency obligations. Both regimes require disclosure when content is AI-generated and disclosure when a user is interacting with an AI system rather than a human. EU Article 50 codifies it; Indian sectoral rules (consumer protection, information technology) require it functionally. An operator who builds explicit AI-disclosure into their product satisfies both surfaces.
Overlap 2 — impact assessments for high-risk deployments. EU requires fundamental rights impact assessments for Annex III systems. Indian sector regulators (RBI, SEBI) require risk assessments for AI deployments in regulated functions. The artifact is similar enough that one assessment, well-done, can be adapted to satisfy both.
Overlap 3 — incident reporting. EU GPAI providers report serious incidents to the AI Office. Indian sectoral rules require incident reporting to relevant regulators (RBI for financial-sector incidents, CERT-In for cyber incidents). One incident-tracking pipeline, well-designed, can route reports to both.
The operator implication — building compliance infrastructure for the stricter regime (EU on most dimensions, India on sectoral specifics) usually covers the looser regime by inheritance. Operators trying to design separately for each regime usually duplicate effort.
What does the operator who deploys in both jurisdictions actually do?
Six concrete steps, in order.
Step 1 — map your deployment to EU Annex III. Read Annex III. Note any matches. If your deployment touches any of the eight domains, you're in EU high-risk territory and need the full impact-assessment infrastructure.
Step 2 — map your deployment to Indian sectoral classification. Identify which Indian sector regulator (RBI, SEBI, IRDAI, MoH, etc.) covers your domain. Check that regulator's published AI guidelines. If you don't fall under a sector regulator, the IT Act and consumer-protection law apply by default.
Step 3 — build the unified disclosure layer. AI-generation disclosure on outputs. Human-interaction disclosure on chat interfaces. Both EU Article 50 compliant and Indian consumer-protection compliant. Single pipeline.
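One way to sketch that single pipeline: a wrapper applied to every outbound payload. The function name, field names, and notice wording below are assumptions for illustration, not mandated language from either regime.

```python
def with_disclosures(payload: dict, *, ai_generated: bool, chat_surface: bool) -> dict:
    """Attach AI-disclosure fields to an outbound payload.

    One code path serves both surfaces: the same disclosures that satisfy
    EU Article 50-style transparency also satisfy Indian consumer-protection
    disclosure expectations (wording here is illustrative only).
    """
    out = dict(payload)  # never mutate the caller's payload
    if ai_generated:
        out["ai_disclosure"] = "This content was generated by an AI system."
    if chat_surface:
        out["interaction_notice"] = "You are interacting with an AI system, not a human."
    return out

# Every response from the customer-success agent passes through the wrapper.
response = with_disclosures({"text": "Your refund is processed."},
                            ai_generated=True, chat_surface=True)
```

The design choice worth keeping from this sketch is the single choke point: if every output passes through one wrapper, adding a jurisdiction later means editing one function, not auditing every surface.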
Step 4 — build the unified impact-assessment artifact. Document the deployment, the data sources, the failure modes, the mitigations, the human-oversight provisions. Format adaptable to EU AI Act fundamental rights impact assessment template and Indian sector-regulator risk assessment templates.
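A minimal sketch of that adaptable artifact, assuming one internal schema rendered into regime-specific views. The class, method names, and output keys are hypothetical; neither rendering reproduces an official template.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """One internal record of the deployment, rendered per regime on demand."""
    deployment: str
    data_sources: list[str]
    failure_modes: list[str]
    mitigations: list[str]
    human_oversight: str

    def to_eu_view(self) -> dict:
        # Keys loosely follow the shape of a fundamental-rights impact
        # assessment; they are illustrative, not the official EU template.
        return {
            "system_description": self.deployment,
            "affected_data": self.data_sources,
            "risks": self.failure_modes,
            "mitigations": self.mitigations,
            "human_oversight_measures": self.human_oversight,
        }

    def to_sector_view(self, regulator: str) -> dict:
        # Same substance, re-labelled for an Indian sector regulator's format.
        return {"regulator": regulator, **self.to_eu_view()}

ia = ImpactAssessment(
    deployment="customer-success agent",
    data_sources=["CRM records", "support transcripts"],
    failure_modes=["hallucinated refund terms"],
    mitigations=["grounding against policy docs"],
    human_oversight="tier-2 escalation on low confidence",
)
```

The point of the sketch is the "one assessment, two renderings" shape from Overlap 2: the substance is documented once, and the per-regulator formatting is a view, not a second document to keep in sync.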
Step 5 — build the unified incident-reporting pipeline. Internal incident detection and classification. Routing rules to EU AI Office (for GPAI) and Indian sector regulators (per domain). Audit trail retention.
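The routing rules can be as small as a single function over a classified incident record. The field names and destination labels below are assumptions for illustration; real routing would follow each regulator's actual reporting channel and thresholds.

```python
def route_incident(incident: dict) -> list[str]:
    """Map a classified incident to the regulators it must be reported to.

    Destination labels are internal identifiers, not official channel names.
    """
    destinations: list[str] = []
    # EU AI Office: serious incidents involving GPAI models.
    if incident.get("gpai") and incident.get("severity") == "serious":
        destinations.append("eu_ai_office")
    # RBI: incidents in the financial sector.
    if incident.get("sector") == "financial":
        destinations.append("rbi")
    # CERT-In: cyber incidents, regardless of sector.
    if incident.get("cyber"):
        destinations.append("cert_in")
    return destinations

# One incident, multiple obligations: a serious GPAI incident that is also
# a cyber incident routes to two regulators from one pipeline.
print(route_incident({"gpai": True, "severity": "serious", "cyber": True}))
```

Pairing this with an append-only audit log of what was routed where, and when, covers the retention requirement in the same step.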
Step 6 — monitor the framework legislation timeline. Indian framework legislation expected late 2026 or 2027 will likely require operators to revise their classification, their disclosure, and possibly their data-handling. Have someone tracking the MeitY publications and the FREE-AI implementation timeline.
The infrastructure is buildable. The cost — for most $5M to $50M ARR operators — is one to three months of focused compliance work, distributed across legal, engineering, and operations. The cost of not building it shows up as either EU AI Act penalties (up to 7% of global turnover) or Indian sectoral fines, neither of which is recoverable.
Where Doxia Axis sits in this
We don't issue legal advice. The compliance infrastructure described above is operator-led; we ship the technical infrastructure (logging, schema, disclosure layers) that the legal infrastructure needs to sit on top of.
For operators with active deployments in both EU and India, the two existing insights pieces — EU AI Act GPAI in force and India AI regulatory stack — go deeper on each regime individually. This piece is the comparative companion.
Where to go from here
- EU AI Act deeper read: /insights/eu-ai-act-gpai-in-force.
- India AI stack deeper read: /insights/india-ai-regulatory-stack.
- The agent governance frame: /insights/agentic-governance-stack.
- Or just request the audit: /audit. The audit names the technical compliance layer; the legal layer is your counsel's call.