comply.businys.dev
Published April 2026

The Agentic Systems Gap:
EU AI Act Compliance for MCP Deployments

The EU AI Act has 113 articles. None of them mention AI agents. Here is the defensible compliance position for MCP-based agentic deployments — and the technical infrastructure that implements it.

Disclaimer: This is technical implementation guidance, not legal advice. EU AI Act obligations vary by AI system classification. Consult qualified legal counsel before making compliance decisions.

1. What the EU AI Act actually requires

The EU AI Act establishes a risk-based framework for AI systems operating in the EU. Obligations scale with risk classification. For AI systems classified as high-risk under Annex III — which includes systems used in employment, education, critical infrastructure, and law enforcement — the requirements are substantial.

The core technical obligations for high-risk AI systems:

  • Article 9 — Risk management system. Continuous identification, analysis, and mitigation of foreseeable risks throughout the AI system lifecycle.
  • Article 10 — Data and data governance. Training, validation, and testing datasets must meet quality and governance criteria. Data residency and processing-location requirements may also apply under related EU law.
  • Article 12 — Record-keeping. Automatic logging of events throughout the AI system's lifetime, to a level sufficient to ensure post-hoc traceability. Providers must retain these logs for at least six months (Art. 19).
  • Article 13 — Transparency and provision of information. High-risk AI systems must be transparent enough for deployers to interpret their output and use them correctly.
  • Article 14 — Human oversight. High-risk AI systems must be designed and developed to allow effective human oversight. Specifically, Art. 14(3)(d–e) requires that humans can override, interrupt, or stop the system's operation.
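
To make the Article 12 obligation concrete, here is a minimal sketch of the shape an audit record for a single tool call might take. All field and function names are illustrative assumptions, not terms from the Act or any specific library:

```typescript
// Illustrative Art. 12-style audit record for one agentic tool call.
interface ToolCallRecord {
  timestamp: string;    // ISO 8601, for post-hoc traceability
  agentId: string;      // which agent issued the call
  toolName: string;     // which tool was invoked
  inputs: unknown;      // arguments as supplied
  outputs: unknown;     // result returned to the agent
  durationMs: number;   // execution time
  error: string | null; // failure status, if any
}

const auditLog: ToolCallRecord[] = [];

// Append-only: entries are frozen so they cannot be edited after the fact.
function recordToolCall(entry: ToolCallRecord): void {
  auditLog.push(Object.freeze(entry));
}

recordToolCall({
  timestamp: new Date().toISOString(),
  agentId: "agent-1",
  toolName: "search_documents",
  inputs: { query: "invoice 2024" },
  outputs: { hits: 3 },
  durationMs: 42,
  error: null,
});
```

The point is that each record captures enough to reconstruct what happened without access to the agent itself, which is what "post-hoc traceability" demands.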

These obligations become enforceable for providers and deployers of high-risk AI systems on August 2, 2026. Fines for non-compliance reach €15 million or 3% of global annual turnover, whichever is higher.

2. The agentic systems gap

The EU AI Act was drafted and negotiated before agentic AI systems existed at commercial scale. The 2024 final text reflects the AI landscape of 2021–2023: models with defined inputs and outputs, single-step inference, human-in-the-loop workflows.

The word “agent” appears nowhere in the Act's 113 articles. There is no definition of an AI agent. No provision for multi-agent chains. No guidance on what constitutes a single AI system when multiple agents interact. No framework for attributing accountability across a tool-calling chain.

This creates three specific compliance problems for agentic deployments:

1. System boundary ambiguity. The Act requires logging and oversight at the AI system level. When an agent orchestrates ten tools across three servers, is that one AI system or eleven? The Act provides no answer. The defensible position is to treat the entire agent-tool chain as a single system — requiring end-to-end logging at every tool call.

2. Dynamic capability acquisition. The Act assumes AI systems have defined, known capabilities at the time of assessment. Agentic systems using tool discovery acquire capabilities dynamically at runtime. A risk assessment conducted at deployment may be invalid by the time the agent actually runs. Continuous monitoring becomes necessary.

3. Human oversight at agent speed. Article 14 requires human oversight capability. Agents can execute dozens of tool calls per second. The traditional interpretation — a human who can observe and stop the system — requires real-time visibility into every tool call, not batch reports.
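
The third problem — oversight at agent speed — can be sketched as a gate that every tool call must pass through, with a system-wide stop a human can trigger at any time. This is a hypothetical illustration of the Art. 14(3)(d–e) pattern; the class and method names are invented for this example:

```typescript
type Decision = "approve" | "reject";

// Hypothetical oversight gate: every tool call awaits confirm() before
// executing, and stop() halts the whole chain regardless of per-call review.
class OversightGate {
  private halted = false;

  stop(): void {
    this.halted = true; // human operator stops the system (Art. 14(3)(e))
  }

  async confirm(
    toolName: string,
    review: (tool: string) => Promise<Decision>,
  ): Promise<boolean> {
    if (this.halted) return false;           // system-wide stop wins
    const decision = await review(toolName); // human or policy decides
    return decision === "approve";
  }
}

// Usage: a policy auto-approves until a human hits stop.
const gate = new OversightGate();
const autoApprove = async (_tool: string): Promise<Decision> => "approve";

(async () => {
  const first = await gate.confirm("read_file", autoApprove);       // true
  gate.stop();
  const second = await gate.confirm("delete_records", autoApprove); // false
  console.log(first, second);
})();
```

The design choice worth noting: the gate sits in front of execution, not behind it — a rejected call never runs, which is what distinguishes genuine override capability from after-the-fact review.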

3. Why this matters now

The EU AI Act Annex III obligations take effect August 2, 2026. The European AI Office will publish agentic systems guidance — eventually. Based on the timeline of GDPR technical guidance, authoritative guidance on agentic systems is unlikely before the enforcement date.

Organizations deploying agentic AI systems that could be classified as high-risk have three options:

  • Wait for guidance. Risk deploying non-compliant systems through the enforcement window.
  • Halt agentic deployments. Commercially untenable for most organizations.
  • Build the defensible position now. Implement the technical controls that satisfy the Act's intent for AI systems of any kind, before agentic-specific guidance arrives.

The third option is also the fastest path to internal approval for agentic AI projects. The question that blocks internal deployment is not “does this comply?” — it is “can you show me what this AI is doing?” The audit record answers that question before the compliance team asks it.

4. The defensible position

In the absence of agentic-specific guidance, the defensible compliance position for MCP-based deployments is to satisfy the Act's stated intent across all four core obligations — applied to the agent-tool chain as a whole:

  • Traceability (Art. 12 — Record-keeping). Every tool call — tool name, agent identity, inputs, outputs, timestamp, duration, error status — logged immutably. A SHA-256 hash chain on lineage nodes prevents retroactive alteration.
  • Transparency (Art. 13 — Transparency). Real-time observable call stream. Agent Lineage graph showing which agent called which tool, in what order, with what result. Exportable as JSON or PDF for documentation packages.
  • Human oversight (Art. 14 — Human oversight). Confirmation middleware implementing Art. 14(3)(d–e) human override hooks. Any tool call can be interrupted before execution. Every override logged with reason and operator identity.
  • Risk management (Art. 9 — Risk management). Continuous reputation scoring with loop detection, burst protection, and automatic throttling. Anomalous agent behaviour flagged in real time with full audit trail.
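
The hash-chain mechanism behind the traceability item can be sketched in a few lines with Node's built-in crypto module. This is a generic illustration of the technique, not the actual @businys/ops internals:

```typescript
import { createHash } from "crypto";

interface LineageNode {
  payload: string;  // serialized tool-call record
  prevHash: string; // hash of the previous node
  hash: string;     // SHA-256 over prevHash + payload
}

const GENESIS = "0".repeat(64);

function appendNode(chain: LineageNode[], payload: string): void {
  const prevHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
  const hash = createHash("sha256").update(prevHash + payload).digest("hex");
  chain.push({ payload, prevHash, hash });
}

// Recompute every link; any retroactive edit breaks all later hashes.
function verifyChain(chain: LineageNode[]): boolean {
  return chain.every((node, i) => {
    const expectedPrev = i === 0 ? GENESIS : chain[i - 1].hash;
    const recomputed = createHash("sha256")
      .update(expectedPrev + node.payload)
      .digest("hex");
    return node.prevHash === expectedPrev && node.hash === recomputed;
  });
}

const chain: LineageNode[] = [];
appendNode(chain, '{"tool":"search","agent":"a1"}');
appendNode(chain, '{"tool":"write","agent":"a1"}');
console.log(verifyChain(chain)); // true
chain[0].payload = "tampered";
console.log(verifyChain(chain)); // false
```

Because each hash covers the previous one, altering any historical record invalidates every node after it — which is why a hash chain, rather than plain logging, supports the "immutable" claim.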

This is not a guarantee of compliance. It is the most technically complete answer to the Act's stated obligations that currently exists for agentic deployments. Any organization that implements these controls can walk into a compliance review with documentation — before guidance exists that would require something different.

5. Technical implementation

The complete compliance stack is available as a single open-source library. Installation takes minutes. The audit record starts building immediately.

# Install
npm install @businys/ops

// Enable compliance middleware
import { createMCPProxy } from "@businys/ops"

const proxy = createMCPProxy({
  auditLog: true,     // Art. 12 record-keeping
  lineage: true,      // Art. 13 traceability
  confirmation: true, // Art. 14 human oversight
  reputation: true,   // Art. 9 risk management
})
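
To show how flags like these typically translate into behaviour, here is a generic middleware-composition sketch — not the @businys/ops implementation, just the common pattern where each concern wraps the tool-call handler:

```typescript
type ToolCall = { tool: string; args: unknown };
type Handler = (call: ToolCall) => Promise<unknown>;
type Middleware = (next: Handler) => Handler;

// Art. 12-style layer: time and log every call that executes.
const audit: Middleware = (next) => async (call) => {
  const start = Date.now();
  const result = await next(call);
  console.log(`${call.tool} took ${Date.now() - start}ms`);
  return result;
};

// Art. 14-style layer: reject a call before it ever reaches the tool.
const confirmation = (isApproved: (tool: string) => boolean): Middleware =>
  (next) => async (call) => {
    if (!isApproved(call.tool)) {
      throw new Error(`blocked by human oversight: ${call.tool}`);
    }
    return next(call);
  };

// Compose: the outermost middleware runs first.
const base: Handler = async (call) => ({ ok: true, tool: call.tool });
const handler = audit(confirmation((t) => t !== "delete_all")(base));

handler({ tool: "search", args: {} });    // passes both layers
handler({ tool: "delete_all", args: {} }) // rejected before execution
  .catch((e) => console.log(e.message));
```

The ordering matters: confirmation sits inside audit here so that even blocked calls could be logged by an outer layer, while approved calls pass through untouched.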

Data residency is configured with a single environment variable. Route audit and lineage data to eu-west-1 by default for EU deployments:

BDEV_DATA_REGION=eu-west-1

The hosted dashboard at businysdotdev.vercel.app provides the real-time call stream, Agent Lineage visualisation, anomaly feed, and compliance export interface.

6. The audit record as the compliance artifact

The conventional compliance framing is “compliance for regulated industries.” The more accurate framing is: ship AI features faster because you can prove they are safe.

Every agentic AI deployment faces the same internal obstacle: the question “can you show me what this AI is doing before we approve it?” With a complete audit record and Agent Lineage from day one, the answer is ready before anyone asks.

The compliance artifact — the Article 13 documentation package — is a by-product of normal operation. It generates itself. Legal teams review a pre-built document rather than reconstructing events from incomplete logs. Compliance reviews accelerate rather than block deployment.

The audit record also directly addresses the Act's post-market monitoring requirement (Art. 72). Providers of high-risk AI systems must establish a post-market monitoring system. The call log, reputation scores, anomaly feed, and lineage graphs constitute that system for agentic deployments.
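
A self-generating documentation package can be as simple as summarizing the raw call log into a reviewable artifact. The record shape and field names below are illustrative assumptions, sketched to show the by-product idea rather than any specific export format:

```typescript
interface CallRecord {
  agent: string;
  tool: string;
  timestamp: string;
  error: string | null;
}

// Summarize the raw call log into a reviewable compliance package.
function buildCompliancePackage(records: CallRecord[]) {
  const failures = records.filter((r) => r.error !== null).length;
  return {
    generatedAt: new Date().toISOString(),
    totalCalls: records.length,
    errorRate: records.length ? failures / records.length : 0,
    agents: [...new Set(records.map((r) => r.agent))],
    tools: [...new Set(records.map((r) => r.tool))],
  };
}

const pkg = buildCompliancePackage([
  { agent: "a1", tool: "search", timestamp: "2026-04-01T09:00:00Z", error: null },
  { agent: "a1", tool: "write", timestamp: "2026-04-01T09:00:01Z", error: "timeout" },
]);
console.log(JSON.stringify(pkg, null, 2));
```

Because the package is computed from logs that accumulate during normal operation, regenerating it for a review is a function call, not a reconstruction project.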

“The Act has no guidance on agentic systems. The defensible position before guidance arrives is exactly what @businys/ops provides: complete observability, human override capability, immutable audit records, and documented risk controls. Any compliance consultant reviewing your system will find the infrastructure they would have specified — already built.”
