comply.businys.dev

EU AI Act

Article-by-Article Mapping

For each EU AI Act obligation relevant to agentic AI deployments: what it requires technically, which @businys/ops feature addresses it, and what the compliance artifact looks like. Not legal advice — technical implementation guidance.

Art. 9: Risk management
High-risk systems · Reputation & Anomaly
Obligation

Establish, implement, document, and maintain a risk management system for the AI system throughout its entire lifecycle.

Technical implementation

Continuous monitoring of agent behaviour with automatic anomaly detection, reputation scoring, loop detection, and burst rate protection. Identified risks are documented with their mitigation status.
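The burst rate check can be illustrated with a short sliding-window sketch. The window size, threshold, and function names below are illustrative assumptions, not @businys/ops defaults or its API:

```typescript
// Illustrative sliding-window burst detector (not the @businys/ops implementation).
// Flags an agent that issues more than MAX_CALLS tool calls within WINDOW_MS.
const WINDOW_MS = 10_000; // assumed window size
const MAX_CALLS = 20;     // assumed burst threshold

const callTimes = new Map<string, number[]>();

function recordCall(agentId: string, now: number): boolean {
  const times = callTimes.get(agentId) ?? [];
  // Keep only timestamps still inside the window, then add this call.
  const recent = times.filter((t) => now - t < WINDOW_MS);
  recent.push(now);
  callTimes.set(agentId, recent);
  return recent.length > MAX_CALLS; // true => burst anomaly
}
```

Loop detection follows the same pattern, keyed on repeated identical (tool, parameters) pairs rather than raw call counts.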

Art. 10: Data governance
High-risk systems · Data Residency
Obligation

Training, validation, and testing datasets must meet quality criteria. Data must be managed appropriately in terms of collection, processing, and storage location.

Technical implementation

Data residency configuration routes audit and lineage records to the specified region (eu-west-1, ca-central-1, us-east-1). No PII in tool call records by default. Configurable field redaction before storage.
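Pre-storage field redaction can be sketched as a simple allow/deny pass over each record. The field list and redaction marker here are assumptions for illustration, not the library's actual configuration surface:

```typescript
// Illustrative pre-storage redaction (field names are hypothetical).
const REDACT_FIELDS = new Set(["email", "phone", "ssn"]);

function redact(record: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(record)) {
    // Replace configured fields before the record reaches storage.
    out[key] = REDACT_FIELDS.has(key) ? "[REDACTED]" : value;
  }
  return out;
}
```

Redacting before storage, rather than at read time, means the PII never lands in the regional audit store at all.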

Art. 12: Record-keeping
High-risk systems · Audit Log
Obligation

High-risk AI systems must enable automatic logging of events throughout their lifetime to a level sufficient to ensure post-hoc traceability of the system's output. Logs must be retained for at least six months.

Technical implementation

Immutable per-call audit log capturing tool name, agent identity, inputs, outputs, duration, error status, and timestamp. SHA-256 hash chain on lineage nodes prevents retroactive alteration. Configurable retention periods, minimum six months enforced.
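The hash chain works like the following sketch: each record's hash covers its own fields plus the previous record's hash, so altering any past record invalidates every hash after it. Record fields and function names are assumed for illustration, not the library's schema:

```typescript
import { createHash } from "node:crypto";

// Illustrative hash-chained audit record (fields are assumptions).
interface AuditRecord {
  tool: string;
  agent: string;
  timestamp: number;
  prevHash: string; // hash of the previous record; "" for the first
  hash: string;     // SHA-256 over this record's fields + prevHash
}

function hashRecord(r: Omit<AuditRecord, "hash">): string {
  return createHash("sha256")
    .update(JSON.stringify([r.tool, r.agent, r.timestamp, r.prevHash]))
    .digest("hex");
}

function append(chain: AuditRecord[], tool: string, agent: string, timestamp: number): void {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "";
  const partial = { tool, agent, timestamp, prevHash };
  chain.push({ ...partial, hash: hashRecord(partial) });
}

function verify(chain: AuditRecord[]): boolean {
  return chain.every((r, i) => {
    const prevHash = i ? chain[i - 1].hash : "";
    return r.prevHash === prevHash && r.hash === hashRecord({ ...r, prevHash });
  });
}
```

Verification is a pure recomputation, so an auditor can confirm integrity from an export alone, without trusting the system that produced it.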

Art. 13: Transparency
High-risk systems · Agent Lineage
Obligation

High-risk AI systems must be designed and developed to ensure their operation is sufficiently transparent that deployers can interpret the system's output and use it appropriately.

Technical implementation

Real-time observable call stream. Agent Lineage graph showing the full tool-call chain: which agent called which tool, in what order, with what parameters and result. Exportable as JSON or PDF for documentation packages and legal review.
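A lineage node and its JSON export could be sketched as below, with an assumed node shape rather than the actual export schema:

```typescript
// Illustrative lineage node (field names are assumptions, not the
// real @businys/ops export format).
interface LineageNode {
  id: string;
  agent: string;
  tool: string;
  params: Record<string, unknown>;
  result: unknown;
  parentId: string | null; // the call that triggered this one
}

function exportLineage(nodes: LineageNode[]): string {
  // Stable ordering keeps exports diffable across reviews.
  const sorted = [...nodes].sort((a, b) => a.id.localeCompare(b.id));
  return JSON.stringify({ version: 1, nodes: sorted }, null, 2);
}
```

The parentId links are what turn a flat call log into the interpretable chain Art. 13 asks for: a reviewer can walk from any output back to the call that triggered it.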

Art. 14(3)(d): Human override
High-risk systems · Confirmation Middleware
Obligation

High-risk AI systems must be designed to allow natural persons to decide not to use the system or to override the system's decisions, recommendations, or predictions.

Technical implementation

Confirmation middleware that intercepts any tool call before execution. Operators can approve, reject, or modify any pending tool call. Every decision logged with operator identity, action taken, and reason.

Art. 14(3)(e): Human interruption
High-risk systems · Session Halt
Obligation

Natural persons in charge of human oversight must be able to intervene in the operation of the high-risk AI system and stop it through a 'stop' button or a similar procedure.

Technical implementation

Session-level interruption capability. Any active agent session can be halted from the dashboard. In-flight tool calls are cancelled. The halt event is logged with timestamp and operator identity.
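Session halt maps naturally onto the standard AbortController pattern. This sketch assumes a simple in-memory session registry; the real dashboard wiring is not shown:

```typescript
// Illustrative session halt via AbortController (registry and field
// names are assumptions, not documented @businys/ops behaviour).
interface HaltEvent { sessionId: string; operator: string; timestamp: number; }

const controllers = new Map<string, AbortController>();
const haltLog: HaltEvent[] = [];

function startSession(sessionId: string): AbortSignal {
  const controller = new AbortController();
  controllers.set(sessionId, controller);
  return controller.signal; // in-flight tool calls should observe this signal
}

function haltSession(sessionId: string, operator: string): void {
  controllers.get(sessionId)?.abort(); // cancels in-flight calls on this session
  haltLog.push({ sessionId, operator, timestamp: Date.now() }); // logged with identity
}
```

Passing the session's AbortSignal into every tool call is what makes the halt immediate rather than best-effort: pending fetches and timers tied to the signal are cancelled at once.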

Art. 16: Provider obligations
Providers · Compliance Export
Obligation

Providers of high-risk AI systems must ensure their systems comply with the requirements of Chapter III, Section 2, register the system, affix the CE marking, and draw up an EU declaration of conformity.

Technical implementation

Compliance export generates the Article 13 documentation package — a structured record of system capabilities, risk controls, audit architecture, and test evidence. Not a CE marking substitute; a documentation foundation for the conformity assessment process.

Art. 17: Quality management
Providers · Ongoing Monitoring
Obligation

Providers shall put a quality management system in place that ensures compliance with the Regulation. It shall include procedures for ongoing monitoring.

Technical implementation

Ongoing monitoring is built into the middleware pipeline: reputation scores update on every call, the anomaly feed surfaces issues in real time, and monthly aggregate reports are available via the dashboard. The post-market monitoring requirement of Art. 72 is addressed by the same infrastructure.

Art. 72: Post-market monitoring
Providers · Post-Market Monitoring
Obligation

Providers of high-risk AI systems shall establish a post-market monitoring system and collect, document, and analyse relevant data throughout the AI system's lifetime.

Technical implementation

The call log, reputation scores, anomaly feed, and lineage graphs constitute a post-market monitoring system for agentic deployments. Data is collected automatically on every tool invocation. Dashboard analytics provide trend analysis, error rate tracking, and tool-level breakdown.
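Error rate tracking over the call log reduces to a per-tool aggregation. The record shape below is an assumption for illustration:

```typescript
// Illustrative per-tool error-rate aggregation over the call log
// (record fields are assumptions, not the actual log schema).
interface CallRecord { tool: string; error: boolean; }

function errorRates(log: CallRecord[]): Map<string, number> {
  const totals = new Map<string, { calls: number; errors: number }>();
  for (const r of log) {
    const t = totals.get(r.tool) ?? { calls: 0, errors: 0 };
    t.calls += 1;
    if (r.error) t.errors += 1;
    totals.set(r.tool, t);
  }
  // Fraction of failed calls per tool.
  return new Map([...totals].map(([tool, t]) => [tool, t.errors / t.calls]));
}
```

Running the same aggregation over successive reporting periods is what turns the raw log into the trend analysis a post-market monitoring plan documents.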

Not legal advice. This mapping reflects our technical interpretation of EU AI Act obligations as applied to MCP-based agentic deployments. Your legal counsel must confirm which obligations apply to your specific AI system classification and deployment context.
