EU AI Act
Article-by-Article Mapping
For each EU AI Act obligation relevant to agentic AI deployments: what it requires technically, which @businys/ops feature addresses it, and what the compliance artifact looks like. Not legal advice — technical implementation guidance.
Establish, implement, document, and maintain a risk management system for the AI system throughout its entire lifecycle.
Continuous monitoring of agent behaviour with automatic anomaly detection, reputation scoring, loop detection, and burst rate protection. Identified risks are documented together with their mitigation status.
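As an illustration of burst rate protection, a token-bucket limiter of the kind typically applied per agent can be sketched as follows. The class and parameter names are hypothetical, not the @businys/ops API.

```typescript
// Illustrative token-bucket burst limiter (hypothetical names, not the
// actual @businys/ops API). Each agent gets a bucket; a tool call consumes
// one token, and tokens refill at a sustained rate up to a burst capacity.
class BurstLimiter {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,     // max calls allowed in a single burst
    private refillPerSec: number, // sustained calls per second
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the tool call may proceed, false if it should be throttled.
  allow(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A throttled call would then surface in the anomaly feed rather than execute.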
Training, validation, and testing datasets must meet quality criteria. Data must be managed appropriately in terms of collection, processing, and storage location.
Data residency configuration routes audit and lineage records to the specified region (eu-west-1, ca-central-1, us-east-1). No PII in tool call records by default. Configurable field redaction before storage.
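Field redaction before storage can be sketched as below. The record shape and field list are illustrative assumptions, not the actual @businys/ops configuration schema.

```typescript
// Sketch of pre-storage field redaction, assuming tool call records are
// plain JSON objects. The field names are illustrative only.
type ToolCallRecord = Record<string, unknown>;

// Returns a copy of the record with the configured fields masked, so the
// original values never reach the audit store.
function redact(record: ToolCallRecord, fields: string[]): ToolCallRecord {
  const out: ToolCallRecord = { ...record };
  for (const field of fields) {
    if (field in out) out[field] = "[REDACTED]";
  }
  return out;
}
```

The redacted copy, not the original, is what gets routed to the configured region.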
High-risk AI systems must enable automatic logging of events throughout their lifetime to a level sufficient to ensure post-hoc traceability of the system's output. Logs must be retained for at least six months.
Immutable per-call audit log capturing tool name, agent identity, inputs, outputs, duration, error status, and timestamp. SHA-256 hash chain on lineage nodes prevents retroactive alteration. Configurable retention periods, minimum six months enforced.
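The tamper-evidence property of the hash chain can be sketched as follows, a minimal illustration assuming each lineage node's hash covers its payload plus the previous node's hash, so altering any earlier record invalidates every hash after it. Field names are illustrative.

```typescript
import { createHash } from "node:crypto";

// Minimal sketch of a SHA-256 hash chain over lineage nodes.
// Field names are illustrative, not the actual record schema.
interface LineageNode {
  payload: string;  // serialized tool-call record
  prevHash: string; // hash of the preceding node ("" for the first node)
  hash: string;     // SHA-256 over prevHash + payload
}

function appendNode(chain: LineageNode[], payload: string): LineageNode[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "";
  const hash = createHash("sha256").update(prevHash + payload).digest("hex");
  return [...chain, { payload, prevHash, hash }];
}

// Recomputes every hash; returns false if any node was retroactively altered.
function verifyChain(chain: LineageNode[]): boolean {
  let prevHash = "";
  for (const node of chain) {
    const expected = createHash("sha256").update(prevHash + node.payload).digest("hex");
    if (node.prevHash !== prevHash || node.hash !== expected) return false;
    prevHash = node.hash;
  }
  return true;
}
```

Verification is cheap (one hash per node), so the whole log can be re-validated before each compliance export.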
High-risk AI systems must be designed and developed to ensure their operation is sufficiently transparent that deployers can interpret the system's output and use it appropriately.
Real-time observable call stream. Agent Lineage graph showing the full tool-call chain: which agent called which tool, in what order, with what parameters and result. Exportable as JSON or PDF for documentation packages and legal review.
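A plausible shape for the exported lineage JSON is sketched below; the actual export fields may differ, so treat this as an assumption about the structure, not the documented format.

```typescript
// Hypothetical shape of an exported lineage graph node: each call records
// who made it, with what parameters, and which calls it spawned.
interface LineageCall {
  agent: string;
  tool: string;
  order: number;
  params: Record<string, unknown>;
  result: string;
  children: LineageCall[]; // downstream calls triggered by this one
}

// Serializes the call tree for a documentation package or legal review.
function exportLineage(root: LineageCall): string {
  return JSON.stringify(root, null, 2);
}
```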
High-risk AI systems must be designed to allow natural persons to decide not to use the system or to override the system's decisions, recommendations, or predictions.
Confirmation middleware that intercepts any tool call before execution. Operators can approve, reject, or modify any pending tool call. Every decision logged with operator identity, action taken, and reason.
Natural persons in charge of human oversight must be able to intervene in the operation of the high-risk AI system and stop it through a halt button or similar procedure.
Session-level interruption capability. Any active agent session can be halted from the dashboard. In-flight tool calls are cancelled. The halt event is logged with timestamp and operator identity.
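One standard way to implement session-level halting is with AbortController, sketched below; the registry and log shapes are assumptions for illustration, not the dashboard's actual API.

```typescript
// Illustrative halt mechanism using the standard AbortController.
// In-flight tool calls observe the session's AbortSignal and cancel on abort.
interface HaltEvent {
  sessionId: string;
  operator: string;
  timestamp: number;
}

class SessionRegistry {
  private controllers = new Map<string, AbortController>();
  readonly haltLog: HaltEvent[] = [];

  // Starts a session and returns the signal that tool calls should observe.
  start(sessionId: string): AbortSignal {
    const controller = new AbortController();
    this.controllers.set(sessionId, controller);
    return controller.signal;
  }

  // Halts the session: aborts in-flight calls and logs who stopped it, when.
  halt(sessionId: string, operator: string): void {
    this.controllers.get(sessionId)?.abort();
    this.haltLog.push({ sessionId, operator, timestamp: Date.now() });
  }
}
```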
Providers of high-risk AI systems must ensure their systems comply with requirements of Chapter III Section 2, register the system, affix CE marking, and draw up EU declaration of conformity.
Compliance export generates the Article 13 documentation package — a structured record of system capabilities, risk controls, audit architecture, and test evidence. Not a CE marking substitute; a documentation foundation for the conformity assessment process.
Providers shall put a quality management system in place that ensures compliance with the Regulation. It shall include procedures for ongoing monitoring.
Ongoing monitoring is built into the middleware pipeline: reputation scores update on every call, the anomaly feed surfaces issues in real time, and monthly aggregate reports are available via the dashboard. The post-market monitoring requirement of Art. 72 is addressed by the same infrastructure.
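A per-call reputation update can be modelled as an exponentially weighted moving average of call outcomes, sketched below. The weighting constant and function name are arbitrary examples, not the scoring formula the product actually uses.

```typescript
// Illustrative per-call reputation update as an exponentially weighted
// moving average: each successful call nudges the score toward 1, each
// failure toward 0. alpha controls how fast the score reacts.
function updateReputation(score: number, callSucceeded: boolean, alpha = 0.1): number {
  const outcome = callSucceeded ? 1 : 0;
  return (1 - alpha) * score + alpha * outcome;
}
```

Because the score updates on every call, a sudden run of failures shows up quickly, which is what lets the anomaly feed flag a misbehaving agent in near real time.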
Providers of high-risk AI systems shall establish a post-market monitoring system and collect, document, and analyse relevant data throughout the AI system's lifetime.
The call log, reputation scores, anomaly feed, and lineage graphs constitute a post-market monitoring system for agentic deployments. Data is collected automatically on every tool invocation. Dashboard analytics provide trend analysis, error rate tracking, and tool-level breakdown.
Not legal advice. This mapping reflects our technical interpretation of EU AI Act obligations as applied to MCP-based agentic deployments. Your legal counsel must confirm which obligations apply to your specific AI system classification and deployment context.