The EU AI Act's core obligations for high-risk AI systems apply from 2 August 2026. That leaves limited time to implement comprehensive compliance for any AI agent serving European users. Here's your practical roadmap.
This checklist covers what you need to do, by when, and how each requirement maps to actionable steps. Bookmark this guide and work through it systematically. Compliance delayed is compliance denied.
Step 1: Determine your risk classification
Timeline: Complete by March 2026
Before anything else, you need to know which AI Act category applies to your agent. This determines all subsequent requirements.
High-risk AI systems (strictest requirements)
Your agent falls into this category if used for:
- Biometric identification or categorisation of natural persons
- Management of critical infrastructure (transport, water, gas, electricity)
- Education and vocational training (scoring, assessment, admission decisions)
- Employment and worker management (recruitment, promotion, termination decisions)
- Access to essential services (loan applications, insurance eligibility, benefit decisions)
- Law enforcement (individual risk assessment, polygraph analysis, emotion recognition)
- Migration and border control (visa applications, deportation risk assessment)
- Administration of justice (legal case analysis, judicial decision support)
If you're high-risk: You need full compliance with all requirements below.
Limited risk AI systems (disclosure requirements)
Your agent falls here if it:
- Interacts directly with humans (chatbots, customer service agents)
- Generates or manipulates content (text, images, audio, video)
- Detects emotions or determines social categories based on biometric data
If you're limited risk: You must clearly disclose that users are interacting with an AI system.
Minimal risk (no specific requirements)
Everything else falls into this category. You can operate freely but should monitor for regulatory changes.
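As a first pass, the classification logic above can be sketched as a simple decision function. The tier names and use-case tags below are illustrative shorthand; the authoritative definitions live in Annex III of the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical tags summarising the Annex III categories listed above.
HIGH_RISK_USES = {
    "biometric_identification", "critical_infrastructure", "education_scoring",
    "employment_decisions", "essential_services_access", "law_enforcement",
    "migration_border_control", "administration_of_justice",
}
LIMITED_RISK_TRAITS = {"interacts_with_humans", "generates_content", "emotion_detection"}

def classify(use_cases: set[str], traits: set[str]) -> RiskTier:
    """Return the strictest applicable tier: high-risk dominates limited risk."""
    if use_cases & HIGH_RISK_USES:
        return RiskTier.HIGH
    if traits & LIMITED_RISK_TRAITS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify({"employment_decisions"}, {"interacts_with_humans"}).value)  # high
print(classify(set(), {"interacts_with_humans"}).value)  # limited
```

Note that a chatbot used for recruitment is high-risk, not limited risk: the strictest applicable category wins.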
Step 2: Establish risk management systems
Timeline: Complete by May 2026
High-risk AI systems must implement continuous risk management covering the entire system lifecycle.
Risk identification and documentation
Create a comprehensive risk register covering:
- Discrimination and bias risks across protected characteristics
- Privacy and data protection vulnerabilities
- Security weaknesses (prompt injection, data exfiltration, jailbreaking)
- Safety risks from system failures or misuse
- Fundamental rights impacts on users and affected parties
Document each risk with:
- Detailed description and potential consequences
- Likelihood assessment (rare, unlikely, possible, likely, almost certain)
- Impact severity (negligible, minor, moderate, major, catastrophic)
- Current mitigation measures
- Residual risk level after mitigation
- Review schedule and ownership
Risk mitigation measures
Implement specific measures for identified risks:
For bias and discrimination:
- Diverse training datasets with documented demographic coverage
- Bias testing across different user groups and use cases
- Human oversight mechanisms for sensitive decisions
- Clear escalation paths when bias is detected
For security vulnerabilities:
- Comprehensive adversarial testing (prompt injection, data exfiltration, jailbreaking)
- Input sanitisation and output filtering
- Secure API design with rate limiting and authentication
- Regular security assessments by third parties
For privacy risks:
- Data minimisation principles in training and operation
- Purpose limitation for data processing
- User consent mechanisms where required
- Data retention and deletion policies
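Two of the security measures above, input sanitisation and rate limiting, can be sketched minimally in Python. The deny-list patterns are illustrative only: pattern matching alone cannot stop prompt injection and must be layered with model-side defences and output filtering.

```python
import re
import time

# Naive deny-list; a real deployment would layer this with model-side defences.
SUSPICIOUS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]

def sanitise(user_input: str) -> str:
    """Reject input matching known-bad patterns; return it trimmed otherwise."""
    for pattern in SUSPICIOUS:
        if pattern.search(user_input):
            raise ValueError("input rejected by policy filter")
    return user_input.strip()

class TokenBucket:
    """Per-client rate limiter: `rate` requests/second, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice you would attach one bucket per API key or client IP and return HTTP 429 when `allow()` is false.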
Step 3: Implement quality management systems
Timeline: Complete by June 2026
Document your development and deployment processes to demonstrate systematic quality control.
Required documentation
Design specifications covering:
- System architecture and core functionality
- Input/output specifications and data flows
- Human oversight integration points
- Performance metrics and success criteria
Development methodology including:
- Data governance procedures for training datasets
- Model training and validation processes
- Testing protocols (functional, performance, security)
- Version control and change management
- Quality assurance checkpoints
Deployment procedures covering:
- Environment setup and configuration management
- Release approval processes
- Rollback procedures for problematic deployments
- Monitoring and alerting systems
Process metrics and monitoring
Establish measurable quality indicators:
- Performance metrics (accuracy, precision, recall, F1 scores)
- Security metrics (successful attack resistance, vulnerability remediation time)
- Bias metrics (fairness across demographic groups)
- Reliability metrics (uptime, error rates, recovery time)
- User satisfaction metrics (user feedback scores, complaint rates)
Set up continuous monitoring for these metrics with:
- Automated alerting for metric degradation
- Regular review cycles (at least quarterly)
- Clear escalation procedures for quality issues
- Documentation of corrective actions taken
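A minimal version of threshold-based metric alerting might look like the sketch below. The threshold values are placeholders; tune them per system and document the rationale as part of your quality management records.

```python
# Hypothetical thresholds; tune per system and document the rationale.
THRESHOLDS = {
    "accuracy": 0.90,        # minimum acceptable
    "error_rate": 0.02,      # maximum acceptable
    "complaint_rate": 0.01,  # maximum acceptable
}
HIGHER_IS_BETTER = {"accuracy"}

def check_metrics(observed: dict[str, float]) -> list[str]:
    """Return one alert message per metric that is missing or breaches its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = observed.get(name)
        if value is None:
            alerts.append(f"{name}: no data reported")
        elif name in HIGHER_IS_BETTER and value < limit:
            alerts.append(f"{name}: {value:.3f} below minimum {limit:.3f}")
        elif name not in HIGHER_IS_BETTER and value > limit:
            alerts.append(f"{name}: {value:.3f} above maximum {limit:.3f}")
    return alerts

print(check_metrics({"accuracy": 0.87, "error_rate": 0.01, "complaint_rate": 0.03}))
```

Run this on each monitoring cycle and route any non-empty result into your escalation procedure, keeping the output as evidence of corrective action.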
Step 4: Create comprehensive technical documentation
Timeline: Complete by July 2026
The AI Act requires extensive technical documentation. Start early because this is time-intensive.
Core documentation requirements
System description including:
- General characteristics, capabilities, and limitations
- Intended purpose and reasonably foreseeable misuse
- Level of accuracy and robustness expected
- Cybersecurity measures implemented
Data governance documentation covering:
- Training data characteristics, sources, and selection criteria
- Data preprocessing and cleaning procedures
- Data validation and quality assurance measures
- Bias assessment results and mitigation actions
Model information including:
- Architecture details and design rationale
- Training procedures and hyperparameter choices
- Validation and testing methodologies
- Performance evaluation results across different conditions
Risk assessment documentation showing:
- Comprehensive risk analysis methodology
- Identified risks and their likelihood/impact assessment
- Mitigation measures and their effectiveness
- Residual risk acceptance rationale
User documentation
Instructions for deployers covering:
- Proper installation and configuration procedures
- Human oversight requirements and implementation guidance
- Monitoring recommendations and alerting setup
- Incident response procedures for system failures
End-user guidance including:
- Clear disclosure of AI system nature and limitations
- Appropriate use cases and contraindications
- Human review requirements for high-stakes decisions
- Contact information for support and complaints
Step 5: Implement conformity assessments
Timeline: Complete by August 2026
High-risk AI systems need formal conformity assessments before EU market deployment.
Internal conformity assessment
For most high-risk AI systems, you can self-assess conformity by:
Comprehensive testing across all requirements:
- Functional testing to verify intended performance
- Security testing including adversarial attack simulation
- Bias testing across relevant demographic groups
- Robustness testing under various operating conditions
Documentation review ensuring all required documentation is complete, accurate, and accessible.
EU Declaration of Conformity formally stating compliance with all applicable AI Act requirements.
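Before signing the Declaration, a simple gate over the assessment areas above helps make "complete" verifiable rather than aspirational. The check names here are hypothetical; mirror your own testing and documentation categories.

```python
# Illustrative pre-declaration gate: every assessment area must pass before
# the EU Declaration of Conformity is signed. Check names are hypothetical.
CHECKS = {
    "functional_testing": True,
    "security_testing": True,
    "bias_testing": False,       # still outstanding in this example
    "robustness_testing": True,
    "documentation_complete": True,
}

def ready_for_declaration(checks: dict[str, bool]) -> list[str]:
    """Return the outstanding items blocking the declaration; empty means ready."""
    return [name for name, done in checks.items() if not done]

print(ready_for_declaration(CHECKS))  # ['bias_testing']
```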
Third-party assessment (when required)
Certain cases require third-party conformity assessment by a notified body:
- Remote biometric identification systems, unless you have applied harmonised standards in full
- High-risk AI systems embedded in products already subject to third-party assessment under sector-specific EU product legislation
If third-party assessment applies:
- Identify appropriate notified bodies in your sector
- Prepare comprehensive technical documentation package
- Submit formal application with required fees
- Respond to assessor questions and requests for clarification
- Implement any required corrections or improvements
Step 6: Establish ongoing compliance operations
Timeline: Operational by August 2026
Compliance isn't a one-time event. Build systems for ongoing adherence.
Post-market monitoring
Performance monitoring tracking:
- System accuracy and reliability metrics over time
- User feedback and complaint patterns
- Incident reports and their root causes
- Changes in operating environment or user behaviour
Bias monitoring including:
- Regular assessment of discriminatory outcomes
- Analysis of performance across different demographic groups
- Investigation of bias complaints or concerns
- Corrective action tracking and effectiveness measurement
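For the demographic-group analysis above, one widely used heuristic is the disparate-impact ratio (the "four-fifths rule"): a value below 0.8 is a common flag for further investigation. It is not an AI Act threshold, just a practical screening metric.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """`outcomes` maps group name -> (favourable_decisions, total_decisions)."""
    return {group: fav / total for group, (fav, total) in outcomes.items()}

def disparate_impact(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of lowest to highest group selection rate; < 0.8 commonly flags
    possible bias under the 'four-fifths' heuristic."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

ratio = disparate_impact({"group_a": (80, 100), "group_b": (56, 100)})
print(round(ratio, 2))  # 0.7
```

A flagged ratio is the start of an investigation, not proof of discrimination: feed it into the bias-complaint and corrective-action tracking described above.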
Incident response procedures
Incident classification covering:
- System failures that impact user safety or rights
- Security breaches or successful attacks
- Bias incidents affecting protected groups
- Regulatory non-compliance discoveries
Response procedures including:
- Immediate containment and user protection measures
- Root cause analysis and impact assessment
- Corrective action planning and implementation
- Regulatory notification requirements (where applicable)
- Communication with affected users and stakeholders
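The classification and notification logic above can be encoded as a small incident record. Which incident types trigger regulatory notification is an illustrative policy choice here; the Act's serious-incident reporting provisions govern the real obligation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class IncidentType(Enum):
    SAFETY = "safety_or_rights_impact"
    SECURITY = "security_breach"
    BIAS = "bias_incident"
    NON_COMPLIANCE = "regulatory_non_compliance"

# Illustrative policy: which incident types trigger a regulator notification.
NOTIFY_REGULATOR = {IncidentType.SAFETY, IncidentType.SECURITY}

@dataclass
class Incident:
    kind: IncidentType
    summary: str
    detected_at: datetime
    contained: bool = False

    def requires_notification(self) -> bool:
        return self.kind in NOTIFY_REGULATOR

incident = Incident(
    kind=IncidentType.SECURITY,
    summary="Prompt-injection test bypassed output filter in production",
    detected_at=datetime.now(timezone.utc),
)
print(incident.requires_notification())  # True
```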
Documentation maintenance
Regular reviews of:
- Risk assessments and mitigation measures (at least annually)
- Technical documentation accuracy and completeness
- User instructions and guidance materials
- Quality management system effectiveness
Change management for:
- System updates and new feature deployments
- Training data changes or updates
- Operating environment modifications
- Regulatory requirement changes
How SXM Hardened helps with compliance
Many of these requirements demand expertise that most development teams don't have in-house. Building comprehensive security testing, bias assessment, and risk management capabilities from scratch takes months and can cost tens of thousands of dollars.
SXM Hardened addresses multiple compliance requirements through automated testing:
Security compliance via 37 automated tests covering prompt injection, data exfiltration, and jailbreak resistance. Maps directly to AI Act cybersecurity requirements.
Documentation support through detailed test reports that provide evidence of security assessment for conformity assessment documentation.
Risk assessment input with specific vulnerability identification and remediation guidance for your risk management systems.
Ongoing monitoring through re-certification requirements that support post-market monitoring obligations.
Blockchain attestation providing immutable audit trails for regulatory compliance demonstration.
At $19.95 for basic certification, SXM Hardened delivers professional-grade security assessment at a fraction of traditional consulting costs.
Implementation timeline summary
- March 2026: Risk classification complete
- May 2026: Risk management systems operational
- June 2026: Quality management documentation complete
- July 2026: Technical documentation package ready
- August 2026: Conformity assessment complete, ongoing operations established
Start now. August 2026 will arrive faster than you think, and compliance requires systematic effort across multiple domains. The companies that begin early will have competitive advantages over those scrambling at the deadline.
Ready to address your AI Act security testing requirements? Get your SXM Hardened certification at scientiaexmachina.co and check one major compliance requirement off your list.
SXM Hardened provides automated security certification that maps directly to EU AI Act compliance requirements. Our 37-test suite covers prompt injection, data exfiltration, and jailbreak resistance with blockchain attestation for regulatory audit trails. Learn more at scientiaexmachina.co.