AI Compliance Audit Checklist: 20 Control Points for the EU AI Act
Conducting an AI compliance audit is no longer optional for organizations operating in the EU. Regulation 2024/1689 (the EU AI Act) introduces a tiered framework of obligations, and market surveillance authorities are developing audit methodologies aligned with it. Whether you are running an internal compliance review, responding to a customer or investor due diligence request, or getting ahead of eventual regulatory scrutiny, this checklist provides the 20 control points that matter most.
The checklist is organized into four phases: build your AI inventory, classify systems by risk, verify governance controls, and establish ongoing monitoring. Each control point maps to specific articles of the regulation or to recognized guidance from ENISA, the NIST AI Risk Management Framework (NIST AI RMF), or the EDPB.
Phase 1 — AI Inventory (Controls 1-5)
A compliance audit cannot proceed without a complete and accurate inventory of AI systems in use. This is the most common gap identified in early AI Act readiness assessments: organizations deploy AI tools across departments without central visibility.
Control 1: Completeness of AI inventory
Does the organization maintain a documented inventory of every AI system it develops, deploys, or procures? The inventory should cover both internally developed systems and third-party AI tools (including SaaS platforms with AI features).
- Check: Structured inventory document or database exists
- Minimum fields: System name, vendor (if applicable), version, intended purpose, department, data processed, deployment date (a minimal record layout is sketched after this list)
- Red flag: AI tools discovered through interviews that do not appear in the inventory
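The inventory itself can be kept as structured records rather than free text. Below is a minimal sketch, in Python, of one possible record layout covering the minimum fields above; the field names and types are illustrative choices, not something the Regulation prescribes.

```python
from dataclasses import dataclass
from datetime import date

# One record per AI system, covering the Control 1 minimum fields.
# Field names and types are illustrative, not prescribed by the AI Act.
@dataclass
class AISystemRecord:
    system_name: str
    intended_purpose: str
    department: str
    data_processed: list[str]             # e.g. ["CVs", "contact details"]
    deployment_date: date
    vendor: str | None = None             # None for internally developed systems
    version: str = "unversioned"
    owner: str | None = None              # Control 2: named accountable owner
    risk_level: str = "to be determined"  # Control 7: resolve with a target date
```

Keeping records machine-readable makes the later controls (ownership, classification, review cadence) straightforward to check automatically.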
Control 2: Ownership assignment
Is there a named owner for each AI system responsible for compliance? The AI Act's deployer obligations (Article 26) must be assigned to a competent person.
- Check: Each inventory entry has a named owner with defined responsibilities
- NIST AI RMF alignment: GOVERN 2.1 (roles and responsibilities for AI risk management are documented and clear)
Control 3: Vendor documentation status
For each third-party AI system, has the organization obtained and reviewed the provider's instructions for use, a data processing agreement (DPA) where personal data is processed, and available conformity documentation?
- Check: Documentation on file and reviewed within the last 12 months
- Red flag: Vendor cannot provide any conformity documentation for a system classified as high-risk
Control 4: System interconnections mapped
Are dependencies between AI systems documented? An AI system whose outputs feed another system can affect the overall risk profile; a toy dependency map is sketched below.
- Check: Data flow diagrams show AI system interconnections
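To make the interconnection point concrete, here is a toy sketch with hypothetical system names: a dependency map plus a helper that lists every system transitively affected by one system's outputs. Any system in that downstream set may need its risk classification revisited when the upstream system changes.

```python
# Toy dependency map (hypothetical system names): which systems feed
# their outputs into which others.
ai_dependencies: dict[str, list[str]] = {
    "cv_screener": ["candidate_ranker"],
    "candidate_ranker": [],
    "chat_assistant": [],
}

def downstream(system: str, deps: dict[str, list[str]]) -> set[str]:
    """Return every system transitively affected by `system`'s outputs."""
    seen: set[str] = set()
    stack = list(deps.get(system, []))
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(deps.get(current, []))
    return seen

print(downstream("cv_screener", ai_dependencies))  # {'candidate_ranker'}
```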
Control 5: AI inventory review cadence
Is the inventory reviewed and updated at a defined frequency — at minimum when new AI systems are introduced or existing systems are substantially modified?
- Check: Review process documented with named responsible party
Phase 2 — Risk Classification (Controls 6-10)
Control 6: Classification methodology documented
Has the organization documented its methodology for classifying AI systems by risk level? The methodology should reference Annex III of Regulation 2024/1689 and any relevant Commission guidance.
- Check: Classification methodology in writing, reviewed by legal or compliance function
Control 7: All systems classified
Is every system in the AI inventory assigned a risk level (prohibited, high-risk, limited risk, minimal risk)?
- Check: Risk level column populated for all inventory entries
- Red flag: Systems classified as "unknown" or "to be determined" without a target date (the sketch below shows a simple completeness check)
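Once the inventory is structured, the completeness check can be automated. The sketch below assumes inventory entries are plain dictionaries with system_name and risk_level keys, and flags any entry whose level is not one of the four AI Act tiers.

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited risk"
    MINIMAL_RISK = "minimal risk"

VALID_LEVELS = {level.value for level in RiskLevel}

def unclassified(inventory: list[dict]) -> list[str]:
    """Return the names of entries whose risk level is not a recognized tier."""
    return [
        entry["system_name"]
        for entry in inventory
        if entry.get("risk_level") not in VALID_LEVELS
    ]

# Example: the second system is still awaiting classification.
inventory = [
    {"system_name": "cv_screener", "risk_level": "high-risk"},
    {"system_name": "chat_assistant", "risk_level": "to be determined"},
]
print(unclassified(inventory))  # ['chat_assistant']
```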
Control 8: Prohibited practices confirmed absent
Has the organization explicitly reviewed its AI systems against Article 5 prohibited practices and confirmed none are in use?
- Check: Signed-off review against Article 5 list, dated
- Guidance reference: the European Commission's guidelines on prohibited AI practices (February 2025) interpret the Article 5 list
Control 9: High-risk reclassification review
Has the organization considered whether any system initially classified as minimal risk should be reclassified as high-risk based on its actual use context (not just its technical description)?
- Check: Use-context review documented for AI systems with borderline classification
- Note: An AI system used for employee performance monitoring may be high-risk under Annex III regardless of how the vendor marketed it
Control 10: GPAI provider obligations identified
If the organization provides a general-purpose AI model (LLM, multimodal foundation model), rather than only deploying one supplied by a vendor, has it identified and tracked its obligations under Chapter V?
- Check: GPAI obligations documented or GPAI model use confirmed as deployer-only
Phase 3 — Governance Controls (Controls 11-17)
Control 11: AI literacy program in place
Does the organization have a documented AI literacy program satisfying Article 4? This obligation has applied since 2 February 2025, regardless of AI risk level.
- Check: Training materials exist, completion records kept, training is role-specific (not one-size-fits-all)
- Minimum: Staff using AI tools have completed training; management has received regulatory awareness training
Control 12: Human oversight mechanisms
For each high-risk AI system, is there a documented human oversight mechanism specifying who reviews AI outputs, when overrides are permitted, and how disagreements are resolved?
- Check: Human oversight policy or SOP exists per high-risk system (Article 14)
- Red flag: High-risk AI decisions (e.g., recruitment screening, credit scoring) applied without any documented human review step
Control 13: Incident reporting process
Is there a documented process for identifying, assessing, and reporting serious incidents related to AI systems? Deployers must inform providers of serious incidents (Article 26(5)); providers must report them to market surveillance authorities (Article 73).
- Check: Incident process documented, contact details for providers on file, reporting timelines understood (15 days for most serious incidents, 10 days where a death is involved, 2 days for widespread infringements or serious incidents involving critical infrastructure; a deadline helper is sketched below)
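The reporting windows are easier to respect when they are encoded rather than recalled under pressure. The helper below is a simplified summary of the Article 73 deadlines, assuming the clock starts when the organization becomes aware of the incident; verify it against the current text of the Regulation, and note that the duty to report immediately still applies in every case.

```python
from datetime import date, timedelta

# Simplified Article 73 reporting windows, in days after becoming aware.
# Verify against the Regulation before relying on this mapping.
DEADLINE_DAYS = {
    "serious_incident": 15,        # Art. 73(2)
    "widespread_infringement": 2,  # Art. 73(3), incl. critical infrastructure
    "death": 10,                   # Art. 73(4)
}

def report_by(incident_type: str, aware_on: date) -> date:
    """Latest filing date; reporting 'immediately' still applies in all cases."""
    return aware_on + timedelta(days=DEADLINE_DAYS[incident_type])

print(report_by("death", date(2026, 3, 1)))  # 2026-03-11
```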
Control 14: Data governance for AI systems
Are the datasets used to train or fine-tune internal AI systems documented, and are data quality and representativeness assessments conducted?
- Check: Training data documentation exists; data bias assessment conducted or planned
- NIST AI RMF reference: MAP 2.3 (scientific integrity and TEVV considerations, including data collection, selection, and representativeness, are documented)
Control 15: AI-DPIA coordination
Where high-risk AI systems process personal data, has a Data Protection Impact Assessment (DPIA) been conducted that includes AI-specific risk analysis?
- Check: DPIAs for AI systems exist and include a section on AI-specific risks (automated decision-making, data quality, human oversight)
- EDPB reference: the Article 29 Working Party Guidelines on DPIA (WP248 rev.01, endorsed by the EDPB) provide the framework; the AI Act adds new dimensions to it
Control 16: Procurement criteria for AI vendors
Does the organization's procurement process include AI compliance criteria? Vendors of high-risk AI systems should be able to demonstrate conformity assessment completion.
- Check: AI compliance questions included in vendor due diligence; contract templates include AI-specific clauses
Control 17: Sub-processor chain for AI
Where AI vendors use sub-processors or underlying models (e.g., OpenAI API under a SaaS product), is the full chain documented and compliance obligations flowed down?
- Check: Sub-processor list obtained from AI vendors; obligations documented
Phase 4 — Monitoring and Documentation (Controls 18-20)
Control 18: Post-market monitoring
For high-risk AI systems the organization deploys (or provides), is there a post-market monitoring plan that collects and analyzes operational data to detect performance degradation or emerging risks?
- Check: Monitoring plan documented; KPIs defined (accuracy, bias indicators, error rates); a toy drift monitor is sketched below
- Article 72 reference: Providers must implement post-market monitoring systems
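As one illustration of what a monitoring KPI can look like in code, the sketch below implements a toy rolling error-rate check. The baseline, window, and margin are placeholder values to be set per system in the monitoring plan, not recommended thresholds.

```python
from collections import deque

class ErrorRateMonitor:
    """Toy drift check: flag when the rolling error rate over the last
    `window` decisions exceeds the deployment baseline by `margin`."""

    def __init__(self, baseline_rate: float, window: int = 500, margin: float = 0.05):
        self.baseline = baseline_rate
        self.margin = margin
        self.outcomes: deque[bool] = deque(maxlen=window)

    def record(self, is_error: bool) -> bool:
        """Record one decision outcome; return True if degradation is flagged."""
        self.outcomes.append(is_error)
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.baseline + self.margin

monitor = ErrorRateMonitor(baseline_rate=0.02)
if monitor.record(is_error=True):
    print("error rate above baseline; trigger the incident process (Control 13)")
```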
Control 19: Regulatory change tracking
Is there a process for tracking updates to the AI Act, Commission delegated acts, harmonized standards, and relevant guidance from the European AI Office and ENISA?
- Check: Named person responsible for regulatory monitoring; review cadence defined (minimum quarterly)
Control 20: Audit trail completeness
Are decisions made by high-risk AI systems logged with sufficient detail to reconstruct the basis for any individual decision and to respond to data subject queries or regulatory investigations?
- Check: Logging capabilities verified; retention period aligned with potential litigation and regulatory timeframes (a minimal structured-log sketch follows this control)
- Article 12 reference: High-risk AI systems must have logging capabilities enabling reconstruction of events
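The sketch below shows one possible shape for a structured per-decision log record; the field names are illustrative and should be adapted per system. Raw personal data is better referenced (input_ref) than duplicated into the log itself.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_decision_audit")

def log_decision(system_id: str, input_ref: str, output: str,
                 model_version: str, reviewer: str | None) -> None:
    """Emit one structured record per automated decision (illustrative fields)."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,      # pointer to stored inputs, not raw personal data
        "output": output,
        "model_version": model_version,
        "human_reviewer": reviewer,  # Control 12: who exercised oversight, if anyone
    }))

log_decision("cv_screener", "case-2026-0142", "shortlisted", "v3.1", "j.doe")
```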
Audit Summary Template
Use the following format to summarize your findings:
| Phase | Controls | Pass | Partial | Fail | Critical Gaps |
|---|---|---|---|---|---|
| AI Inventory | 1-5 | — | — | — | — |
| Risk Classification | 6-10 | — | — | — | — |
| Governance Controls | 11-17 | — | — | — | — |
| Monitoring & Docs | 18-20 | — | — | — | — |
| Total | 20 | — | — | — | — |
A score of 16/20 or above suggests a reasonable compliance posture; below 12/20 indicates significant gaps that should be addressed before the regulation's main application date of 2 August 2026. One possible scoring convention is sketched below.
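The thresholds above imply a simple tally. One way to score, assuming half credit for partially implemented controls (an assumption, not part of the checklist), is:

```python
def audit_score(passed: int, partial: int, failed: int) -> float:
    """Pass = 1 point, partial = 0.5, fail = 0; all 20 controls must be rated."""
    assert passed + partial + failed == 20, "rate all 20 controls"
    return passed + 0.5 * partial

print(audit_score(passed=14, partial=4, failed=2))  # 16.0: reasonable posture
```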
How WarDek Automates This Audit
WarDek's AI compliance audit module maps directly to this 20-control framework. It automates inventory collection, risk classification scoring, and documentation gap identification, and it generates board-ready audit reports. Continuous monitoring alerts notify you of new guidance from the European AI Office or changes to the prohibited practices list.
Run your AI compliance audit in WarDek — the first audit report is free.
Key Takeaways
An AI compliance audit starts with a complete inventory — without it, classification and governance controls are meaningless. Human oversight documentation and AI literacy training are the two highest-frequency gaps in SME assessments, and both have obligations that apply now. The 20 control points in this checklist cover both current obligations and those entering into force through 2027. Conduct this audit annually and after any material change to your AI system landscape.
For related reading, see our AI Act risk classification guide and AI Act obligations for SMEs.