
AI Compliance Audit: 20-Point EU AI Act Checklist

An AI audit checklist for EU AI Act compliance: AI inventory, risk classification, 20 control points, documentation requirements, and ongoing monitoring.

14 April 2026 · 8 min read · WarDek Team

AI Compliance Audit Checklist: 20 Control Points for the EU AI Act

Conducting an AI compliance audit is no longer optional for organizations operating in the EU. Regulation (EU) 2024/1689 (the EU AI Act) introduces a tiered obligations framework, and market surveillance authorities are developing audit methodologies aligned with it. Whether you are preparing for an internal compliance review, responding to a customer or investor due diligence request, or anticipating eventual regulatory scrutiny, this checklist provides the 20 control points that matter most.

The checklist is organized into four phases: building your AI inventory, performing risk classification, verifying governance controls, and establishing ongoing monitoring. Each control point maps to specific articles in the regulation or to recognized guidance from ENISA, the NIST AI Risk Management Framework (NIST AI RMF), or the EDPB.

Phase 1 — AI Inventory (Controls 1-5)

A compliance audit cannot proceed without a complete and accurate inventory of AI systems in use. This is the most common gap identified in early AI Act readiness assessments: organizations deploy AI tools across departments without central visibility.

Control 1: Completeness of AI inventory

Does the organization maintain a documented inventory of every AI system it develops, deploys, or procures? The inventory should cover both internally developed systems and third-party AI tools (including SaaS platforms with AI features).

Control 2: Ownership assignment

Is there a named owner for each AI system responsible for compliance? The AI Act's deployer obligations (Article 26) must be assigned to a competent person.

Control 3: Vendor documentation status

For each third-party AI system, has the organization obtained and reviewed the provider's instructions for use, a data processing agreement (DPA, where personal data is processed), and available conformity documentation?

Control 4: System interconnections mapped

Are dependencies between AI systems documented? An AI system that feeds inputs to another can affect the overall risk profile.

Control 5: AI inventory review cadence

Is the inventory reviewed and updated at a defined frequency — at minimum when new AI systems are introduced or existing systems are substantially modified?

Phase 2 — Risk Classification (Controls 6-10)

Control 6: Classification methodology documented

Has the organization documented its methodology for classifying AI systems by risk level? The methodology should reference Annex III of Regulation 2024/1689 and any relevant Commission guidance.

Control 7: All systems classified

Is every system in the AI inventory assigned a risk level (prohibited, high-risk, limited risk, minimal risk)?

Control 8: Prohibited practices confirmed absent

Has the organization explicitly reviewed its AI systems against Article 5 prohibited practices and confirmed none are in use?

Control 9: High-risk reclassification review

Has the organization considered whether any initially minimal-risk classified system should be reclassified as high-risk based on its actual use context (not just its technical description)?

Control 10: GPAI provider obligations identified

If the organization provides services that include a general-purpose AI model (LLM, multimodal foundation model), has it identified and tracked its obligations under Chapter V?

Phase 3 — Governance Controls (Controls 11-17)

Control 11: AI literacy program in place

Does the organization have a documented AI literacy program satisfying Article 4? This applies from 2 February 2025 regardless of AI risk level.

Control 12: Human oversight mechanisms

For each high-risk AI system, is there a documented human oversight mechanism specifying who reviews AI outputs, when overrides are permitted, and how disagreements are resolved?

Control 13: Incident reporting process

Is there a documented process for identifying, assessing, and reporting serious incidents related to AI systems? Article 73 requires deployers to report serious incidents to providers; providers must escalate to market surveillance authorities.

Control 14: Data governance for AI systems

Are the datasets used to train or fine-tune internal AI systems documented, and are data quality and representativeness assessments conducted?

Control 15: AI-DPIA coordination

Where high-risk AI systems process personal data, has a Data Protection Impact Assessment (DPIA) been conducted that includes AI-specific risk analysis?

Control 16: Procurement criteria for AI vendors

Does the organization's procurement process include AI compliance criteria? Vendors of high-risk AI systems should be able to demonstrate conformity assessment completion.

Control 17: Sub-processor chain for AI

Where AI vendors use sub-processors or underlying models (e.g., OpenAI API under a SaaS product), is the full chain documented and compliance obligations flowed down?

Phase 4 — Monitoring and Documentation (Controls 18-20)

Control 18: Post-market monitoring

For high-risk AI systems the organization deploys (or provides), is there a post-market monitoring plan that collects and analyzes operational data to detect performance degradation or emerging risks?

Control 19: Regulatory change tracking

Is there a process for tracking updates to the AI Act, Commission delegated acts, harmonized standards, and relevant guidance from the European AI Office and ENISA?

Control 20: Audit trail completeness

Are decisions made by high-risk AI systems logged with sufficient detail to reconstruct the basis for any individual decision and to respond to data subject queries or regulatory investigations?

Audit Summary Template

Use the following format to summarize your findings:

| Phase | Controls | Pass | Partial | Fail | Critical Gaps |
|---|---|---|---|---|---|
| AI Inventory | 1-5 | — | — | — | — |
| Risk Classification | 6-10 | — | — | — | — |
| Governance Controls | 11-17 | — | — | — | — |
| Monitoring & Docs | 18-20 | — | — | — | — |
| Total | 20 | — | — | — | — |

A score of 16/20 or above suggests a reasonable compliance posture. Below 12/20 indicates significant gaps that should be addressed before August 2026.
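The scoring above can be computed mechanically from the summary table. A minimal sketch; the 16/20 and 12/20 thresholds come from this checklist, not from the regulation itself:

```python
def audit_score(results: dict[int, str]) -> tuple[int, str]:
    """Score a 20-control audit.

    results maps control number (1-20) to 'pass', 'partial', or 'fail'.
    Verdict thresholds follow this article's rubric, not the AI Act.
    """
    passed = sum(1 for outcome in results.values() if outcome == "pass")
    if passed >= 16:
        verdict = "reasonable compliance posture"
    elif passed >= 12:
        verdict = "gaps to remediate"
    else:
        verdict = "significant gaps - prioritize before August 2026"
    return passed, verdict
```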

How WarDek Automates This Audit

WarDek's AI compliance audit module maps directly to this 20-control framework. It automates inventory collection, risk classification scoring, and documentation gap identification, and it generates board-ready audit reports. Continuous monitoring alerts notify you of new guidance from the European AI Office or changes to the prohibited practices list.

Run your AI compliance audit in WarDek — the first audit report is free.

Key Takeaways

An AI compliance audit starts with a complete inventory — without it, classification and governance controls are meaningless. Human oversight documentation and AI literacy training are the two highest-frequency gaps in SME assessments, and both have obligations that apply now. The 20 control points in this checklist cover both current obligations and those entering into force through 2027. Conduct this audit annually and after any material change to your AI system landscape.

For related reading, see our AI Act risk classification guide and AI Act obligations for SMEs.

#AI Act #audit #compliance checklist #AI governance #2024/1689 #risk management
