EU AI Act readiness, built in.
High-risk enforcement begins August 2026. ShadowIQ maps every EU AI Act article to a live ShadowIQ control and a cryptographic evidence artifact — so readiness is a configuration, not a consulting project.
Summary
The EU AI Act (Regulation (EU) 2024/1689) is the European Union's horizontal regulation of AI systems, with high-risk system enforcement beginning 2 August 2026. ShadowIQ provides pre-mapped controls and cryptographic evidence for Articles 9 (risk management), 10 (data governance), 12 (logging), 13 (transparency), 14 (human oversight), 15 (accuracy), and 17 (quality management).
The crosswalk: article → control → signed evidence.
You've heard this one before.
- Uncertainty over which internal AI systems fall into 'high-risk'.
- Article 12 logging requirements with no existing evidence infrastructure.
- Human-oversight processes that exist in policy but not in production.
- Conformity assessment documentation spread across teams.
Three moves.
1. Scoped: high-risk or not.
The registry walks you through Annex III; it then classifies each asset and assigns the control set automatically.
2. Article 12 logging, by default.
Every decision, input, and output is logged with the duration and completeness required by Art. 12 — signed and queryable in the auditor workspace.
3. Conformity bundle.
Auto-generated technical documentation (Annex IV), risk management records (Art. 9), and post-market monitoring plan (Art. 72) — OSCAL-exportable and signed.
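The signed, queryable logging in step 2 can be sketched as an append-only record with a detached signature. This is a minimal illustration, not ShadowIQ's actual schema: the field names, `SIGNING_KEY`, and the use of HMAC-SHA256 are all assumptions for the sketch (a production system would sign with a key held in a KMS or HSM).

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Illustrative key only; in practice this lives in a KMS/HSM.
SIGNING_KEY = b"demo-signing-key"

def signed_log_entry(system_id: str, event: dict) -> dict:
    """Build an Article 12-style log record and sign it with HMAC-SHA256."""
    record = {
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,  # the decision, input, and output being recorded
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Recompute the signature over the record minus its signature field."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

entry = signed_log_entry("credit-scoring-v2", {"decision": "approve"})
assert verify(entry)
```

Any tampering with a logged field invalidates the signature, which is what makes the trail useful to an auditor rather than just a log file.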
Numbers, not adjectives.
EU AI Act article → ShadowIQ control → signed evidence.
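The crosswalk is, at bottom, a lookup from article to control to evidence artifact. The sketch below shows the shape of that mapping for the articles named in the summary; the control IDs and evidence names are hypothetical placeholders, not ShadowIQ's real catalogue.

```python
# Hypothetical crosswalk: EU AI Act article -> control -> evidence artifact.
# Control IDs and evidence names are illustrative.
CROSSWALK = {
    "Art. 9":  {"control": "risk-management-register", "evidence": "signed risk assessment"},
    "Art. 10": {"control": "data-governance-policy",   "evidence": "dataset lineage attestation"},
    "Art. 12": {"control": "decision-logging",         "evidence": "signed event log"},
    "Art. 13": {"control": "transparency-notice",      "evidence": "deployer instructions bundle"},
    "Art. 14": {"control": "human-oversight-gate",     "evidence": "override audit trail"},
    "Art. 15": {"control": "accuracy-monitoring",      "evidence": "evaluation report"},
    "Art. 17": {"control": "quality-management",       "evidence": "QMS document set"},
}

def controls_for(articles: list[str]) -> list[str]:
    """Resolve the control set an asset inherits from its applicable articles."""
    return sorted(CROSSWALK[a]["control"] for a in articles)
```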
Asked, answered, sourced.
When do the obligations apply?
The Act entered into force on 1 August 2024. Prohibitions apply from 2 February 2025. High-risk system obligations apply from 2 August 2026. General-purpose AI model obligations apply from 2 August 2025.
Which systems count as high-risk?
Systems listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) or safety components of products covered by Annex I (machinery, medical devices, etc.).
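The two-branch test in that answer can be sketched as a first-pass classifier. This is a deliberately simplified illustration: the category list below flattens Annex III's actual definitions and exceptions, and any real scoping decision needs legal review.

```python
# Simplified Annex III category tags (illustrative; the Annex's actual
# definitions are narrower and carry exceptions).
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def classify(asset: dict) -> str:
    """First-pass tag: high-risk if the asset is a safety component of an
    Annex I product, or falls into an Annex III category."""
    if asset.get("annex_i_safety_component"):
        return "high-risk"
    if asset.get("category") in ANNEX_III_CATEGORIES:
        return "high-risk"
    return "not-high-risk"
```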
What are the penalties?
Up to €35M or 7% of worldwide annual turnover (whichever is higher) for prohibited practices; up to €15M or 3% for most other violations; up to €7.5M or 1% for incorrect information to authorities.
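The "whichever is higher" mechanic above is worth making concrete, since for large companies the turnover percentage dominates the fixed amount:

```python
def penalty_cap(tier_fixed_eur: float, tier_pct: float, turnover_eur: float) -> float:
    """Cap = the higher of the fixed amount and the percentage of worldwide turnover."""
    return max(tier_fixed_eur, tier_pct * turnover_eur)

# Prohibited-practice tier for a company with €1B worldwide turnover:
# the higher of €35M and 7% of €1B (€70M) is €70M.
cap = penalty_cap(35_000_000, 0.07, 1_000_000_000)
```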
Does ShadowIQ cover general-purpose AI models?
Yes. For general-purpose AI models deployed within your enterprise, we produce the technical documentation, training-data summary, and copyright-compliance attestations required under Art. 53.
Keep going.
Your 30-minute demo. A signed audit trail by the end of it.
We'll wire ShadowIQ into one live workload, block a prompt injection in real time, and hand you a cryptographic receipt — before the meeting ends.