EU AI ACT — COMPLIANCE FOR GE¶
OWNER: julian
UPDATED: 2026-03-24
SCOPE: EU AI Act compliance for GE as an AI-driven software development agency, and for client products built by GE
REGULATION: Regulation (EU) 2024/1689 — Artificial Intelligence Act
ENTRY_INTO_FORCE: August 1, 2024
CROSS_REF: domains/eu-regulation/index.md for timeline and overview
PHASED_IMPLEMENTATION_TIMELINE¶
| Date | What enters into force |
|---|---|
| Feb 2, 2025 | DONE: Prohibited AI practices (Art. 5), AI literacy (Art. 4) |
| Aug 2, 2025 | DONE: GPAI model rules (Chapter V), governance structure |
| Aug 2, 2026 | UPCOMING: High-risk AI systems under Annex III; general applicability of remaining provisions (incl. Art. 50 transparency) |
| Aug 2, 2027 | FUTURE: High-risk AI in Annex I (harmonized legislation products) |
RISK_CLASSIFICATION_SYSTEM¶
UNACCEPTABLE_RISK (BANNED — Art. 5)¶
IN_FORCE: since February 2, 2025
PROHIBITED_PRACTICES:
- Social scoring by public authorities
- Subliminal manipulation causing harm
- Exploitation of vulnerabilities (age, disability, specific social or economic situation)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (narrow exceptions)
- Emotion recognition in workplace and education
- Untargeted scraping for facial recognition databases
- Biometric categorization inferring sensitive characteristics
GE_RELEVANCE: GE does not build any of these. Ensure no client project falls into this category.
CHECK: during scoping (aimee), verify no prohibited use case
HIGH_RISK (Annex III — from Aug 2, 2026)¶
CATEGORIES:
1. Biometric identification (non-real-time)
2. Critical infrastructure management
3. Education and vocational training (access, assessment, monitoring)
4. Employment (recruitment, promotion, termination, task allocation, performance monitoring)
5. Access to essential services (credit scoring, insurance, social benefits)
6. Law enforcement
7. Migration, asylum, border control
8. Administration of justice and democratic processes
GE_RELEVANCE:
IF client product involves recruitment/HR automation THEN high-risk (category 4)
IF client product involves credit scoring or insurance THEN high-risk (category 5)
IF client product involves educational assessment THEN high-risk (category 3)
RULE: identify during scoping — high-risk classification triggers HEAVY obligations
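The scoping rules above can be sketched as a single classification helper. The keyword sets and the function below are illustrative assumptions for this note, not existing GE tooling, and are deliberately non-exhaustive — the legal categories in Annex III and Art. 5 remain authoritative.

```python
# Sketch: map a project's use cases to its strictest AI Act risk tier.
# Keyword sets are illustrative and non-exhaustive — assumption, not tooling.
PROHIBITED = {"social_scoring", "subliminal_manipulation",
              "facial_db_scraping", "workplace_emotion_recognition"}
HIGH_RISK = {"recruitment", "credit_scoring", "insurance_pricing",
             "educational_assessment", "biometric_identification"}
LIMITED_RISK = {"chatbot", "content_generation", "deepfake"}

def classify(use_cases: set) -> str:
    """Return the strictest applicable AI Act risk tier for a project."""
    if use_cases & PROHIBITED:
        return "UNACCEPTABLE — do not build (Art. 5)"
    if use_cases & HIGH_RISK:
        return "HIGH_RISK — provider obligations apply (Annex III)"
    if use_cases & LIMITED_RISK:
        return "LIMITED_RISK — transparency obligations (Art. 50)"
    return "MINIMAL_RISK — no specific obligations"
```

Note that the checks are ordered strictest-first: a project mixing a chatbot with recruitment automation is classified high-risk, not limited-risk.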
LIMITED_RISK (Transparency obligations only)¶
APPLIES_TO:
- AI systems that interact with natural persons (chatbots)
- AI systems that generate synthetic content (deepfakes, AI-generated text/images)
- Emotion recognition systems (non-prohibited contexts)
- Biometric categorization systems (non-prohibited contexts)
OBLIGATIONS:
- Inform users they are interacting with AI (Art. 50(1))
- Label AI-generated content (Art. 50(2))
- Disclose use of emotion recognition/biometric categorization (Art. 50(3))
GE_RELEVANCE:
- GE itself IS an AI system interacting with clients (via dima) → transparency obligation
- Client products using chatbots → must inform end users of AI interaction
- Client products generating content → must label as AI-generated
MINIMAL_RISK (No specific obligations)¶
EXAMPLES: spam filters, AI in video games, inventory management
APPLIES: vast majority of AI systems on the EU market
GE_AS_AN_AI_SYSTEM — CLASSIFICATION¶
THE_QUESTION¶
GE is a multi-agent AI system that builds custom SaaS applications. What risk category does GE itself fall into?
ANALYSIS¶
GE_CORE_FUNCTION: software development (coding, testing, deployment)
OUTPUT: custom SaaS applications for SME clients
INTERACTION: clients interact with dima (chatbot) → limited risk transparency obligation
DECISION_MAKING: GE agents make technical decisions about code, architecture, testing
IMPACT_ON_INDIVIDUALS: GE does not make decisions about natural persons (no hiring, scoring, profiling of end users)
CLASSIFICATION_ASSESSMENT:
- GE is NOT high-risk under Annex III — software development is not a listed category
- GE IS limited-risk for client interaction (chatbot) → transparency obligation applies
- GE may BUILD high-risk systems for clients → provider obligations may apply
IF GE builds high-risk AI for client:
- IF client places system on market under OWN brand THEN client = provider, GE = not directly regulated
- IF GE places system on market under GE brand THEN GE = provider (heaviest obligations)
- IF GE substantially modifies client's existing AI THEN GE may become provider (Art. 25)
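The provider-role rules above amount to a small decision tree. A minimal sketch, assuming a simplified reading of Art. 25 — the function name and inputs are hypothetical, and per-project legal review is still required:

```python
def provider_role(brand_on_market: str, ge_substantially_modifies: bool) -> str:
    """Who carries AI Act provider obligations for a high-risk system GE built?
    Simplified sketch of the Art. 25 allocation rules — not legal advice."""
    if ge_substantially_modifies:
        return "GE may become provider (Art. 25 substantial modification)"
    if brand_on_market == "GE":
        return "GE is provider — full Chapter III obligations"
    return "Client is provider — GE supports documentation handover"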
GE_CLASSIFICATION_DECISION¶
GE_CORE_SYSTEM: LIMITED RISK (chatbot interaction with clients)
GE_AS_DEVELOPER: depends on what is built per client project
RULE: classify EACH client project independently during scoping (aimee + julian)
TRANSPARENCY_OBLIGATIONS (Art. 50)¶
ART_50(1) — AI_INTERACTING_WITH_HUMANS¶
REQUIRES: persons interacting with AI are informed they are interacting with AI
UNLESS: obvious from circumstances or context
GE_IMPLEMENTATION:
- Dima (client-facing chatbot) MUST disclose AI nature at start of every conversation
- Client products with chatbots MUST include AI disclosure
- Admin-UI chat with agents MUST indicate AI agent (already labeled by agent name)
EVIDENCE: dima greeting includes AI disclosure, client product templates include disclosure
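The Art. 50(1) implementation for dima reduces to prepending a disclosure to the first agent turn. The exact disclosure wording below is illustrative, not the actual dima greeting:

```python
# Illustrative Art. 50(1) disclosure — actual dima wording may differ.
AI_DISCLOSURE = ("You are chatting with dima, an AI assistant operated by GE. "
                 "A human can be looped in on request.")

def open_conversation(first_message: str) -> list:
    """Prepend the AI-interaction disclosure to the first agent turn."""
    return [AI_DISCLOSURE, first_message]
```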
ART_50(2) — AI-GENERATED_CONTENT¶
REQUIRES: providers of AI systems generating synthetic content shall ensure outputs are marked in a machine-readable format and detectable as artificially generated
INCLUDES: text, audio, image, video
GE_IMPLEMENTATION:
- Code generated by GE agents: git commit metadata identifies AI authorship
- Client-facing content generated by AI: metadata tagging
- AI-generated images (felice): embed AI provenance metadata (C2PA where feasible)
EVIDENCE: git commit logs with AI attribution, metadata samples
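For code, the machine-readable marking can ride on git commit trailers. The trailer keys below are an assumption for this note (there is no standard AI Act trailer vocabulary); for images, C2PA manifests are the analogous route:

```python
def ai_commit_trailers(agent: str, model: str) -> str:
    """Git commit trailers marking AI authorship in machine-readable form.
    Trailer key names are illustrative, not a standardized vocabulary."""
    return (f"AI-Generated: true\n"
            f"AI-Agent: {agent}\n"
            f"AI-Model: {model}\n")
```

Appended to the commit message body, these trailers are parseable with `git interpret-trailers` and survive in the audit trail referenced under EVIDENCE.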
ART_50(4) — DEPLOYER_DISCLOSURE_FOR_DEEPFAKES¶
REQUIRES: deployers of AI that generates deepfakes shall disclose AI generation
GE_RELEVANCE: if client products generate synthetic media, disclosure required
PROVIDER_OBLIGATIONS_FOR_HIGH-RISK_AI (from Aug 2026)¶
IF GE acts as provider of high-risk AI system, ALL of the following apply:
ART_9 — RISK_MANAGEMENT_SYSTEM¶
REQUIRES:
- Continuous, iterative process throughout AI lifecycle
- Identify and analyze known and foreseeable risks
- Estimate and evaluate risks from intended use and reasonably foreseeable misuse
- Adopt risk management measures
GE_IMPLEMENTATION: integrate into existing ISMS risk assessment (clause 6.1), AI-specific risk register
ART_10 — DATA_GOVERNANCE¶
REQUIRES:
- Training, validation, and testing datasets subject to data governance
- Bias detection and mitigation
- Data quality criteria
GE_RELEVANCE: GE uses pre-trained foundation models (Claude, GPT, Gemini) — data governance responsibility primarily on GPAI providers (Anthropic, OpenAI, Google)
IF GE fine-tunes models THEN data governance obligations apply to fine-tuning data
ART_11 — TECHNICAL_DOCUMENTATION¶
REQUIRES: technical documentation per Annex IV BEFORE placing on market
ANNEX_IV_CONTENT:
- General description of AI system
- Detailed description of elements and development process
- Information about monitoring, functioning, and control
- Description of risk management system
- Description of changes throughout lifecycle
- List of applied harmonized standards
- Description of performance metrics
- Description of intended purpose and foreseeable misuse
GE_IMPLEMENTATION: anna (Formal Specification) produces system descriptions; julian adds compliance documentation
ART_12 — RECORD_KEEPING (automatic logging)¶
REQUIRES: high-risk AI systems must be designed to enable automatic recording of events (logs) over their lifetime
LOGGING_MUST_INCLUDE:
- Period of each use
- Input data (or reference)
- Verification of input data
- Events relevant to AI-specific risks
GE_IMPLEMENTATION: existing logging infrastructure (A.8.15) extended with AI-specific events
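One Art. 12-style log entry can be sketched as a structured record covering the required fields. Storing a hash as the input reference (rather than raw input) is a design assumption here, chosen to keep personal data out of the log itself:

```python
import hashlib
import json
from datetime import datetime, timezone

def ai_event_record(system_id: str, input_data: bytes, event: str) -> str:
    """One Art. 12-style log entry: timestamp of use, input reference
    (SHA-256 hash, not raw data), and the AI-relevant event. Field names
    are an illustrative schema, not a mandated format."""
    return json.dumps({
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_sha256": hashlib.sha256(input_data).hexdigest(),
        "event": event,
    })
```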
ART_13 — TRANSPARENCY_TO_DEPLOYERS¶
REQUIRES: instructions for use for deployers including:
- Provider identity and contact
- AI system characteristics, capabilities, limitations
- Performance metrics (accuracy, robustness, cybersecurity)
- Known and foreseeable risks and their mitigation
- Technical measures for human oversight (Art. 14)
- Expected lifetime and maintenance measures
GE_IMPLEMENTATION: comprehensive system documentation delivered with every high-risk product
ART_14 — HUMAN_OVERSIGHT¶
REQUIRES: design AI system to allow effective human oversight
MUST_ENABLE:
- Proper understanding of system capabilities and limitations
- Awareness of automation bias risk
- Correct interpretation of output
- Ability to decide not to use / disregard / override AI output
- Ability to intervene or interrupt (stop button)
GE_IMPLEMENTATION:
- All GE agent decisions can be reviewed, overridden, or halted by human (dirk-jan)
- Client products: build human-in-the-loop for consequential decisions
- Dashboard showing AI reasoning and confidence
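The human-in-the-loop requirement above can be sketched as a gate in front of every consequential AI output: a human override always wins, and low-confidence output is held for review rather than acted on. The data shape and threshold are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiDecision:
    output: str
    confidence: float

def apply_with_oversight(decision: AiDecision, human_override: Optional[str],
                         threshold: float = 0.8) -> str:
    """Art. 14 sketch: the human can always override or disregard the AI
    output, and low-confidence output is routed to review, not auto-applied.
    Shapes and threshold are illustrative."""
    if human_override is not None:
        return human_override
    if decision.confidence < threshold:
        return "HOLD_FOR_HUMAN_REVIEW"
    return decision.output
```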
ART_15 — ACCURACY_ROBUSTNESS_CYBERSECURITY¶
REQUIRES:
- Appropriate level of accuracy (declared in instructions for use)
- Resilience to errors, faults, inconsistencies
- Cybersecurity measures proportionate to risks
GE_IMPLEMENTATION:
- Anti-LLM pipeline ensures accuracy (anna→antje→devs→koen→marije→jasper→ashley)
- Adversarial testing (ashley) for robustness
- ISO 27001 controls for cybersecurity
ART_17 — QUALITY_MANAGEMENT_SYSTEM¶
REQUIRES: documented quality management system including:
- Strategy for regulatory compliance
- Examination, test, and validation procedures
- Data management procedures
- Risk management system
- Post-market monitoring system
- Incident reporting procedures
- Communication with authorities
GE_IMPLEMENTATION: ISMS serves as quality management system, extended with AI-specific procedures
ART_43 — CONFORMITY_ASSESSMENT¶
REQUIRES: before placing high-risk AI on market, perform conformity assessment
OPTIONS:
- Internal control (Annex VI) — for most Annex III systems
- Third-party assessment — for biometric identification (notified body)
GE_APPROACH: internal control (self-assessment) for Annex III category systems that are not biometric
ART_49 — EU_DATABASE_REGISTRATION¶
REQUIRES: register high-risk AI systems in EU public database BEFORE placing on market
DATABASE: set up and maintained by the European Commission (Art. 71)
CONTENT: system description, provider info, intended purpose, status
GE_IMPLEMENTATION: julian manages registration per high-risk client project
AI_LITERACY (Art. 4)¶
STATUS: IN FORCE since August 2, 2025
APPLIES_TO: ALL providers and deployers (regardless of risk level)
REQUIRES: ensure staff have sufficient AI literacy considering:
- Technical knowledge, experience, education, and training
- Context and persons/groups affected by AI systems
- Specific purpose of the AI system
GE_IMPLEMENTATION:
- Agent profiles ensure technical AI literacy (domain expertise)
- Constitution addresses responsible AI use
- Wiki brain provides continuous AI literacy updates
- Client-facing: aimee educates clients on AI capabilities and limitations during scoping
CHECK: AI literacy documented as part of training programme (A.6.3)
EVIDENCE: training records, agent profiles, client education materials
GPAI_MODEL_OBLIGATIONS (Chapter V)¶
WHAT_ARE_GPAI_MODELS¶
DEFINITION: AI models trained on large amounts of data, displaying significant generality and capable of competently performing a wide range of distinct tasks
EXAMPLES: Claude (Anthropic), GPT (OpenAI), Gemini (Google)
PROVIDER_OBLIGATIONS (Anthropic, OpenAI, Google)¶
THEIR_OBLIGATIONS (not GE's):
- Technical documentation per Annex XI
- Training data summaries (sufficiently detailed)
- Comply with EU copyright law
- Make information available to downstream providers
SYSTEMIC_RISK_MODELS (cumulative training compute > 10^25 FLOPs, or designated by the Commission):
- Model evaluation
- Adversarial testing
- Serious incident tracking and reporting
- Adequate cybersecurity
GE_AS_DOWNSTREAM_PROVIDER¶
IF GE integrates GPAI model into own product THEN GE is downstream provider
OBLIGATIONS:
- Use GPAI model in accordance with provider's terms
- Ensure transparency obligations met (Art. 50)
- Maintain documentation on GPAI integration
- Assess risks introduced by own product on top of GPAI
RULE: GE does NOT inherit GPAI model provider obligations (those stay with Anthropic/OpenAI/Google)
RULE: GE IS responsible for how it uses the GPAI model in its products
CODE_OF_PRACTICE¶
STATUS: AI Office developing GPAI Code of Practice
ANTHROPIC: signed Code of Practice (July 2025)
OPENAI: signatory status — check
GOOGLE: signatory status — check
GE_ACTION: monitor provider Code of Practice compliance, factor into supplier assessment
AI-GENERATED_CONTENT_DISCLOSURE¶
GE'S_OWN_OUTPUT¶
GE agents produce code, documentation, designs, and other artifacts.
RULE: all GE-produced artifacts are AI-generated
DISCLOSURE:
- Git commits: author = agent name (AI attribution)
- Documentation: clearly labeled as produced by GE AI agents
- Client deliverables: include AI-generated disclosure in delivery notes
- Marketing: disclose GE's AI-driven nature
CLIENT_NOTIFICATION¶
RULE: clients MUST be informed that their product is built by AI agents
WHERE_DISCLOSED:
- Contract / Statement of Work: "development performed by GE multi-agent AI system"
- Project kickoff: aimee explains GE's AI-driven process
- Ongoing: dima is transparently AI (Art. 50(1))
RULE: this is NOT optional under the AI Act — it is a legal transparency obligation
END_USER_DISCLOSURE¶
IF client product uses AI features (chatbot, recommendation, automation):
- End users must be informed of AI involvement
- High-risk: detailed information about logic and consequences
- Limited-risk: simple AI interaction disclosure
PENALTIES (Art. 99)¶
| Violation | Maximum fine (higher of the two) |
|---|---|
| Prohibited AI practices (Art. 5) | EUR 35M or 7% global turnover |
| High-risk non-compliance | EUR 15M or 3% global turnover |
| False information to authorities | EUR 7.5M or 1% global turnover |
| GPAI model non-compliance | EUR 15M or 3% global turnover |
SME_CAP: for SMEs and startups, each fine is capped at the LOWER of the fixed amount and the turnover percentage (Art. 99(6))
GE_STATUS: SME — lower thresholds apply but still substantial
COMPLIANCE_ROADMAP_FOR_GE¶
ALREADY_DONE (should be)¶
- AI literacy programme (Art. 4) — since Aug 2025
- Prohibited practices screening in project scoping
- AI interaction transparency for dima (Art. 50(1))
- GPAI provider vetting (Anthropic, OpenAI, Google)
BY_AUG_2026¶
- High-risk classification process integrated into scoping
- Provider obligations ready for high-risk client projects
- Conformity assessment process documented
- EU database registration process ready
- Technical documentation templates (Annex IV) ready
- Human oversight mechanisms standardized
- AI-specific risk management integrated into ISMS
ONGOING¶
- Monitor AI Office guidance and codes of practice
- Update risk classification as new case law / guidance emerges
- Monitor provider (Anthropic, OpenAI, Google) compliance
- Review each client project for AI Act classification
- Maintain AI literacy records
INTERACTION_WITH_OTHER_FRAMEWORKS¶
AI_ACT_AND_GDPR:
- AI Act Art. 10(5): special category data processing for bias detection allowed if GDPR safeguards met
- DPIA (Art. 35 GDPR) triggered by AI-driven profiling or automated decisions
- Art. 22 GDPR (automated decisions) overlaps with AI Act human oversight (Art. 14)
RULE: comply with BOTH — GDPR is not replaced by AI Act
AI_ACT_AND_ISO_27001:
- AI Act Art. 15 (cybersecurity) → satisfied by ISO 27001 Annex A controls
- AI Act Art. 17 (QMS) → satisfied by ISO 27001 management system (with AI extensions)
- AI Act Art. 12 (record-keeping) → satisfied by A.8.15 (logging)
AI_ACT_AND_NIS2:
- High-risk AI in critical infrastructure → both AI Act and NIS2 apply
- NIS2 supply chain obligations may extend to AI providers
SEE_ALSO: domains/eu-regulation/index.md, gdpr-implementation.md, iso27001-overview.md, thought-leaders.md