AI Act · December 20, 2025 · 3 min read

EU AI Act: Implications for Financial Services

The EU AI Act introduces new requirements for AI systems. Learn how financial institutions should prepare for AI governance obligations.

Omnitrex Team

The EU AI Act represents the world's first comprehensive legal framework for artificial intelligence. For financial services organizations, which increasingly rely on AI for everything from credit scoring to fraud detection, understanding and preparing for these requirements is crucial.

AI Act Timeline

The AI Act entered into force on August 1, 2024, with a phased implementation:

  • February 2, 2025: Prohibitions on unacceptable-risk AI take effect
  • August 2, 2025: Obligations for general-purpose AI models apply
  • August 2, 2026: Full application for high-risk AI systems

Risk Classification System

The AI Act categorizes AI systems into four risk levels:

Unacceptable Risk (Prohibited)

These AI applications are banned outright:

  • Social scoring (by public or private actors)
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
  • Subliminal or manipulative techniques that cause significant harm
  • Exploitation of vulnerabilities related to age, disability, or social or economic situation

High Risk

Most relevant for financial services, high-risk AI includes:

  • Credit scoring and creditworthiness assessment of natural persons (the Act explicitly excepts AI used to detect financial fraud)
  • Risk assessment and pricing in life and health insurance
  • Employee recruitment, promotion, and performance evaluation

Limited Risk

AI systems with specific transparency obligations:

  • Chatbots (must disclose that users are interacting with AI)
  • Emotion recognition systems (affected people must be informed)
  • Biometric categorization systems
  • Deepfake generators (synthetic content must be labeled)

Minimal Risk

Most AI systems fall here with no specific requirements beyond existing law.
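
To make the tiers concrete in tooling, it helps to encode them once and reuse them across inventory and triage scripts. A minimal Python sketch (the RiskLevel name and comments are our illustration, not terminology from the Act itself):

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # Annex III use cases, e.g. credit scoring
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no AI Act-specific requirements
```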

Requirements for High-Risk AI in Finance

Financial institutions using high-risk AI must implement:

1. Risk Management System

  • Identify and analyze known and reasonably foreseeable risks
  • Estimate and evaluate risks arising from intended use and reasonably foreseeable misuse
  • Adopt appropriate risk mitigation measures
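
A risk register is one natural way to operationalize these points. A minimal sketch (the fields and the severity-times-likelihood scoring are illustrative assumptions, not prescribed by the Act):

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One entry in a high-risk AI system's risk register."""
    description: str  # the known or foreseeable risk
    source: str       # "intended use" or "reasonably foreseeable misuse"
    severity: int     # 1 (negligible) .. 5 (severe)
    likelihood: int   # 1 (rare) .. 5 (frequent)
    mitigation: str   # the adopted mitigation measure

register = [
    RiskEntry("Thin-file applicants scored unfairly", "intended use", 4, 3,
              "Manual review path for applicants with limited credit history"),
    RiskEntry("Scoring model reused for unvetted marketing decisions",
              "reasonably foreseeable misuse", 3, 2,
              "Access controls limiting the model to lending workflows"),
]

# Prioritize mitigation work by a simple severity x likelihood score
for entry in sorted(register, key=lambda r: r.severity * r.likelihood, reverse=True):
    print(entry.severity * entry.likelihood, entry.description)
```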

2. Data Governance

  • Training, validation, and testing data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete
  • Examination of possible biases
  • Defined data quality criteria
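
One concrete way to start examining possible biases is to compare outcome rates across groups in historical data. A minimal sketch using pandas (the column names and single-metric check are illustrative; a real bias audit needs multiple fairness metrics and legal input):

```python
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per group; large gaps warrant investigation."""
    return df.groupby(group_col)[outcome_col].mean()

# Hypothetical historical credit decisions
decisions = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [0, 1, 1, 1, 1, 1],
})
print(outcome_rates_by_group(decisions, "age_band", "approved"))
# 18-30 -> 0.5 vs. 1.0 elsewhere: a gap worth examining
```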

3. Technical Documentation

  • Detailed description of the AI system
  • Design specifications
  • Monitoring and logging capabilities

4. Record-Keeping

  • Automatic logging of events
  • Traceability of decisions
  • Retention for appropriate periods
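
A minimal sketch of automatic event logging, assuming a JSON-lines audit file (the field names are illustrative; production systems would add tamper-evidence and enforce retention policies):

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("ai_audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("credit_model_audit.jsonl"))

def log_decision(model_id: str, version: str, inputs: dict, output: str) -> None:
    """Append one traceable, timestamped record per model invocation."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": version,  # ties each decision to an exact model build
        "inputs": inputs,
        "output": output,
    }))

log_decision("credit-scorer", "2.3.1", {"income": 52000, "term_months": 36}, "approved")
```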

5. Transparency

  • Clear instructions for users
  • Information on capabilities and limitations
  • Contact information for queries

6. Human Oversight

  • Ability for human intervention
  • Override capabilities
  • Understanding of system limitations
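
A common pattern for enabling intervention is a confidence gate in front of the model: low-confidence outputs are escalated to a human reviewer instead of being finalized automatically. A sketch, with the threshold and the Decision type as illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str
    confidence: float
    needs_human_review: bool

def decide_with_oversight(predict: Callable[[dict], tuple[str, float]],
                          case: dict, review_threshold: float = 0.8) -> Decision:
    """Never finalize a low-confidence model output without a person in the loop."""
    outcome, confidence = predict(case)
    return Decision(outcome, confidence, needs_human_review=confidence < review_threshold)

# Example with a stub model that returns ("approved", 0.62)
decision = decide_with_oversight(lambda c: ("approved", 0.62), {"income": 52000})
print(decision.needs_human_review)  # True: routed to a reviewer
```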

Preparing Your Organization

Step 1: AI Inventory

Create a comprehensive inventory of all AI systems in use:

  • What AI systems do you deploy?
  • What decisions do they support?
  • What data do they process?
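
A minimal sketch of what one inventory record might capture (the fields are illustrative starting points, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the organization-wide AI inventory."""
    name: str
    owner: str                     # accountable business unit
    purpose: str                   # what decisions the system supports
    processes_personal_data: bool  # flags the GDPR overlap early
    data_sources: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("credit-scorer", "Retail Lending",
                   "creditworthiness assessment",
                   processes_personal_data=True,
                   data_sources=["core banking", "credit bureau"]),
]
```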

Step 2: Risk Classification

Assess each AI system against the risk categories:

  • Which systems are high-risk?
  • Do any fall into prohibited categories?
  • What transparency obligations apply?
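
Final classification needs legal review, but a first-pass triage over the inventory can flag the obvious cases. An illustrative sketch (the use-case strings are simplified paraphrases of Annex III, not its legal text):

```python
PROHIBITED_USES = {"social scoring"}
HIGH_RISK_USES = {
    "creditworthiness assessment",
    "life and health insurance pricing",
    "recruitment",
    "employee evaluation",
}

def triage_risk(purpose: str) -> str:
    """First-pass tiering of a system by its declared purpose."""
    if purpose in PROHIBITED_USES:
        return "unacceptable"
    if purpose in HIGH_RISK_USES:
        return "high"
    return "needs review"  # escalate ambiguous cases to compliance

print(triage_risk("creditworthiness assessment"))  # -> high
```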

Step 3: Gap Analysis

For high-risk systems, evaluate compliance gaps:

  • Documentation completeness
  • Data governance practices
  • Human oversight mechanisms
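
A gap analysis can start as simply as a per-system checklist against the six requirement areas above. A sketch (the keys paraphrase the requirements discussed earlier in this post):

```python
# Illustrative compliance checklist for one high-risk system
requirements_met = {
    "risk management system": True,
    "data governance": True,
    "technical documentation": False,
    "record-keeping": True,
    "transparency": False,
    "human oversight": True,
}

gaps = [area for area, met in requirements_met.items() if not met]
print(f"{len(gaps)} gap(s) to remediate: {gaps}")
```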

Step 4: Governance Framework

Establish AI governance structures:

  • Roles and responsibilities
  • Review and approval processes
  • Monitoring and audit procedures

Integration with Existing GRC

The AI Act doesn't exist in isolation. Smart organizations will integrate AI governance with:

  • GDPR: Many AI systems process personal data
  • DORA: AI systems often qualify as critical ICT assets, bringing operational resilience obligations
  • Existing risk frameworks: AI risks should feed into enterprise risk management

Need help navigating AI Act compliance? Omnitrex provides integrated AI governance capabilities. Reach out at info@omnitrex.eu to discuss your needs.

AI Act · Artificial Intelligence · Financial Services · Governance
