Answer 8 questions across three dimensions to assess your organisation’s AI readiness and receive a tailored action plan. Benchmarked against insurance organisations across DACH (Germany, Austria, Switzerland).
Assessment Questions
Data Infrastructure
Q1. How complete and accessible is your claims data for ML modelling?
1 — Claims data spread across multiple systems — no unified view
2 — Partial claims data accessible with significant manual effort
3 — Centralised claims data with documented quality for 2+ years
4 — Clean, governed claims data with full lineage across 3+ years
Q2. What is your IFRS 17 data platform maturity?
1 — IFRS 17 data migration not started or very early stage
2 — IFRS 17 migration underway but incomplete
3 — IFRS 17 data platform in production for most lines of business
4 — IFRS 17 fully compliant with automated quality monitoring and audit trail
Governance
Q3. How does your organisation govern AI models in regulated processes?
1 — No formal AI model governance
2 — Some documentation but no systematic governance
3 — Model inventory with risk classification and named owners
4 — Comprehensive AI governance aligned to BaFin, FINMA, and EU AI Act
Q4. How is AI explainability handled for underwriting and claims decisions?
1 — No explainability consideration — black box models
2 — Some explainability for internal review but not customer-facing
3 — SHAP explanations available for all decision models
4 — Customer-facing explanation templates approved by legal; regulatory documentation ready
Capability
Q5. What AI/ML models are currently in production?
1 — None — all decisions are rule-based or human
2 — 1–2 vendor-supplied scoring tools
3 — 3–5 internally validated ML models in fraud or underwriting
4 — 6+ models with automated monitoring and BaFin and FINMA documentation
Q6. How is your AI engineering team structured?
1 — No dedicated ML or data engineering capacity
2 — Data analysts only — no ML engineers
3 — Data scientists and ML engineers; no dedicated platform team
4 — Full AI/data team with actuarial, ML, and data engineering integration
Q7. How advanced is your fraud detection capability?
1 — Rule-based only — threshold and velocity checks
2 — Basic ML scoring on some claim types
3 — ML fraud detection with 80%+ recall on priority lines of business
4 — Real-time ensemble fraud scoring with network analysis and 88%+ recall
Q8. How does your organisation measure claims and underwriting AI ROI?
1 — No formal measurement framework
2 — Cost-per-claim tracking only
3 — Loss ratio impact and processing efficiency measured quarterly
4 — Full AI ROI framework covering loss ratio, efficiency, fraud reduction, and customer NPS
Maturity Levels
Foundational (score 8–14)
Your organisation has critical gaps in data infrastructure, governance, or AI capability that will block production AI deployment. Focus first on data quality, governance framework, and regulatory alignment before committing to AI model development.
Recommended Next Steps
- Conduct a structured AI readiness assessment with a specialist to identify and prioritise critical gaps.
- Appoint a named AI governance owner (AI Model Risk Officer or equivalent) and create an initial model inventory.
- Engage an external AI/data partner to accelerate foundation work — do not wait for internal capacity to develop before starting.
Developing (score 15–21)
You have started the AI readiness journey but have significant gaps in at least one critical dimension. Targeted investment in your weakest area will unlock your first production AI model within 6–9 months.
Recommended Next Steps
- Prioritise closing the largest single gap — data quality or governance — rather than addressing all gaps simultaneously.
- Launch a first AI pilot in your strongest data domain with production-grade MLOps infrastructure from the start.
- Develop an 18-month AI roadmap with regulatory checkpoints and board-visible milestone metrics.
Advancing (score 22–27)
Your organisation has solid AI foundations and at least one model in production. The priority is scaling governance, expanding the model portfolio, and building the platform capacity for 6–12 production models.
Recommended Next Steps
- Establish an AI/Data Centre of Excellence with documented model lifecycle procedures and RACI across functions.
- Conduct EU AI Act gap analysis for all high-risk models — compliance obligations start August 2026.
- Evaluate nearshore delivery partners to accelerate data platform build while internal teams focus on governance and business integration.
Leading (score 28–32)
Your organisation has strong AI maturity across all dimensions. Focus on competitive differentiation — expanding AI into new domains and building the institutional knowledge to maintain leadership as regulatory requirements evolve.
Recommended Next Steps
- Expand AI use cases into revenue-generating domains that complement your existing operational AI portfolio.
- Develop an internal AI talent programme to reduce external dependency for model development.
- Publish an AI governance transparency report to build trust with regulators, customers, and investors.
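The scoring described above can be sketched in a few lines: each of the 8 answers contributes its level (1–4) to a total between 8 and 32, which falls into exactly one of the four maturity bands. This is a minimal illustration of that mapping, not part of the official assessment tool; the function and variable names are illustrative.

```python
# Minimal scoring sketch: each of the 8 answers contributes its level
# (1-4) to the total, giving a score of 8-32 as described above.
# Band boundaries match the maturity levels in this checklist.
BANDS = [
    (8, 14, "Foundational"),
    (15, 21, "Developing"),
    (22, 27, "Advancing"),
    (28, 32, "Leading"),
]

def maturity_level(answers):
    """Map a list of 8 answers (each 1-4) to (total score, band name)."""
    if len(answers) != 8 or any(a not in (1, 2, 3, 4) for a in answers):
        raise ValueError("Expected 8 answers, each between 1 and 4")
    total = sum(answers)
    for low, high, name in BANDS:
        if low <= total <= high:
            return total, name

print(maturity_level([2, 3, 1, 2, 2, 3, 2, 2]))  # total 17 -> "Developing"
```

Because the bands are contiguous and cover 8–32 exactly, every valid answer set maps to one band.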
Ready to Start Your AI & Data Transformation?
mindit.io works with banking, retail, and insurance organisations across DACH, UK, and BENELUX. Talk to our team about your programme. Contact mindit.io →
Related Resources from mindit.io
CHECKLIST — AI Readiness Checklist for Retail Banking — DACH 2026
GUIDE — AI Readiness for Banks: CDO Guide for DACH
COMPARISON — mindit.io vs Endava vs Nagarro: AI Readiness Banking DACH