Unlocking the future of AI in financial services: Insights from industry leaders
Artificial intelligence is moving decisively from experimentation into practical application across financial services. Conversations at a recent NextWave forum, held under the Chatham House Rule, brought together senior leaders to share what is actually happening inside institutions: what is working, what is not, and where the real constraints remain.
The discussion reflected an industry in transition. Appetite for AI is high, but execution is uneven. Many organisations are progressing, yet typically through isolated initiatives rather than scaled transformation.
From early experimentation to structured adoption
Over the past two to three years, the focus has shifted materially. Early efforts centred on generative AI pilots, chatbots, and coding assistants. More recently, attention has moved towards agentic AI and workflow integration.
This shift has introduced a different set of priorities. Institutions are increasingly focused on:
- Governance, risk, and control frameworks
- Data foundations and modern infrastructure
- End-to-end process redesign rather than task-level automation
AI risk and ethics, once secondary considerations, now sit at the centre of most programmes. Regulatory expectations continue to intensify, particularly under frameworks such as the EU AI Act, where explainability, traceability, and auditability are becoming mandatory rather than optional.
At the same time, interest in agentic workflows is growing. These systems, which coordinate multi-step processes across functions, are beginning to move from concept into early deployment, albeit within tightly controlled environments.
A gap between ambition and production reality
A consistent theme across the discussion was the gap between experimentation and production.
Many institutions are running large volumes of proofs of concept, but only a small proportion progress into live environments. In some cases, hundreds of pilots exist alongside only a handful of scaled deployments.
This reflects a broader pattern in adoption. Early AI use cases tend to focus on high-friction manual tasks where efficiency gains are easy to demonstrate. These include:
- Customer onboarding and KYC processes
- Elements of compliance and reporting
- Customer support through conversational interfaces
While these deliver value, they are often confined to specific functions. The shift towards integrated, cross-enterprise workflows remains significantly more complex.
A recurring observation was that the most successful programmes are those that design for AI from the outset, rather than attempting to retrofit AI into legacy processes.
Legacy systems and the weight of technical debt
Legacy infrastructure continues to constrain progress.
Many institutions still operate with deeply embedded systems and extensive end-user computing environments. In one example discussed, tens of thousands of spreadsheets remain embedded in core operational processes. This creates both technical and operational barriers to scaling AI.
The challenge is not purely technological. It is also organisational. Long-standing manual processes are often tied to individual accountability and institutional knowledge, making them difficult to replace.
As a result, modernisation efforts around data platforms, workflow automation, and system consolidation are increasingly recognised as prerequisites for meaningful AI adoption.
Data access and the “data-to-value” gap
Access to data remains one of the most immediate constraints on AI delivery.
Business users frequently rely on central teams to provision data, creating bottlenecks that slow innovation. Data is often distributed across multiple platforms, with access controlled through ticket-based governance processes. This limits the ability to experiment and iterate.
This has created what many now describe as a “data-to-value” gap: the distance between having data available and being able to use it effectively in decision-making or automation.
Approaches that introduce a governed business logic layer are gaining traction. These allow data to be accessed, transformed, and contextualised before being exposed to AI models, balancing usability with control in regulated environments.
This is a core design principle behind platforms such as Alteryx, which are increasingly being used to bridge fragmented data estates and enforce governance before AI consumption.
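The governed-layer pattern described above can be sketched in code. The example below is a minimal, hypothetical illustration (the entitlement table, dataset, and field names are invented for this sketch, not taken from any real platform): data is only released to downstream AI consumers after an entitlement check, PII masking, and an audit log entry.

```python
import hashlib

# Hypothetical entitlements and dataset; all names are illustrative.
ENTITLEMENTS = {"claims_analyst": {"claims"}}
DATASETS = {
    "claims": [{"customer": "Jane Doe", "amount": 1200.0}],
}
PII_FIELDS = {"customer"}
ACCESS_LOG = []

def governed_read(role: str, dataset: str) -> list[dict]:
    """Expose data to downstream AI only after an entitlement check,
    PII masking, and audit logging."""
    if dataset not in ENTITLEMENTS.get(role, set()):
        raise PermissionError(f"{role} may not read {dataset}")
    # Record every access for later review.
    ACCESS_LOG.append({"role": role, "dataset": dataset})
    masked = []
    for row in DATASETS[dataset]:
        # Replace PII values with a short, stable hash before exposure.
        masked.append({
            k: hashlib.sha256(str(v).encode()).hexdigest()[:8]
               if k in PII_FIELDS else v
            for k, v in row.items()
        })
    return masked
```

The point of the sketch is the ordering: governance runs before the model ever sees the data, rather than being bolted on afterwards.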
Agentic AI: Opportunity with constraints
Agentic AI was one of the most discussed topics at the forum.
There is clear interest in systems capable of coordinating multi-step workflows such as classification, data extraction, rule application, and response generation. However, most organisations remain cautious about full autonomy.
A practical example discussed in the session was insurance claims automation, specifically First Notification of Loss (FNOL) processing. In this model:
- Incoming claims emails are automatically categorised
- Relevant data is extracted from unstructured text
- Policy validation checks are applied
- Deterministic rules such as fraud indicators are evaluated
- Responses are generated, with escalation where required
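The five steps above can be sketched as a single orchestration function. This is a hedged illustration only: the keyword classifier and field parser stand in for the AI components, and the policy register, field format, and fraud threshold are invented for the example, not drawn from any real FNOL system.

```python
from dataclasses import dataclass

# Hypothetical policy register; a real deployment would query a policy system.
POLICIES = {"P-1001": {"active": True, "limit": 5000.0}}

@dataclass
class Claim:
    policy_id: str
    amount: float
    description: str

def classify_email(body: str) -> str:
    """Step 1: categorise the email (an AI classifier in production;
    a trivial keyword rule here)."""
    return "fnol" if "accident" in body.lower() else "other"

def extract_claim(body: str) -> Claim:
    """Step 2: extract structured fields from unstructured text
    (AI extraction in production; naive 'key=value' parsing here)."""
    fields = dict(tok.split("=") for tok in body.split() if "=" in tok)
    return Claim(fields["policy"], float(fields["amount"]), body)

def validate_policy(claim: Claim) -> bool:
    """Step 3: deterministic policy validation."""
    policy = POLICIES.get(claim.policy_id)
    return bool(policy and policy["active"] and claim.amount <= policy["limit"])

def fraud_flags(claim: Claim) -> list[str]:
    """Step 4: deterministic fraud indicators (illustrative threshold)."""
    return ["high_amount"] if claim.amount > 4000 else []

def process_fnol(body: str) -> dict:
    """Orchestrate the hybrid workflow, escalating where required."""
    if classify_email(body) != "fnol":
        return {"route": "other_queue"}
    claim = extract_claim(body)
    if not validate_policy(claim):
        return {"route": "reject", "reason": "policy_invalid"}
    flags = fraud_flags(claim)
    if flags:
        # Escalate to a human handler rather than auto-responding.
        return {"route": "escalate", "flags": flags}
    # Step 5: response generation (generative AI in production).
    return {"route": "auto_respond", "policy": claim.policy_id}
```

Note where the boundary sits: steps 3 and 4 are plain deterministic code, which keeps the decisioning explainable, while the AI stand-ins are confined to classification, extraction, and response drafting.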
This approach combines deterministic logic with generative AI. The distinction is important. Many participants emphasised that not every step benefits from AI, particularly where cost efficiency, explainability, or regulatory control are critical.
It is in this context that solutions such as the FNOL accelerator developed by NextWave in conjunction with Alteryx are gaining traction. The model is deliberately hybrid: deterministic workflows handle structured decisioning, while AI is used for classification, extraction, and communication. The result is a controlled, auditable system that can be deployed in regulated environments without sacrificing transparency.
There was also caution around fully autonomous agent systems. In less structured environments, agents can drift from intended workflows or generate inconsistent outputs. As a result, most institutions are focusing on:
- Clearly defined process boundaries
- Embedded controls and auditability
- Human oversight at key decision points
Governance, explainability, and control
Governance remains a defining requirement for enterprise AI.
Institutions need to understand not only what AI systems produce, but how those outputs are generated. This includes:
- Full traceability of prompts and responses
- Data lineage across systems and transformations
- Clear audit trails for regulatory review
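The three requirements above can be combined in a thin wrapper around any model call. The sketch below is an assumed pattern, not a reference implementation: the `model_id` and `source_data_ids` fields are invented names for the lineage metadata a real programme would define.

```python
import hashlib
import time

AUDIT_TRAIL = []

def audited_call(model_fn, prompt: str, *, model_id: str,
                 source_data_ids: list[str]) -> str:
    """Wrap a model call so every prompt/response pair is recorded,
    with lineage metadata, for later regulatory review."""
    response = model_fn(prompt)
    AUDIT_TRAIL.append({
        "timestamp": time.time(),
        "model_id": model_id,
        "source_data_ids": source_data_ids,  # data lineage
        # Hash gives a tamper-evident fingerprint; full text kept for audit.
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "response": response,
    })
    return response
```

Because the wrapper is indifferent to which model it calls, the same audit discipline can be applied uniformly, whether the model sits behind an API or inside an agentic workflow.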
This becomes particularly important when AI is embedded into legacy environments such as spreadsheets or unstructured workflows, where transparency is limited.
A broader point raised during the discussion was that AI should be evaluated against current operational reality rather than theoretical perfection. Many existing manual processes already carry inherent risk and inconsistency. The opportunity is to reduce, not introduce, variability.
Skills, roles, and organisational change
The impact on roles is already becoming visible.
Demand is increasing for professionals who can oversee AI systems, interpret outputs, and manage hybrid human-AI workflows. This represents a shift away from purely technical delivery towards governance, orchestration, and accountability.
However, AI literacy remains uneven across organisations. In some cases, engineering teams are rapidly adopting tools, while other business functions remain early in their understanding of capability and constraint.
Closing this gap will require structured education, clearer operating models, and a stronger link between AI tools and day-to-day business processes.
Cost, value, and expectations
Cost pressure is shaping adoption strategies.
Many organisations are now explicitly linking AI investment to measurable outcomes, whether through productivity gains, cost reduction, or improved customer service. At the same time, scrutiny is increasing around the cost of model usage and infrastructure.
This is reinforcing a more selective approach to AI deployment, with greater emphasis on applying AI where it adds clear value, rather than across every step of a process.
Moving forward: A pragmatic path to scale
The discussion pointed towards a clear set of practical priorities:
- Focus on well-defined, high-value use cases
- Invest in data foundations and governed access models
- Embed governance and auditability from the outset
- Combine deterministic workflows with AI where appropriate
- Build organisational capability alongside technical deployment
Progress will not be uniform. Institutions will move at different speeds depending on legacy complexity, regulatory exposure, and internal capability.
What is clear is that AI is no longer confined to experimentation. The focus has shifted towards building controlled, scalable systems that can operate within the realities of financial services.
The emergence of structured accelerators such as the FNOL solution, delivered through platforms like Alteryx and solution frameworks from NextWave, is an early indication of what practical, governed AI looks like in production.
May 1, 2026