
Artificial intelligence in financial services
AI is reshaping financial services by enhancing efficiency, reducing costs and unlocking new revenue opportunities. With $97 billion in projected investment by 2027, firms must address risks like misinformation and data bias while prioritising governance, regulation and workforce reskilling to ensure responsible, secure and effective AI adoption.

OVERVIEW
Introduction
AI is set to reshape financial services just as digitalisation did, with rapid advancements creating pressure for business leaders to respond. Generative AI (genAI) is viewed as one of the most transformative technologies, impacting banking, insurance, capital markets and payments. While executives must develop strategic AI responses, they are simultaneously required to manage increasing risks and compliance obligations. This paper outlines AI’s expected areas of impact, steps needed for adoption, and risks to mitigate.
AI landscape in financial services
Financial services are leading AI adoption due to the industry’s language- and data-intensive nature. Nearly half of employee work time in the sector is suited to automation or augmentation. In 2023, financial firms invested USD 35 billion in AI, with projections of USD 97 billion by 2027.
AI is used across functions from the back office to the front office—improving operations, customer service, fraud detection, underwriting, and software development. Banking-specific use cases include automated underwriting, personalised product offers, and intelligent risk scoring. GenAI has broad applicability across roles, with sector-specific tools being developed by both incumbents and large tech providers.
Seeing early value from AI implementation
Current adoption primarily targets efficiency gains and cost reductions. However, 70% of executives expect AI to directly contribute to revenue growth by enabling product personalisation, new offerings, and improved customer experience.
Common use cases include:
- AI-powered assistants offering 24/7 support or augmenting human agents.
- More targeted advisory and product recommendations.
- Enhanced risk and fraud detection through real-time monitoring.
- Automated KYC and compliance processes.
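The real-time monitoring use case above can be sketched in a few lines. The following is a minimal illustration, not a production method: a hypothetical `AmountMonitor` class flags transactions that fall far outside an account's recent spending pattern using a rolling z-score, with illustrative window and threshold values.

```python
# Minimal real-time fraud-flagging sketch (illustrative only).
# Assumptions: transactions are reduced to their amounts, and the
# window size and z-score threshold are placeholder values.
from collections import deque
from statistics import mean, stdev

class AmountMonitor:
    """Flag transactions far outside an account's recent spending pattern."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling window of recent amounts
        self.threshold = threshold           # z-score cutoff for flagging

    def check(self, amount: float) -> bool:
        """Return True if the amount is anomalous relative to recent history."""
        flagged = False
        if len(self.history) >= 10:  # require a minimum baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(amount - mu) / sigma > self.threshold:
                flagged = True
        self.history.append(amount)
        return flagged
```

Real systems combine many such signals (merchant, geography, device) in learned models, but the pattern is the same: score each event against a behavioural baseline as it arrives.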
Moving towards an AI-powered future
Over the next decade, customer experiences and service delivery will be transformed through advanced data synthesis and automation. Key enabling technologies include:
- Small language models (SLMs): task-specific models offering efficiency and speed.
- Retrieval-augmented generation (RAG): improves response accuracy by referencing internal knowledge bases.
- AI agents: able to interpret and act autonomously on customer requests.
- Quantum computing: allows faster data processing for complex tasks like fraud detection.
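Of the technologies listed, RAG is the most concrete to sketch. The toy example below shows the core idea under loud assumptions: a tiny in-memory knowledge base and keyword-overlap scoring stand in for a real vector store and embedding model, and `build_prompt` is a hypothetical helper rather than any specific product's API.

```python
# Minimal retrieval-augmented generation (RAG) sketch (illustrative only).
# Assumptions: a toy in-memory knowledge base; keyword overlap replaces
# vector-embedding similarity; the prompt would be sent to an LLM.

KNOWLEDGE_BASE = [
    "Wire transfers above the daily limit require two-factor approval.",
    "Mortgage applications are reviewed within five business days.",
    "Card disputes must be filed within 60 days of the statement date.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model's answer in the retrieved internal documents."""
    context = "\n".join(f"- {d}" for d in docs)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\nQuestion: {query}")

query = "How long does a mortgage review take?"
print(build_prompt(query, retrieve(query, KNOWLEDGE_BASE)))
```

The value for financial firms is that answers are grounded in the institution's own, current knowledge base rather than whatever the model memorised during training.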
Leaders are encouraged to maintain flexible AI strategies, monitor emerging technologies, and selectively partner with fintechs to manage investment risk while accelerating innovation.
AI in the workforce
Success with AI depends on workforce readiness. Firms must reskill talent, address concerns over job displacement, and foster a culture of human-machine collaboration. Ninety per cent of leaders believe significant workforce transformation is needed.
There is growing demand for AI-literate roles such as prompt engineers, as well as upskilling across all levels. Executives are also expected to gain deeper understanding of AI’s business value.
AI risks and challenges
AI introduces risks including data bias, cybersecurity threats, and misinformation. Deepfakes, in particular, pose serious threats to trust and security. For example, synthetic video was used in a USD 25 million fraud involving a fabricated executive.
Despite these risks, AI is also deployed for threat detection using watermarks, metadata, and autonomous identification of malicious content.
Prioritising responsible AI
Responsible AI practices are essential and include governance, ethical standards, and explainability. Eighty-four per cent of financial firms have or plan to implement frameworks to audit and govern AI systems. These frameworks focus on operational, technical, reputational, and organisational safeguards. Cross-sector coalitions such as FINOS also support policy development.
AI regulation challenges
Key regulatory issues remain unresolved, including the pace of policy development, what aspects of AI to regulate, and who should regulate. Differences between regional approaches (e.g. EU vs US) create uncertainty. While existing financial regulations cover many AI use cases, there is a need for clarity to support responsible innovation without stifling investment.
Conclusion
AI offers substantial opportunities for innovation and customer value but also raises complex challenges. Success depends on flexible strategies, strong governance, cross-functional integration, and measurable outcomes. Leaders must ensure responsible adoption while maintaining trust and promoting broader economic benefits.