
Regulating AI in the financial sector: Recent developments and main challenges
The report outlines AI’s growing use in finance—especially in underwriting, fraud detection, and customer support—highlighting regulatory challenges around explainability, governance, and data security. It discusses evolving global guidance and the need for risk-based, proportionate oversight, particularly as generative AI gains traction in high-impact applications.

OVERVIEW
Introduction
The expansion of generative AI (gen AI) since 2022 has renewed global focus on artificial intelligence (AI), with financial institutions increasing adoption across business lines. However, there is no globally accepted definition of AI for regulatory purposes. While the OECD definition is widely referenced, national interpretations vary, creating regulatory inconsistencies.
Financial authorities have generally adopted a technology-neutral stance and issued limited AI-specific regulations. Existing regulatory frameworks often cover AI-related risks, but implementation remains complex due to AI’s evolving nature.
Overview of AI use cases in the financial sector
AI is used across productivity, compliance, and customer service functions. Examples include chatbots (e.g., Bank of America’s Erica), fraud detection (e.g., Société Générale’s MOSAIC), and underwriting (e.g., MUFG, ICICI Prudential).
Spending on AI in finance is projected to rise from USD 35 billion in 2023 to USD 97 billion by 2027, with gen AI alone expected to grow from USD 3.86 billion in 2023 to USD 85 billion by 2030. Despite these projections, institutions deploy gen AI cautiously in high-risk and customer-facing applications, owing to regulatory uncertainty and operational risks.
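For scale, those projections imply very steep annual growth. A minimal back-of-envelope sketch of the implied compound annual growth rates, using only the figures cited above:

```python
# Back-of-envelope check of the compound annual growth rates (CAGR)
# implied by the projections above. Figures are in USD billions.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

print(f"AI spend in finance, 2023-2027:     {cagr(35, 97, 4):.1%}")    # ~29.0% per year
print(f"Gen AI spend in finance, 2023-2030: {cagr(3.86, 85, 7):.1%}")  # ~55.5% per year
```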
Customer support chatbots, fraud detection, and compliance-related tools dominate usage. Underwriting use cases, particularly in insurance, leverage AI for assessing complex risks using unstructured data. AI enhances efficiency and enables broader access to financial services, though deployment varies by institution.
Risks arising from banks’ and insurers’ AI use cases
AI introduces microprudential, conduct, and systemic risks. Model risk is heightened by lack of explainability, data quality issues, and overfitting. Cyber risks include data poisoning and model theft. Operational risk arises from increased reliance on third-party vendors and complex IT infrastructure.
Conduct risks include discriminatory decision-making, financial exclusion, and price collusion. Systemic risks stem from widespread model use, market concentration in AI service providers, and opaque model behaviour.
Regulators are increasingly concerned about AI’s dual role in cyber resilience—both strengthening and weakening it depending on use. Gen AI is expected to benefit attackers more in the near term.
Overview of cross-sectoral AI-specific guidance
AI policy guidance generally follows either a principles-based or rules-based approach. Jurisdictions such as the EU, Brazil, and China favour rules-based frameworks, while others (e.g., Singapore, UK) adopt non-binding principles.
Core policy themes include reliability, accountability, transparency, fairness, and data privacy. Recent guidance also incorporates safety, sustainability, intellectual property, and consumer redress. Authorities emphasise risk-based approaches allowing proportional application of rules.
Human oversight is critical across governance models, especially for high-impact decisions. AI systems must be explainable, auditable, and interpretable internally and externally. Disclosure obligations apply to decisions affecting customers.
Practical issues in implementing cross-sectoral AI guidance to the financial sector: The case of credit and insurance underwriting
Given the high-impact nature of underwriting decisions, AI use in this area may require regulatory clarification. Governance frameworks should delineate responsibilities among model developers, users, and owners. Boards and senior management must maintain oversight and ensure staff have sufficient AI-related skills.
Explainability is a key challenge, especially with gen AI, where output variability and model complexity make transparency difficult. Techniques such as Shapley values, surrogate models, and feature-importance measures are used, but their effectiveness varies; authorities could standardise explainability expectations.
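As a concrete illustration of the feature-importance techniques mentioned above, the sketch below trains a toy underwriting-style classifier on synthetic data and ranks its inputs by permutation importance, a model-agnostic measure of how much held-out performance degrades when a feature is shuffled. All feature names and data here are invented for the example; per-decision attributions such as Shapley values (e.g., via the shap package) would slot into a similar workflow.

```python
# A minimal, hypothetical sketch of a feature-importance check on a
# synthetic credit-underwriting model. Feature names and data are
# invented; a production model would use governed, documented inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
feature_names = ["credit_score", "debt_to_income", "years_employed", "noise"]
X = np.column_stack([
    rng.normal(650, 50, n),     # credit_score
    rng.normal(0.35, 0.10, n),  # debt_to_income
    rng.normal(5, 3, n),        # years_employed
    rng.normal(0, 1, n),        # pure noise: should rank last
])
# Default risk driven only by the first two features.
logits = -0.02 * (X[:, 0] - 650) + 6.0 * (X[:, 1] - 0.35)
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: the drop in held-out score when each feature
# is shuffled, breaking its relationship with the outcome.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean, result.importances_std),
                key=lambda t: -t[1])
for name, mean, std in ranked:
    print(f"{name:>15}: {mean:.4f} +/- {std:.4f}")
```

Note that a check like this yields only a global ranking of inputs; supervisors concerned with decisions affecting individual customers would likely also expect per-decision explanations, which is where Shapley-value methods are typically applied.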
Third-party AI services, which are widely used, raise data-security and operational-resilience concerns. Financial institutions remain responsible for outsourced models, yet their visibility into vendor models is limited. A shared-responsibility model is proposed, and regulators are encouraged to increase direct oversight of critical third-party providers.
Emerging business models, including Banking-as-a-Service and fintech partnerships, complicate attribution of accountability. Regulators should assess whether current rules adequately address risks from these new arrangements.
Conclusion
AI’s benefits to the financial sector are clear, but its adoption does not eliminate existing risks; rather, it can amplify them. While comprehensive AI-specific financial regulation may not yet be necessary, targeted updates in areas such as governance, expertise, model risk, data management, and third-party oversight are recommended.
Regulators should collaborate internationally to align definitions and frameworks, and monitor evolving use cases and risk management practices across financial institutions.