Governance of AI adoption in central banks
This BIS report outlines central banks’ AI use cases and the associated strategic, operational, cyber and reputational risks. It advocates adapting existing risk-management and three-lines-of-defence frameworks, supported by an adaptive AI governance model and ten practical actions, to balance innovation with security, compliance, data privacy and organisational resilience.
OVERVIEW
Foreword
The report explains how artificial intelligence (AI) supports central bank functions such as data analysis, forecasting, payments, supervision and banknote production, while creating risks around data security, confidentiality and reputation. It aims to guide AI implementation through a governance and risk management framework grounded in established models such as the three lines of defence.
Use of AI
AI is defined, following the OECD, as machine-based systems that infer how to generate outputs such as predictions, content, recommendations or decisions, with differing autonomy and adaptiveness after deployment. AI applications rely on models based on rules, code, knowledge and data, including machine learning, deep learning, generative AI, natural language processing, large language models and generative pre‑trained transformers.
AI benefits for central banks and use cases
Central banks use AI to automate processes, analyse large data sets, solve complex problems and support innovation across core functions. Examples include GDP nowcasting, inflation forecasting, regulatory complexity analysis, payment anomaly detection, supervision and banknote demand forecasting, as well as anomaly detection for data quality and cyber security, and customer and corporate services via chatbots and research assistance.
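To make the anomaly-detection use case concrete, the following is a minimal sketch of statistical outlier flagging on payment amounts. The data, threshold and function name are hypothetical; real payment monitoring would use robust statistics, richer features and dedicated models.

```python
import statistics

def flag_anomalies(amounts, z_threshold=2.0):
    """Flag amounts whose z-score exceeds the threshold.

    Toy illustration only: a single large outlier inflates the sample
    standard deviation, so for small samples the usable threshold is
    modest; production systems would use robust statistics instead.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if stdev and abs(a - mean) / stdev > z_threshold]

payments = [100, 98, 102, 101, 97, 99, 103, 5_000]  # hypothetical daily values
print(flag_anomalies(payments))  # → [5000]
```

The same pattern (baseline statistics plus a deviation threshold) underlies the data-quality and cyber-security anomaly detection mentioned above, with the features and thresholds chosen per use case.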
Risks and their impact
AI adoption introduces or amplifies strategic, operational, legal, compliance, data governance, information security, cyber, ICT, third‑party, model, environmental, ethical, social and reputational risks. The report highlights issues such as legal uncertainty, lack of explainability, skills gaps, expanded attack surfaces, data leakage, model hallucinations and biases, high energy use and potential reputational damage.
Risks associated with the adoption of AI
Operational risks include process failures, inadequate staff capabilities and weak data governance, while information security and privacy risks involve possible disclosure of confidential data, integrity issues and availability disruptions. Cyber risks include prompt injection, training‑data poisoning, model denial‑of‑service and model theft. Third‑party risks stem from vendor access to sensitive data, concentration and incidents. Model risks cover poor data quality, overfitting, hallucinations, repeatability issues, overconfidence in outputs and limited transparency, while environmental and ethical risks relate to carbon emissions and biased or inappropriate outputs.
Broader considerations and impacts
Beyond defined risks, the report notes unplanned uses of generative AI, uneven job impacts and new collaborative demands on information, process and technology teams. Integration complexity, resource needs for tailored models, evolving tools, infrastructure strain and growing dependency on AI can delay expected benefits and increase costs.
Risk management for AI
A comprehensive AI risk management strategy should align with institutional objectives and risk appetite, embedding AI risks into existing enterprise, operational and ICT risk frameworks rather than treating them separately. Central banks are encouraged initially to deploy AI in lower‑criticality internal processes while capabilities, controls and understanding mature.
Risk management strategy
Suggested steps include defining an AI risk profile and governance arrangements, mapping and prioritising use cases by criticality and data sensitivity, and using multidisciplinary teams for assessment. Risk identification should inform control design, infrastructure adjustments and continuous monitoring across the AI life cycle, including retirement of tools when appropriate.
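The mapping-and-prioritisation step above can be sketched as a simple scoring exercise. The use cases, scores and the multiplicative formula are invented for illustration; actual frameworks weight many more factors and involve multidisciplinary judgment.

```python
# Hypothetical mapping of candidate AI use cases, each scored 1-3 on
# process criticality and data sensitivity.
use_cases = [
    {"name": "GDP nowcasting",     "criticality": 2, "sensitivity": 1},
    {"name": "Payment monitoring", "criticality": 3, "sensitivity": 3},
    {"name": "Research chatbot",   "criticality": 1, "sensitivity": 2},
]

def risk_score(uc):
    # Simple multiplicative score so that high criticality combined with
    # high sensitivity dominates the ranking.
    return uc["criticality"] * uc["sensitivity"]

# Rank so the riskiest use cases receive the deepest assessment first.
for uc in sorted(use_cases, key=risk_score, reverse=True):
    print(f'{uc["name"]}: {risk_score(uc)}')
```

Ranking by such a score gives the multidisciplinary team a defensible order in which to design controls and monitoring across the AI life cycle.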
Information security, privacy and cyber security risks
The report recommends adapting standards such as ISO/IEC 27001 and the NIST Cybersecurity Framework to AI, with strong data classification, access management, logging and incident response. Institutions should scrutinise where data are stored and processed, ensure encryption and segmentation, and align third‑party and cloud usage with corporate risk appetite and data protection rules.
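As a minimal sketch of the access-management and logging controls recommended above, the snippet below gates an internal AI service by role and writes an audit entry for every request. The roles, log format and service are hypothetical, not prescribed by the report.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("ai_audit")

# Hypothetical role-based access policy for an internal AI service.
ALLOWED_ROLES = {"analyst", "supervisor"}

def query_model(user, role, prompt):
    """Enforce role-based access and audit-log every request."""
    if role not in ALLOWED_ROLES:
        audit.warning("DENIED user=%s role=%s", user, role)
        raise PermissionError(f"role {role!r} may not query the model")
    audit.info("ALLOWED user=%s role=%s prompt_len=%d", user, role, len(prompt))
    return "model response placeholder"  # stand-in for the real model call
```

Routing every request through one audited entry point is what makes the incident-response and data-protection controls in ISO/IEC 27001-style frameworks enforceable in practice.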
Specific actions to mitigate generative AI risks
For generative AI, the report calls for targeted governance, clear acceptable‑use policies, classification of training and input data, inventories of tools including “shadow AI”, sandboxes, rigorous validation, human‑in‑the‑loop review and output labelling. It also stresses staff training and careful review of provider terms, data flows and retention.
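The input-data classification control described above can be illustrated as a simple gate: prompts carrying data classified above the level approved for a given tool are blocked. The classification labels, tool names and policy table are invented for the sketch.

```python
# Hypothetical classification scheme, ordered from least to most sensitive.
CLASS_LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

# Hypothetical policy: the external LLM may only receive data up to
# "internal"; a self-hosted sandbox model may go one level higher.
TOOL_MAX_LEVEL = {
    "external_llm": CLASS_LEVELS["internal"],
    "sandbox_model": CLASS_LEVELS["confidential"],
}

def may_submit(tool, data_classification):
    """Return True only if the data's classification does not exceed
    the maximum level approved for the tool."""
    return CLASS_LEVELS[data_classification] <= TOOL_MAX_LEVEL[tool]

print(may_submit("external_llm", "public"))        # → True
print(may_submit("external_llm", "confidential"))  # → False
```

Combined with a maintained tools inventory, such a gate also helps surface “shadow AI”: any tool absent from the policy table fails closed rather than silently accepting sensitive data.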
Governance
AI governance should sit within existing organisational frameworks, aligning AI initiatives with institutional strategy, risk appetite and transparency obligations. Principles of security, privacy, explainability, reliability, ethics, responsibility and accountability are emphasised, supported by adaptive governance that responds to technological and regulatory developments.
Current industry frameworks
Non‑binding international and industry frameworks on trustworthy AI provide guidance on data governance, human oversight, risk‑based controls and impact assessments that can be tailored to central bank mandates and structures.
Proposed actions for AI governance at central banks
Ten actions are proposed: form an interdisciplinary AI committee; define responsible‑AI principles; establish an AI governance framework and update guidance; maintain an AI tools inventory; map tools to stakeholders and processes; perform detailed risk and control assessments; monitor regularly; report incidents; build workforce skills; and periodically review and adapt the framework.
Conclusions
The report concludes that AI is becoming important for central bank operations and analysis but heightens existing risks and introduces new ones. Safe adoption requires holistic governance and risk management grounded in established frameworks, clear AI risk appetite and systematic implementation of the recommended actions.