Integrating ESG and AI: A comprehensive responsible AI assessment framework
The report introduces an ESG-AI framework enabling investors to assess AI-related environmental, social, and governance risks. Drawing on insights from 28 companies, it provides use-case materiality analysis, governance indicators, and deep-dive assessments to support transparent, responsible AI evaluation and investment decisions.
OVERVIEW
Introduction
The report analyses how Environmental, Social and Governance (ESG) frameworks can be extended to evaluate responsible AI (RAI). It highlights rising AI adoption, emerging regulation (e.g., EU AI Act), and gaps in existing ESG assessments, which often overlook AI-specific risks such as energy use, bias, discrimination, privacy, cybersecurity, and transparency failures. The study identifies investor demand for practical tools to assess AI risks and integrates insights from 28 companies to form the ESG-AI framework.
Background and literature review
ESG frameworks guide assessment of environmental, social, and governance impacts, and AI presents both opportunities and risks across these dimensions. Prior studies identify limited standardisation in ESG metrics, especially environmental impacts, and lack of actionable AI governance assessment methods. Existing AI ethics frameworks offer principles but rarely operational approaches for investors. The literature recognises overlaps between ESG concerns and RAI themes—fairness, transparency, data governance, and human rights—but notes a gap in comprehensive, integrated tools.
Methodology
The study follows collaborative research across three phases: pre-engagement research, engagement with 28 companies across eight sectors, and framework development. Companies were selected based on AI maturity and willingness to share insights. Interviews were jointly conducted by ESG investors and AI researchers, with data analysed independently before synthesis through workshops. Iterative testing over five months refined the framework and toolkit.
ESG-AI framework
Framework design
Six insights shaped the framework: the need for employee engagement; strengthening board capability; embedding RAI into existing systems; balancing AI risks and opportunities; addressing supply chain risks; and widening risk consideration beyond privacy alone. The framework comprises three components—AI use-case analysis, RAI governance indicators, and RAI deep-dive assessment—with scoring aligned to high/medium/low risk classifications.
AI use case
Twenty-seven AI use cases across nine sectors were analysed by regulatory risk, impact level, and impact scope. Most were medium-risk; none were unacceptable or low risk. Energy and healthcare sectors contained two high-risk use cases each due to critical infrastructure and biometric applications. Four use cases had high environmental and social impact, particularly in information technology and materials. Eight use cases posed systemic-level risks, such as credit scoring and clinical care.
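The use-case analysis rates each application on three dimensions: regulatory risk, impact level, and impact scope. A minimal sketch of how such a three-dimensional rating could roll up into an overall tier (the scales, weights, and thresholds below are illustrative assumptions, not the report's published methodology):

```python
from dataclasses import dataclass

# Illustrative ordinal scales; the report's actual scales may differ.
REG_RISK = {"low": 1, "medium": 2, "high": 3, "unacceptable": 4}
IMPACT = {"low": 1, "medium": 2, "high": 3}
SCOPE = {"individual": 1, "group": 2, "systemic": 3}

@dataclass
class UseCase:
    name: str
    regulatory_risk: str
    impact_level: str
    impact_scope: str

def classify(uc: UseCase) -> str:
    """Combine the three dimensions into an overall tier (illustrative rule)."""
    if uc.regulatory_risk == "unacceptable":
        return "unacceptable"  # prohibited regardless of other dimensions
    score = (REG_RISK[uc.regulatory_risk]
             + IMPACT[uc.impact_level]
             + SCOPE[uc.impact_scope])
    if score >= 8:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

# Credit scoring is one of the systemic-level use cases the report flags.
credit_scoring = UseCase("credit scoring", "high", "high", "systemic")
print(classify(credit_scoring))  # high
```

The additive rule is one simple way to combine ordinal dimensions; the report's own matrix may weight regulatory risk more heavily.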
RAI governance indicators
Ten indicators assess governance maturity across board oversight, RAI commitment, implementation, and metrics. Requirements include board accountability, public RAI policy, defined targets, responsible roles, employee awareness, system integration, incident reporting, and external disclosure. Scores classify companies as high, medium, or low risk.
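Banding companies by how many of the ten indicators they satisfy can be sketched as follows (the indicator names are paraphrased from the requirements above, and the band thresholds are illustrative assumptions, not the report's published cut-offs):

```python
# Ten governance indicators scored as met/not met; names paraphrase the
# report's requirements and are not its exact indicator labels.
INDICATORS = [
    "board_accountability", "public_rai_policy", "defined_targets",
    "responsible_roles", "employee_awareness", "system_integration",
    "incident_reporting", "external_disclosure", "rai_commitment",
    "rai_metrics",
]

def governance_risk(scores: dict) -> str:
    """Map the count of satisfied indicators to a risk band (illustrative thresholds)."""
    met = sum(bool(scores.get(i, False)) for i in INDICATORS)
    if met >= 8:
        return "low"
    if met >= 4:
        return "medium"
    return "high"

company = {i: True for i in INDICATORS[:5]}  # five of ten indicators met
print(governance_risk(company))  # medium
```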
RAI deep dive assessment
This module contains 42 questions aligned with eight AI ethics principles and 43 metrics. A six-point scoring scale determines whether a company’s RAI practice is unacceptable, weak, moderate, or strong. Foundation models receive dedicated assessment with 13 mandatory and 8 optional questions covering environmental impact, data governance, model performance, and transparency.
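One way the six-point scale could map onto the four practice levels is to average question scores and band the result; a minimal sketch (the 0–5 scale interpretation and band boundaries are illustrative assumptions):

```python
def practice_level(question_scores: list) -> str:
    """Average six-point (0-5) question scores into a practice level.

    Band boundaries below are illustrative, not the report's published cut-offs.
    """
    if any(s < 0 or s > 5 for s in question_scores):
        raise ValueError("scores must be on the 0-5 scale")
    avg = sum(question_scores) / len(question_scores)
    if avg < 1:
        return "unacceptable"
    if avg < 2.5:
        return "weak"
    if avg < 4:
        return "moderate"
    return "strong"

print(practice_level([4, 5, 4, 4]))  # strong
```

For foundation models, the same scoring would apply over the 13 mandatory questions, with the 8 optional questions included when answered.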
Validation of the framework
Benchmarking showed that existing frameworks partially cover ESG or AI ethics but lack integration and practical decision tools. Of the assessment questions, 53% derive from the EU AI Act and 41% from NIST. Investor testing confirmed usability and relevance.
Discussion
Findings highlight materiality differences across sectors and reinforce the role investors can play in AI governance. Limitations include the representativeness of the 28-company sample; broader testing is recommended.
Conclusion
The framework provides a structured tool for integrating ESG and AI considerations into investment analysis, supporting responsible and transparent AI adoption.