
Investors' expectations on responsible artificial intelligence and data governance
This report outlines responsible AI and data governance principles and an engagement framework for investors across multiple sectors. The six core principles aim to enhance the auditability, explainability, and transparency of machine-learning systems, while taking into account legal, regulatory, ethical, and reputational risks.

OVERVIEW
This report delves into investors’ expectations regarding the responsible use of artificial intelligence (AI) and data governance, focusing on environmental, social, and governance (ESG) considerations. It identifies AI and data governance as emerging ESG concerns for investors, with implications for businesses, regulators, and society at large.
One key highlight is the identification of inherent biases within AI, which can manifest as input data bias, process bias, and outcomes bias. These biases arise from the application of data in specific contexts, underscoring the need for clear accountability and ethical design in AI systems to ensure data security, freedom from bias, and transparency.
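The outcomes-bias category above can be made concrete with a simple check. The sketch below (illustrative only, not from the report; the group names and decision data are hypothetical) compares a model's positive-outcome rates across demographic groups, a basic signal that biased scoring may be present:

```python
# Illustrative sketch: a minimal check for "outcomes bias" by
# comparing positive-outcome rates across groups.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def parity_difference(outcomes_by_group):
    """Largest gap in selection rate between any two groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% approved
}

print(f"Selection-rate gap: {parity_difference(decisions):.2f}")  # 0.50
```

A large gap does not by itself prove discrimination, but it is the kind of measurable indicator that the report's accountability and transparency principles ask companies to monitor.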
The report proposes six core principles for investor engagement with companies on AI and data governance, which include enhancing machine learning and ensuring auditability, explainability, and transparency. The engagement approach suggests two strands: a risk factor approach and a process-driven approach. Sectors such as financial services, healthcare, technology, and utilities are singled out for particular regulatory consideration.
Acknowledging the significance of diverse perspectives, the report underscores the necessity for diverse teams in the development, implementation, and testing of AI products. Additionally, it recommends companies establish internal AI guidelines and integrate AI impact and governance awareness into their corporate culture.
Building on the key findings, the report offers practical recommendations. Investors are advised to engage companies based on the six core principles, while boards are urged to take accountability for responsible AI use and establish internal data governance mechanisms. Companies are encouraged to stay informed about the regulatory framework and financial impacts related to AI, including emerging policy contexts. Furthermore, they should establish frameworks to address unintended outcomes tied to biased or discriminatory scoring.
To enhance the responsible use of AI, specific recommendations are outlined. Where sequential data is used, AI models should incorporate methods to mitigate out-of-context application and overfitting. In cases involving deep vision techniques, companies are advised to provide explanations of the network architecture applied. Emphasising the importance of diversity, the report suggests companies ensure a varied mix of views, beliefs, and perspectives within teams working on AI products.
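One common way to mitigate the overfitting mentioned above is early stopping: halt training once validation loss stops improving. The sketch below (an assumption for illustration; the function name, patience threshold, and loss values are not from the report) shows the core logic in plain Python:

```python
# Illustrative sketch: early stopping to mitigate overfitting.

def early_stop_index(val_losses, patience=2):
    """Return the epoch index at which training should stop:
    when validation loss has not improved for `patience` epochs."""
    best = float("inf")
    stale = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                return epoch
    return len(val_losses) - 1

# Validation loss falls, then rises as the model starts to overfit.
losses = [0.90, 0.70, 0.55, 0.50, 0.53, 0.58, 0.66]
print(early_stop_index(losses))  # prints 5: shortly after the minimum at epoch 3
```

In practice the same idea is available as a built-in callback in most training frameworks; the point here is that such guardrails are auditable, which supports the report's transparency principle.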
The report concludes by stressing the integration of trustworthy AI values into organisational culture through the design and use of AI systems, supported by training and continuous education. In summary, the report provides a comprehensive overview, identifies key considerations, and offers actionable recommendations to navigate the responsible use of AI and data governance in line with investors’ expectations and ESG principles.