Cyber Security

Intelligent financial system: How AI is transforming finance

Bank for International Settlements
The report explores the transformative role of AI in the financial sector, focusing on financial intermediation, insurance, asset management, and payments. It highlights both opportunities and challenges, including implications for financial stability and the need for upgraded financial regulation to manage the risks associated with AI's growing influence.
Research
11 June 2024

The intersection of Responsible AI and ESG: A framework for investors

CSIRO
This report provides actionable insights for investors exploring the integration of Responsible AI (RAI) in their investment decisions. It offers a framework to assess the environmental, social, and governance (ESG) implications of Artificial Intelligence (AI) usage by companies. The report includes case studies of globally listed companies and a set of templates to support investors in implementing the framework.
Research
23 April 2024

Concrete problems in AI safety

This paper explores practical research issues associated with accidents in machine learning and artificial intelligence (AI) systems, such as those arising from incorrectly specified objectives, the difficulty of scaling oversight, or unsafe behaviour during learning. The authors present five research problems in the field and suggest ways to mitigate risks in modern machine learning systems.
Research
25 July 2016

Toward a G20 framework for artificial intelligence in the workplace

Centre for International Governance Innovation (CIGI)
This report advocates for creating a high-level G20 framework, built on a set of principles, for the introduction and management of big data and AI in the workplace. The paper identifies the main issues and suggests two paths towards adoption.
Research
29 June 2018

Top 10 principles for ethical artificial intelligence

UNI Global Union
This report provides 10 principles for ethical artificial intelligence. From transparency in decision-making to ensuring a just transition and support for fundamental freedoms and rights, the report aims to empower workers and maintain a healthy balance of power in the workplace.
Research
16 November 2017

The state of AI in 2022 - and a half decade in review

McKinsey & Company
AI adoption has more than doubled over the past five years, peaking at 58% in earlier surveys. The report highlights the importance of following best practices and of investing in AI, which is shown to deliver financial returns. However, most organisations are not mitigating the risks associated with AI despite its increasing use.
Research
31 December 2022

The Japanese society for artificial intelligence ethical guidelines

Japanese Society for Artificial Intelligence (JSAI)
The Japanese Society for Artificial Intelligence has released ethical guidelines that aim to protect basic human rights and promote the peace, welfare, and public interest of humanity. The eight guidelines include: contributing to humanity, abiding by laws and regulations, respecting others' privacy, being fair, maintaining security, acting with integrity, being accountable and socially responsible, and communicating with society and pursuing self-development.
Research
3 May 2017

The implications of AI across sectors and against 6 key ESG considerations

CSIRO
AI offers significant potential benefits as well as risks. This report helps readers understand the risks associated with developing and using AI technology. A scoping exercise identifies opportunities and threats across sectors, and six core ESG considerations, including trust and security, data privacy, and sentience, are evaluated for their potential impact.
Research
17 May 2023

Report of COMEST on robotics ethics

United Nations Educational, Scientific and Cultural Organization (UNESCO)
COMEST has released a report on robotics ethics which covers the history, development, and social impact of robots. It also offers recommendations for the ethical use of robotics.
Research
14 September 2017

Montreal declaration for a responsible development of artificial intelligence

This report outlines a framework for the responsible development of artificial intelligence. It sets out principles that should guide the ethical use of AI: the well-being of sentient beings, respect for autonomy, protection of privacy and intimacy, solidarity, democratic participation, equity, diversity and inclusion, caution, responsibility, and sustainable development.
Research
10 July 2019

Investors' expectations on responsible artificial intelligence and data governance

Federated Hermes
This report outlines responsible AI and data governance principles and an engagement framework for investors across multiple sectors. The six core principles aim to enhance the auditability, explainability, and transparency of machine learning, while taking into account legal, regulatory, ethical, and reputational risks.
Research
25 April 2019

Generative artificial intelligence in finance: Risk considerations

International Monetary Fund
Generative AI is a subset of AI/ML that creates new content. It offers efficiency and customer-experience enhancements, as well as advantages for risk management and compliance reporting. However, deploying GenAI in the financial sector requires the industry to recognise and mitigate the technology's risks comprehensively; financial institutions must strengthen their cybersecurity and regulatory oversight capacities.
Research
22 August 2023

Engaging the ICT sector on human rights: Privacy and data protection

Investor Alliance for Human Rights
This report provides a sector-wide risk assessment of privacy and data protection in the Information and Communications Technology (ICT) industry. It includes international standards and salient issues to consider when engaging with ICT companies, the "business case" for privacy and data protection, and investor guidance for engaging ICT companies.
Research
6 March 2020

Engaging the ICT sector on human rights: Conflict and security

Investor Alliance for Human Rights
This report provides an overview of the main human rights instruments and adverse impacts of the ICT sector in conflict-affected areas, emphasising its role in promoting security and other human rights while highlighting the potential risks of new technologies in this context. It also includes investor guidance to help evaluate if companies are meeting their human rights responsibilities.
Research
6 March 2020

Digital safety risk assessment in action: A framework and bank of case studies

World Economic Forum
This report contains a framework and case studies for digital safety risk assessment. The case studies cover topics such as trust and safety best practices, human rights due diligence, and child safety in gaming and immersive worlds.
Research
23 May 2023

Beyond explainability: A practical guide to managing risk in machine learning models

Future of Privacy Forum (FPF)
This report offers a comprehensive guide for effectively managing risk in machine learning models. It presents a framework that enables data science and compliance teams to create better, more accurate, and more compliant models. The report stresses the importance of understanding the data used by models and implementing three lines of defence to assess and ensure their safety.
Research
22 June 2018