Cyber Security
26 results
Intelligent financial system: How AI is transforming finance
The report explores the transformative role of AI in the financial sector, focusing on financial intermediation, insurance, asset management, and payments. It highlights both opportunities and challenges, including implications for financial stability and the need for upgraded financial regulation to manage the risks associated with AI's growing influence.
The intersection of Responsible AI and ESG: A framework for investors
This report provides actionable insights for investors exploring the integration of Responsible AI (RAI) in their investment decisions. It offers a framework to assess the environmental, social, and governance (ESG) implications of Artificial Intelligence (AI) usage by companies. The report includes case studies of globally listed companies and a set of templates to support investors in implementing the framework.
Concrete problems in AI safety
This paper explores practical research problems related to accidents in machine learning and artificial intelligence (AI) systems: unintended, harmful behaviour arising from poorly specified objectives, challenges of scalable oversight, or unsafe behaviour during learning. The authors present five concrete research problems and suggest ways to mitigate these risks in modern machine learning systems.
Toward a G20 framework for artificial intelligence in the workplace
This report advocates for creating a high-level G20 framework, built on a set of principles, for the introduction and management of big data and AI in the workplace. The paper identifies the main issues and suggests two paths towards adoption.
Top 10 principles for ethical artificial intelligence
This report provides 10 principles for ethical artificial intelligence. From transparency in decision-making to ensuring a just transition and support for fundamental freedoms and rights, the report aims to empower workers and maintain a healthy balance of power in the workplace.
The state of AI in 2022 - and a half decade in review
AI adoption has more than doubled over the past half decade, peaking at 58%. The report highlights the importance of best practices and of investing in AI, as it is shown to deliver financial returns. However, despite increasing use, the majority of organisations are not mitigating the risks associated with AI.
The Japanese society for artificial intelligence ethical guidelines
The Japanese Society for Artificial Intelligence has released ethical guidelines that aim to protect basic human rights and promote the peace, welfare, and public interest of humanity. The eight guidelines are: contributing to humanity, abiding by laws and regulations, respecting others' privacy, being fair, maintaining security, acting with integrity, being accountable and socially responsible, and communicating with society while pursuing self-development.
The implications of AI across sectors and against 6 key ESG considerations
AI presents both significant positive impacts and serious risks. This report helps readers understand the risks associated with developing and using AI technology. A scoping exercise identifies opportunities and threats across sectors, and six core ESG considerations, including trust and security, data privacy, and sentience, are evaluated for potential impact.
Report of COMEST on robotics ethics
COMEST has released a report on robotics ethics which covers the history, development, and social impact of robots. It also offers recommendations for the ethical use of robotics.
Montreal declaration for a responsible development of artificial intelligence
This report outlines a framework for the responsible development of artificial intelligence. It provides principles that should guide the ethical use of AI: the well-being of sentient beings, respect for autonomy, protection of privacy and intimacy, solidarity, democratic participation, equity, diversity and inclusion, caution, responsibility, and sustainable development.
Investors' expectations on responsible artificial intelligence and data governance
This report outlines responsible AI and data governance principles, along with an engagement framework for investors across multiple sectors. The six core principles aim to enhance the auditability, explainability, and transparency of machine learning systems, while taking into account legal, regulatory, ethical, and reputational risks.
Generative artificial intelligence in finance: Risk considerations
Generative AI is a subset of AI/ML that creates new content. It offers efficiency and customer-experience enhancements, as well as advantages for risk management and compliance reporting. However, deploying GenAI in the financial sector requires the industry to recognise and comprehensively mitigate the technology's risks; financial institutions must strengthen their cybersecurity and regulatory oversight capacities.
Engaging the ICT sector on human rights: Privacy and data protection
This report provides a sector-wide risk assessment on privacy and data protection in the Information and Communications Technology (ICT) industry. It covers international standards and salient issues to consider when engaging with ICT companies, the "business case" for privacy and data protection, and investor guidance for engaging ICT companies.
Engaging the ICT sector on human rights: Conflict and security
This report provides an overview of the main human rights instruments and adverse impacts of the ICT sector in conflict-affected areas, emphasising its role in promoting security and other human rights while highlighting the potential risks of new technologies in this context. It also includes investor guidance to help evaluate if companies are meeting their human rights responsibilities.
Digital safety risk assessment in action: A framework and bank of case studies
This report contains a framework and case studies for digital safety risk assessment. The case studies cover topics such as trust and safety best practices, human rights due diligence, and child safety in gaming and immersive worlds.
Beyond explainability: A practical guide to managing risk in machine learning models
This report offers a comprehensive guide for effectively managing risk in machine learning models. It presents a framework that enables data science and compliance teams to create better, more accurate, and more compliant models. The report stresses the importance of understanding the data used by models and implementing three lines of defence to assess and ensure their safety.