Library | Finance relevance
General / All Finance
450 results
Toward a G20 framework for artificial intelligence in the workplace
This report advocates for a high-level G20 framework, built on a set of principles, for the introduction and management of big data and AI in the workplace. The paper identifies the main issues and suggests two paths towards adoption.
The state of AI governance in Australia
This report reveals that Australian organisations lack structured governance around AI systems. Corporate leaders should invest in expertise, create a comprehensive AI strategy, implement practices that address AI risks, and support a human-centred culture. Appropriate governance of AI systems is critical for corporate leaders seeking to mitigate these risks.
The implications of AI across sectors and against 6 key ESG considerations
AI presents both significant opportunities and risks. This report helps readers understand the risks associated with developing and using AI technologies. A scoping exercise identifies opportunities and threats across sectors, and six core ESG considerations, including trust and security, data privacy, and sentience, are evaluated for potential impact.
The impact of digital technology on human rights in Europe and Central Asia
This report examines the impact of digital technology and artificial intelligence on human rights in Europe and Central Asia, with a particular focus on data protection and legislative frameworks. It provides an overview of the relevant international and regional initiatives and analyses the applicable legal, regulatory, and institutional frameworks.
The global governance of artificial intelligence: Next steps for empirical and normative research
This analytical essay outlines an agenda for research into the global governance of artificial intelligence (AI). It distinguishes between empirical research, aimed at mapping and explaining global AI governance, and normative research, aimed at developing and applying standards for appropriate global AI governance.
Statement on artificial intelligence, robotics and 'autonomous' systems
This statement from the European Group on Ethics in Science and New Technologies emphasises the need for a shared international ethical and legal framework for the design and governance of artificial intelligence, robotics, and 'autonomous' systems. It also proposes ethical principles based on EU values to guide the framework's development.
Rights-respecting investment in technology companies
This briefing highlights the potential human rights impacts of technological advancements and the responsibility of institutional investors to mitigate these risks. Drawing on the UN Guiding Principles, it recommends that investors implement human rights policies, assess risks, and divest from companies with inadequate human rights practices.
Montreal declaration for a responsible development of artificial intelligence
This report outlines a framework for the responsible development of artificial intelligence. It sets out principles to guide the ethical use of AI: well-being of sentient beings, respect for autonomy, protection of privacy and intimacy, solidarity, democratic participation, equity, diversity inclusion, caution, responsibility, and sustainable development.
Investors' expectations on responsible artificial intelligence and data governance
This report outlines responsible AI and data governance principles and an engagement framework for investors across multiple sectors. The six core principles aim to enhance the auditability, explainability, and transparency of machine learning systems, while taking into account legal, regulatory, ethical, and reputational risks.
Generative artificial intelligence in finance: Risk considerations
Generative AI is a subset of AI/ML that creates new content. It offers efficiency and customer-experience enhancements, as well as advantages for risk management and compliance reporting. However, deploying GenAI in the financial sector requires the industry to recognise and mitigate the technology's risks comprehensively; financial institutions must strengthen their cybersecurity and regulatory oversight capacities.
Ethics guidelines for trustworthy AI
The European Commission's High-Level Expert Group on AI has released its Ethics Guidelines for Trustworthy AI. The report provides a framework for creating lawful, ethical, and robust AI systems throughout the system's life cycle. The guidelines focus on respect for human autonomy, prevention of harm, fairness, and explicability.
Engaging the ICT sector on human rights: Political participation
This ICT sector-wide risk assessment examines potential impacts on the salient human rights issue of political participation. It presents international standards, discusses the use of ICT in politics, and offers human rights guidance for businesses. It also highlights risks and provides suggestions for stakeholder engagement and investor action to mitigate negative impacts.
Engaging the ICT sector on human rights: Freedom of opinion and expression
This report assesses risks to freedom of opinion and expression (FOE) in the ICT sector. It identifies negative impacts and provides guidance for companies and investors on how to respect and promote FOE.
Engaging the ICT sector on human rights: Conflict and security
This report provides an overview of the main human rights instruments and adverse impacts of the ICT sector in conflict-affected areas, emphasising its role in promoting security and other human rights while highlighting the potential risks of new technologies in this context. It also includes investor guidance to help evaluate if companies are meeting their human rights responsibilities.
Engaging the ICT sector on human rights: Child rights
This briefing explores the risks and opportunities Information and Communication Technologies (ICT) companies face in relation to children's rights. It highlights the importance of adhering to international standards and implementing internal policies and practices that prioritise the most severe impacts on children. Investors are encouraged to hold companies accountable.
Digital safety risk assessment in action: A framework and bank of case studies
This report contains a framework and case studies for digital safety risk assessment. The case studies cover topics such as trust and safety best practices, human rights due diligence, and child safety in gaming and immersive worlds.