Technology & Online Harm
Technology & online harm refers to the risks and challenges linked to existing and emerging digital technologies such as AI, blockchain, and cryptocurrencies. While these innovations can enhance efficiency and productivity, they also introduce risks like fraud, misinformation, regulatory uncertainty, and ethical dilemmas, requiring careful oversight and responsible adoption.
Ulula
Ulula offers innovative solutions for responsible supply chains and worker engagement. Their platform provides real-time data collection and analysis to help organisations monitor human rights risks and ensure ethical sourcing. Ulula enables businesses to drive transparency, reduce modern slavery risks, and meet global sustainability and compliance standards effectively.
Intelligent financial system: How AI is transforming finance
The report explores the transformative role of AI in the financial sector, focusing on financial intermediation, insurance, asset management, and payments. It highlights both opportunities and challenges, including implications for financial stability and the need for upgraded financial regulation to manage the risks associated with AI's growing influence.
Drivers of change: Meeting the energy and data demands of AI adoption in Australia and New Zealand
The report explores the energy challenges posed by AI adoption, highlighting concerns among IT managers about increased energy consumption and uncertainty regarding its impact on sustainability. The research underscores the need for enhanced energy efficiency and green energy solutions to meet ESG goals without hindering AI deployment.
Artificial intelligence and human rights investor toolkit
This toolkit aims to provide investors with guidance on how to navigate the intersecting terrain of AI and human rights. It covers the various aspects of AI implementation that have potentially significant implications for human rights, and how investors can engage with companies on these issues. Its focus is on emerging risks and opportunities for investors in the context of rapidly evolving technologies and the ethical challenges they pose.
Investing in stakeholder engagement for improved digital technologies
This report explores the importance of stakeholder engagement for tech sector investors. It shows how engaging with affected stakeholders helps identify, assess, and mitigate human rights risks. It provides recommendations for investors to fund more rights-respecting companies.
The intersection of Responsible AI and ESG: A framework for investors
This report provides actionable insights for investors exploring the integration of Responsible AI (RAI) in their investment decisions. It offers a framework to assess the environmental, social, and governance (ESG) implications of Artificial Intelligence (AI) usage by companies. The report includes case studies of globally listed companies and a set of templates to support investors in implementing the framework.
The global risks report 2024: 19th edition
This report outlines global risks in 2024 and 2034, in an effort to give government and business leaders insight into potential future threats. It highlights risks ranging from false information, economic uncertainty, climate change, and AI dominance to increased conflict and organised crime.
Artificial intelligence risk management framework (AI RMF 1.0)
This framework is a guide to promote the safe, secure, and transparent use of AI systems. It sets out four key functions (govern, map, measure, and manage) with further categories and subcategories for managing risk in AI systems.
Human rights and technology: Final report
The report highlights how Australia can achieve innovation while upholding human rights. It offers recommendations on how to protect and promote human rights while responsibly using new technologies. The Commission consulted industry, government, civil society, academia, and leading experts worldwide, producing a template for further accountability and human rights protection.
Do androids dream of responsible investment? Exploring responsible investment in the age of information
This report provides insight into the emerging responsible investment risks surrounding technology. It covers four key areas of concern: bias and discrimination, manipulation and influencing behaviour, big tech and market dominance, and automation and the future of work, alongside case studies and recommended questions for asset owners.
Omidyar Network
Omidyar Network is a philanthropic investment firm that seeks to bring about social impact by supporting and investing in innovative initiatives. With a focus on areas like financial inclusion, education, and civic engagement, Omidyar Network works to create positive change globally through a combination of strategic grants and impact investments.
Concrete problems in AI safety
This paper explores practical research issues associated with accidents in machine learning and artificial intelligence (AI) systems, due to incorrect objectives, scalability, or choice of behaviour. The authors present five research problems in the field, suggesting ways to mitigate risks in modern machine learning systems.
Toward a G20 framework for artificial intelligence in the workplace
This report advocates for creating a high-level, G20 framework using a set of principles for the introduction and management of big data and AI in the workplace. The paper identifies main issues and suggests two paths towards adoption.
The state of AI in 2022 - and a half decade in review
AI adoption has more than doubled over the past five years, peaking at 58%. The report highlights the importance of best practices and shows that investing in AI delivers financial returns. However, most organisations are not mitigating the risks associated with AI despite its increasing use.
The state of AI governance in Australia
This report reveals that Australian organisations lack structured governance around AI systems. Corporate leaders should invest in expertise, create a comprehensive AI strategy, address risks, and support a human-centred culture. Appropriate governance of AI systems is critical for corporate leaders seeking to mitigate these risks.
The implications of AI across sectors and against 6 key ESG considerations
AI offers significant benefits as well as risks. This report helps readers understand the risks associated with developing and using AI technology. A scoping exercise identifies opportunities and threats across sectors, and six core ESG considerations, including trust and security, data privacy, and sentience, are evaluated for potential impact.