Technology & Online Harm

Technology & online harm refers to the risks and challenges linked to existing and emerging digital technologies such as AI, blockchain, and cryptocurrencies. While these innovations can enhance efficiency and productivity, they also introduce risks like fraud, misinformation, regulatory uncertainty, and ethical dilemmas, requiring careful oversight and responsible adoption.

84 results

Artificial intelligence risk management framework (AI RMF 1.0)

National Institute of Standards and Technology (NIST)
This framework is a guide to promoting the safe, secure and transparent use of AI systems. It sets out four key functions – govern, map, measure and manage – with further categories and subcategories for managing risk in AI systems.
Research
25 January 2023

Human rights and technology: Final report

Australian Human Rights Commission
The report highlights how Australia can achieve innovation while upholding human rights. It offers recommendations on how to protect and promote human rights while responsibly using new technologies. The Commission consulted industry, government, civil society, academia, and leading experts worldwide, producing a template for further accountability and human rights protection.
Research
8 March 2021

Do androids dream of responsible investment? Exploring responsible investment in the age of information

ShareAction
This report provides insight into the emerging responsible investment risks surrounding technology. It covers four key areas of concern: bias and discrimination, manipulation and influencing behaviour, big tech and market dominance, and automation and the future of work, alongside case studies and recommended questions for asset owners.
Research
16 April 2020

Omidyar Network

Finance / Corporate Focused NGOs & Think Tanks
Omidyar Network is a philanthropic investment firm that seeks to bring about social impact by supporting and investing in innovative initiatives. With a focus on areas like financial inclusion, education, and civic engagement, Omidyar Network works to create positive change globally through a combination of strategic grants and impact investments.
Organisation
1 research item

Concrete problems in AI safety

This paper explores practical research problems associated with accidents in machine learning and artificial intelligence (AI) systems arising from incorrectly specified objectives, scalability challenges, or unintended behaviour. The authors present five research problems in the field and suggest ways to mitigate risks in modern machine learning systems.
Research
25 July 2016

Toward a G20 framework for artificial intelligence in the workplace

Centre for International Governance Innovation (CIGI)
This report advocates for creating a high-level G20 framework, built on a set of principles, for the introduction and management of big data and AI in the workplace. The paper identifies the main issues and suggests two paths towards adoption.
Research
29 June 2018

The state of AI in 2022 - and a half decade in review

McKinsey & Company
The report finds that AI adoption has more than doubled in recent years, peaking at 58%. It highlights the importance of best practices and of investing in AI, which is shown to bring financial returns. However, most organisations are not mitigating the risks associated with AI despite its increasing use.
Research
31 December 2022

The state of AI governance in Australia

University of Technology Sydney
This report reveals that Australian organisations lack structured governance around AI systems. It argues that appropriate governance of AI is critical for mitigating risk, and that corporate leaders should invest in expertise, create a comprehensive AI strategy, implement processes for addressing risks, and support a human-centred culture.
Research
31 May 2023

The implications of AI across sectors and against 6 key ESG considerations

CSIRO
AI offers significant potential benefits as well as risks. This report helps readers understand the risks associated with developing and using AI technologies. A scoping exercise identifies opportunities and threats across sectors, and six core ESG considerations, including trust and security, data privacy, and sentience, are evaluated for potential impact.
Research
17 May 2023

The impact of digital technology on human rights in Europe and Central Asia

United Nations Development Programme (UNDP)
This report examines the impact of digital technology and artificial intelligence on human rights in Europe and Central Asia, with a particular focus on the use of data protection and legislative frameworks. It provides an overview of the relevant international and regional initiatives, and analyses the applicable legal, regulatory, and institutional frameworks.
Research
17 February 2023

The global governance of artificial intelligence: Next steps for empirical and normative research

This analytical essay outlines an agenda for research into the global governance of artificial intelligence (AI). It distinguishes between empirical research, aimed at mapping and explaining global AI governance, and normative research, aimed at developing and applying standards for appropriate global AI governance.
Research
4 September 2023

The geography of Australia’s digital industries: Digital technology industry clusters in Australia’s capital cities and regions

CSIRO
This report documents the location of 96 digital technology industry clusters in Australia’s capital cities, regions, and suburbs. The report draws attention to the variables that affect industry growth and development, from company profit growth to housing affordability and quality of life.
Research
7 March 2023

Technology tools in human rights

The Engine Room
This report explores the technology tools available to human rights defenders for collecting, managing, analysing, communicating and archiving data. It advises HRDs to prioritise simplicity, familiarity, and ease of use when choosing a tool, and to be mindful of potential security risks. Strategic partnerships and obtaining second opinions can also aid decision-making.
Research
19 November 2016

Statement on artificial intelligence, robotics and 'autonomous' systems

European Commission
This statement from the European Group on Ethics in Science and New Technologies emphasises the need for a shared international ethical and legal framework for the design and governance of artificial intelligence, robotics, and 'autonomous' systems. It also proposes ethical principles based on EU values to guide the framework's development.
Research
9 March 2018

Safety by design: Investment checklist

eSafety Commissioner
This investment checklist is a concise guidance document aimed at investors and venture capitalists considering whether to invest in tech companies. It presents 12 criteria touching on the design and provision of services, community guidance, safety reviews, user tools, and proactive steps to inform users about safety policies.
Research
18 January 2021

Rights-respecting investment in technology companies

Office of the United Nations High Commissioner for Human Rights
This briefing highlights the potential human rights impact of technological advancements and the responsibility of institutional investors to mitigate these risks. Based on the UN Guiding Principles, investors should implement human rights policies, assess risks and divest from companies with inadequate human rights practices.
Research
27 January 2021