Technology & Online Harm
Technology & online harm refers to the risks and challenges linked to existing and emerging digital technologies such as AI, blockchain, and cryptocurrencies. While these innovations can enhance efficiency and productivity, they also introduce risks like fraud, misinformation, regulatory uncertainty, and ethical dilemmas, requiring careful oversight and responsible adoption.
Safety by design: Investment checklist
This investment checklist is a concise guidance document aimed at investors and venture capitalists considering whether to invest in tech companies. The checklist presents 12 criteria touching on design and provision of services, community guidance, safety reviews, user tools, and proactive steps to inform users about safety policies.
Rights-respecting investment in technology companies
This briefing highlights the potential human rights impacts of technological advancements and the responsibility of institutional investors to mitigate these risks. In line with the UN Guiding Principles, it recommends that investors implement human rights policies, assess risks, and divest from companies with inadequate human rights practices.
Report of COMEST on robotics ethics
COMEST has released a report on robotics ethics which covers the history, development, and social impact of robots. It also offers recommendations for the ethical use of robotics.
Montreal declaration for a responsible development of artificial intelligence
This report outlines a framework for the responsible development of artificial intelligence. It sets out principles to guide the ethical use of AI: the well-being of sentient beings, respect for autonomy, protection of privacy and intimacy, solidarity, democratic participation, equity, diversity inclusion, caution, responsibility, and sustainable development.
Investors' expectations on responsible artificial intelligence and data governance
This report outlines responsible AI and data governance principles and an engagement framework for investors across multiple sectors. The six core principles aim to enhance the auditability, explainability, and transparency of machine learning, while taking into account legal, regulatory, ethical, and reputational risks.
Human rights risks in tech: Engaging and assessing human rights risks arising from technology company business models
This tool outlines strategies for investors to assess technology companies’ responsibility to respect human rights. It includes questions addressing engagement on specific business model features that may create human rights risks and an evaluation framework to assess company responses.
Generative artificial intelligence in finance: Risk considerations
Generative AI is a subset of AI/ML that creates new content. It offers enhancements to efficiency and customer experience, as well as advantages for risk management and compliance reporting. However, deploying GenAI in the financial sector requires the industry to recognise and mitigate the technology's risks comprehensively; financial institutions must strengthen their cybersecurity and regulatory oversight capacities.
Engaging the ICT sector on human rights: Privacy and data protection
This report provides a sector-wide risk assessment on privacy and data protection in the Information and Communications Technology (ICT) industry. It includes international standards and salient issues to consider when engaging with ICT companies, the "business case" for privacy and data protection, and investor guidance for engaging ICT companies.
Engaging the ICT sector on human rights: Political participation
This ICT sector-wide risk assessment examines potential impacts on the salient human rights issue of political participation. It presents international standards, discusses the use of ICT in politics, and offers human rights guidance for businesses to follow. Additionally, the report highlights risks and suggests stakeholder engagement and investor actions to mitigate negative impacts.
Engaging the ICT sector on human rights: Freedom of opinion and expression
This report assesses risks to freedom of opinion and expression (FOE) in the ICT sector. It identifies negative impacts and provides guidance for companies and investors on how to respect and promote FOE.
Engaging the ICT sector on human rights: Discrimination
This report examines the risks of discrimination in the Information and Communication Technologies sector and its impact on human rights. It provides company guidance on eliminating discrimination and promoting inclusion, as well as investor guidelines for holding companies accountable.
Engaging the ICT sector on human rights: Conflict and security
This report provides an overview of the main human rights instruments and adverse impacts of the ICT sector in conflict-affected areas, emphasising its role in promoting security and other human rights while highlighting the potential risks of new technologies in this context. It also includes investor guidance to help evaluate if companies are meeting their human rights responsibilities.
Digital safety risk assessment in action: A framework and bank of case studies
This report contains a framework and case studies for digital safety risk assessment. The case studies cover topics such as trust and safety best practices, human rights due diligence, and child safety in gaming and immersive worlds.
Artificial intelligence: The public policy opportunity
The artificial intelligence (AI) opportunity is here, and it's transforming industry and society. Governments must create public policy environments that encourage AI innovation, while mitigating negative consequences. This report by Intel outlines several key recommendations necessary to realise the potential of AI and to prepare for this transformative technology.
AI policy principles
This report outlines the responsibility of industry and governments in promoting responsible development and use of artificial intelligence. The policy principles focus on the integration of principles into the design of AI technologies, investment in AI research and development, and collaboration through public-private partnerships.
AI act: Laying down harmonised rules on artificial intelligence and amending certain union legislative acts
The EU Commission has published a regulation that establishes harmonised rules on artificial intelligence (AI) while amending certain Union legislative acts. Stakeholders mostly agree on the need for action in the field of AI but warn the Commission to avoid duplication and overregulation. Implementing the regulation will require an appropriate level of human and financial resources.