Technology & Online Harm
Technology & online harm refers to the risks and challenges linked to existing and emerging digital technologies such as AI, blockchain, and cryptocurrencies. While these innovations can enhance efficiency and productivity, they also introduce risks like fraud, misinformation, regulatory uncertainty, and ethical dilemmas, requiring careful oversight and responsible adoption.
The twin transition century
This paper argues that Europe’s green transition depends on aligning digital transformation with sustainability goals. It outlines how digital research can both reduce its own environmental footprint and enable climate action, calling for long-term, interdisciplinary research investment and coordinated EU policy.
Engaging the ICT sector on human rights series
This is a series of sector-wide risk assessment briefings for the Information and Communications Technology (ICT) sector. It examines salient human rights issues linked to ICT business models and technologies, providing a consistent analytical framework to support investor assessment, engagement, and governance analysis across multiple thematic areas.
Salient Issue Briefing: Artificial intelligence based technologies
This briefing examines human rights risks from AI-based technologies in the ICT sector, outlines business, legal, and financial implications, and provides investor-oriented guidance grounded in international standards to support rights-respecting AI development, deployment, and oversight.
Responsible investing in defence, security and resilience
The NATO Innovation Fund advocates removing financial exclusions on defence to bolster European security. The report recommends reforming procurement for rapid dual-use technology adoption and implementing a ‘Responsible Use Framework’ to ensure ethical development of emerging capabilities like AI and autonomous systems.
The use of Lavender in Gaza and the law of targeting: AI decision-support systems and facial recognition technology
The report analyses Israel’s alleged use of the ‘Lavender’ AI decision-support system and facial recognition technology in Gaza, assessing compliance with international humanitarian law. It highlights risks from inaccuracy, bias, automation and opacity, concluding that commanders must retain human judgement and verification to meet targeting obligations.
Preparing for next-generation information warfare with generative AI
The report analyses how generative AI reshapes information warfare by enabling scalable manipulation, behavioural influence and dual-use knowledge diffusion. It highlights heightened risks to civilians, military operations and international law, stressing gaps in protection and the need for anticipatory, whole-of-society resilience strategies.
Integrating ESG and AI: A comprehensive responsible AI assessment framework
The report introduces an ESG-AI framework enabling investors to assess AI-related environmental, social, and governance risks. Drawing on insights from 28 companies, it provides use-case materiality analysis, governance indicators, and deep-dive assessments to support transparent, responsible AI evaluation and investment decisions.
Responsible Digital Finance Ecosystem (RDFE): A conceptual framework
The report outlines a framework for a Responsible Digital Finance Ecosystem, urging holistic, collaborative consumer protection amid rising digital finance risks. It defines ecosystem actors and four pillars—customer centricity, collaboration, capability, and commitment—to strengthen regulation, improve outcomes, and reduce harms in rapidly evolving digital financial services.
Fake friend: How ChatGPT betrays vulnerable teens by encouraging dangerous behavior
This report examines how ChatGPT can expose teenagers to harmful content, including self-harm, disordered eating and substance abuse guidance. Researchers posing as 13-year-olds found safeguards were easily bypassed, with over half of tested prompts generating unsafe outputs. The report calls for stronger age controls, transparency, and safety enforcement.
On YouTube, a Shift from Denying Science to Dismissing Solutions
This article analyses over 12,000 YouTube videos and finds that while outright climate-change denial is declining, content undermining climate solutions and trust in scientists is rising sharply. It also highlights concerns over YouTube’s ad policies, which still allow monetisation alongside videos that downplay climate impacts or spread misleading claims about climate policy.
2025 World investment report: International investment in the digital economy
This report summarises international investment trends in the digital economy, focusing on data, digital infrastructure, and technology services. It highlights uneven global distribution, the role of multinational enterprises, and policy implications for sustainable development, emphasising the need for balanced regulatory frameworks and equitable access to digital opportunities worldwide.
Center for disaster management and risk reduction technology
The Center for Disaster Management and Risk Reduction Technology (CEDIM) is an interdisciplinary research centre at Karlsruhe Institute of Technology focused on enhancing disaster resilience. Addressing natural and human-made hazards such as earthquakes, droughts, heatwaves and floods, it develops early warning systems, risk mapping and forensic disaster analysis.
Unlocking value from technology in banking: An investor lens
The report outlines how banks can link technology investments to value creation. It presents a framework to improve returns through strategic allocation, outcome-based execution, and transparency. It identifies five tech-enabled themes that align with shareholder value drivers such as revenue growth, fee income, and risk mitigation.
Starting up: Responsible investment in venture capital
This report examines how environmental, social, and governance (ESG) factors are being adopted in venture capital. It outlines current practices, challenges, and industry-specific considerations, and highlights the need for tailored guidance, collaboration, and early-stage engagement to advance responsible investment across the venture capital ecosystem.
Artificial intelligence in financial services
AI is reshaping financial services by enhancing efficiency, reducing costs and unlocking new revenue opportunities. With $97 billion in investment projected by 2027, firms must address risks such as misinformation and data bias while prioritising governance, regulation and workforce reskilling to ensure responsible, secure and effective AI adoption.
Regulating AI in the financial sector: Recent developments and main challenges
The report outlines AI’s growing use in finance—especially in underwriting, fraud detection, and customer support—highlighting regulatory challenges around explainability, governance, and data security. It discusses evolving global guidance and the need for risk-based, proportionate oversight, particularly as generative AI gains traction in high-impact applications.