Salient Issue Briefing: Artificial intelligence-based technologies
This briefing examines human rights risks from AI-based technologies in the ICT sector, outlines business, legal, and financial implications, and provides investor-oriented guidance grounded in international standards to support rights-respecting AI development, deployment, and oversight.
OVERVIEW
Artificial intelligence and generative AI: An introduction
The briefing defines artificial intelligence (AI) as machine-based systems capable of autonomous and adaptive decision-making that influence physical or digital environments. Generative AI (genAI), which produces text, images, video, and other content, gained prominence in late 2022 with large language models. While AI delivers economic and social benefits, including efficiency gains and improved access to services, it also creates significant risks if developed or deployed without safeguards. These risks necessitate balanced governance to ensure inclusive and equitable outcomes.
How does AI impact human rights?
AI systems rely on large-scale data collection, often without informed consent, creating material risks to privacy and data protection under international human rights law. Technologies such as facial recognition enable profiling, tracking, and surveillance, altering expectations of privacy and limiting recourse. Algorithmic content moderation can chill freedom of expression by removing legitimate content, amplifying harassment, or enabling censorship, particularly in authoritarian contexts.
AI deployment in conflict-affected settings heightens risks to life, liberty, and security, including through surveillance, predictive policing, and disinformation. Bias embedded in datasets and development processes leads to discriminatory outcomes, disproportionately affecting women and marginalised communities. Evidence cited includes higher misidentification rates for people of colour, gendered misuse of deepfakes, and biased tools in policing, employment, housing, and credit. These impacts undermine non-discrimination and political participation rights, including through election interference, micro-targeting, and suppression of dissent.
The business case for rights-respecting AI
The report links inadequate AI governance to reputational, financial, and legal risks. Examples include regulatory fines for privacy violations, investor divestment threats, and market value losses following flawed AI product launches. Litigation related to algorithm-driven hate speech and surveillance illustrates escalating liability exposure. Conversely, proactive human rights due diligence can reduce these risks, strengthen public trust, and support long-term business sustainability. Investors increasingly view responsible AI as material to value preservation and risk management.
Human rights guidance for rights-respecting AI development and deployment
Grounded in the UN Guiding Principles on Business and Human Rights, the guidance outlines practical actions for companies. These include adopting public human rights and responsible AI policy commitments endorsed by senior leadership; embedding safeguards across the AI lifecycle; and conducting ongoing human rights impact assessments, particularly for high-risk use cases and operating environments.
Companies are expected to integrate assessment findings into business processes, improve transparency of AI systems, and ensure human oversight. Monitoring performance through qualitative and quantitative indicators, post-market audits, and stakeholder engagement is emphasised. Where harms occur, companies should provide or cooperate in remediation through accessible grievance mechanisms and publicly communicate actions taken.
Guiding questions on rights-respecting AI
The briefing provides investor-focused questions covering governance, strategy, risk assessment, and remediation. These address board oversight, alignment of lobbying with human rights commitments, stakeholder consultation, bias mitigation, supply chain responsibility, and access to remedy. The questions are intended to support informed engagement with companies developing or deploying AI technologies.
Annex A: International & regional standards related to development and deployment of AI
The annex summarises key global and regional frameworks, including the GDPR, the OECD AI Principles, UNESCO's Recommendation on the Ethics of Artificial Intelligence, and the EU AI Act adopted in 2024. These standards emphasise transparency, accountability, human oversight, and risk-based regulation, with extra-territorial implications for companies and investors.
Annex B: Investor efforts
Investors are increasingly engaging through collaborative initiatives, shareholder proposals, and regulatory advocacy. Recent actions include multiple AI-related shareholder proposals and investor support for the EU AI Act, highlighting AI governance as a material investment issue.
Annex C: Resources
The report concludes with references to investor, policy, and research resources supporting ongoing assessment of AI-related human rights risks.