
Engaging the ICT sector on human rights: Artificial intelligence-based technologies
This report examines the human rights risks associated with artificial intelligence in the ICT sector. It offers guidance for rights-respecting AI development, outlines regulatory frameworks, presents the business rationale for ethical AI, and supports investor engagement with practical tools and questions for assessing AI-related corporate practices.

OVERVIEW
Artificial intelligence and generative AI: An introduction
Artificial intelligence (AI), including generative AI (genAI), is growing rapidly with potential for both economic and social benefit. Defined by the EU Commission as machine-based systems operating with autonomy and adaptability, AI generates content, predictions, or decisions impacting physical or virtual environments. GenAI, according to the OECD, creates new content such as text or images and presents significant challenges for policymakers. This report addresses both AI and genAI where relevant.
How does AI impact human rights?
AI affects a range of human rights. The UN’s framework, including the UDHR, ICCPR, and ICESCR, outlines the legal basis for assessing AI’s human rights implications. Risks include privacy breaches through data collection without consent, surveillance via facial recognition technologies (FRTs), and profiling by AI systems. GenAI models often use copyrighted data without permission. These technologies can also interfere with freedom of expression through content moderation, online censorship, and disinformation, often affecting journalists, activists, and vulnerable groups.
- AI in conflict zones raises security concerns, including arbitrary surveillance and violence. AI-driven content moderation can amplify extremist content, especially in fragile contexts.
- Discriminatory impacts are also evident: AI tools have misidentified women and people of colour, leading to wrongful arrests.
- Predictive policing reinforces biases, and facial recognition disproportionately targets marginalised groups.
- Datasets commonly reflect “WEIRD” (Western, Educated, Industrialised, Rich, Democratic) cultural biases, leading to skewed outcomes in areas such as credit scoring, hiring, and housing.
The business case for rights-respecting AI
Companies face legal, reputational, and financial risks when they fail to manage human rights impacts. Amazon was fined €35 million in France for privacy violations related to employee surveillance. Google lost roughly $100 billion in market value after its Bard AI chatbot publicly shared inaccurate information. Meta faces a $1.6 billion lawsuit in Kenya over hate speech moderation failures linked to violence.
- To mitigate risks, companies should carry out ongoing human rights impact assessments, engage stakeholders, ensure transparency, and align with evolving regulations such as the EU AI Act. Reputational risk also includes investor divestment threats, such as ASN Impact Investors’ warning to TKH Group over surveillance in East Jerusalem.
Human rights guidance for rights-respecting AI development and deployment
- Companies should adopt AI-specific human rights policies endorsed by senior leadership, aligned with international frameworks, and communicated throughout value chains. These policies should be implemented through training, oversight, and adequate resources.
- Impact assessments must focus on actual and potential adverse effects, particularly on marginalised groups. Ongoing assessments are recommended, especially for high-risk AI uses or operations in conflict-affected areas. Results should inform internal processes, and companies should cease or mitigate impacts where they are responsible or complicit.
- Monitoring must involve quantitative and qualitative indicators, including stakeholder feedback. Companies should publish detailed reporting on AI use, governance, and risks. Remediation should be available through accessible grievance mechanisms, with outcomes informing future risk assessments.
Guiding questions on rights-respecting AI
Investors are encouraged to assess company efforts in areas such as public commitments to human rights, governance oversight, risk assessments, stakeholder engagement, and grievance mechanisms. Specific questions address Board expertise, policy scope, supplier responsibility, and alignment of lobbying with human rights commitments.