Artificial intelligence and human rights investor toolkit
This toolkit aims to provide investors with guidance on how to navigate the intersecting terrain of AI and human rights. It covers the various aspects of AI implementation that have potentially significant implications for human rights, and how investors can engage with companies on these issues. Its focus is on emerging risks and opportunities for investors in the context of rapidly evolving technologies and the ethical challenges they pose.
OVERVIEW
Section 1: Artificial intelligence overview
There is no universally applied definition of ‘artificial intelligence’; the term covers many kinds of AI systems, outputs and use cases. Two important subsets of AI are:
- Machine learning systems – trained on pre-existing, often unstructured, data; they apply patterns learned from past data to new data in order to make predictions.
- Expert systems – which solve complex problems by applying ‘if-then’ rules and logical reasoning to a knowledge base, mimicking human decision-making (both approaches are sketched in the toy example below).
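To make the distinction concrete, here is a minimal Python sketch contrasting the two subsets. All rules, data and thresholds are invented for illustration and do not reflect any system or company named in this toolkit.

```python
# Toy contrast between the two AI subsets described above.
# All rules, data and thresholds are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Expert system: explicit 'if-then' rules applied to a small knowledge base.
def expert_credit_decision(applicant: dict) -> str:
    if applicant["missed_payments"] > 2:
        return "decline"
    if applicant["income"] >= 3 * applicant["requested_credit"]:
        return "approve"
    return "refer to a human reviewer"

# Machine learning: a model fitted to pre-existing data, applied to new cases.
past_applicants = [[0, 90], [4, 40], [1, 60], [5, 30]]  # [missed payments, income in $'000]
past_outcomes = [1, 0, 1, 0]  # 1 = repaid, 0 = defaulted (invented labels)
model = LogisticRegression().fit(past_applicants, past_outcomes)

print(expert_credit_decision({"missed_payments": 1, "income": 90, "requested_credit": 20}))
print("ML prediction (1 = likely to repay):", model.predict([[2, 55]])[0])
```

The expert system’s behaviour is fully legible from its rules, while the machine learning model’s decision depends on patterns in its training data – a distinction that matters for the transparency and accountability concerns discussed later in this toolkit.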
Key investment risks include threats to companies’ social licence to operate and grow, as well as potential legal exposure. In considering these and other human rights-related investment risks associated with the use of AI, including future regulatory developments that may affect companies (and investors), a useful starting point is the existing internationally recognised human rights frameworks:
- The International Bill of Rights, consisting of the Universal Declaration of Human Rights and two binding treaties to which Australia is a party: the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights.
- Seven other core international human rights treaties, each focused on a specific collection of rights (e.g. the right to freedom from torture) or groups of people (e.g. the rights of women, children and persons with disabilities).
- The fundamental conventions of the International Labour Organization (ILO).
- In a business context, the UN Guiding Principles on Business and Human Rights (UNGPs), which establish a framework of principles for governments and business enterprises to “protect, respect and remedy” human rights impacts.
Section 2: Why should investors care about the human rights impacts of artificial intelligence?
- Reputational and Operational Risk: Low public trust in AI can harm companies’ reputations; data breaches (e.g., Medibank) can lead to financial losses and lawsuits; and failure to manage AI risks can lead to CEO departures (e.g., Optus).
- Regulatory Risk: New laws may prioritise user safety and human rights, requiring costly compliance. Regulations may cover fairness, accountability, and transparency in AI. Cross-border regulation can create challenges for companies, and lawsuits challenging AI use on human rights grounds are emerging (e.g., the Finland credit case).
- Financial Risk: AI failures can lead to financial losses (e.g., Google’s Bard demo); data breaches can reduce company value because of privacy concerns; and reputational and regulatory risks can translate into financial risks, including reduced revenue from customer loss, increased compliance costs, direct costs from fines, remediation and legal fees, and lower valuations as investors lose confidence.
Section 3: Integration – Assessing AI-related human rights impacts
Investors should take a closer look at the harmful impacts of artificial intelligence on human rights to assess the associated risks in their investments. This toolkit provides guidance on identifying and assessing human rights impacts in AI-related investments. Investors are encouraged to weigh both the severity and the likelihood of potential human rights harms, putting the risk to people first and identifying their “salient” human rights risks; a simple prioritisation of this kind is sketched below.
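As a rough illustration only, the sketch below ranks a set of hypothetical AI-related risks by severity to people first and likelihood second, mirroring the “salience” logic described above. The risk entries and the 1–5 scoring scales are invented assumptions, not part of the toolkit.

```python
from dataclasses import dataclass

@dataclass
class HumanRightsRisk:
    issue: str
    severity: int    # 1-5: scale, scope and irremediability of harm to people
    likelihood: int  # 1-5: chance the harm occurs

# Hypothetical risk register for a portfolio company (invented entries).
risks = [
    HumanRightsRisk("Biased credit scoring excludes protected groups", 5, 3),
    HumanRightsRisk("Chatbot gives misleading product advice", 2, 4),
    HumanRightsRisk("Facial recognition enables unlawful surveillance", 5, 2),
    HumanRightsRisk("Training data breach exposes customer records", 4, 3),
]

# 'Salient' risks: order by severity to people first, likelihood second.
for r in sorted(risks, key=lambda r: (-r.severity, -r.likelihood)):
    print(f"severity={r.severity} likelihood={r.likelihood}  {r.issue}")
```

The key design choice, following the risk-to-people framing above, is that a severe but less likely harm still outranks a likely but minor one.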
Section 4: Stewardship – How investors can engage companies on AI risks
The report emphasises that, given the pace of AI innovation, investors should prioritise and actively manage AI risks through engagement with companies. Engagement strategies should focus on governance, policies, best practices, and investment in the systems and employees needed to design, assess and monitor AI use cases. Collaborative engagement among shareholder groups may improve access to companies and help develop effective engagement plans. Investors are urged to prioritise engagement based on the risks outlined in Section 4.1, focusing on the companies most exposed to adverse human rights impacts.
Disclosure and reporting
Investors could use best-practice frameworks to set expectations on disclosure and reporting when assessing relevant issues. Investors can also urge companies to disclose their AI ethics and governance frameworks, including how those frameworks address human rights. Additionally, investors can leverage industry collaborations and prioritise social and human rights-related shareholder proposals to advocate on AI’s human rights impacts.