AI and ESG: An introductory guide for ESG practitioners
This guide outlines how artificial intelligence (AI) intersects with environmental, social and governance (ESG) practice, highlighting opportunities to scale ESG outcomes alongside the material risks AI introduces. It covers responsible AI principles, the regulatory context, assessment frameworks and practical examples to support informed, ethical AI adoption by ESG practitioners.
OVERVIEW
A practical guide for ESG practitioners
The report explains that artificial intelligence is becoming embedded across organisations and is directly relevant to ESG functions. AI can support ESG objectives by improving scale, speed and accuracy in areas such as climate action, biodiversity protection, accessibility and disaster response. At the same time, AI introduces material risks, including bias, privacy breaches, surveillance, human rights impacts, workforce disruption and a larger environmental footprint from energy consumption, water use and e-waste. ESG practitioners are positioned as well suited to balance these opportunities and risks because of their existing governance, risk and stakeholder management expertise.
Why you?
ESG teams are described as having transferable skills that support responsible AI deployment, including systems thinking, cross-functional collaboration and supplier oversight. Trust is highlighted as critical: Australians are identified as among the least trusting of AI globally, making transparent and ethical AI governance a source of competitive advantage. AI can also significantly reduce administrative workloads, particularly data gathering and ESG reporting, tasks that can account for more than half of practitioners’ time. However, AI may also be misused by bad actors, including organised crime groups linked to modern slavery and illegal trade. Regulatory expectations are increasing, with Australia’s Voluntary AI Safety Standard and international developments such as the EU AI Act shaping future compliance requirements.
What is AI and responsible AI?
AI is defined as a machine-based system that generates predictions, recommendations, decisions or content from data. The report differentiates between narrow AI, general-purpose AI and generative AI. Responsible AI (RAI) refers to developing and using AI in ways that deliver social benefit while minimising harm. Australia’s AI Ethics Principles are outlined, covering human, societal and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability. The report shows how ESG frameworks provide a practical way to operationalise these principles across environmental, social and governance topics.
How do you assess AI in your sector?
The report draws on the CSIRO and Alphinity framework developed through interviews with 28 companies across eight sectors. The framework includes case studies, ten key insights and an ESG–AI assessment model to help evaluate whether AI is implemented responsibly and whether related risks and opportunities are material. Evidence suggests companies with strong ESG performance, credible governance and quality disclosures are more likely to adopt AI in a measured and responsible way. Sector-specific guidance is provided, covering likely AI uses, regulatory considerations and environmental and social impacts.
Ideas to enhance your ESG solutions
Practical examples illustrate AI-enabled ESG outcomes. These include accessibility tools supporting people with vision impairment, Commonwealth Bank’s use of AI to block approximately 400,000 transactions a year linked to financial abuse, and AI-driven optimisation of energy grids that could free up to 100 gigawatts of capacity in the United States. Environmental examples include AI-supported monitoring of deforestation, bushfires and pollution, and partnerships using AI to restore Australia’s giant kelp forests, which can sequester up to 20 times more carbon per acre than land-based forests. Common features across the examples include partnerships, scalability and applicability across multiple ESG domains.
Where to from here?
Rather than offering a standalone recommendations chapter, the report outlines practical next steps. These include engaging internally on AI–ESG linkages, reviewing relevant resources, undertaking introductory AI training and applying the Voluntary AI Safety Standard. ESG practitioners are encouraged to lead impact-focused discussions using the proposed AI Impact Navigator, which emphasises transparency, workforce impacts, community trust and consumer rights. The report concludes that AI is here to stay and that ESG practitioners should proactively guide its responsible use.