AI governance behind the scenes: Emerging practices for AI impact assessments
The report outlines emerging organisational practices for AI impact assessments, highlighting common process steps, information-gathering challenges, evolving risk-assessment methods, and difficulties in evaluating mitigation effectiveness. It notes increasing cross-functional governance, reliance on third-party transparency, and the need for stronger metrics, education, and executive support.
OVERVIEW
Introduction
The report examines how organisations are operationalising AI impact assessments amid rapid technological and regulatory developments. Drawing on research involving more than 60 companies, alongside workshops and interviews, it highlights emerging practices, challenges, and evolving governance expectations. It notes that Chief Privacy Officers increasingly lead AI governance functions, but cross-functional support remains necessary. Organisations face uncertainty regarding assessment frameworks, alignment with privacy processes, and how to measure the effectiveness of risk management strategies.
What are AI impact assessments?
AI impact assessments help organisations identify, analyse, and mitigate risks linked to AI systems. Definitions vary globally, and distinctions between AI impact assessments, AI risk assessments, and data protection impact assessments remain unsettled. The report aligns with the NIST AI Risk Management Framework, which describes assessments as evaluating accountability, bias, safety, liability, and security considerations. Assessments sit within broader AI governance programmes involving privacy, engineering, legal, HR, product, and external partners. They support trust-building, surface risks such as intellectual property exposure, and guide decisions about adoption or procurement.
Legislative approaches to AI impact assessments
Jurisdictions globally are introducing mandatory or voluntary requirements. Examples include the EU AI Act’s risk-based framework, Colorado’s provisions requiring annual assessments for high-risk systems, Singapore’s governance models, and Australia’s voluntary AI safety standard. Regulatory divergence persists, but most frameworks emphasise due diligence, transparency, human oversight, and lifecycle monitoring. Organisations prepare for these differing obligations by developing baseline governance structures and anticipating eventual oversight.
Key AI governance steps in the AI impact assessment process
Four core steps recur across organisations: initiating an assessment; gathering model and system information; conducting risk-benefit analysis; and identifying and testing risk management strategies. These may not occur sequentially and often repeat throughout the lifecycle, including development, fine-tuning, deployment, and post-deployment monitoring.
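To make the recurring, non-sequential nature of these steps concrete, the sketch below models them as a simple Python structure. The step and lifecycle-stage labels are illustrative paraphrases of the report's description, not a prescribed implementation.

```python
from enum import Enum, auto


class AssessmentStep(Enum):
    """The four recurring steps described in the report (labels are illustrative)."""
    INITIATE = auto()
    GATHER_MODEL_AND_SYSTEM_INFORMATION = auto()
    ASSESS_RISKS_AND_BENEFITS = auto()
    IDENTIFY_AND_TEST_RISK_MANAGEMENT = auto()


# Lifecycle stages at which steps may be revisited (hypothetical labels).
LIFECYCLE_STAGES = ["development", "fine-tuning", "deployment", "post-deployment monitoring"]


def steps_for_stage(stage: str) -> list[AssessmentStep]:
    """All four steps may recur at any stage; they are not strictly sequential."""
    return list(AssessmentStep)


if __name__ == "__main__":
    for stage in LIFECYCLE_STAGES:
        print(f"{stage}: {[step.name for step in steps_for_stage(stage)]}")
```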
Step 1: Initiating an AI impact assessment
Catalysts for assessments include legal requirements, new use cases, design changes, and the need to manage business, ethical, or reputational risks. Generative AI has heightened internal risk sensitivity, with some organisations adopting single intake processes to triage all risk assessments. Many firms undertake multiple assessments across the lifecycle, especially following substantial system modifications. Deployers and developers differ in focus: deployers prioritise data governance and protection, while developers focus on model development risks. Resistance from business units may arise where risk managers recommend restrictions, increasing reliance on senior-level decision-making.
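A single intake process of the kind described above could, purely for illustration, be sketched as a lightweight triage routine. The trigger rules and assessment names below are assumptions, not a standard workflow.

```python
from dataclasses import dataclass, field


@dataclass
class UseCaseIntake:
    """A single intake record for a proposed AI use case (fields are illustrative)."""
    name: str
    is_new_use_case: bool = False
    substantial_modification: bool = False
    legally_required: bool = False          # e.g. a statutory assessment obligation
    processes_personal_data: bool = False
    reputational_concern: bool = False
    triggered_assessments: list[str] = field(default_factory=list)


def triage(intake: UseCaseIntake) -> UseCaseIntake:
    """Route one intake to the assessments it triggers.

    A minimal sketch of a 'single intake process'; the rules here are
    assumptions for illustration, not the report's prescription.
    """
    if intake.legally_required or intake.is_new_use_case or intake.substantial_modification:
        intake.triggered_assessments.append("AI impact assessment")
    if intake.processes_personal_data:
        intake.triggered_assessments.append("Data protection impact assessment")
    if intake.reputational_concern:
        intake.triggered_assessments.append("Ethics / reputational review")
    if not intake.triggered_assessments:
        intake.triggered_assessments.append("No assessment required; log and monitor")
    return intake


if __name__ == "__main__":
    example = UseCaseIntake(name="Customer-support chatbot", is_new_use_case=True,
                            processes_personal_data=True)
    print(triage(example).triggered_assessments)
```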
Step 2: Gathering model and system information
Organisations seek detailed information about training data, capabilities, limitations, deployment context, intended and unintended uses, impacted individuals, and operational environments. Data-related questions dominate. Many face challenges obtaining information from third-party developers due to technical constraints, proprietary concerns, black-box models, or limited transparency. Some sectors, such as banking, require regulatory engagement with model providers. Model or system cards are an emerging practice but remain inconsistent. Cross-functional collaboration involving product, engineering, privacy, and legal teams is increasing. Organisations sometimes use one assessment for comparable use cases, although the criteria for comparability remain unclear.
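As a hedged illustration of the information sought in this step, the sketch below models a minimal system card and flags the fields a third-party developer has not supplied, reflecting the transparency gaps noted above. The field names are assumptions, not an established schema.

```python
from dataclasses import dataclass, fields
from typing import Optional


@dataclass
class SystemCard:
    """Information typically sought during Step 2 (field names are illustrative)."""
    system_name: str
    developer: str
    training_data_summary: Optional[str] = None
    capabilities: Optional[str] = None
    known_limitations: Optional[str] = None
    intended_uses: Optional[str] = None
    out_of_scope_uses: Optional[str] = None
    impacted_individuals: Optional[str] = None
    deployment_context: Optional[str] = None


def missing_information(card: SystemCard) -> list[str]:
    """List the fields a third-party developer has not (yet) provided."""
    return [f.name for f in fields(card) if getattr(card, f.name) is None]


if __name__ == "__main__":
    card = SystemCard(system_name="Vendor LLM v2", developer="ExampleVendor",
                      capabilities="Text generation", intended_uses="Internal drafting")
    print("Outstanding questions for the vendor:", missing_information(card))
```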
Step 3: Assessing risks and benefits
Risk-benefit analysis considers legal requirements, potential harms, system accuracy, bias, misuse, operational environment, and benefits such as efficiency, scalability, or improved predictions. Many firms integrate AI assessments with existing enterprise risk processes and update privacy assessments to include AI-specific considerations. Challenges persist in anticipating all AI risks due to the vast risk landscape, general-purpose models’ multiple uses, and dynamic environments. Risk appetite varies by organisation, activity, regulatory exposure, and internal governance structures. Risk-benefit matrices are increasingly used, but escalation pathways for high-risk use cases are not standardised. Internal ownership of AI risk differs, with some adopting three lines of defence and others assigning responsibility to legal, ethics, or business teams.
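The following sketch shows one possible, simplified risk-benefit matrix with an escalation rule. The 1-5 scales, scoring scheme, and threshold are assumptions for illustration only; as noted above, escalation pathways for high-risk use cases are not standardised.

```python
from dataclasses import dataclass

# Hypothetical threshold: residual risk score above which senior review is required.
ESCALATION_THRESHOLD = 15


@dataclass
class RiskItem:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


def assess(risks: list[RiskItem], benefit_score: int) -> dict:
    """Compare the highest risk score to an overall benefit score (illustrative only)."""
    top_risk = max(risks, key=lambda r: r.score)
    return {
        "top_risk": top_risk.description,
        "top_risk_score": top_risk.score,
        "benefit_score": benefit_score,
        "escalate_to_senior_review": top_risk.score >= ESCALATION_THRESHOLD,
        "net_view": "proceed with mitigations" if benefit_score >= top_risk.score else "reconsider",
    }


if __name__ == "__main__":
    risks = [RiskItem("Biased outputs affecting applicants", likelihood=3, impact=5),
             RiskItem("IP exposure via prompts", likelihood=2, impact=4)]
    print(assess(risks, benefit_score=12))
```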
Step 4: Identifying and testing risk management strategies
Risk management measures include human oversight, guardrails on use, secure data handling, accuracy and bias testing, and continuous monitoring. Organisations use qualitative and quantitative evaluations, but measuring mitigation effectiveness is hindered by subjective risk metrics, a lack of standardised measurement methods, and differences between testing and operational environments. Engagement with engineering teams, third-party developers, and user feedback supports evaluation, though transparency limitations remain.
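As an illustrative sketch only, the snippet below combines a simple accuracy check, one possible group-level bias metric, and assumed monitoring thresholds. Real metrics and thresholds depend on the use case and, as the report notes, are not yet standardised.

```python
from statistics import mean


def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Share of predictions matching labels."""
    return mean(int(p == y) for p, y in zip(predictions, labels))


def group_accuracy_gap(predictions: list[int], labels: list[int], groups: list[str]) -> float:
    """Largest accuracy difference between groups, one simple illustrative bias metric."""
    by_group: dict[str, list[tuple[int, int]]] = {}
    for p, y, g in zip(predictions, labels, groups):
        by_group.setdefault(g, []).append((p, y))
    accs = [accuracy([p for p, _ in rows], [y for _, y in rows]) for rows in by_group.values()]
    return max(accs) - min(accs)


def monitor(predictions, labels, groups, min_accuracy=0.85, max_gap=0.10) -> dict:
    """Continuous-monitoring check with assumed thresholds (illustrative, not standardised)."""
    acc = accuracy(predictions, labels)
    gap = group_accuracy_gap(predictions, labels, groups)
    return {"accuracy": acc, "group_gap": gap, "alert": acc < min_accuracy or gap > max_gap}


if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    labels = [1, 0, 1, 0, 0, 1, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(monitor(preds, labels, groups))
```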
Conclusion: The state of play and looking ahead
Organisations have converged on several emerging practices but face difficulties obtaining sufficient system information, anticipating risks, and determining whether mitigation is effective. The report suggests improving processes for engaging third-party developers, enhancing internal risk education, and strengthening measurement of mitigation strategies. AI governance training and executive sponsorship are identified as essential for maturing assessment capabilities.