Introduction
The report analyses the use of artificial intelligence decision-support systems (AI-DSS) and facial recognition technology (FRT) in military targeting, using the Israeli military’s alleged deployment of the system known as ‘Lavender’ in Gaza as a case study. It examines how such technologies interact with international humanitarian law (IHL), particularly the law of targeting, in the context of the escalation of the Israel–Hamas conflict after 7 October 2023.
The use of technology in armed conflicts
This section outlines how FRT and AI-DSS operate and why militaries are increasingly adopting them. FRT identifies or verifies individuals through biometric analysis, producing probabilistic matches rather than definitive results. AI-DSS process large volumes of data to generate recommendations, often using non-deterministic machine-learning models. While these systems can increase speed and efficiency and identify patterns at a scale beyond human capacity, they inherently operate with margins of error, depend heavily on the quality and representativeness of their training data, and often lack transparency and explainability.
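To make the idea of a probabilistic match concrete, the sketch below shows how a generic verification pipeline might compare face embeddings against a similarity threshold. The embedding dimension, the cosine-similarity measure and the 0.6 threshold are illustrative assumptions, not details of any system discussed in the report.

```python
# Minimal sketch of threshold-based face verification, assuming precomputed
# face embeddings (vectors produced by some upstream model). All names,
# dimensions and the 0.6 threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.6) -> tuple[bool, float]:
    """Return a match decision together with the underlying score.

    The decision is a thresholded score, not a certainty: raising the
    threshold trades false matches for false non-matches.
    """
    score = cosine_similarity(probe, reference)
    return score >= threshold, score

# Illustrative use with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
probe, reference = rng.normal(size=512), rng.normal(size=512)
is_match, score = verify(probe, reference)
print(f"match={is_match}, similarity={score:.2f}")
```

The key point for the legal analysis is that the output is a score compared against a tunable threshold, so any deployment embodies a chosen trade-off between false matches and missed matches.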
The challenges to the use of FRT and AI in armed conflict
The report identifies four major challenges. First, FRT accuracy is significantly reduced in uncontrolled environments such as dense urban conflict zones, where lighting, angles and movement cannot be controlled. Even low error rates can produce large numbers of misidentifications when applied to large populations; an illustrative calculation following the fourth challenge below makes this concrete.
Second, automation bias may lead human operators to over-trust algorithmic outputs, particularly in time-pressured environments, reducing independent verification.
Third, technical and cognitive biases, including those embedded in training data, can disproportionately affect certain groups, increasing the likelihood that civilians are misidentified as lawful targets.
Fourth, AI opacity limits users’ ability to understand, trace and challenge recommendations, creating accountability gaps and complicating investigations into potential IHL violations.
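Returning to the first challenge, the following back-of-envelope calculation illustrates how even a low false-positive rate scales when a matcher is run against a large population containing few genuine targets. The population size, target count and error rates below are hypothetical and are not drawn from the report.

```python
# Illustrative base-rate calculation with hypothetical numbers: a matcher
# with a low false-positive rate still produces many false matches when
# screening a large population in which genuine targets are rare.

def expected_matches(population: int, true_targets: int,
                     true_positive_rate: float,
                     false_positive_rate: float) -> tuple[float, float, float]:
    """Expected true matches, false matches and precision of positive results."""
    true_matches = true_targets * true_positive_rate
    false_matches = (population - true_targets) * false_positive_rate
    precision = true_matches / (true_matches + false_matches)
    return true_matches, false_matches, precision

# Hypothetical scenario: 1,000,000 people screened, 1,000 genuine targets,
# 95% true-positive rate, 1% false-positive rate.
tm, fm, precision = expected_matches(1_000_000, 1_000, 0.95, 0.01)
print(f"true matches ~{tm:.0f}, false matches ~{fm:.0f}, precision ~{precision:.1%}")
# ~950 true matches against ~9,990 false matches: fewer than 9% of the
# people flagged would actually be targets, despite the 1% error rate.
```

On these assumed figures most positive identifications would be wrong, which underlines why even small error rates matter at the population scale described in the first challenge.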
The law of targeting
The principle of distinction
The principle of distinction requires parties to distinguish between civilians and lawful military targets. The report argues that while FRT may assist in verifying identity, it cannot determine an individual’s legal status under IHL. In densely populated areas such as Gaza, AI-DSS combined with FRT increase the risk of false positives. Reporting cited suggests that ‘Lavender’ generated approximately 37,000 target recommendations in the early phase of the conflict, with an estimated error rate of around 10%, implying that several thousand individuals may have been misclassified. Such risks challenge compliance with the obligation to protect civilians.
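The implied scale of misclassification follows directly from the figures cited above. The short calculation below restates that arithmetic; the 5% and 15% rates are included only as hypothetical sensitivity checks, not as reported values.

```python
# Back-of-envelope arithmetic based on the figures cited in the report:
# roughly 37,000 recommendations and an estimated error rate of about 10%.
# The 5% and 15% rates are hypothetical sensitivity checks only.
recommendations = 37_000
for error_rate in (0.05, 0.10, 0.15):
    misclassified = recommendations * error_rate
    print(f"error rate {error_rate:.0%}: ~{misclassified:,.0f} people potentially misclassified")
# At the reported ~10% rate this implies roughly 3,700 misclassified individuals.
```

Even the lowest of these hypothetical rates implies well over a thousand misclassified individuals, which underpins the report's concern about compliance with the principle of distinction.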
The principle of proportionality
Proportionality prohibits attacks expected to cause civilian harm that would be excessive in relation to the concrete and direct military advantage anticipated. AI-DSS may support proportionality assessments by estimating collateral damage, but the report stresses that proportionality involves qualitative and subjective judgement that cannot be fully operationalised through algorithms. Evidence cited indicates that high civilian casualty thresholds were sometimes authorised, raising concerns about how AI-supported assessments were applied in practice. Responsibility for proportionality decisions remains with military commanders, regardless of AI input.
The principle of precautions in attack
The obligations of constant care and of precautions in attack require parties to do everything feasible to verify targets and to minimise incidental civilian harm. The report highlights concerns that the speed and scale of AI-generated recommendations can compress review times; in some instances, target approvals were reportedly conducted within seconds. The analysis suggests that compliance with IHL requires slowing decision-making processes, ensuring thorough human review, improving technical literacy among commanders, and using representative training data to reduce bias and error.
Conclusion
The report concludes that AI-DSS and FRT can support military decision-making but pose significant legal and humanitarian risks when used at scale and speed. The ‘Lavender’ case illustrates the dangers of over-reliance on opaque systems in high-civilian-density environments. Ensuring lawful use requires meaningful human judgement, sufficient verification time, greater transparency, and limits on AI-DSS roles in targeting to reduce civilian harm and maintain accountability under IHL.