About the author
Eleonore Pauwels is an expert on AI convergence, dual-use technologies and conflict, advising international bodies such as the UN and World Bank. Her work focuses on cyberthreat prevention, governance and anticipatory intelligence.
Introduction
Information warfare is expanding as digital and physical battlefields merge. AI — particularly generative AI — increases the power and accessibility of disinformation by tailoring, scaling and automating influence. AI’s convergence with biotechnology, neurotechnology and nanotechnology diffuses sensitive expertise to diverse actors, widening knowledge asymmetries and transforming conflict.
The report examines implications for civilian and military security, outlines emerging misuse scenarios and highlights legal gaps in current governance.
Framing section
Digital transformation has created an environment where information itself is weaponised. Social media accelerates psychological influence and erodes the distinction between fact and fabrication.
Russia, China and Iran have used large-scale information operations to shape geopolitical narratives. During COVID-19, these states disseminated conspiracy theories and cast doubt on Western vaccines while promoting their own, exploiting mistrust and geopolitical tensions.
Civilians increasingly experience targeted disinformation obstructing humanitarian access and fuelling polarisation. Hybrid warfare blurs cyber, kinetic and informational tactics, while privatised and proxy-led information operations proliferate across borders.
Technical section
Foundational and generative AI models analyse large datasets, classify behaviours and produce realistic synthetic media. These capabilities enable behavioural profiling, bespoke influence operations and deepfake impersonations that complicate verification and undermine public trust.
Documented examples include AI-generated audio circulated ahead of Slovakia's 2023 parliamentary election and influence campaigns targeting US lawmakers.
Open-source and dark-web ecosystems accelerate the industrialisation of information warfare. Tools such as WormGPT and FraudGPT show how generative AI can be repurposed for cyberattacks. Proxy groups, mercenaries and criminal networks increasingly access and outsource such capabilities.
Targeting primarily civilian populations
Civilians face hyper-localised disinformation that disrupts life-saving decisions. In Ukraine, Russian-affiliated actors spread false information about evacuation routes, fabricated warnings of imminent attacks and influenced population movements.
In the Israel–Hamas conflict, AI-generated images and rapid online dissemination undermined trust and misled journalists and civilians.
Authoritarian regimes may combine AI surveillance, biometrics and behavioural data to track dissenters and minorities. In Syria, the Syrian Electronic Army (SEA) used hacking to identify activists, contributing to arrests and violence.
Targeting military personnel and operations
Generative AI can mislead forces through forged intelligence, deepfaked orders and adversarial attacks on AI-enabled systems. These tactics degrade situational awareness, command cohesion and decision-making.
Psychological operations may target soldiers’ families, fuel anxiety and reduce morale. AI-enabled profiling can also support recruitment of youth into cyber or proxy activities, including via gaming platforms.
Scenario: Information warfare on biological threats
The report outlines a scenario where AI-enabled disinformation fabricates a biological attack. Generative models produce synthetic scientific reports, deepfaked officials and forged samples, triggering panic and overwhelming health services. Vulnerable groups face heightened impacts due to disrupted support systems.
Legal section and concluding thoughts
International humanitarian law (IHL) provides limited clarity on information operations that cause indirect or psychological harm. Some actions may violate prohibitions on incitement or terrorisation, but attribution challenges persist, particularly with proxy actors. International criminal law may apply when operations inflict severe mental harm or contribute to international crimes.
The report underscores the need for anticipatory, cross-sector resilience; improved early-warning systems; enhanced governance of dual-use technologies; and stronger cooperation between governments, industry and civil society.