Synthetic content: Exploring the risks, technical approaches, and regulatory responses
Generative AI enables the rapid creation of synthetic content, offering both opportunities and risks. This report examines challenges like disinformation and fraud, outlines technical and regulatory strategies, and explores trade-offs with privacy. Techniques discussed include watermarking, provenance tracking, and legal frameworks, aiming to enhance transparency while safeguarding privacy.
OVERVIEW
Introduction
Generative AI (GenAI) has advanced the creation of synthetic content, making it increasingly indistinguishable from authentic media. While synthetic content drives innovation in healthcare, marketing, and education, it introduces significant risks. These include malicious impersonation, disinformation, non-consensual imagery, financial fraud, and erosion of trust in media. Policymakers, academics, and civil society are developing technical and regulatory strategies to mitigate these harms while ensuring safeguards do not compromise privacy and security.
Synthetic content, or AI-generated content, can create or exacerbate risks
Synthetic content amplifies risks like malicious impersonation, where AI-generated deepfakes mimic individuals for fraud or defamation. Deepfakes have resulted in financial scams, such as a $25 million fraud using an AI-generated video of a corporate executive. Women and marginalised groups are disproportionately targeted by these technologies, with deepfakes eroding their ability to engage in civic and political activities.
AI-generated disinformation has undermined trust in elections and health systems. For example, a deepfake video of Ukrainian President Zelensky calling for surrender and synthetic health misinformation promoting unproven treatments illustrate the dangers. AI-generated phishing emails can be more convincing than human-written ones, increasing the likelihood that scams succeed.
Synthetic child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) further highlight the risks. AI tools have made it easier to create and distribute this harmful content, leading to significant physical, emotional, and reputational harm. The financial sector is also vulnerable, with GenAI enabling scams that bypass traditional fraud detection.
Policymakers, scholars, and technologists are creating frameworks for technical and organisational approaches
Technical strategies for mitigating synthetic content risks include watermarking, provenance tracking, metadata recording, and content detection. Provenance tracking records a piece of content's creation and modification history, while metadata captures detailed information about its origins. Synthetic content detection tools and machine-readable labelling aim to help users distinguish synthetic from authentic content; a simplified sketch of how provenance and metadata recording might work appears below.
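To make the mechanism concrete, the following minimal sketch shows one way a provenance record could be kept: each creation or modification appends a metadata entry that commits to the previous entry via a hash, producing a machine-readable manifest that can travel with the content. This is an illustrative assumption only; the class and field names (ProvenanceRecord, add_modification, content_hash) are hypothetical and do not represent an existing standard such as C2PA.

```python
# Minimal illustrative sketch of provenance tracking and metadata recording.
# Names and structure are assumptions for illustration, not a real standard.
import hashlib
import json
from datetime import datetime, timezone


class ProvenanceRecord:
    def __init__(self, content: bytes, creator_tool: str):
        # First entry records the content's origin (tool, time, content hash).
        self.entries = [{
            "action": "created",
            "tool": creator_tool,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "content_hash": hashlib.sha256(content).hexdigest(),
        }]

    def add_modification(self, new_content: bytes, tool: str, description: str):
        # Each edit appends an entry that also commits to the previous entry,
        # so altering the recorded history changes the chained hashes.
        prev_hash = hashlib.sha256(
            json.dumps(self.entries[-1], sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({
            "action": "modified",
            "description": description,
            "tool": tool,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "content_hash": hashlib.sha256(new_content).hexdigest(),
            "previous_entry_hash": prev_hash,
        })

    def to_manifest(self) -> str:
        # Machine-readable manifest that could accompany the content as metadata.
        return json.dumps({"provenance": self.entries}, indent=2)


# Example: record that an image was AI-generated, then later edited.
record = ProvenanceRecord(b"<image bytes>", creator_tool="example-genai-model")
record.add_modification(b"<edited image bytes>", tool="photo-editor", description="cropped")
print(record.to_manifest())
```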
Legal measures are gaining traction. For instance, the California AI Transparency Act requires covered generative AI providers to disclose and label AI-generated content, while the DEFIANCE Act gives victims of non-consensual intimate deepfakes legal recourse. Collaboration between governments and industries is essential to enforce these standards effectively.
The financial industry can benefit from these measures. For instance, watermarking and provenance tracking can prevent fraud by verifying content authenticity. Content labelling also helps individuals and organisations identify misleading material and mitigate its impact.
Safeguards against synthetic content harms can both support and be in tension with privacy and security
Transparency techniques like watermarking and provenance tracking can enhance privacy by deterring fraud and ensuring compliance with privacy laws. However, these techniques may inadvertently expose personal data. For example, individualised watermarks could track user behaviour without consent, raising concerns about data protection.
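As a concrete illustration of this tension, the sketch below (a hypothetical scheme, not any provider's actual design) shows how an individualised tag derived from a user identifier would let whoever holds the signing key link generated content back to the account that produced it: useful for fraud attribution, but a tracking mechanism if applied without consent.

```python
# Hypothetical sketch of an individualised watermark-style tag.
# The key, field names, and scheme are assumptions for illustration only.
import hmac
import hashlib

PROVIDER_SECRET = b"provider-side key"  # hypothetical secret held by the AI provider


def individualised_tag(user_id: str, content: bytes) -> str:
    # A keyed tag derived from the user's identity and the content itself.
    return hmac.new(PROVIDER_SECRET, user_id.encode() + content, hashlib.sha256).hexdigest()


def can_attribute(user_id: str, content: bytes, tag: str) -> bool:
    # Anyone holding the key can re-link content to the account that generated it,
    # which supports fraud investigations but also enables tracking without consent.
    return hmac.compare_digest(individualised_tag(user_id, content), tag)


tag = individualised_tag("user-42", b"<generated content>")
print(can_attribute("user-42", b"<generated content>", tag))  # True
```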
Balancing privacy with effectiveness is challenging. Watermarks can be tampered with or removed, while metadata tracking may conflict with data minimisation principles. International collaboration on standards and safeguards is necessary to ensure consistent protection across jurisdictions.
Other factors may limit the effectiveness of techniques for combating harmful synthetic content, or raise new problems
Technical measures alone are insufficient. Filtering content involves trade-offs, such as reduced AI model performance or over-blocking legitimate content. Detection tools require significant coordination between developers, platforms, and regulators, and lack of standardisation undermines their effectiveness.
Transparency measures, such as content labelling, need public trust and literacy to succeed. Studies show that users are sceptical of AI-labelled content, even when it is truthful. Without standardised labelling systems, bad actors may evade detection while good-faith content faces increased scrutiny. Education on media literacy and clearer definitions of “significant” AI modifications are vital.
Conclusion
The proliferation of synthetic content presents opportunities but also significant risks. Techniques like watermarking, provenance tracking, and labelling offer potential solutions but require careful implementation to balance transparency with privacy. Effective mitigation demands international collaboration, legal reforms, and public education to address these challenges comprehensively. A holistic strategy combining technical, organisational, and legal approaches is critical for managing synthetic content responsibly while fostering trust and innovation.