
The impact of digital technology on human rights in Europe and Central Asia
This report examines the impact of digital technology and artificial intelligence on human rights in Europe and Central Asia, with a particular focus on the use of data protection and legislative frameworks. It provides an overview of the relevant international and regional initiatives, and analyses the applicable legal, regulatory, and institutional frameworks.

OVERVIEW
Unpacking the international and regional initiatives, this report discusses the governance of digital technologies. Its focus is on the legal environment for privacy and data protection in Europe, which is analysed through the framework of international human rights law.
The report highlights that existing international human rights frameworks can provide some guidance for the use and governance of digital technologies. Nevertheless, human rights are often left off the agenda, even though the early stage of adopting advanced AI presents a critical opportunity to ensure an inclusive digital transformation that benefits all and respects human rights.
Key findings
Legislative and institutional frameworks
While all the countries under examination have made progress in developing legal frameworks for privacy and data protection, the implementation of strategic and legal measures tends to lag behind and fails to address the complex outcomes that stem from the use of these technologies. What is needed is the establishment of adequate oversight and effective regulatory bodies, the encouragement of a law-abiding culture and practices, and the education of citizens and other stakeholders.
Digital transformation and governance of AI
Digital transformation and the governance of AI are changing rapidly, with potential consequences for human rights. While digital technologies and AI can improve people's lives, they can also threaten human rights in certain sectors, such as public services, financial entities, and criminal justice. Therefore, a comprehensive risk assessment framework, transparency about goals and about when algorithms are used, and robust accountability mechanisms are essential to identify actual positive and negative impacts.
Specific sectors at higher risk of human rights impacts and a human rights-based approach
While the use of technology and AI can improve people's lives, it can threaten human rights in areas like law enforcement, national security, criminal justice, and border control. Therefore, the Centre for Data Ethics and Innovation urges governments to establish a set of global "Red Lines" prohibiting the development and use of AI in specific applications that might pose an ethical or existential threat to humanity and the planet.
Recommendations
The report recommends strategies and commitments to ensure that human rights are protected in practice. The recommendations are directed towards all relevant stakeholders, recognising that building a responsible ecosystem for digital technologies requires cooperation between various sectors.
The report suggests measures to improve domestic institutional, policy, and regulatory frameworks, and responses aimed at building a culture of respect and accountability for human rights, democracy, and the rule of law in a digital environment. It focuses on technical assistance; on developing and disseminating relevant expertise, methodologies, and best practices; and on facilitating mechanisms for improved transparency and participation by diverse stakeholders. The recommendations are also designed to promote regional linkages and international cooperation.