AI Act: Laying down harmonised rules on artificial intelligence and amending certain Union legislative acts
The European Commission has published a proposal for a regulation that establishes harmonised rules on artificial intelligence (AI) while amending certain Union laws. Stakeholders largely agree on the need for action in the field of AI, but urge the Commission to avoid duplication and overregulation. Implementing the regulation will require adequate human and financial resources.
OVERVIEW
The European Union (EU) has introduced a proposal for a regulation that lays down harmonised rules on artificial intelligence (AI), aiming to address potential risks and harm associated with AI technology. The regulation aligns with the EU’s commitment to preserving technological leadership and ensuring that citizens benefit from new technologies in line with Union values and fundamental rights. The proposal outlines various aspects, including the context, reasons, and objectives of the regulation, as well as harmonised rules, impact assessment, stakeholder consultation, and recommendations.
Context and objectives
The proposal recognises AI’s evolving nature and potential benefits across sectors, emphasising the need for a balanced approach to avoid negative consequences. It aims to regulate AI in specific areas such as finance, mobility, climate change, environment, and health, ensuring adherence to social values and fundamental human rights. The primary goal is to provide a consistent approach to deploying AI technologies within the EU.
Harmonised rules on AI
The regulation defines categories of high-risk AI systems and specifies requirements for notification, transparency, data governance, record-keeping, and conformity assessment. These rules apply in particular to AI systems used in critical infrastructure, health, and workplace management. Additional provisions cover testing, privacy, and oversight measures.
Impact assessment
The regulation acknowledges the human and financial resource costs involved in its deployment and implementation. Adequate resource allocation is essential for Member States' supervisory arrangements to ensure compliance with the regulation.
Stakeholder consultation
Stakeholders participated in an online consultation, revealing a general consensus on the need for legislative action. While over 80% of business and industry representatives acknowledged legislative gaps, concerns were raised about potential duplication, conflicting obligations, and overregulation.
Evaluation and review
The Commission plans to review and evaluate the regulation five years after its entry into force, reporting findings to relevant EU bodies.
Recommendations
Several recommendations emerge from the proposal, including the need for supervisory mechanisms, regular reviews, adequate resource allocation, a technology-neutral regulatory framework, clear AI definitions, and designated supervisory authorities. The proposal also outlines the establishment of a system for registering high-risk AI applications in a public EU-wide database.
ESG issues
The proposal addresses critical Environmental, Social, and Governance (ESG) issues, emphasising the importance of respecting fundamental human rights, ensuring AI system safety in critical areas, and implementing monitoring and control measures to mitigate risks.
Conclusion
The proposed regulation aims to safeguard citizens' fundamental rights, establish ethical frameworks for AI deployment, and minimise societal risks. It strives for a balance between the benefits of AI and accountability, transparency, and governance. Stakeholders can request updates to the regulation to address evolving AI technologies and changing risks. Overall, the proposal sets out an ambitious framework for AI deployment while maintaining a commitment to fundamental human rights and ethical considerations.