EU's AI Standards Strategy Promotes Protectionism, Potentially Hindering Its AI Aspirations
The European Union's approach to establishing technical standards for its Artificial Intelligence Act (AIA) is a critical factor in shaping the future of AI in the bloc. The AIA, designed to regulate AI systems through a risk-based framework, aims to ensure safety, transparency, and respect for fundamental rights while minimising the impact on innovation[1][3].
**AI Uptake and Safety:**
The AIA categorises AI systems into risk levels, with tailored requirements for each category to strike a balance between safety and innovation[1]. Technical standards play a pivotal role in compliance, particularly for high-risk AI systems, which require robust data governance, transparency, and continuous monitoring[5]. These standards can facilitate the development and deployment of trustworthy AI.
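To make the tiered structure concrete, the sketch below encodes the Act's risk categories and a simplified set of obligations as a small Python mapping. The tier names follow the AIA's broad structure, and the obligation lists are illustrative simplifications drawn from the requirements mentioned above, not the Act's legal text.

```python
# Illustrative sketch only: the AIA's risk tiers mapped to simplified,
# non-exhaustive example obligations. Not a restatement of the legal text.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # permitted, subject to strict requirements
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no mandatory obligations

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "data governance and data quality controls",
        "technical documentation and record-keeping",
        "transparency towards deployers",
        "human oversight",
        "continuous post-market monitoring",
    ],
    RiskTier.LIMITED: ["disclose that the user is interacting with an AI system"],
    RiskTier.MINIMAL: [],  # voluntary codes of conduct may still apply
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list attached to a risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for obligation in obligations_for(RiskTier.HIGH):
        print("-", obligation)
```

Encoding the obligations as data rather than scattered conditionals also makes it straightforward for a compliance team to update the mapping as the harmonised standards are finalised.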
The European AI Office and various governance bodies are working to implement and enforce the AIA, aiming to create a cohesive regulatory environment that fosters confidence among developers and users[3].
**Concerns about the Current Process and Timeline:**
A significant challenge is the delay in finalising many technical standards, which complicates compliance efforts for companies, especially small and medium enterprises[1]. To address this issue, the European Commission is considering delaying some complex rules related to high-risk AI systems to allow time for standards development and better preparation by stakeholders[1].
Compliance under the AIA is an ongoing operational demand, requiring real-time monitoring and adaptability. Organisations face difficulties translating regulatory language into actionable technical requirements, highlighting the need for clear, up-to-date standards and guidance[5].
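As a hypothetical illustration of that ongoing operational demand, the sketch below shows one way a team might turn a quality threshold recorded in a system's technical documentation into a runtime compliance check. The metric name, threshold value, and alerting path are all assumptions made for illustration; neither the AIA nor its draft standards prescribe this interface.

```python
# Hypothetical sketch: comparing a live quality metric against the
# threshold committed to in the system's technical documentation.
# The metric, threshold, and alert mechanism are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class MonitoringPolicy:
    metric_name: str             # quality metric named in the technical file
    documented_threshold: float  # minimum value the documentation commits to


def check_compliance(policy: MonitoringPolicy, observed_value: float) -> bool:
    """Return True if the live metric still meets the documented threshold."""
    compliant = observed_value >= policy.documented_threshold
    if not compliant:
        # In a real deployment this would feed an incident or
        # corrective-action workflow rather than just printing.
        print(f"ALERT: {policy.metric_name} = {observed_value:.3f} is below "
              f"the documented threshold of {policy.documented_threshold:.3f}")
    return compliant


if __name__ == "__main__":
    policy = MonitoringPolicy("classification accuracy", 0.95)
    check_compliance(policy, observed_value=0.91)  # exercises the alert path
```

Keeping the documented threshold in a single place makes it easier to demonstrate to auditors that what the system enforces at runtime matches what the technical file claims.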
**Challenges and Criticisms:**
While the EU's methodical, risk-based approach aims both to ensure safe AI and to foster innovation, the delay in finalising technical standards and the complexity of compliance are hurdles that may slow AI uptake in the short term. Addressing these concerns promptly is crucial if the EU is to achieve its objectives of safe, trustworthy, and widely adopted AI technologies[1][5].
Critics argue that the EU's rush to meet the two-year deadline compromises the quality of the standards, and that its approach excludes global experts, which risks producing weaker standards[2]. The EU has also excluded the European Telecommunications Standards Institute (ETSI), which has been working on AI standards, from its standards development process[2].
The EU's approach to AI standards development fails to acknowledge the complexity of the field, which calls for careful input from global experts. Critics also argue that the Commission is undermining the World Trade Organization's principles for international standards development: transparency; openness; impartiality and consensus; effectiveness and relevance; coherence; and the development dimension[2].
The AI Act sets unrealistic goals for AI standards, such as ensuring that AI systems respect health and fundamental rights, objectives that are technically challenging, and potentially impossible, to satisfy[2]. This raises concerns that the current approach may hinder innovation rather than foster it.
In conclusion, the EU's approach to AI standards development involves difficult trade-offs. Balancing safety, innovation, and access to global expertise will determine whether the EU achieves its goals of safe, trustworthy, and widely adopted AI technologies.
**Key Takeaways:**
- The AIA's categorisation of AI systems into risk levels includes specific requirements for each category to promote both safety and innovation.
- Technical standards, especially for high-risk AI systems, are essential for compliance, ensuring trustworthy AI deployment and development.
- The delay in finalising many technical standards complicates compliance efforts for companies, especially small and medium enterprises, necessitating a reconsideration of the timelines and rules for high-risk AI systems.
- Critics argue that the EU's current approach to AI standards development may compromise the quality of the standards, potentially undermining the principles of transparency and innovation in the field.