Czech EU presidency paves way for AI law talks

The incoming Czech presidency of the EU Council has shared a working document with other EU governments to gather their views on the definition of artificial intelligence (AI), high-risk categories, governance and national security.

The document, obtained by EURACTIV, will serve as the basis for a discussion in the Telecommunications Working Party on 5 July, with the aim of circulating an updated compromise text by 20 July. Member states will then be invited to submit written comments on the new compromise by 2 September.

“The Czech presidency has identified four major outstanding issues that require further discussion and where receiving guidance from member states will be crucial to taking the negotiations to the next level,” the document reads.

This document is the first from the Czech presidency, which officially begins only in July. The text shows continuity with the approach adopted under the French Presidency of the EU Council (PFUE) and sets out the key themes the negotiations will focus on.

Definition of AI

The internal document notes that “a large number of EU countries have questioned the definition of what constitutes an AI-based system”, considering the current definition too broad and vague, and therefore liable to cover general-purpose software.

A related question is to what extent the Commission should be able to refine, through secondary legislation, Annex I of the regulation, which defines the techniques and approaches of artificial intelligence.

The Czech presidency proposes several options to address these concerns.

The more conservative options are to maintain the Commission’s proposal or to keep the wording proposed by the PFUE, adding some clarifying elements such as references to learning, reasoning and modelling.

In this scenario, the EU executive would retain its delegated powers, and changes could only be made through the ordinary legislative procedure.

Other options involve a narrower definition, covering only AI systems developed through machine learning techniques, or through machine learning together with knowledge-based approaches.

In these cases, Annex I would be deleted and the AI techniques integrated directly into the text, either in the preamble of the law or in the relevant article. The Commission would only have the power to adopt implementing acts to clarify the existing categories.

High-risk systems

Annex III of the AI Act contains a list of AI applications considered to pose a high risk to people’s safety and fundamental rights. However, for some member states the list is too broad and should only cover those use cases where a significant impact has been assessed.

Here, the most conservative option would be to keep the text as it stands in the French compromise.

Alternatively, EU countries could argue for removing or adding specific use cases, or for making their wording more precise.

The Czech presidency has also proposed adding extra layers, such as high-level criteria, to assess what does in fact constitute a significant risk. Providers would then self-assess whether their systems meet these criteria.

Another way to narrow the classification would be to distinguish whether the AI system makes fully automated decisions, which would automatically qualify as high risk, or whether it merely informs human decisions.

In the latter case, the system would only be considered high risk if the AI-generated information is significant in the decision-making process. However, the Commission would need to clarify through secondary legislation what counts as a significant contribution.

EU countries are asked to consider whether the Commission should retain the power to add new high-risk cases to the annex, whether it should also be able to remove them under certain conditions, or whether these powers should be removed altogether.

Governance and implementation

Several EU countries have expressed concern that the regulation’s “highly decentralised governance structure at the national level” could limit its effective application, notably because they fear national authorities will not have sufficient capacity and expertise to enforce the AI rules.

At the same time, the Czech presidency notes that the proposed law should allow “a certain level of flexibility for national legislation and specificities”, and that “delegating enforcement powers to a more central level would require careful practical and budgetary considerations”.

As refined by the French presidency, the current governance framework follows the EU’s market surveillance regulation, putting the onus on national authorities, with an AI Board for coordination and Commission intervention limited to extreme cases.

An alternative would be to provide more support to member states, by setting up an EU testing facility, a pool of experts and an emergency mechanism for fast-track assistance.

The AI Board could be strengthened to support national authorities, and could receive a more explicit mandate, modelled on the Medical Devices Regulation, to conduct and coordinate market surveillance activities.

Finally, the Commission could be empowered to open direct investigations in exceptional circumstances.

National security exemption

The document indicates that a large majority of EU countries want AI applications related to national security and military use to be explicitly excluded from the AI regulation, but consider that this notion has not yet been adequately defined.

“This regulation does not apply to AI systems developed or used exclusively for military or national security purposes,” reads the current compromise, which could be amended to remove the word “exclusively”, although this could be a source of ambiguity.

Another solution would be to exclude the development phase, changing the wording to refer only to AI systems placed on the market or put into service for military or national security purposes.
