Ahead of a working group meeting on Tuesday (10 May), the French presidency of the EU Council circulated a new compromise text of the Artificial Intelligence (AI) Act among EU diplomats.
The new text, seen by EURACTIV, makes significant changes to the provisions on the European Artificial Intelligence Board (EAIB), market surveillance, guidelines and codes of conduct.
An EU diplomat told EURACTIV that member states were broadly satisfied with the direction taken by the French presidency on the dossier.
European Artificial Intelligence Board
The structure of the board has been changed to include one representative per member state, rather than the national supervisory authorities. Each representative will be appointed for a three-year term, renewable once.
Eight independent experts have been added to the board, two for each group represented: SMEs and start-ups, large companies, academia and civil society. These experts will be selected by the national representatives "under a fair and transparent selection process", the compromise text indicates.
The European Data Protection Supervisor has been demoted from full member to mere observer. The Commission's role has also been significantly scaled back, from chairing the board to participating without voting rights.
The French proposal states that the rules of procedure will be adopted by the national representatives by a two-thirds majority. These rules will define the process for selecting the independent experts, as well as the selection, mandate and tasks of the board's chair, who must be a national representative.
Guidelines
A new article has been introduced enabling the European Commission, on its own initiative or at the request of the board, to issue guidelines on how to implement the AI regulation, in particular on compliance with the requirements for high-risk systems, on prohibited practices, and on how substantial modifications to existing systems should be handled.
The guidelines will also cover the criteria for identifying high-risk AI systems and their use cases, how the transparency obligations can be applied in practice, and how the AI regulation will interact with other EU laws.
"When issuing such guidelines, the Commission shall pay particular attention to the needs of SMEs, including start-ups, and to the sectors most likely to be affected by this regulation," the text adds.
Market surveillance
This part of the text has been changed "to clarify the powers of the market surveillance authorities and the way these powers are to be applied, as well as the access to relevant data and information, notably the source code".
Market surveillance authorities must be granted full access to the source code of a high-risk AI system upon a "reasoned request", where the code is necessary to assess the system's conformity and access to data and documentation has proven insufficient.
The compromise states that the market surveillance of high-risk systems used by financial institutions falls to the authority responsible for their financial supervision. The national authority must immediately inform the European Central Bank of any information relevant to its oversight activities.
Significant changes have been made to the procedure for notifying other member states and the Commission of measures taken against non-compliant AI systems. These cases now cover systems that breach the ban on prohibited practices, fail to meet the requirements for high-risk systems, or do not comply with the transparency obligations for deepfakes and emotion recognition.
As a general rule, the Commission and EU countries will have three months to object to such measures. For suspected breaches of the ban on prohibited practices, the period has been shortened to 30 days.
If objections are raised, the Commission will consult with the relevant national authority. The EU executive will then decide within nine months whether the measure is justified, a deadline reduced to 60 days for prohibited practices.
The Commission may overturn the national authority's decision. However, if it deems the measures justified, all other member state authorities will have to replicate them, including, where necessary, withdrawing the AI system from their market.
Code of Conduct
The article on codes of conduct has been revised to make clear that they are voluntary tools for AI systems that do not fall into the high-risk category. The codes have also been extended to cover the obligations of users of AI systems.