AI standards will be co-created by European standards bodies

Technical standards enabling the implementation of European legislation on artificial intelligence (AI) will be jointly developed by the three European standards bodies in accordance with a draft standardization request obtained by EURACTIV.

The European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI) will jointly be responsible for developing the technical standards of the Artificial Intelligence Act.

The three standards bodies are required to provide a work programme with a timeline for each standard requested by the flagship AI regulation and details of the technical bodies responsible. They must also submit a progress report to the European Commission every six months.

The technical standards will play a key role in implementing the AI Act, as companies that apply them will be presumed to be in compliance with the EU rules. Standards are set to be so important in lowering compliance costs that they were described as where the "actual rule-making" will take place in an influential paper on the EU's AI regulation.

The annex to the draft request details the standards to be developed, covering risk management systems, governance and quality of datasets, record keeping, transparency and information for users, human oversight, accuracy specifications, post-market monitoring, quality management, and cybersecurity.

In addition, the standards bodies will need to define validation methods and procedures to assess whether AI systems are fit for purpose and meet the European standards. Depending on the regulation, this conformity assessment may be carried out by the AI provider itself or by a third party.

The European standards bodies will also have to consider the interdependencies between the different requirements and clarify them when drafting the technical standards.

Moreover, they will need to pay particular attention to aligning the standards with the needs of SMEs and to engaging civil society in the consensus-building exercise.

For Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties (ICCL), civil society involvement is a positive development, as standards bodies might not be equipped to deal with issues such as bias mitigation.

"This question goes beyond the technical aspects. It depends on who uses the AI system, the context of use and who is affected," he said. Shrishak lamented, however, that it was not clear what would happen if civil society was not adequately involved.

China and the United States are increasingly politicising technical standards, investing huge resources in an effort to steer the debate in international forums in line with their strategic interests and those of their companies.

To address the declining weight of European companies in defining technical standards, the European Commission recently launched a standardisation strategy aligned with the EU's digital sovereignty agenda, intended to reduce foreign influence over European standards and to better promote EU interests in international standardisation bodies.

Hence, the standards will need to be consistent with the policy objectives to "reflect the values of the Union and strengthen the Union's digital sovereignty, favour investment and innovation in AI as well as competitiveness and growth in the Union market, and strengthen global standardisation cooperation in the field of AI in a manner consistent with the Union's values and interests."

The reference to the interests of SMEs can also be read in this light, as the Commission seeks to scale up European technology companies, which remain relatively small compared to their international competitors.

At the last summit of the EU-US Trade and Technology Council (TTC) on 16 May, the two sides pledged to develop a common roadmap on AI evaluation and measurement tools for trustworthy AI and risk management. The roadmap is expected by the next TTC meeting in December.

In this initial version, the standardisation request is valid until 31 August 2025, and the three standards bodies are expected to submit their joint final report by 31 October 2024.

However, the request will only be finalised once the AI Act is adopted through inter-institutional negotiations, which are not expected to start before 2023.
