Artificial intelligence: the key to building trust

Building the necessary trust in systems based on artificial intelligence (AI) is essential if we are to reap their benefits and advantages. Yet AI today raises questions about control and about how learning evolves, touching on transparency, explainability, data, reliability, bias, security and privacy. How, then, can the design, development and deployment of these systems, as well as their evaluation, operation and maintenance, be assessed in a meaningful way? This is the question posed by the working group on AI and Trust at the Academy of Accounting and Financial Sciences and Techniques.

Launched in 2017, the group brings together professionals, lawyers, professors, consultants, agencies, associations and stakeholder organizations. Its goal?
To reflect on the regulations, the issues, the risks, and on how to audit an algorithm so that it can be certified as "trustworthy." To that end, its members studied the draft European regulation as well as the published work and reference frameworks available internationally. They analyzed texts and use cases to identify limitations, biases, risks and responsibilities, and from this derived the good practices and frameworks required for auditability and certification. Publication No. 38, "Artificial Intelligence and Trust: Regulations, Challenges, Risks, Audits and Certification," is the result of this work. Among the experts invited to present their contributions during a webinar on the emergence of trustworthy AI were Camille Rosenthal-Sabroux, Professor Emeritus at Université Paris Dauphine-PSL (LAMSADE, the Laboratory for the Analysis and Modeling of Decision Support Systems), and Alain Bensoussan, of the law firm Lexing Alain Bensoussan Avocats.

The human dimension

For Camille Rosenthal-Sabroux, Professor Emeritus at Université Paris Dauphine-PSL (LAMSADE), the technical dimension of intelligent systems does not make the human dimension disappear; on the contrary, it extends it. By freeing up time, it increases people's availability and potential. "You need to be aware of what AI-based systems are capable of, what their limitations are, and how they can help people. For that, people must be at the center of the reflection," she explained. People must understand and master AI-based systems. To that end, the professor advocated a multidisciplinary approach. The auditing of algorithms by an expert is also a key factor in building trust and an aid to understanding these systems.

According to the professor, we also need to keep improving our knowledge of AI-based systems. Companies need to inform employees, executives and decision-makers and provide training to build user confidence. Finally, good practices need to be applied to the design and operation of AI-based systems.

The perspective of a legal expert

"In a good part of the world's countries, the law has taken over from ethical reflection," explained Alain Bensoussan, of the law firm Lexing Alain Bensoussan Avocats. He noted that there are nearly a hundred regulations on AI and the law around the world, while a draft regulation as significant as the GDPR is being prepared at the European level.

The risk-based approach (with risk levels that vary by sector) exists in most countries of the world, and the EU follows the same framework. There is therefore no prohibition except in specific cases. "We test on the one hand and we certify on the other. All our work focuses on this need for certification. Indeed, with today's technology, AIs are autonomous; they escape their creator," underlines Alain Bensoussan. The aim of this risk-based approach is therefore to strike a balance between the need for innovation and the need for protection against the dangers posed by AI.
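
To make the logic of this risk-based approach more concrete, here is a minimal sketch in Python. The tier names, example use cases and obligation summaries are illustrative assumptions drawn loosely from the points discussed in this article, not text taken from the draft regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers mirroring the graduated, risk-based logic described above."""
    PROHIBITED = "prohibited"      # e.g. social scoring, manipulation of minds
    HIGH_RISK = "high_risk"        # e.g. AI embedded in safety systems
    LIMITED_RISK = "limited_risk"  # e.g. AI that interacts with people
    MINIMAL_RISK = "minimal_risk"  # everything else

# Hypothetical mapping from tier to the kind of obligation mentioned in the article.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "may not be deployed at all",
    RiskTier.HIGH_RISK: "documentation, data-set archiving, human oversight, certification",
    RiskTier.LIMITED_RISK: "inform users that they are dealing with an AI",
    RiskTier.MINIMAL_RISK: "no specific obligation",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the illustrative obligation attached to a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH_RISK))
```

The point of the sketch is simply that obligations scale with the level of risk rather than applying uniformly, which is the balance between innovation and protection described above.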

A genuine distillation of the ethical thinking that exists worldwide, this European regulation is intended to be minimal. It lays down a limited number of prohibitions and frames the oversight of high-risk systems within a policy of freedom for AI, while remaining demanding. "AI must be designed, produced, implemented and maintained in accordance with fundamental rights," recalled Alain Bensoussan, noting that the point is to build through human-machine interaction, not to set one against the other.

The text first provides for a ban on using AI to manipulate people's minds, that is, "to expose people, without their being able to resist, to intrusions into their privacy that uncover things they do not wish to disclose," the lawyer insisted. Using AI to exploit vulnerable people is also prohibited, since they lack the means of discernment to resist the "power" of such AI. Another prohibition, made famous in particular by its treatment in a television series, concerns social scoring. Real-time remote biometric identification in public spaces is likewise banned, except for protecting victims or fighting terrorism and serious crime. "These prohibitions would make it possible to design AI 'by dignity', in accordance with Article 1 of the EU Charter of Fundamental Rights," explained Alain Bensoussan.

A second category concerns AI embedded at the heart of safety systems. In this case, a safety plan and a compliance plan must be established. And when an AI interacts with people, it is important, for Alain Bensoussan, that they understand they are dealing with an AI; there is therefore an obligation to provide this information.

Finally, stringent requirements have also been set for high-risk AI, in particular the obligation to keep certain elements: archives of the data sets used to train and test the AI systems (or even the data themselves) and documentation of the programming techniques used. Administrative sanctions stronger than those of the GDPR could be imposed: a fine of 30 million euros or 6% of global turnover for deploying a prohibited AI system. Alain Bensoussan also calls for appropriate human oversight to minimize risk as far as possible. "The idea is not to abandon high-risk systems to machines alone, but to allow human intervention from design through to implementation and use," the expert continued. A standardized approach (including certification) should also be adopted to guard against cyber attacks.
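
As a rough illustration of the sanction ceiling cited above, the short Python snippet below compares the fixed amount with the turnover-based amount. The "whichever is higher" rule and the function name are assumptions added for illustration, since the article only lists the two figures.

```python
def max_administrative_fine(global_turnover_eur: float) -> float:
    """Illustrative ceiling for the sanction cited in the article.

    Assumes the usual 'whichever is higher' rule between the fixed amount
    (EUR 30 million) and 6% of worldwide turnover; the article itself only
    lists the two amounts.
    """
    fixed_cap = 30_000_000.0
    turnover_cap = 0.06 * global_turnover_eur
    return max(fixed_cap, turnover_cap)

# Example: a company with EUR 2 billion in global turnover.
print(f"{max_administrative_fine(2_000_000_000):,.0f} EUR")  # 120,000,000 EUR
```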

Alain Bensoussan concludes: "Let us wager that this regulation will become, for every country in the world, a key element of trust in AI in the 21st century."
