Artificial intelligence algorithms, decision support and therapy: how should the human guarantee be implemented? What are the obligations of professional healthcare users?

Marguerite Brac de La Perrière, Monday, May 16, 2022

In this first part, Me Marguerite Brac de La Perrière presents the obligations of professionals using artificial intelligence (AI) tools in the context of prevention, diagnosis or care.

Artificial intelligence (AI) devices have entered hospitals and improved both the provision of care and patient management through a variety of applications: structuring of medical knowledge, diagnostic support, decision support, personalized treatment, triage, surgical planning, intervention assistance, remote monitoring, smart prostheses, automated administration and adjustment of treatment, etc.

Of these applications, the ones that crystallize the most fear are those that could replace the doctor in the performance of diagnostic and therapeutic medical acts. Indeed, how can it be ensured that the results provided by the algorithm are used only as an aid and do not lead to a loss of physician autonomy and/or an impoverishment of medical practice? How can doctors be given the means to assess the results and, where necessary, override them?

It is in order to provide an initial framework for these risks that the provisions of the Bioethics Act (codified in Article L4001-3 of the French Public Health Code) were adopted, imposing human supervision, or a "human guarantee", which requires:

– on the one hand, that the healthcare professional inform the patient of the use of an AI tool in prevention, diagnosis or care and, where applicable, of its results;

– on the other hand, that the designer of the algorithm ensure the explainability of its operation for users.

The first requirement is an obligation for the healthcare professional to inform the patient. It thus allows an exchange to be established between the doctor and the patient regarding the use of AI and the associated results, and therefore invites the physician to justify to the patient the decision to follow the AI's recommendations or to depart from them, and to adapt the diagnostic or therapeutic medical act accordingly.

The goal of the second requirement is to ensure that the doctor has the necessary and sufficient information to understand how the AI system works, thus enabling him or her to appraise the result and, where appropriate, deviate from it.

Finally, the text imposes an obligation of traceability of augmented decisions, aimed at enabling healthcare professionals to assess the relevance of the act "augmented" by AI and attributable to a physician, and to verify that the physician retained the necessary autonomy with regard to the algorithm.

The draft European Regulation (soon to be adopted) aims to develop a single market for lawful, safe and trustworthy AI applications. It regulates this human oversight, in particular in the context of the use of high-risk AI (which notably covers medical devices incorporating or consisting of AI), by setting requirements for:

– transparency and user information, "to enable users to interpret the system's output and use it appropriately", notably through instructions for use specifying the intended purpose, functionality, level of reliability and limitations;

– human oversight, through design measures enabling effective oversight by natural persons while the AI system is in use. In particular, users must be able to understand the capabilities and limitations of a high-risk AI system so that its output can be correctly interpreted, its operation duly monitored, and automation bias avoided.

The result is an obligation for users to use AI systems in accordance with the instructions for use accompanying them, to exercise control over the input data, and to ensure that the latter are relevant in view of the intended purpose of the high-risk AI system.
It also gives rise to an obligation to establish…, which will raise difficulties in the future…

Note, however, that under these texts, infringements of this requirement may be fined up to 20 million euros.


Author

Marguerite Brac de La Perrière is a lawyer, partner at Lerins & BCW, and a digital health expert. She advises healthcare players on their regulatory compliance, development and growth, particularly in the areas of data processing, data reuse and contracting. mbracdelaperriere@lerinsbcw.com


