When Artificial Intelligence Treats You: What Responsibility(ies)?

Whether through robotic surgeons or diagnostic algorithms, artificial intelligence has given rise to a new paradigm in the traditional patient-doctor relationship. Algorithm-based artificial intelligence such as DeepMind or IBM Watson, capable of evaluating a patient's clinical data in order to propose treatment options better suited than current practice, can now support or contradict conventional medical diagnoses. Whereas it was once easy to identify the person responsible for a prescribed treatment, to whom an injured patient could turn, this exercise has become far more complicated with the use of artificial intelligence in medical diagnosis.

The emergence of mechanisms such as machine learning, a process by which robots learn from the data they receive, and even deep learning, which gives robots a “neural” system loosely modelled on the human brain, allows robots to autonomously mimic decisions that were once a matter of human intellect alone. The fact that other actors outside the medical profession, and even the robots themselves, can now cause harm, contrary to the meaning of Asimov's first law*, thus opens up a new field of reflection around a redefinition of medical responsibility.
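To make the mechanism concrete, here is a minimal, purely illustrative sketch of supervised machine learning in a diagnostic setting. It is not drawn from the systems discussed above: it uses Python with the scikit-learn library and one of its bundled clinical datasets, and simply shows a model inferring a decision rule from labelled examples rather than being explicitly programmed with one.

```python
# A minimal sketch of supervised machine learning for diagnosis:
# the model learns a decision rule from labelled clinical data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Tumour measurements with benign/malignant labels, bundled with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model "learns" its weights from the training data instead of
# being given an explicit diagnostic rule by a programmer.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The fitted model then makes diagnostic predictions on unseen cases.
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Real diagnostic systems are of course trained, validated and regulated far more rigorously; the point here is only that the decision rule is learned from data rather than written by an identifiable person, which is precisely what complicates the assignment of responsibility.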

The purpose of this article is to present some thoughts on the responsibilities associated with the practice of medicine by “intelligent” robots. It focuses on tort liability, as well as on French and European regulations. Only damage caused by a robot acting in a completely autonomous way, excluding any fault of the doctor or of any other actor, will be considered.

Current French legal system: “traditional” tort liability

Liability for medical malpractice falls under tort law. On this basis, the patient may seek compensation from doctors, medical institutions, pharmaceutical companies or manufacturers for damage caused by their wrongful acts.

It should be noted that there is a legal vacuum regarding autonomous robots in France. The European resolutions are not applicable regulations, only recommendations, and for the moment only a draft definition of the “intelligent robot” exists. The robot is therefore considered an object, just like a pen. The regime closest to our framework is liability for things in one's custody, provided for in Article 1243 of the Civil Code. This article specifies that, in principle, the owner is considered the guardian of the thing and will therefore be liable for the damage it causes. The doctor, as owner of the robot, could thus be held liable on the basis of his custody of it.

Liability for defective products, provided for in Article 1245 of the Civil Code, also applies, but it is limited to manufacturers and designers and does not reach physicians.

According to Mark Chinen (3), a professor at the Seattle University School of Law, one of the problems with integrating artificial intelligence into current civil liability regimes is its increasing autonomy. Robotics has indeed reached a level of independence whose boundaries are constantly being pushed back. As an artificial intelligence system gains autonomy, the other actors, such as doctors, medical institutions, system designers or software developers, gradually lose control over the autonomous object, complicating the assignment of responsibilities.

Given the current autonomy and decision-making freedom of certain robots, the myriad of actors involved in their design, production, programming and use, and the possible future developments of artificial intelligence, it seems extremely difficult to identify a responsible party.

What liability system should be imagined to protect the patient from the harmful acts of a fully autonomous artificial intelligence?

Various legal measures can then be considered to overcome the difficulties of assigning responsibility for robots equipped with artificial intelligence. They involve both ethical and legal considerations.

It would be possible to apply to robots the rules governing animals, or those governing parents and children. Since our thinking focuses on the actions of robots in the context of medical procedures, we will leave these rules aside and limit ourselves to those applicable to medical acts. Within that scope, several interesting legal avenues exist for establishing an effective legal framework.

The robot considered as non-human, without legal personality

Compulsory insurance, as it exists for cars today, could protect patients from a financial standpoint. However, such a system would struggle to keep pace with the evolution of artificial intelligence, whose harmful actions would then be treated as a mere hazard to be covered. It would relieve a robot that makes its decisions in a completely autonomous manner of all responsibility. Physicians would be held liable only if they had a duty to verify the robot's recommendations and actions. The applicability of this scheme would have to be assessed against the practical realities of medical services. It should be noted that a robot is already approved in the United States to make its own diagnoses (4).

Within such an arrangement, a question then arises: who should bear the cost of the insurance? If the doctor has a duty to verify the robot's recommendations and actions, it would seem logical for the doctor to bear the largest share of its financing. But the more freedom the medical system grants to increasingly autonomous robots, the more the insurance would have to be funded jointly by the various actors involved (doctors, manufacturers, owners, etc.).

The robot considered as human

Artificial intelligence, which mimics the activity of the brain, could be regarded as a person in its own right. It would then be autonomous in the eyes of the law and treated as a human being. The artificial intelligence system would thus have its own obligations and could be sued directly in any claim. This system implies that the robot simply joins our legal framework as a human being: it could then be a doctor, own assets, and be subject to the tort actions currently provided for in our Civil Code.

A first step towards such a regime was taken by Saudi Arabia: although the robot Sophia received no rights or obligations, she was granted citizenship of that country, showing that such an option cannot be entirely ruled out internationally (5). However, treating a robot as a human being means granting it rights and therefore respecting them: applying labour law to it by regulating its working time, giving it access to decent housing, like human housing (even though, unfortunately, not all humans have such access), and granting it freedom of expression and movement. In light of these issues, applying human rights to robots within existing economic, legal and geopolitical systems would run up against moral and deontological boundaries.

Other alternatives

Another solution would be to draw inspiration from certain legal regimes applicable to humans in order to create a new category of legal person subject to specific rights and obligations. Such a proposal was made by the European Parliament in a resolution calling for “creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently” (6).

This would allow an artificial intelligence to be made subject to the regulations applicable to physicians, since it could practise without being human, and without being granted fundamental human rights. By settling the classification and legal status of artificial intelligence, the law could then evolve naturally and adapt to technological advances that are still largely unforeseeable. However, defining and building such a regime would require substantial and highly complex work by the legislature, as well as significant human and financial resources.

On these various issues, 220 experts have already analysed, in an open letter (7), the risks of such regulations, which could exclude the doctor's liability even in situations where the latter controls the artificial intelligence.

Column “Law, Lawyers and Augmented Legal Practice”

The purpose of this column is to address current issues related to this transformation, in a context where digital technology, big data and data analytics, machine learning and artificial intelligence are profoundly and durably transforming the practice of law, giving rise to “augmented lawyers” but also calling for “augmented law” in the face of the challenges and new business models of the digital economy.
The EDHEC Business School has two resources to contribute to the discussion of this topic. On the one hand, its LegalEdhec research centre, whose recognised work, at the crossroads of law and strategy and concerned with legal risk management and legal performance, has now led to the launch of its new A3L project (Advanced Law, Lawyers and Lawyering). On the other hand, its students, in particular those of its Business Law and Management programme (in partnership with the Faculty of Law of the Catholic University of Lille) and of its LLM in Law and Tax Management, whose training and professional aims place them at the heart of these digital challenges.
