Viewpoint

Medical Artificial Intelligence: The European Legal Perspective

Although the European Commission proposed new legislation for the use of "high-risk artificial intelligence" earlier this year, the existing European fundamental rights framework already provides some clear guidance on the use of medical AI.

In late February 2020, the European Commission published a white paper on artificial intelligence (AI)a and an accompanying report on the safety and liability implications of AI, the Internet of Things (IoT), and robotics.b In the white paper, the Commission highlighted the "European Approach" to AI, stressing "it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection." In April 2021, the proposal for a Regulation entitled "Artificial Intelligence Act" was presented.2 This Regulation is intended to govern the use of "high-risk" AI applications, which will include most medical AI applications.

Fundamental Rights as Legal Guidelines for Medical AI

Referring to the above-mentioned statement, this Viewpoint aims to show that European fundamental rights already provide important legal (and not merely ethical) guidelines for the development and application of medical AI.7

As medical AI can affect a person's physical and mental integrity in a very direct way, and any malfunction can have serious consequences, it is a particularly relevant field of AI in terms of fundamental rights. In this context, it should be stressed that fundamental rights (a.k.a. human rights) not only protect individuals from state intervention, but also oblige the state to protect certain freedoms from interference by third parties. These so-called "obligations to protect" are of particular importance in medicine: For example, the European Court of Human Rights has repeatedly held that fundamental rights entail an obligation on the state to regulate the provision of health services in such a way that precautions are taken against serious damage to health due to poorly provided services. On this basis, the state must, for example, oblige providers of health services to implement quality-assurance measures.


Fundamental rights thus constitute a binding legal framework for the use of AI in medicine. In line with the European motto "United in diversity," this framework is distributed across various legal texts but is quite uniform in content: Its core component is the Charter of Fundamental Rights of the European Union (CFR), which is applicable to the use of medical AI because the provision of medical services is covered by the freedom to provide services under European law. For its part, the CFR is strongly modeled on the European Convention on Human Rights (ECHR), which is also applicable in all E.U. states. The constitutions of the individual E.U. member states contain similar fundamental rights guarantees. For this reason, we focus this Viewpoint on the CFR.

Human Oversight as a Key Criterion

The Ethics Guidelines of the High-Level Expert Group on AI (HLEG)4 have already emphasized that "European AI" must respect human dignity (Art. 1 CFR), which means medical AI must not regard humans as mere objects. From this, it can be deduced that the demands for human oversight expressed in computer science1 are also required by E.U. fundamental rights (see also Art. 14 of the proposed AI Act). Decisions of medical AI require human assessment before any action is taken on their basis. The E.U. has also implemented this fundamental requirement in the much-discussed provision of Art. 22 GDPR, which permits "decisions based solely on automated processing" only under considerable restrictions. In other words: European medical AI legally requires human oversight (a.k.a. "a human in the loop"5).
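
What such human oversight can look like at the software level may be sketched as follows. This is a minimal Python illustration under our own assumptions; the class and function names are invented for this example, not a prescribed API. The point is that the AI output is recorded as a proposal only, and no action can be derived from it until a named clinician has reviewed and approved it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIProposal:
    """An AI output stored as a proposal, never as a final decision."""
    patient_id: str
    finding: str                       # e.g., "suspected lesion, further imaging advised"
    confidence: float                  # model score, shown to the human reviewer
    reviewed_by: Optional[str] = None  # clinician who assessed the proposal
    approved: bool = False

def record_review(proposal: AIProposal, clinician: str, approve: bool) -> None:
    # The mandatory human assessment before any action is taken.
    proposal.reviewed_by = clinician
    proposal.approved = approve

def act_on(proposal: AIProposal) -> None:
    # The gate: acting on an unreviewed proposal is a hard error, so a
    # "decision based solely on automated processing" cannot occur by accident.
    if proposal.reviewed_by is None or not proposal.approved:
        raise PermissionError("no human assessment recorded; action blocked")
    print(f"Proceeding for patient {proposal.patient_id}: {proposal.finding}")
```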

Explainability, Privacy by Design, and Non-Discrimination

Even more important for medical AI, however, is Art. 3 para. 2(a) CFR, which requires the "free and informed consent" of the patient. This points to "shared decision-making" by doctor and patient, in which the patient has the ultimate say. Medical AI can therefore only be used if patients have been informed about its essential functions beforehand, in an intelligible form. This makes it clear that European fundamental rights in principle require the use of explainable AI in medicine (see also Art. 13 para. 1 of the proposed AI Act).

Recent research in the medical domain6,8 as well as legal research from a tort law perspective3,9 very much confirms this conclusion. Consequently, European medical AI should not be based on a "machine decision," but rather on "an AI supported decision, diagnostic finding or treatment proposal." We conclude: European medical AI requires human oversight and explainability.
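
As a rough illustration of the difference between a "machine decision" and an AI-supported proposal, the following Python sketch couples a proposal with an explanation that can be rendered in an intelligible form. The attribution values are assumed to come from some post hoc explainer; the rendering is our own illustrative phrasing, not a validated patient communication.

```python
def render_explanation(finding: str, attributions: dict[str, float]) -> str:
    # Phrase the model's most influential inputs in plain language so the
    # proposal can support, rather than replace, the consent conversation.
    top = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))[:3]
    reasons = ", ".join(f"{name} (weight {value:+.2f})" for name, value in top)
    return (f"The system proposes: {finding}. "
            f"The factors that most influenced this proposal were: {reasons}. "
            f"A clinician will review this proposal with you before any treatment.")

print(render_explanation(
    "further imaging recommended",
    {"lesion diameter": 0.71, "patient age": 0.22, "smoking history": 0.18},
))
```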

That European AI (not only medical AI) must be developed and operated in accordance with the requirements of data protection and privacy (Arts. 8 and 7 CFR), and thus with the GDPR, is well acknowledged and does not require further discussion. Still, it is worth mentioning that Art. 25 GDPR requires not only controllers ("users") of AI but indirectly also its developers to take these requirements into account when designing AI applications ("privacy by design").
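
What "privacy by design" can mean at the development stage may be sketched as follows: direct identifiers are dropped by default and the record key is pseudonymized before data ever enters a training pipeline. The field names and the salting scheme are illustrative assumptions, not a compliance recipe.

```python
import hashlib

# Fields assumed, for illustration, to be direct identifiers.
DIRECT_IDENTIFIERS = {"name", "address", "insurance_no"}

def pseudonymize(record: dict, salt: bytes) -> dict:
    # Drop direct identifiers by default ("data protection by design and
    # by default"), then replace the patient key with a salted one-way
    # hash so training data cannot be trivially linked back to a person.
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256(salt + str(record["patient_id"]).encode()).hexdigest()
    clean["patient_id"] = token[:16]
    return clean

record = {"patient_id": 4711, "name": "Jane Doe", "age": 62, "finding": "..."}
print(pseudonymize(record, salt=b"site-specific-secret"))
```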

The CFR contains a rich body of fundamental rights provisions requiring equality before the law and non-discrimination, including with regard to gender, children, the elderly, and persons with disabilities (Arts. 20–26). From these provisions, further requirements for the development and operation of European medical AI can be deduced: Not only must training data be thoroughly checked for bias; the ongoing operation of AI must also be continuously monitored for the emergence of bias. If medical AI is applied to groups of the population that were not adequately represented in the training data, the usefulness of the results must be questioned particularly critically.


At the same time, care must be taken to ensure that useful medical AI can nevertheless be made available to such groups in the best possible way. In other words: European medical AI must be available to everyone. The diversity of people must always be taken into account, whether in programming or in application, in order to avoid disadvantages.
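
To make the representation check described above concrete, the following Python sketch flags groups whose share of the training data falls below a chosen minimum, so that results for them are questioned particularly critically. The threshold is an assumption for illustration; what counts as "adequate representation" remains a domain decision.

```python
from collections import Counter

def underrepresented_groups(train_groups: list[str],
                            min_share: float = 0.05) -> set[str]:
    # Compare each group's share of the training data to a minimum share.
    counts = Counter(train_groups)
    total = len(train_groups)
    return {group for group, count in counts.items() if count / total < min_share}

# Example: group "B" makes up 5% of the training data, below a 10% threshold,
# so predictions for group "B" should be flagged for extra scrutiny.
flagged = underrepresented_groups(["A"] * 950 + ["B"] * 50, min_share=0.10)
print(flagged)  # {'B'}
```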

Obligation to Use Medical AI?

However, if a medical AI application meets the requirements described here, it may even become necessary to explicitly mandate its use for the benefit of all. European fundamental rights—above all the right to protection of life (Art. 2 CFR) and of private life (Art. 7 CFR)—give rise, as previously mentioned, to an obligation on the part of the state to ensure that work in health care facilities is carried out only in accordance with the applicable medical "standard of care" (a.k.a. "state of the art"). This also includes the obligation to prohibit medical treatment methods that can no longer be provided in the required quality without the involvement of AI.10 This will probably soon hold true for the field of medical image processing.

The Open Question of Liability

Does this mean the existing fundamental rights framework can answer all legal questions arising from the use of medical AI? Unfortunately, there is one major exception, involving questions of liability law, which particularly unsettle the AI community. Who will be legally responsible when medical AI causes harm? The software developer, the manufacturer, the maintenance provider, the IT provider, the hospital, the clinician? It is true that strict liability—liability without fault—is not unknown in European law, especially for dangerous objects or activities. However, such an approach is neither required nor prohibited by the CFR, so questions of civil liability cannot be answered conclusively from a fundamental rights perspective. The European Commission is aware of this challenge and has announced, in its previously mentioned report on the safety and liability implications of AI, that it will evaluate the introduction of a strict liability system together with compulsory insurance for particularly hazardous AI applications—which will presumably cover most medical AI. Such a system could certainly help to eliminate many of the existing ambiguities regarding liability for medical AI applications.


Conclusion

The European Commission wishes to further promote the development and use of AI in Europe. In its white paper on AI published in 2020, it highlighted the "European Approach" to AI, referring particularly to fundamental rights in the European Union. Using the example of medical AI, we have argued that many of these fundamental rights requirements coincide with the demands of computer scientists, above all human oversight, explainability, and the avoidance of bias. At the same time, it is likely that medical AI will soon not only be used voluntarily, but will also have to be used by health care providers to meet the due standard of care. This makes answers to the remaining uncertainties regarding liability for defective medical AI applications all the more urgent. In this regard, the Commission has announced that it will soon provide clarity by proposing a strict liability approach paired with an obligatory insurance scheme for malfunctions of AI. Despite some open questions, it should nevertheless be stressed that the legal requirements for the use of medical AI are already clearer today than is often assumed in computer science.

    1. Etzioni, A. and Etzioni, O. Designing AI systems that obey our laws and values. Commun. ACM 59, 9 (Sept. 2016), 29–31.

    2. European Commission: 'Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)'; https://bit.ly/3AcDsCa

    3. Hacker, P. et al. Explainable AI under contract and tort law: Legal incentives and technical challenges. Artificial Intelligence and Law 28 (2020), 415–439.

    4. High-Level Expert Group on Artificial Intelligence: 'Ethics Guidelines for Trustworthy AI'; https://bit.ly/3Ad2vov

    5. Holzinger, A. Interactive machine learning for health informatics: When do we need the human-in-the-loop? Brain Informatics 3, 2 (2016), 119–131.

    6. Holzinger, A. et al. Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 9, 4 (2019).

    7. Mueller, H. et al. The Ten Commandments of ethical medical AI. IEEE Computer 54, 7 (2021), 119–123.

    8. O'Sullivan, S. et al. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. The International Journal of Medical Robotics and Computer Assisted Surgery 15, 1 (2019), 1–12.

    9. Price, W.N. Medical malpractice and black box medicine. In Cohen, G. et al., Eds., Big Data, Health Law and Bioethics (2018), 295–306.

    10. Schönberger, D. Artificial intelligence in healthcare: A critical analysis of the legal and ethical implications. International Journal of Law and Information Technology 27, 2 (2019), 171–203.

    a. See https://bit.ly/393uYkM

    b. See https://bit.ly/3k7y11J

    The authors are very grateful for the comments of the reviewers and the editors. Parts of this work have received funding from the Austrian Science Fund (FWF) through Project: P-32554 "A Reference Model of Explainable Artificial Intelligence for the Medical Domain."
