Artificial Intelligence (AI)
CPME Rapporteur: Prof. Dr Christian LOVIS (CH)
CPME Secretariat: Ms Sara RODA
CPME monitors the development and implementation of the Artificial Intelligence Act (AIA) and has issued several related policies.
CPME welcomes the AI risk-based approach, the creation of the European Artificial Intelligence Board, the development of the EU database for high-risk AI and the proposed risk management system. The list of stand-alone high-risk AI systems in Annex III should also include the use of AI for assessing medical treatments and for health research. Certain systems must not be deployed without clear validation, as misuse can lead to discrimination and harm. The use of external AI auditors should be considered an ‘appropriate practice’, and the information given to users should be clear and understandable to non-IT specialists. An independent authority or third party should have access to the algorithm in case of complaints or questions, taking due account of commercial secrecy. Human oversight should be of high quality and appropriately resourced. Medical obligations need to be supervised by medical regulators to guarantee the quality of healthcare. The CE marking should be granted only to AI systems that comply with EU law, including the General Data Protection Regulation. The possibility of exercising data subject rights must be built into the AI system from the very beginning.
If a doctor uses AI according to the training provided and in adherence with the guidelines or instructions for use, he/she should be fully indemnified against adverse outcomes. CPME supports applying a strict liability regime to AI systems (under which the victim does not need to prove fault) and mandatory insurance for high-risk AI systems, which should include “tail” cover.