Artificial intelligence systems for medical diagnostics
The Italian Ministry of Health has published on its website a document entitled “Artificial intelligence systems as a diagnostic aid tool”, developed by the Superior Council of Health to explore artificial intelligence (“AI”) systems applied to medical diagnosis, in light of the growing diffusion of AI-based technologies in healthcare.
1. Definition of AI
AI is defined as “software or programs capable of carrying out, with greater or lesser autonomy, operations similar to the human activity of learning and decision-making in order to achieve specific objectives, through the use of technologies based on machine learning processes, deep learning and the use of neural networks programmed to operate on the model of the human brain”.1
2. Use and regulation of AI for medical diagnosis
AI and related technologies are increasingly prevalent in contemporary society and play a growing role, including in healthcare.
Currently, AI-based technologies control large imaging equipment (CT or MRI scanners), standardizing acquisition protocols and reducing examination times.
These technologies have the potential to transform many aspects of patient care.
AI is already used as a diagnostic support, for example, in the following circumstances:
- risk prediction and diagnosis of various diseases, in particular oncological, in their types, characteristics and levels of complexity;
- identification of potential clusters, biomarkers or clinical phenotypes as risk predictors;
- identification of genomic and molecular elements sensitive to existing or innovative treatments to predict adverse events;
- identification of new associations between diseases and their triggers.
Many studies suggest that AI can anticipate diagnoses, or even improve them, and enable faster, more targeted and more efficient patient care.
However, the use of AI systems in an ordinary care setting cannot be done without their scientific validation. Tests and clinical studies are therefore necessary to prove, for example, that a diagnosis made by an AI system is just as reliable as that made by a specialized doctor.
All of this calls for rigorous governance by regulatory agencies to enable due diligence on the reliability of these technologies.
In the United States of America (US) and the European Union (EU), AI systems applied to the medical sector are subject to the rules applicable to medical devices, which require prior authorization (in the US) and certification (in the EU), respectively.
In particular, AI systems for medical purposes in the United States are subject to specific regulation by the competent authority, the Food and Drug Administration. In the EU, Regulation (EU) 2017/745 governs medical devices2 in general and also applies to software for medical purposes (which clearly includes AI systems with diagnostic functions3). In April 2021, the Commission also presented a proposal for a Regulation of the European Parliament and of the Council on a European approach to AI (the “proposed regulation”).
The proposed regulation establishes that systems for human diagnostics and decision support, which are increasingly sophisticated, must be reliable and accurate.
In fact, while many studies seem to provide evidence for the reliability of AI systems used in a diagnostic context, some analyses question the scientific validity of those results and the methodology used to achieve them.
Some argue that there are few direct comparative clinical studies, that is, studies comparing a diagnosis made by an AI or machine learning system with one made by a healthcare professional; and that, in any case, many of the clinical studies carried out are retrospective, that is, based on previously acquired data, rather than prospective studies conducted in a “real-world” setting on the model of randomized controlled clinical trials.
3. Risks and implications of using AI in healthcare
In the document mentioned in the introduction, the Superior Health Council recalls that uncontrolled development of AI is not without potential risks, arising for example from the following aspects:
- the use of AI systems without rigorous scientific validation;
- possible breaches of user privacy;
- the lack of preparation of healthcare personnel to use AI systems correctly;
- discrimination (e.g., on the basis of race and/or gender) introduced by the programming of algorithms;
- the absence of rules on the professional liability of doctors when they interact with algorithms.
The potential risks arising from the use of AI have also been highlighted by the European Commission which:
- in its White Paper on Artificial Intelligence of February 2020, recalls that one of the main problems linked to the use of AI is the uncertainty as to the distribution of responsibilities between the various economic operators involved in its development and use;
- in the proposed regulation, uses the risk factor to distinguish between different AI systems and regulate their marketing and use.
In particular, the proposed regulation provides that:
- the placing on the market of a high-risk AI system is subject to a series of prior checks to ensure the security of the system, through a conformity assessment (Articles 6 to 51);
- high-risk systems are subject to effective and efficient monitoring by natural persons while the system is in use (Article 14);
- the obligations of operators in the distribution chain (that is, providers, importers, distributors, users or other third parties) are proportionate to their role in relation to the high-risk AI system (Articles 16 to 29).4
The level of risk of AI systems is also taken into account by the European Commission when, with reference to AI systems with a specific risk profile, it invites stakeholders to express their views on the possibility of introducing strict liability regimes, possibly combined with compulsory insurance, so as to guarantee compensation for damages regardless of the solvency of the person responsible and to help reduce the costs of damages. It did so on the occasion of the Report on Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics, published in 2020.
Finally, it should be noted that the provisions set out in the proposed regulation are substantially in line with the recommendations expressed in 2017 by the European Parliament in its Resolution containing “recommendations to the Commission concerning civil law rules on robotics” (the “Parliament resolution”), in which it called on the Commission to draw up a proposal for a directive regulating the use of robotics5 in the health sector.
The recommendations of the Parliament’s resolution include, among others, the following:
- there should be no limits on the type or extent of damages that can be compensated;
- liability must be proportionate to the level of instruction given to the robot and to its degree of autonomy; thus, the longer a robot’s training time and the greater its capacity for autonomy, the greater the liability of its trainer (to date, under the applicable rules, liability must always be attributed to a human being, not a robot);6
- a possible solution to the problem of liability arising from the use of robots could be a compulsory insurance scheme.
AI systems have great potential and therefore represent a great opportunity, including in the health sector and in diagnostic applications.
However, these systems need more specific regulation, which could build on the rules currently applicable to traditional medical devices, supplemented by specific rules that take into account the particular risk profiles, and therefore also the liability profiles, of AI systems.