AI in Clinical Trials & Compliance with the EU AI Act

August 21, 2025

AI’s Value Proposition in Drug Development and Clinical Research

Bringing a new drug to market can take more than a decade and cost billions of euros, yet most candidates still fail. Artificial intelligence is changing this by accelerating discovery, optimizing trial design, and delivering real-time patient insights. Beyond data analysis, AI enhances trial efficiency and engagement: identifying eligible participants more precisely, monitoring safety and adherence in real time, and enabling adaptive trial designs that support both innovation and patient trust.

Pharma and biotech companies employing AI in connection with clinical trials, whether in trial design, administration, patient recruitment, use of devices or IVDs, generation of synthetic data, or data analysis, should recognize that regulators are likely to scrutinize these applications, both in relation to the conduct of the trial and its participants and in relation to the resulting outputs submitted for authorization.

The Impact of the EU AI Act on Clinical Trials

Adding to this already complex regulatory landscape applicable to clinical trials, the EU AI Act introduces a comprehensive framework that governs the development and use of artificial intelligence within the European Union, including its application in life sciences and clinical research.

To make matters more complicated, the EU AI Act’s reach is extraterritorial: it applies not only to companies established in the EU, but also to those based outside the Union whenever their AI systems are placed on the EU market or used within the EU. This means that pharmaceutical and biotech companies developing or deploying AI abroad may still fall under its scope if their technologies are intended for EU trials, patients, or regulatory submissions.

Importantly, the EU AI Act does not exempt sponsors from complying with relevant legislation or guidance provided by the EMA. The EU AI Act acknowledges that AI-enabled medical devices may pose risks not fully covered within its scope. For sponsors, this means that even as the EU AI Act introduces new rules, EMA guidance remains essential. From clinical trial design to data integrity and safety monitoring, sponsors must ensure that any use of AI aligns with EMA requirements to secure approvals and maintain patient safety.

Complementary application of the EU AI Act and Sector-specific Laws

The EU AI Act is designed as a horizontal regulation, which means that it sets general rules for the safe and trustworthy use of AI across sectors. However, lawmakers recognized that certain industries, like pharmaceuticals and medical devices, already operate under stringent, sector-specific regulations. To avoid duplicating or conflicting requirements, the Act exempts some AI systems from its direct scope where other EU frameworks already provide comprehensive oversight.

  • Medical Devices & IVDs:

If AI is part of a medical device or diagnostic tool, it falls primarily under the Medical Devices Regulation (MDR) or the In Vitro Diagnostic Regulation (IVDR). These laws already prescribe strict requirements on risk management, clinical evidence, safety, and post-market surveillance, making it unnecessary for the AI Act to layer on additional rules.

  • Clinical Trials:

AI tools used within the framework of the Clinical Trials Regulation (CTR), such as for patient recruitment, trial monitoring, or data integrity, are subject to trial-specific safeguards enforced by the European Medicines Agency (EMA) and national authorities. These existing obligations ensure patient safety, ethical conduct, and scientific reliability.

In short, the exemption exists because the risks posed by AI in these contexts are already addressed through highly specialized legislation. Instead of replacing those rules, the AI Act works alongside them, acknowledging that compliance with MDR, IVDR, CTR, EMA guidance, and data protection laws like GDPR already provides “equivalent guardrails.”

When Does AI in Clinical Trials Fall Outside the EU AI Act?

Under the EU AI Act, Article 2(6) and Recital 25 carve out an important exemption for AI systems developed and used solely for scientific research and development. This reflects the EU’s intent to safeguard innovation and respect the freedom of science, ensuring that the Act does not place unnecessary burdens on early-stage experimentation. EFPIA considers that this exemption applies to AI-based drug development tools used in the research and development of medicines, because these tools are used solely in medicines R&D.

However, the exemption comes with a crucial caveat: only AI systems created exclusively for scientific R&D fall outside the Act’s scope. Any AI system with a dual purpose, such as one being developed for both research and potential commercial use, remains subject to the AI Act’s requirements.

In practice, this means that while academic or exploratory research may benefit from regulatory breathing space, sponsors and companies must carefully assess whether their AI tools are truly exempt or whether their broader applications bring them back under the Act’s obligations.
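For teams triaging an AI inventory, the sole-purpose test can be expressed as a first-pass screening rule, as sketched below. This is an illustrative aid, not legal advice; the class, field names, and function are our own assumptions rather than terms defined in the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    # Hypothetical fields for a first-pass screen; not terms defined in the Act.
    name: str
    used_only_for_scientific_rnd: bool  # Article 2(6): sole scientific R&D purpose
    intended_commercial_use: bool       # any planned market-facing or dual use

def scientific_rnd_exemption_may_apply(system: AISystemProfile) -> bool:
    """First-pass screen for the Article 2(6) research exemption.

    A system is only a candidate for the exemption if it is developed and used
    *exclusively* for scientific R&D; any dual (research plus commercial)
    purpose pulls it back into the Act's scope. Legal review should always
    confirm the outcome.
    """
    return system.used_only_for_scientific_rnd and not system.intended_commercial_use

# Example: a drug-discovery model that may later power a commercial service
tool = AISystemProfile("target-ranking-model",
                       used_only_for_scientific_rnd=True,
                       intended_commercial_use=True)
print(scientific_rnd_exemption_may_apply(tool))  # False: dual purpose, in scope
```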

Context | Relevant Provision | Key Principle
------- | ------------------ | -------------
Pure scientific R&D use in clinical research | Recital 25 + Article 2(6) | Exempted from AI Act regulation
AI in medical devices/IVDs | MDCG-AIB guidance (non-binding) | Requires dual compliance under the AI Act and MDR/IVDR (complementary application)

Does AI in Clinical Trials Count as High-Risk or Limited-Risk?

Under the EU AI Act, AI systems are classified according to the level of risk they pose, ranging from minimal risk through limited and high risk to outright prohibited uses. We have previously explained prohibited AI systems and explored in more detail the obligations attached to the high-risk categories and what they mean in practice. Building on that foundation, we now apply this risk-based classification specifically to the context of clinical trials, examining how the Act’s framework shapes the use of AI in trial design, conduct, and oversight.

The high-risk category is the most stringently regulated tier of AI permitted under the EU AI Act, carrying extensive compliance obligations. In the context of clinical research, several types of AI applications are likely to fall into this bracket.

These include systems that support:

  • patient recruitment,
  • treatment allocation,
  • diagnostic processes,
  • data handling and integrity,
  • synthetic data generation, and
  • decision-making in trial conduct.

Likewise, AI embedded in medical devices is generally classified as high risk under the Act.

These systems directly affect trial conduct and participant well-being, which triggers the stricter compliance obligations.

That said, not all AI in clinical trials will automatically be considered high risk. Some tools serve supportive or administrative functions. For instance, AI that helps with workflow scheduling, document drafting, or non-critical data visualization is more likely to be categorized as limited risk. These systems must meet transparency obligations (e.g., making it clear when outputs are AI-generated) but do not face the extensive conformity assessments and oversight required of high-risk AI.

In practice, sponsors should assume that AI touching trial design, patient outcomes, or regulatory submissions will be high risk, while peripheral or back-office applications may fall into the limited-risk tier.

Where Does the EU AI Act Define AI in Clinical Trials as High-Risk?

The EU AI Act explains in Article 6 and Annex III when an AI system is considered high-risk. In practice, this usually happens in three situations:

 

a. When AI is built into medical devices or diagnostics

If the AI forms part of a product already regulated under the MDR or IVDR, it falls into the high-risk category whenever that product must undergo a third-party conformity assessment (Article 6(1)).

 

b. When AI is used in healthcare functions listed in Annex III

These are activities that could directly affect health, safety, or fundamental rights, such as diagnostic systems or clinical decision support tools (Article 6(2), Annex III).

 

c. When AI directly shapes clinical trial outcomes

For example, systems used in patient recruitment, treatment allocation, diagnostics, monitoring, data management, or trial-related decision-making are very likely to be treated as high-risk (Article 6(2), Annex III).

 

Although Annex III doesn’t list clinical trial activities directly, it does cover broader healthcare-related and impactful AI functions, such as healthcare diagnostics, decision support, and patient triage, that map closely to many trial-related uses. As a result, many clinical trial applications are likely to be treated as high-risk.
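Read together, the three situations above amount to an ordered decision rule. The sketch below is a simplified, non-authoritative rendering of that logic; the flags and enum values compress Article 6 and Annex III considerably and are our own assumptions, so any real classification needs legal review.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

@dataclass
class TrialAIUse:
    # Simplified flags; real classification requires legal analysis of Article 6 / Annex III.
    safety_component_of_mdr_ivdr_device: bool  # a. part of an MDR/IVDR product (Article 6(1))
    requires_third_party_conformity: bool      # Article 6(1) conformity-assessment condition
    annex_iii_healthcare_function: bool        # b. e.g. diagnostics, triage, decision support
    shapes_trial_outcomes: bool                # c. recruitment, allocation, monitoring, data integrity
    interacts_with_people_or_generates_content: bool  # triggers transparency duties only

def classify(use: TrialAIUse) -> RiskTier:
    # a. AI embedded in a regulated device needing third-party conformity assessment
    if use.safety_component_of_mdr_ivdr_device and use.requires_third_party_conformity:
        return RiskTier.HIGH
    # b./c. Annex III healthcare functions or outcome-shaping trial uses
    if use.annex_iii_healthcare_function or use.shapes_trial_outcomes:
        return RiskTier.HIGH
    # Transparency-only obligations for user-facing or generative tools
    if use.interacts_with_people_or_generates_content:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: an AI patient-recruitment tool that is not part of any device
print(classify(TrialAIUse(False, False, False, True, False)))  # RiskTier.HIGH
```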

Limited-Risk AI in Clinical Trials

Not every AI tool used in research will be classed as high-risk. Some fall into the limited-risk category, which comes with lighter obligations focused mainly on transparency:

 

a. Administrative support tools

AI that helps with scheduling, workflow management, or document handling but doesn’t impact patient safety (Article 50).

 

b. Non-critical data analysis

Systems that visualize trial data for efficiency or reporting purposes, without making decisions that affect trial outcomes (Article 50).

 

c. Communication aids

Chatbots or AI tools that answer routine questions for trial staff or participants, provided they are clearly identified as AI (Article 50(1)).

 

In these cases, sponsors must ensure users are informed when interacting with AI or receiving AI-generated outputs, but they are not required to go through the strict conformity assessments that apply to high-risk systems.
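In day-to-day terms, this transparency duty often reduces to labeling AI output at the point of delivery. Below is a minimal sketch assuming a trial-support chatbot; the disclosure wording and helper function are illustrative, not language prescribed by the Act.

```python
# Hypothetical disclosure text; the Act requires clear information, not specific wording.
AI_DISCLOSURE = ("You are interacting with an AI assistant. Responses are "
                 "AI-generated and do not replace advice from the study team.")

def respond(question: str, model_answer: str) -> str:
    """Attach a clear AI disclosure to every chatbot answer (Article 50-style duty)."""
    return f"{AI_DISCLOSURE}\n\nQ: {question}\nA: {model_answer}"

print(respond("When is my next study visit?",
              "Your next visit is scheduled for week 12."))
```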

Practical Considerations for Using AI in Clinical Trials

AI has the potential to transform clinical trials, but only if it is applied responsibly. Sponsors and CROs need to balance innovation with safeguards that ensure accuracy, compliance, and patient trust. Here are some key points to keep in mind when evaluating AI tools in research:

  • Model design – Understand how the system was built and trained.
  • Data quality – Check that datasets are reliable, representative, and well-managed.
  • Validation – Make sure the tool is properly tested and re-evaluated over time.
  • Interpretability – Ensure outputs are clear enough for clinicians and researchers to use confidently.
  • Adaptability – Confirm the system can handle new or unexpected trial data.
  • Bias awareness – Look for risks of biased results that could affect patient outcomes.
  • Compliance – Verify alignment with existing regulatory standards.
  • Practical impact – Consider effects on timelines, costs, and trial efficiency.
  • People first – Provide staff with the right training and support to use AI effectively.
  • Privacy – Protect patient data in line with GDPR and ethical standards.

At its best, AI doesn’t replace human judgment — it amplifies it, helping researchers run safer, faster, and more reliable trials.
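Sponsors can turn the checklist above into a lightweight, auditable record. The sketch below shows one possible structure; the class, field names, and example evidence notes are our own assumptions, not a mandated format.

```python
from dataclasses import dataclass, field

# The ten evaluation points from the list above.
CHECKLIST = [
    "Model design", "Data quality", "Validation", "Interpretability",
    "Adaptability", "Bias awareness", "Compliance", "Practical impact",
    "People first", "Privacy",
]

@dataclass
class AIToolAssessment:
    tool_name: str
    # item -> (passed, evidence note)
    results: dict = field(default_factory=dict)

    def record(self, item: str, passed: bool, note: str) -> None:
        if item not in CHECKLIST:
            raise ValueError(f"Unknown checklist item: {item}")
        self.results[item] = (passed, note)

    def open_items(self) -> list:
        # Anything not yet evidenced as passed remains open.
        return [i for i in CHECKLIST
                if i not in self.results or not self.results[i][0]]

assessment = AIToolAssessment("adverse-event-triage-model")  # hypothetical tool
assessment.record("Data quality", True, "Training set audited for representativeness")
assessment.record("Bias awareness", False, "Subgroup performance review still pending")
print(assessment.open_items())
```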

To better understand how these practical considerations connect with regulatory requirements, it helps to map them directly against the EU AI Act. This side-by-side view shows how common steps in evaluating AI for clinical trials align with the Act’s obligations, giving sponsors a clearer picture of where compliance and best practice meet.

Here is the breakdown of these obligations and the relevant EU AI Act provisions:

Practical Step | Relevant EU AI Act Obligation
-------------- | -----------------------------
Model design and training – Clarify how the AI was built, its algorithms, and training methods | Technical documentation & risk management
Data sources – Review the origin, scope, and representativeness of training and analysis datasets | Data governance & quality requirements
Quality controls – Ensure procedures exist to detect and correct data errors, gaps, or inconsistencies | Accuracy, robustness & cybersecurity obligations
Pre-processing safeguards – Verify measures to protect data accuracy and integrity before AI processing | Record-keeping & transparency
System validation – Check how the AI is tested initially and reassessed regularly | Conformity assessment & post-market monitoring
Interpretability – Confirm that clinicians and staff can understand and explain outputs | Transparency & human oversight
Adaptability – Assess how the system handles new or unexpected data | Risk management obligations
Bias risks – Evaluate whether the model could introduce or amplify bias | Fairness & non-discrimination requirements
Compliance checks – Align AI use with MDR/IVDR, CTR, and sector-specific frameworks | Sectoral law alignment (complementary application)
Operational impact – Consider implications for timelines, efficiency, and costs | Not mandated by the AI Act, but critical for feasibility
User enablement – Confirm training and support resources for staff | Human oversight obligations
Data protection (GDPR) – Safeguard patient rights, ensure lawful processing, and secure health data | Complements the AI Act via GDPR obligations
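For teams moving toward automated oversight, the mapping above is straightforward to encode as data so that missing evidence can be flagged programmatically. A minimal sketch follows; the dictionary mirrors the table, while the gap-check helper and example evidence entries are our own assumptions.

```python
# Practical step -> corresponding EU AI Act obligation (from the table above).
STEP_TO_OBLIGATION = {
    "Model design and training": "Technical documentation & risk management",
    "Data sources": "Data governance & quality requirements",
    "Quality controls": "Accuracy, robustness & cybersecurity",
    "Pre-processing safeguards": "Record-keeping & transparency",
    "System validation": "Conformity assessment & post-market monitoring",
    "Interpretability": "Transparency & human oversight",
    "Adaptability": "Risk management",
    "Bias risks": "Fairness & non-discrimination",
    "Compliance checks": "Sectoral law alignment (MDR/IVDR, CTR)",
    "User enablement": "Human oversight",
    "Data protection (GDPR)": "GDPR obligations (complementary)",
}

def compliance_gaps(evidence: dict) -> list:
    """Return obligations for which no supporting evidence has been recorded."""
    return [f"{step}: {obligation}"
            for step, obligation in STEP_TO_OBLIGATION.items()
            if not evidence.get(step)]

# Example: only two steps documented so far (hypothetical evidence references)
evidence = {"Model design and training": "design-history file v1.2",
            "Data sources": "dataset provenance log"}
for gap in compliance_gaps(evidence):
    print("Missing evidence ->", gap)
```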

How to Automate AI Compliance and Governance in Clinical Trials

Organizations must establish robust AI governance frameworks to safeguard patient trust, ensure ethical use of technology, and stay compliant in an evolving regulatory landscape.

This is where Whisperly can make a real difference. By offering an end-to-end compliance solution, Whisperly helps sponsors manage every stage of the EU AI Act journey, whether the AI system is high-risk or limited-risk, streamlining oversight, documentation, monitoring, and reporting. With Whisperly, life sciences organizations can innovate with confidence, knowing their AI use in clinical trials is both effective and compliant.
