Pharma is still defining its place in the AI landscape. MedTech, by contrast, has already adopted AI at a much faster pace, and the results are tangible. Today, more than 1,000 United States Food and Drug Administration (FDA)-authorized AI/ML-based medical devices support diagnosis, imaging, and clinical decision-making, often in highly critical contexts.
This progress did not happen by bypassing regulation. It happened by embedding intended use, risk classification, validation, and governance into AI development from the beginning.
That distinction matters for pharma organizations still hesitating at the starting line. If AI can safely support patient diagnosis through medical devices and in vitro diagnostics (IVDs), it can also support pharmaceutical operations and decision-making, provided it is designed, validated, and governed with the same rigor.
Risk Is Contextual, Not Inherent to the AI
One of the most important lessons from MedTech is that risk is not a property of the technology itself. Risk is determined by the application.
The same AI model can carry vastly different risk profiles depending on its intended purpose. An algorithm sorting administrative data has a fundamentally different impact than one influencing clinical decisions or batch release activities.
Regulators therefore do not focus solely on the presence of AI. They assess:
- The intended use
- The potential consequences of errors
- The level of human oversight
- The controls built around the system
Within medical devices, the regulatory pathway always starts with defining the intended clinical use. That intended use then determines:
- Risk classification
- Validation depth
- Monitoring expectations
- Human-in-the-loop requirements
The same logic translates directly to pharma.
Whether AI is deployed in pre-clinical research, clinical trial support, or Good Manufacturing Practice (GMP) manufacturing environments, intended use remains the foundation for determining how rigorously the system must be governed.
What changes across the pharma value chain is the level of impact.
In pre-clinical research, the focus is primarily on scientific validity rather than direct patient safety. This partly explains why approximately 70% of current AI investment in life sciences is concentrated in research and development (R&D), where the economic potential is substantial and the regulatory burden is comparatively low.
Within clinical trials, AI may influence patient selection, protocol optimization, or trial design, introducing concerns around bias, patient protection, and regulatory oversight.
In manufacturing environments, where AI may affect product quality or patient safety, GMP expectations require deterministic behavior, strict validation, traceability, and continuous monitoring.
Understanding this progression allows organizations to start with lower-risk, high-value applications before gradually expanding AI maturity as trust and operational experience grow.
Where This Already Works: Real-World Use Cases
AI is already operational in regulated pharma environments today.
Our partner delaware, a global consultancy and SAP Platinum Partner, has implemented AI-enabled batch review solutions using SAP's Batch Release Hub. The platform consolidates batch release information while AI functionality helps identify missing or incomplete documentation throughout the batch lifecycle.
Historically, reviewers manually screened certificates of analysis and searched for incomplete records. Today, the system proactively highlights gaps before final review begins.
As Evelien Cools, Industry Lead Life Sciences at delaware, explained during our webinar, the final release decision remains fully human-driven. The AI operates as controlled decision support built on static, deterministic models.
Similar approaches already exist in supply chain forecasting, where AI supports demand planning, stock optimization, and waste reduction. Again, the AI provides recommendations and insights, while humans retain oversight and decision-making responsibility.
The pattern remains consistent:
- Intended use defines risk
- Risk defines governance
- Governance defines validation and monitoring expectations
Importantly, these risks are not entirely new. Many closely resemble challenges the industry has already managed for decades through computerized systems validation, data governance, and process controls.
The difference with AI is scale. Without proper controls, issues can propagate significantly faster.
The International Society for Pharmaceutical Engineering (ISPE) Maturity Model: A Practical Compass
One framework we frequently use is the ISPE AI maturity model, which helps organizations position AI initiatives along two axes: autonomy and control design.
The model evaluates:
- How independently the AI operates
- How much human oversight remains embedded in the process
On the autonomy axis:
- Stage 0 represents fixed algorithms
- Stage 1 represents locked models with manual retraining
- Stage 3 introduces automatic retraining with manual verification
- Stage 5 represents fully self-deterministic systems
On the control design axis:
- Stage 1 positions AI in parallel with existing GxP processes
- Stage 2 requires human approval of outputs
- Stage 5 enables autonomous correction and process steering
To illustrate this practically:
A manually trained visual inspection system with locked algorithms and operator oversight roughly corresponds to ISPE Level III. The software itself must be validated, but organizations also need controls around:
- Training data verification
- Data splitting
- Model quality assurance
- Retraining governance
By contrast, a digital shadow system observing production data in parallel with existing GMP operations may only correspond to Level I, even when retraining occurs automatically. The difference lies in operational impact: the system observes rather than controls.
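To make the model concrete, here is a minimal sketch that encodes the two axes as simple enumerations and applies them to the two examples above. The names and the governance heuristic are ours, for illustration only; they are not official ISPE terminology or tooling.

```python
from dataclasses import dataclass
from enum import IntEnum

class Autonomy(IntEnum):
    """Illustrative autonomy stages, loosely following the ISPE axis."""
    FIXED_ALGORITHM = 0        # Stage 0: fixed algorithms
    LOCKED_MODEL = 1           # Stage 1: locked model, manual retraining
    AUTO_RETRAIN_VERIFIED = 3  # Stage 3: automatic retraining, manual verification
    SELF_DETERMINISTIC = 5     # Stage 5: fully self-deterministic system

class ControlDesign(IntEnum):
    """Illustrative control-design stages."""
    PARALLEL_TO_GXP = 1     # Stage 1: runs in parallel with existing GxP processes
    HUMAN_APPROVAL = 2      # Stage 2: human approval of every output
    AUTONOMOUS_CONTROL = 5  # Stage 5: autonomous correction and process steering

@dataclass
class AISystem:
    name: str
    autonomy: Autonomy
    control: ControlDesign

def governance_burden(system: AISystem) -> str:
    """Rough heuristic: operational impact, not the technology, drives rigor."""
    if system.control == ControlDesign.PARALLEL_TO_GXP:
        return "observational: lighter controls, even with automatic retraining"
    if system.control == ControlDesign.HUMAN_APPROVAL:
        return "decision support: validate the model plus data and retraining governance"
    return "process control: full GMP validation, monitoring, and change control"

# The two examples from the text:
inspection = AISystem("visual inspection", Autonomy.LOCKED_MODEL, ControlDesign.HUMAN_APPROVAL)
shadow = AISystem("digital shadow", Autonomy.AUTO_RETRAIN_VERIFIED, ControlDesign.PARALLEL_TO_GXP)
print(governance_burden(inspection))
print(governance_burden(shadow))
```

Encoding the axes this way makes the key point visible in code: it is the control-design stage, not the autonomy stage, that determines the governance answer.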
The value of the ISPE model is that it provides a structured way to define intended use, human oversight, autonomy levels, and control expectations. That clarity is what enables productive regulatory discussions.
From MedTech to Pharma: A Proven AI Compliance Pathway
At QbD Group, we have applied structured AI/ML compliance frameworks within MedTech for several years, supporting AI-enabled medical devices throughout their lifecycle.
The framework follows a clear progression:
- Define intended use
- Establish data governance
- Train the model
- Verify the system
- Validate the intended application
- Deploy and continuously monitor performance
The first stage, defining intended use, is often underestimated but fundamentally shapes everything that follows.
Organizations must define:
- Accuracy expectations
- Precision targets
- Robustness requirements
- Bias thresholds
…before development begins.
If an AI system is intended to replace or support an existing process, its expected performance should at minimum match current operational standards.
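A lightweight way to make these targets binding is to capture them as an approved, immutable artifact before any training run. The sketch below is illustrative only; all thresholds are hypothetical placeholders, not recommended values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AcceptanceCriteria:
    """Performance targets, approved before development begins.

    All numbers used below are hypothetical placeholders.
    """
    min_accuracy: float         # overall accuracy floor
    min_precision: float        # precision floor on the critical class
    max_bias_gap: float         # max performance gap across relevant subgroups
    max_robustness_drop: float  # max degradation under perturbed inputs

def acceptable(accuracy: float, precision: float, bias_gap: float,
               robustness_drop: float, c: AcceptanceCriteria) -> bool:
    """The system must at least match the current operational standard."""
    return (accuracy >= c.min_accuracy
            and precision >= c.min_precision
            and bias_gap <= c.max_bias_gap
            and robustness_drop <= c.max_robustness_drop)

# Baseline derived from the existing (e.g. manual) process being replaced:
baseline = AcceptanceCriteria(0.97, 0.95, 0.02, 0.03)
print(acceptable(0.98, 0.96, 0.01, 0.02, baseline))  # True: meets the bar
```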
The next foundational layer is data governance.
A process-first governance model should define:
- Data ownership
- Access controls
- Retention policies
- Traceability
- Preprocessing steps
- Labeling criteria
- Data splitting methodologies
These elements should be documented and approved before training starts.
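Data splitting in particular benefits from being deterministic and auditable. As an illustration, the sketch below shows one way to make a train/validation/test split reproducible and fingerprinted so it can be attached to the approved governance record; the seed and fractions are placeholder values.

```python
import hashlib
import json
import random

def split_dataset(record_ids: list[str], seed: int = 20240101,
                  fractions: tuple = (0.7, 0.15, 0.15)) -> dict[str, list[str]]:
    """Deterministic train/validation/test split that can be re-approved later.

    The seed and fractions become part of the approved data governance
    record, so the exact split can be reproduced and audited at any time.
    """
    ids = sorted(record_ids)          # fixed ordering before shuffling
    random.Random(seed).shuffle(ids)  # seeded, reproducible shuffle
    n_train = int(fractions[0] * len(ids))
    n_val = int(fractions[1] * len(ids))
    return {
        "train": ids[:n_train],
        "validation": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }

def split_fingerprint(split: dict[str, list[str]]) -> str:
    """Hash of the split, recorded in governance documentation for traceability."""
    return hashlib.sha256(json.dumps(split, sort_keys=True).encode()).hexdigest()
```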
For technical implementation, current regulatory expectations increasingly emphasize static, deterministic models in GxP environments.
As Evelien Cools explained during our webinar, Annex 22 guidance currently points toward deterministic AI approaches for regulated GMP contexts. Outside GxP environments, organizations have greater flexibility to explore dynamic or probabilistic systems.
The human-in-the-loop principle also remains essential, not only for compliance purposes, but because it supports trust, adoption, and operational change management.
Once trained, the model enters verification and validation stages.
Verification answers: "Did we build the system correctly?" This includes installation testing, functional testing, and system-level verification with full traceability to software requirements.
Validation answers: "Did we build the right system?" This often involves soft-launch approaches where AI operates alongside existing workflows before full deployment.
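As an illustration of requirement traceability in verification, the sketch below uses pytest with a custom marker to tie each test case back to a software requirement ID. The requirement IDs and checks are hypothetical, and the marker would need to be registered in the project's pytest configuration to avoid warnings.

```python
# test_verification.py - illustrative verification tests traced to requirements.
# The requirement IDs (SRS-031, SRS-045) are hypothetical placeholders.
import pytest

def requirement(req_id: str):
    """Custom marker tying each test case back to a software requirement."""
    return pytest.mark.requirement(req_id)

@requirement("SRS-031")
def test_model_file_installed(tmp_path):
    """Installation check: the locked model artifact is present and readable."""
    model_file = tmp_path / "model_v1.onnx"
    model_file.write_bytes(b"...")  # stand-in for the deployed artifact
    assert model_file.exists() and model_file.stat().st_size > 0

@requirement("SRS-045")
def test_rejects_out_of_range_input():
    """Functional check: inputs outside the validated range must be refused."""
    def predict(x: float) -> float:
        if not 0.0 <= x <= 1.0:
            raise ValueError("input outside validated range")
        return x * 2.0
    with pytest.raises(ValueError):
        predict(1.5)
```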
Post-Launch Monitoring Is Where Many Organizations Fall Short
AI governance does not end at deployment.
Unlike static software systems, AI models introduce ongoing risks related to:
- Model drift
- Performance degradation
- Changing datasets
- Expanding intended use
Continuous monitoring therefore becomes essential. Organizations need:
- Drift detection mechanisms
- Periodic revalidation
- Change management controls
- Monitoring responsibilities
- Clear retraining criteria
Even relatively small expansions in intended use may require reassessment of whether the original training data remains representative and unbiased.
Without these controls, bias and performance degradation can gradually emerge unnoticed.
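Drift detection does not need to be exotic to be effective. As one simple, illustrative mechanism (not the only one, and the alert threshold here is a placeholder), a two-sample Kolmogorov-Smirnov test can compare the input distribution seen at validation time against a recent production window, feature by feature.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alarm(reference: np.ndarray, live: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test on one input feature.

    `reference` is the feature distribution captured at validation time;
    `live` is a recent production window. A significant shift flags the
    feature for review - it triggers investigation, not silent retraining.
    """
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Illustrative use with synthetic data:
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # distribution seen during validation
shifted = rng.normal(0.4, 1.0, 1000)   # production window after drift
print(drift_alarm(baseline, baseline[:1000]))  # False: no drift detected
print(drift_alarm(baseline, shifted))          # True: flag for review
```

In a governed setting, such an alarm would feed the change management process, prompting investigation and, where justified, revalidation.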
Global Harmonization Is Closer Than Many Think
A common concern is the fragmented nature of global AI regulation. However, experience from MedTech suggests the underlying principles are converging faster than expected.
Across the United States, European Union (EU), and Japan, the core expectations around data governance, validation, monitoring, traceability, and risk management are already broadly aligned. The differences primarily exist in implementation details.
During our webinar, Jonathan Boel drew a useful comparison with the evolution of data integrity guidance around 2016 and 2017. At the time, different agencies developed guidance independently, yet the core principles converged remarkably closely.
AI governance is likely to follow a similar trajectory.
The Bottom Line
Regulation has not slowed AI adoption in MedTech. It has enabled trust, repeatability, and large-scale deployment.
The regulatory pathways for AI-enabled medical devices already demonstrate that compliant AI implementation is achievable in highly regulated environments.
Pharma does not need to reinvent this model. The challenge is translating proven MedTech governance and validation principles into existing pharmaceutical processes and operational frameworks.
With partners like delaware bringing expertise in digital transformation and enterprise implementation, and QbD Group providing compliance and validation capabilities, the path from experimentation to compliant production is becoming increasingly clear.
Want to Bring MedTech's AI Compliance Discipline into Your Pharma Operations?
We explored this topic in depth during our webinar AI in Life Sciences: GxP Compliance, co-hosted with delaware. Watch the on-demand session or contact our experts to discuss how QbD Group can help accelerate compliant AI adoption within your organization.
About the Author
Project Manager at QbD Group
Pieter is a Project Manager at QbD Group, coordinating multi-disciplinary teams to deliver quality and regulatory consulting projects.
About the Author
Division Head Software Solutions & Services at QbD Group
Jonathan co-leads the Quality Assurance and Software Solutions & Services divisions at QbD Group. He is a CSV (Computer System Validation) expert who drives digital transformation and technology-enabled compliance solutions for the life sciences industry, including QbD's cloud-based pre-validated QMS and eIFU services.
Watch On-Demand: AI in Life Sciences
Explore AI maturity models, governance, validation frameworks, and human-in-the-loop principles with Pieter Smits and Evelien Cools (delaware).
Watch the on-demand webinar