QbD Group
    A 7-Phase Framework for AI/ML Compliance in Life Sciences

    Discover a 7-phase AI/ML compliance framework for life sciences, covering data governance, validation, and post-launch monitoring under GxP.

April 29, 2026 · 3 min read

    One of the most common questions I hear from life sciences companies starting an AI initiative is simple: where does compliance fit in?

    My answer is always the same: everywhere — from the very first scoping conversation to post-launch monitoring.

    At QbD Group, together with our partner delaware, we have developed an AI/ML compliance framework that governs each stage of the AI lifecycle. It brings together two core building blocks:

    • Data governance, controlling the data that goes into your models
    • AI/ML compliance, covering the verification and validation of the models themselves

    Together, they form the backbone of compliant AI in regulated environments.

    Phase 1: Define Intended Use and Data Governance

Everything starts here: this first phase shapes everything that follows.

    Before any development begins, organizations must define the performance characteristics they are aiming for: accuracy, precision, robustness, operational performance, and bias thresholds. These should be grounded in concrete benchmarks. If your AI model is meant to replace an existing process, it should at least match or exceed current performance.

    At the same time, a formal data governance plan must be established. This includes data ownership, access controls, retention policies, and full traceability of datasets.

    It should be clear upfront what your data goes through: preprocessing steps, labelling criteria, and splitting rules. This process must be documented and approved before model training starts.
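The process-first principle can be made concrete in code. A minimal sketch (the function names, manifest fields, and seed value are illustrative assumptions, not part of the framework itself): the split rule is deterministic, so the same approved inputs always produce the same partition, and the preparation plan is fingerprinted so any later change to preprocessing or splitting rules is detectable during an audit.

```python
import hashlib
import json
import random

def split_dataset(record_ids, train_frac=0.8, seed=42):
    """Deterministic train/test split: the same seed and inputs always
    yield the same partition, so the split rule is auditable."""
    ids = sorted(record_ids)        # canonical order before shuffling
    rng = random.Random(seed)
    rng.shuffle(ids)
    cut = int(len(ids) * train_frac)
    return ids[:cut], ids[cut:]

def manifest_fingerprint(manifest):
    """Hash the approved data-preparation plan; if preprocessing or
    splitting rules change later, the fingerprint changes too."""
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical preparation plan, documented before training starts
manifest = {
    "preprocessing": ["deduplicate", "normalize_units"],
    "labelling_criteria": "two independent reviewers, adjudicated",
    "split": {"train_frac": 0.8, "seed": 42},
}
train, test = split_dataset([f"rec-{i}" for i in range(100)],
                            **manifest["split"])
```

Storing the fingerprint alongside the approval record is one simple way to tie "documented and approved" to the exact rules that were actually executed.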

    We often refer to this as the process-first principle.

    Phases 2 to 4: Data Acquisition, Preparation, and Model Training

    The technical design phases involve rapid iteration on dataset preparation, model selection, and testing.

    In a GxP context, a key principle is to start with static, deterministic models. Outside of GxP, there is more flexibility to explore dynamic or probabilistic approaches.

    Another critical principle is human-in-the-loop design, integrated directly into the technical setup. This is not only relevant for compliance, but also for adoption and trust across the organization.

    In parallel, data quality and bias assessment must be performed. Based on the intended use, organizations need to evaluate the representativeness and completeness of their data and identify potential sources of bias.
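A representativeness check of this kind can be sketched in a few lines. Assuming you know the subgroup shares expected in the target population (the field name, reference shares, and tolerance below are illustrative), the dataset's observed shares can be compared against them and gaps flagged for review:

```python
from collections import Counter

def representativeness_report(records, field, reference_shares, tolerance=0.10):
    """Compare each subgroup's share in the dataset against the share
    expected in the target population; flag gaps beyond tolerance."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "flagged": abs(observed - expected) > tolerance,
        }
    return report

# Hypothetical example: EU sites over-represented, APAC under-represented
records = [{"site": "EU"}] * 70 + [{"site": "US"}] * 25 + [{"site": "APAC"}] * 5
report = representativeness_report(records, "site",
                                   {"EU": 0.5, "US": 0.3, "APAC": 0.2})
```

Real bias assessments go further (intersectional subgroups, label quality, outcome base rates), but even this simple comparison makes "representativeness" a documented, testable claim rather than an assertion.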

    This is where pharma's existing strengths become clear: organizations with mature data governance already understand provenance, lineage, and version control.

    Phases 5 and 6: Model Verification and Validation

    These phases answer two distinct questions.

    • Verification: did we build it right? This includes installation testing, functional testing, unit testing, integration testing, and full system testing — all traceable to software requirements.
    • Validation: did we build the right thing? This is your user acceptance testing stage, confirming that the model supports the intended GxP decisions in a real-world context.
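Traceability to requirements can be built directly into the test code. A minimal sketch (the requirement ID, threshold, and data are made up for illustration): each test names the software requirement it verifies, so the trace matrix falls out of the test suite itself.

```python
# Hypothetical requirement register; in practice this would live in the
# requirements management system, not in the test file.
REQUIREMENTS = {
    "SRS-012": "Classifier accuracy on the locked test set >= 0.90",
}

def accuracy(predictions, labels):
    """Fraction of predictions that match the reference labels."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

def test_srs_012_accuracy_threshold():
    """Verifies SRS-012: accuracy meets the predefined acceptance threshold."""
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # placeholder outputs
    labels = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]   # placeholder references
    assert accuracy(preds, labels) >= 0.90, REQUIREMENTS["SRS-012"]
```

Running such tests against a locked, versioned test set is what makes the verification evidence reproducible at inspection time.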

    A common approach is a soft launch, where the AI model runs alongside the existing process before going fully live.
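The soft-launch idea can be sketched as a shadow run (the function and field names are illustrative): the legacy decision remains the one acted upon, the model's output is only recorded, and disagreements are logged for human review before go-live.

```python
def shadow_run(cases, legacy_decide, model_predict):
    """Soft launch: the model runs alongside the existing process.
    The legacy decision is still the one acted upon; the model's
    output is recorded, and disagreements are queued for review."""
    agreements, disagreements = 0, []
    for case in cases:
        official = legacy_decide(case)    # this decision is actually used
        candidate = model_predict(case)   # recorded only, never acted upon
        if official == candidate:
            agreements += 1
        else:
            disagreements.append({"case": case,
                                  "legacy": official,
                                  "model": candidate})
    return agreements / len(cases), disagreements
```

The resulting agreement rate and disagreement log give concrete acceptance evidence for the validation decision, instead of a yes/no judgment made without production-like data.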

    Phase 7: Post-Launch Monitoring

    This is often the most underestimated phase, and the one that clearly distinguishes AI from traditional software.

    AI models carry inherent risks such as model drift and performance degradation, making continuous oversight essential.

    This includes:

    • Performance monitoring against predefined thresholds
    • Incident and near-miss reporting
    • Model drift detection
    • Periodic revalidation
    • Change management processes
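One widely used drift signal is the population stability index (PSI), which compares the binned distribution of inputs or scores at training time against what the model sees in production. A minimal sketch (the 0.2 alert threshold is a common rule of thumb, not a regulatory value; set your own in the monitoring plan):

```python
import math

def population_stability_index(expected_shares, observed_shares):
    """PSI over pre-binned distributions: sum of (o - e) * ln(o / e)
    across bins. Larger values mean the production distribution has
    moved further from the training-time baseline."""
    psi = 0.0
    for e, o in zip(expected_shares, observed_shares):
        e = max(e, 1e-6)   # guard against empty bins
        o = max(o, 1e-6)
        psi += (o - e) * math.log(o / e)
    return psi
```

Evaluating this on a schedule against the predefined thresholds, and filing an incident when it trips, turns "model drift detection" from a principle into a routine, auditable check.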

    If the intended use expands over time, a proper impact assessment is required. The original training data may no longer be representative, and this is where bias can silently emerge.

    A Cycle Proven in MedTech, Ready for Pharma

    This lifecycle approach has already been applied successfully in MedTech for several years.

    The framework combines:

    • Strong data governance
    • Structured AI/ML compliance
    • Integration with traditional validation and post-market surveillance

    Together, these elements ensure that AI systems remain compliant with GxP regulations. And in practice, combining QbD Group's compliance expertise with delaware's digital capabilities ensures that regulatory and technical implementation are addressed as one coherent effort.


    Watch the Webinar: AI in Life Sciences

    Discover how to deploy AI in a trustworthy, validated, and inspection-ready way under GxP — covering data governance, explainability, and lifecycle management.


About the author

Jonathan Boel

    Division Head Software Solutions & Services at QbD Group

    Jonathan co-leads the Quality Assurance and Software Solutions & Services divisions at QbD Group. He is a CSV (Computer System Validation) expert who drives digital transformation and technology-enabled compliance solutions for the life sciences industry, including QbD's cloud-based pre-validated QMS and eIFU services.

