A 7-Phase Framework for AI/ML Compliance in Life Sciences

    Discover a 7-phase AI/ML compliance framework for life sciences, covering data governance, validation, and post-launch monitoring under GxP.

April 29, 2026 · 3 min read

    One of the most common questions I hear from life sciences companies starting an AI initiative is simple: where does compliance fit in?

    My answer is always the same: everywhere — from the very first scoping conversation to post-launch monitoring.

    At QbD Group, together with our partner delaware, we have developed an AI/ML compliance framework that governs each stage of the AI lifecycle. It brings together two core building blocks:

    • Data governance, controlling the data that goes into your models
    • AI/ML compliance, covering the verification and validation of the models themselves

    Together, they form the backbone of compliant AI in regulated environments.

    Phase 1: Define Intended Use and Data Governance

Everything starts here: this phase shapes everything that follows.

    Before any development begins, organizations must define the performance characteristics they are aiming for: accuracy, precision, robustness, operational performance, and bias thresholds. These should be grounded in concrete benchmarks. If your AI model is meant to replace an existing process, it should at least match or exceed current performance.
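One way to make those benchmarks concrete is to record them as a formal acceptance specification before training starts. The sketch below is illustrative, not part of the QbD/delaware framework itself; the metric names and thresholds are assumptions chosen for the example, including the "match or exceed the current process" rule:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AcceptanceCriteria:
    """Hypothetical performance thresholds agreed before development starts."""
    min_accuracy: float
    min_precision: float
    max_bias_gap: float       # largest allowed metric gap between subgroups
    baseline_accuracy: float  # performance of the process being replaced

    def evaluate(self, accuracy: float, precision: float, bias_gap: float) -> dict:
        """Return a pass/fail verdict per criterion, including the
        benchmark against the existing process."""
        return {
            "accuracy": accuracy >= self.min_accuracy,
            "precision": precision >= self.min_precision,
            "bias": bias_gap <= self.max_bias_gap,
            "beats_baseline": accuracy >= self.baseline_accuracy,
        }

criteria = AcceptanceCriteria(
    min_accuracy=0.95, min_precision=0.90,
    max_bias_gap=0.05, baseline_accuracy=0.92,
)
verdict = criteria.evaluate(accuracy=0.96, precision=0.93, bias_gap=0.03)
```

Freezing the criteria in a reviewable artifact like this means later validation can be traced back to targets that were approved up front, not ones adjusted after the fact.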

    At the same time, a formal data governance plan must be established. This includes data ownership, access controls, retention policies, and full traceability of datasets.

    It should be clear upfront what your data goes through: preprocessing steps, labelling criteria, and splitting rules. This process must be documented and approved before model training starts.
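One lightweight way to capture that documentation is a hashed dataset manifest, so any later change to the approved preprocessing, labelling, or splitting rules is detectable. This is a minimal sketch under assumed field names, not a prescribed format:

```python
import hashlib
import json

def dataset_manifest(raw_bytes, preprocessing_steps, labelling_criteria, split_rules):
    """Record an approved, traceable description of how a dataset is
    prepared, hashed so later tampering or silent edits are detectable."""
    manifest = {
        "raw_data_sha256": hashlib.sha256(raw_bytes).hexdigest(),
        "preprocessing": preprocessing_steps,
        "labelling": labelling_criteria,
        "splits": split_rules,
    }
    # Hash the manifest itself so the approved version can be referenced
    # unambiguously in validation records.
    manifest["manifest_sha256"] = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()
    ).hexdigest()
    return manifest

m = dataset_manifest(
    raw_bytes=b"batch-records-export",           # illustrative payload
    preprocessing_steps=["deduplicate", "normalize units"],
    labelling_criteria="two independent reviewers, adjudicated",
    split_rules={"train": 0.7, "val": 0.15, "test": 0.15},
)
```

The manifest hash can then appear in the approval record, giving auditors a single identifier for the exact data-preparation recipe the model was trained under.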

    We often refer to this as the process-first principle.

    Phases 2 to 4: Data Acquisition, Preparation, and Model Training

    The technical design phases involve rapid iteration on dataset preparation, model selection, and testing.

    In a GxP context, a key principle is to start with static, deterministic models. Outside of GxP, there is more flexibility to explore dynamic or probabilistic approaches.

    Another critical principle is human-in-the-loop design, integrated directly into the technical setup. This is not only relevant for compliance, but also for adoption and trust across the organization.

    In parallel, data quality and bias assessment must be performed. Based on the intended use, organizations need to evaluate the representativeness and completeness of their data and identify potential sources of bias.
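A simple representativeness check compares subgroup shares in the training set against the intended-use population. The sketch below is one possible heuristic, with a hypothetical tolerance and made-up site names:

```python
def representativeness_gaps(train_counts, population_shares, tol=0.05):
    """Flag subgroups whose share of the training set deviates from the
    intended-use population by more than `tol` (an assumed threshold)."""
    total = sum(train_counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = train_counts.get(group, 0) / total
        if abs(observed - expected) > tol:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Example: training data over-represents site A and misses site C entirely
gaps = representativeness_gaps(
    train_counts={"site_A": 700, "site_B": 300},
    population_shares={"site_A": 0.5, "site_B": 0.3, "site_C": 0.2},
)
```

Here the check surfaces both an over-represented subgroup and one that is absent from the training data, the kind of gap where bias can enter unnoticed.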

    This is where pharma's existing strengths become clear: organizations with mature data governance already understand provenance, lineage, and version control.

    Phases 5 and 6: Model Verification and Validation

    These phases answer two distinct questions.

    • Verification: did we build it right? This includes installation testing, functional testing, unit testing, integration testing, and full system testing — all traceable to software requirements.
    • Validation: did we build the right thing? This is your user acceptance testing stage, confirming that the model supports the intended GxP decisions in a real-world context.

    A common approach is a soft launch, where the AI model runs alongside the existing process before going fully live.
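During such a soft launch the model typically runs in shadow mode: the existing process still makes the GxP decision, and every disagreement is logged for human review. A minimal sketch, with illustrative decision functions:

```python
def soft_launch_report(cases, legacy_decide, model_decide):
    """Run the model alongside the existing process: count agreements and
    collect every disagreement for human review before go-live."""
    agreements, disagreements = 0, []
    for case in cases:
        legacy, model = legacy_decide(case), model_decide(case)
        if legacy == model:
            agreements += 1
        else:
            disagreements.append({"case": case, "legacy": legacy, "model": model})
    return {"agreement_rate": agreements / len(cases), "to_review": disagreements}

# Toy example: both deciders classify a numeric reading as pass/fail
report = soft_launch_report(
    cases=[10, 55, 80],
    legacy_decide=lambda x: x > 50,
    model_decide=lambda x: x > 60,
)
```

The agreement rate feeds directly back into the acceptance criteria from Phase 1, and the review queue keeps a human in the loop on every divergent decision.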

    Phase 7: Post-Launch Monitoring

    This is often the most underestimated phase, and the one that clearly distinguishes AI from traditional software.

    AI models carry inherent risks such as model drift and performance degradation, making continuous oversight essential.

    This includes:

    • Performance monitoring against predefined thresholds
    • Incident and near-miss reporting
    • Model drift detection
    • Periodic revalidation
    • Change management processes
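Drift detection, the third item above, is often implemented by comparing the input distribution seen in production against the distribution at validation time. The Population Stability Index is one common metric for this; the alert bands below are conventional rules of thumb, not thresholds mandated by any framework:

```python
import math

def psi(expected, observed):
    """Population Stability Index between two binned distributions.
    Common heuristic bands: < 0.1 stable, 0.1-0.25 investigate,
    > 0.25 significant drift."""
    score = 0.0
    for e, o in zip(expected, observed):
        e, o = max(e, 1e-6), max(o, 1e-6)  # avoid log(0) on empty bins
        score += (o - e) * math.log(o / e)
    return score

baseline = [0.25, 0.25, 0.25, 0.25]  # input distribution at validation
current = [0.50, 0.30, 0.15, 0.05]   # distribution observed in production
alert = psi(baseline, current) > 0.25
```

Wired into performance monitoring, a PSI breach becomes a trigger for the incident-reporting and revalidation steps listed above.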

    If the intended use expands over time, a proper impact assessment is required. The original training data may no longer be representative, and this is where bias can silently emerge.

    A Cycle Proven in MedTech, Ready for Pharma

    This lifecycle approach has already been applied successfully in MedTech for several years.

    The framework combines:

    • Strong data governance
    • Structured AI/ML compliance
    • Integration with traditional validation and post-market surveillance

    Together, these elements ensure that AI systems remain compliant with GxP regulations. And in practice, combining QbD Group's compliance expertise with delaware's digital capabilities ensures that regulatory and technical implementation are addressed as one coherent effort.

    About the Author

Jonathan Boel

    Division Head Software Solutions & Services at QbD Group

    Jonathan co-leads the Quality Assurance and Software Solutions & Services divisions at QbD Group. He is a CSV (Computer System Validation) expert who drives digital transformation and technology-enabled compliance solutions for the life sciences industry, including QbD's cloud-based pre-validated QMS and eIFU services.
