This informal CPD article ‘Embedding Quality by Design in Contemporary Clinical Research’ was provided by Triumph Research Intelligence, an organisation supporting Risk-Based Quality Management (RBQM) to drive better clinical trials, improved patient safety, and compliance with Good Clinical Practice (GCP).
Quality by Design (QbD) continues to represent one of the most evidence‑supported approaches for strengthening the scientific, ethical, and operational foundations of clinical research. Decades of methodological evolution, combined with maturing regulatory expectations, have reinforced that clinical studies should be constructed around the factors most critical to participant protection and data reliability (1)(2).
However, despite this alignment across scientific literature and regulatory guidance, QbD remains unevenly and inconsistently implemented in practice. A persistent gap exists between its theoretical endorsement and its practical adoption, prompting an important question for clinical research professionals: what continues to impede its routine operationalisation?
1. Evolving Regulatory Standards Demonstrate a Clear Shift Toward QbD
The finalisation of ICH E6(R3) in January 2025 represents a significant milestone in the modernisation of Good Clinical Practice. The updated guideline formalises a principled, proportionate, and risk‑aligned approach, explicitly linking study design decisions to Critical‑to‑Quality (CtQ) factors and demanding a more transparent rationale for how risks are identified and managed (1)(3).
These expectations build cumulatively on ICH E8(R1), which defines QbD as foundational to study planning and emphasises the importance of pre‑identifying the few study elements that truly determine participant well‑being and decision‑grade data (2)(3).
Global regulatory agencies are already embedding these principles:
- The FDA continues to position proportionate GCP application and risk‑based monitoring as central components of effective oversight (3)(5).
- The EMA reinforces the role of centralised monitoring, analytical review methods, and risk‑proportionate quality systems (4)(6).
The regulatory trajectory is consistent: quality must be intentionally integrated at the design stage, and documentation must demonstrate a clear relationship between CtQ considerations, risk assessment, and operational controls.
2. Practical Expectations Under E6(R3)
Interpreting E6(R3) in operational terms reveals several non‑negotiable expectations for sponsors and investigators:
- Study processes must be justified against the study’s objectives and risk profile, rather than inherited from previous protocols (2)(3).
- CtQ factors must guide protocol decisions, data collection strategies, and monitoring plans, ensuring that resources target what matters most (2)(3).
- Universal source data verification and review (SDV/SDR) is no longer defensible as a default approach; monitoring intensity must reflect the nature and severity of actual risks (1)(3).
- Centralised monitoring and remote analytical oversight are expected tools, not optional enhancements (4)(6).
- Data governance must be explicit, facilitating traceability across risk identification, signal detection, action, and outcome (1)(3).
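The traceability expectation in the last point can be illustrated with a minimal sketch. The structure below is an illustrative assumption, not terminology from the guideline: it simply shows what a single, inspectable record linking a CtQ factor to its risk, indicator, mitigation, and outcome might look like, in contrast to fragments scattered across systems.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    """One traceable line from a CtQ factor through to its outcome."""
    ctq_factor: str        # the critical-to-quality element at stake
    risk: str              # what could go wrong
    indicator: str         # the KRI or signal used to detect it
    mitigation: str        # the planned control or action
    outcome: str = "open"  # recorded when the risk is closed out

@dataclass
class QualityTraceLog:
    """A single coherent log an inspector could follow end to end."""
    records: list[RiskRecord] = field(default_factory=list)

    def add(self, record: RiskRecord) -> None:
        self.records.append(record)

    def open_risks(self) -> list[RiskRecord]:
        # Risks still awaiting a documented resolution.
        return [r for r in self.records if r.outcome == "open"]
```

The point of the sketch is the shape of the evidence trail, not the tooling: whatever system is used, each record should let a reviewer move from risk identification to signal, action, and outcome without changing systems.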
3. Persistent Barriers to QbD Adoption in Clinical Practice
A critical step is understanding why evidence‑based practices fail to translate into routine behaviours. Four systemic barriers continue to hinder QbD adoption:
3.1 Legacy SOP Infrastructure
Many organisations still rely on SOPs that were developed when exhaustive verification was regarded as synonymous with quality. Although regulators have clarified that quality derives from risk‑sensitive controls rather than volume of activity, these outdated structures remain difficult to dismantle (3).
3.2 Fragmented Operational Tooling and Documentation
Risk assessments, Key Risk Indicators (KRIs), and mitigation actions frequently exist across disparate systems, limiting the traceability demanded by E6(R3). A cohesive “lifecycle view” of quality, linking CtQ factors to risk, signals, and decisions, remains underdeveloped across much of the industry (1)(3).
3.3 Misaligned Organisational Incentives
Performance indicators often reward operational volume (visits completed, queries processed, or data points reviewed) rather than the effective management of risk to CtQ elements. Regulators increasingly expect metrics that demonstrate the positive impact of oversight on participant protection and data integrity (3).
3.4 Entrenched Belief in 100% SDV as a Proxy for Quality
Despite substantial regulatory clarification, some stakeholders remain cautious about reducing SDV. Yet both the FDA’s earlier risk‑based monitoring guidance and current ICH updates indicate that centralised approaches often identify errors and systemic patterns more effectively than exhaustive verification (1)(3)(6).
4. Re‑orienting Clinical Operations Toward What Matters Most
4.1 Prioritising CtQ from the Outset
ICH E8(R1) emphasises the need to distinguish essential from non‑essential procedures. Beginning every protocol with a structured CtQ exercise ensures that design decisions remain anchored to what genuinely impacts safety and scientific validity (2)(3).
4.2 Designing Out Avoidable Complexity
Protocol complexity is strongly associated with avoidable amendments, increased cycle times, and participant burden. Empirical findings from Tufts CSDD highlight that oncology protocols, in particular, experience high volumes of amendments, many of which reflect design shortcomings that could have been mitigated through earlier critical review (8).
4.3 Moving from Threshold‑based Thinking to Continuous Control
Rather than treating Quality Tolerance Limits (QTLs) as thresholds that trigger late corrective actions, E6(R3) encourages ongoing learning and adaptive adjustment based on emerging evidence (1)(3).
4.4 Embedding Centralised Oversight
Centralised monitoring supports earlier detection of anomalies, trends, and operational risks. EMA and FDA documents both recognise that analytical oversight is more efficient and often more sensitive than traditional visit‑based monitoring (4)(6).
5. A Minimal, Practicable QbD Operating Model
Organisations do not require complex infrastructures to begin aligning with E6(R3). A lean, academically defensible model includes:
- A CtQ‑focused design workshop to establish essential factors, anticipated risks, and corresponding controls (2)(3).
- A simplified protocol framework that eliminates non‑critical elements and clarifies decision‑relevant endpoints (8).
- Definition of acceptable ranges or QTLs as part of a structured risk control strategy (1)(3).
- A concise central monitoring plan, identifying which KRIs, roles, and review cadences are proportionate to the trial’s complexity (6).
- Clear data governance pathways, ensuring that inspectors can follow a coherent line from risk identification to mitigation (1)(3).
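The central monitoring element above can be illustrated with a deliberately simple cross‑site comparison. A plain z‑score on site‑level KRI values is an assumption made here for brevity; dedicated RBQM platforms use more robust methods (funnel plots, Bayesian shrinkage) that account for small‑site variance.

```python
from statistics import mean, stdev

def flag_outlier_sites(site_rates: dict[str, float],
                       z_threshold: float = 2.0) -> list[str]:
    """Flag sites whose KRI value sits unusually far from the study mean.

    Illustrative only: a simple z-score across sites, with a minimum
    of three sites required for the comparison to be meaningful.
    """
    values = list(site_rates.values())
    if len(values) < 3:
        return []  # too few sites for a meaningful comparison
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no between-site variation to assess
    return [site for site, v in site_rates.items()
            if abs(v - mu) / sigma > z_threshold]
```

Even this crude sketch conveys the design choice: the unit of review is the pattern across sites, detected centrally and continuously, rather than the individual data point verified on site.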
6. Immediate Actions for Professional Development and Practice
To align with CPD expectations, research professionals can implement the following:
- Conduct a short CtQ assessment for any protocol under development and reflect on how these factors influence design choices (2)(3).
- Revise one outdated SOP step and document the justification for replacing it with a risk‑proportionate control (1)(3).
- Pilot a focused central monitoring approach, comparing time‑to‑signal and operational efficiency against existing methods (6).
- Consolidate the evidence trail for a single study, ensuring that risks, indicators, actions, and outcomes are captured cohesively (1)(3).
Conclusion
QbD is no longer an aspirational concept but an expected standard anchored firmly in ICH E6(R3) and E8(R1). Its systematic application remains one of the most effective ways to reduce unnecessary amendments, improve operational predictability, and enhance participant protection (1)(2)(8).
As regulatory frameworks increasingly emphasise proportionality, intentional design, and analytical oversight, clinical research professionals must adapt their practices accordingly. Embedding QbD principles into routine operational behaviour represents not only regulatory compliance but also a critical component of contemporary professional competence (1)(3).
We hope this article was helpful. For more information from Triumph Research Intelligence, please visit their CPD Member Directory page. Alternatively, you can go to the CPD Industry Hubs for more articles, courses and events relevant to your Continuing Professional Development requirements.
REFERENCES
(1) ICH E6(R3) Guideline for Good Clinical Practice
https://database.ich.org/sites/default/files/ICH_E6(R3)_Step4_FinalGuideline_2025_0106.pdf
(2) ICH E8(R1) General Considerations for Clinical Studies
https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf
(3) ICH E6(R3) overview presentation
https://admin.ich.org/sites/default/files/inline-files/ICH_E6(R3)_Step%204_Presentation_2025_0123.pdf
(4) EMA – ICH E6 Good Clinical Practice scientific guideline landing page
https://www.ema.europa.eu/en/ich-e6-good-clinical-practice-scientific-guideline
(5) FDA – ICH E6 Good Clinical Practice and related GCP content
https://www.fda.gov/regulatory-information/search-fda-guidance-documents/e6r3-good-clinical-practice-gcp
(6) EMA reflection / guidance supporting central/remote monitoring
https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-risk-based-quality-management-clinical-trials_en.pdf
(7) ACRO RBQM Landscape Summary Report 2025
https://www.acrohealth.org/wp-content/uploads/2025/06/ACRO_2025-RBQM-Report_Final-062425.pdf
(8) Tufts CSDD “New Benchmarks on Protocol Amendment Experience…”
https://pubmed.ncbi.nlm.nih.gov/38530628/