Evidence-based medicine (EBM) has long been the backbone of health technology assessment (HTA). Randomised trials feed meta-analyses, guidelines inform HTA, and HTA shapes pricing, reimbursement and adoption.
But the last decade has exposed deep flaws in this linear model. A growing scientific and policy literature argues that the evidence base itself is structurally biased — shaped by financial conflicts, selective publication, proprietary data, and regulators who rely almost entirely on sponsor-provided evidence.
For manufacturers across pharma, biotech, MedTech, IVDs and digital health, this critique is not academic. HTA bodies are tightening their evidentiary requirements, pushing harder on uncertainty, rejecting poorly justified assumptions, and demanding real-world evidence (RWE) and transparency far beyond what the traditional EBM workflow was designed to deliver.
This piece explains how biases in the evidence pipeline weaken the foundations of reimbursement, and how manufacturers can redesign their evidence strategies to regain trust in an era of heightened HTA scepticism.
1. How the Illusion of EBM Is Created
1.1 Industry Control Over Trial Pipelines
A large share of “pivotal” clinical trials, particularly in oncology, rare diseases and advanced biologics, are fully designed, funded, analysed and written by industry. Independent oversight is often limited.
Critical literature identifies several systemic distortions:
Biased comparator selection
– Trials use weak, outdated or unrepresentative comparators, inflating relative benefit.
Surrogate-driven outcomes
– Short-term biomarkers stand in for survival, quality of life or long-term outcomes.
Selective publication & reporting
– Positive studies are overrepresented; negative studies rarely see daylight.
Sponsor-led analyses
– Company statisticians control both analysis plans and interpretation.
Ben Goldacre’s Bad Pharma and Jureidini & McHenry’s The Illusion of Evidence-Based Medicine document how entire therapeutic classes reached global markets on curated fragments of the data generated.
1.2 Finance Bias and Institutional Capture
Howick’s concept of finance bias describes how funding shapes not just results but also:
– which hypotheses are tested
– which outcomes are prioritised
– what gets published
– who interprets the evidence
Regulators themselves rely almost exclusively on sponsor-submitted data. Universities depend heavily on industry grants. Academics become key opinion leaders (KOLs).
The result is a structural asymmetry: industry generates the evidence, regulators accept it, and HTA bodies must make high-stakes reimbursement decisions based on evidence pipelines shaped by strong commercial incentives.
2. HTA Consequences: When Flawed Evidence Drives Reimbursement Decisions
HTA bodies such as NICE (UK), IQWiG/G-BA (Germany), HAS (France), ZIN (Netherlands) and AEMPS/RedETS (Spain) rely on the assumption that clinical evidence is complete, transparent and unbiased.
But the evidence they receive is often incomplete or highly uncertain.
A 2024 BMJ Open study (Osipenko et al.) reviewing 20 years of NICE submissions found:
– gaps in clinical datasets
– weak justification for extrapolated survival curves
– uncertain modelling assumptions
– inconsistent reporting of harms
– heavy dependence on immature data
In short: HTA rigour cannot compensate for flawed evidence pipelines.
Consequences include:
HTA models built on inflated efficacy
Incremental cost-effectiveness ratios (ICERs) that make technologies falsely appear cost-effective
Reimbursement decisions that lock in high opportunity costs
Costly managed access agreements triggered by uncertainty
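The mechanics are simple. Most HTA decisions hinge on the incremental cost-effectiveness ratio (ICER); the figures below are illustrative only:

ICER = (Cost_new − Cost_comparator) / (QALYs_new − QALYs_comparator)

With an incremental cost of £15,000 and a true QALY gain of 0.4, the ICER is £37,500 per QALY, well above NICE's conventional £20,000–£30,000 per QALY range. If selective reporting inflates the gain to 0.6, the ICER falls to £25,000 per QALY and the same technology appears cost-effective. Biased efficacy inputs therefore translate directly into reimbursement errors.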
HTA bodies increasingly reject or conditionally approve technologies, particularly in oncology and rare diseases, because they no longer trust the evidentiary foundations.
3. Pharma & Biotech: Where the Illusion Is Most Visible
Pharma and biologics still operate within the classical EBM hierarchy: large RCTs, network meta-analysis, and long-term models.
But HTA bodies now identify recurring issues:
3.1 Optimistic Efficacy Inputs
Selective reporting and publication bias inflate effect sizes entering cost-effectiveness models.
3.2 Under-reported Harms
Short follow-up or incomplete AE reporting reduces the estimated disutility and downstream costs.
3.3 Non-representative Populations
Trial populations often exclude the multi-morbid, elderly or socioeconomically disadvantaged groups central to public health systems.
3.4 Regulatory ≠ HTA-Grade Evidence
EMA/FDA approvals increasingly rely on single-arm trials, surrogate endpoints or accelerated pathways, but HTA bodies do not accept such evidence as sufficient for reimbursement.
Pharma dossiers that simply claim “robust evidence” are now met with scepticism unless they directly confront uncertainty.
4. MedTech & IVDs: Borrowing a Fragile Evidence Paradigm
MedTech, devices and in vitro diagnostics traditionally relied on:
– single-centre studies
– usability data
– analytical validity
– observational evidence
As HTA becomes more formalised in devices (NICE DG, IQWiG NUB assessments, HAS RIHN 2.0), manufacturers attempt to imitate pharma-style evidence strategies.
But devices behave differently:
Rapid iteration → evidence becomes outdated
Human factors & workflow effects → not easily randomised
Diagnostic tests influence care pathways → benefits are indirect
This leads to:
Borrowed assumptions from pharma trials (e.g. treatment effect sizes).
Surrogate benefits (e.g. time-to-result) treated as economic proxies.
Models sensitive to system behaviour, not just patient outcomes.
HTA bodies increasingly challenge these assumptions, demanding:
– multi-centre data
– pathway studies
– implementation evidence
– reproducibility across settings
– evidence of actual, not modelled, change in clinical behaviour
5. Digital Health & AI: Evidence Theatre at Scale
Digital health, DTx and AI tools often present the most acute version of the evidence illusion:
Evidence Theatre
Small RCTs with high risk of bias presented as “category 1” evidence.
Surrogate-heavy outcomes
Engagement metrics, clicks, step counts, mood scores — rarely validated against hard outcomes.
Opaque algorithms
Models change post-launch; algorithm drift invalidates trial evidence.
Weak RWE
Uncontrolled convenience datasets mislabelled as “real-world evidence”.
HTA bodies (DiGA, NHS DCB frameworks, HAS RIHN 2.0) now require:
– validation in representative cohorts
– reproducibility
– transparent algorithmic change management
– cost and utilisation evidence, not just clinical endpoints
Unless transparency becomes the norm, digital tools risk being reimbursed at scale on the basis of illusory evidence.
6. What an Honest, HTA-Ready Evidence Strategy Looks Like
6.1 Directly Address Finance Bias
Acknowledge conflicts; use independent statisticians; commit to individual patient data (IPD) sharing at the design stage.
6.2 Build Evidence for HTA, Not Just for Regulators
Design early trials and RWE plans around:
– real comparators
– real-world populations
– outcomes aligned with QALYs and healthcare utilisation
– equity and underserved groups
6.3 Radical Transparency
HTA leaders increasingly favour companies that:
– publish protocols
– share IPD
– commit to external re-analysis
– disclose deviations from pre-specified analysis plans
Transparency is becoming a competitive advantage.
6.4 Integrate Mechanistic, Systems & Implementation Evidence
Mechanistic reasoning, workflow evidence and systems engineering must complement RCTs for:
– diagnostics
– AI
– MedTech
– digital platforms
6.5 Treat Post-Reimbursement Evidence as Mandatory
Coverage-with-evidence-development, registries, RWE platforms, and adaptive HTA submissions are rapidly becoming standard.
7. HTA Is Rewriting the Rules of Evidence
Critiques of the “illusion of EBM” illuminate a deeper truth: reimbursement decisions depend on evidence pipelines that are increasingly unfit for purpose.
Pharma and biologics must confront uncertainty head-on and accept that accelerated approvals require accelerated evidence.
MedTech and diagnostics must shift beyond pharma-style hierarchies and build systems-level evidence.
Digital health and AI must abandon evidence theatre and embrace transparency, reproducibility, and long-term outcomes.
Reimbursement is now awarded not to those who generate the most polished dossiers, but to those who build evidence ecosystems robust enough to survive independent scrutiny.
REFERENCE LIST
1. Jureidini J, McHenry L. The illusion of evidence based medicine. BMJ. 2022;376:o702.
https://www.bmj.com/content/376/bmj.o702
Seminal critique arguing that EBM has been structurally corrupted by financial conflicts and selective publication.
2. Jureidini J, McHenry L. The Illusion of Evidence-Based Medicine (book). Wakefield Press, 2020.
https://www.adelaide.edu.au/robinson-research-institute/…
Expands the BMJ argument and documents case studies of biased drug development.
3. Every-Palmer S, Howick J. How evidence-based medicine is failing due to biased trials and selective publication. J Eval Clin Pract. 2014.
https://pubmed.ncbi.nlm.nih.gov/24819404/
Shows how biased trials and selective publication distort EBM.
4. Howick J. Perspect Biol Med. 2019 (on finance bias).
https://pubmed.ncbi.nlm.nih.gov/31031303/
Explains how financial incentives can distort evidence generation and dissemination.
5. Goldacre B. Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients. Fourth Estate, 2012.
https://pmc.ncbi.nlm.nih.gov/articles/PMC3635613/
Documents systemic problems in drug evidence pipelines, including unpublished trials.
6. Osipenko L, et al. BMJ Open. 2024;14(2):e074341.
https://bmjopen.bmj.com/content/14/2/e074341
20-year review of NICE submissions showing persistent evidence gaps and uncertainty.
7. NICE Health Technology Evaluations Manual (PMG36). 2022.
https://www.nice.org.uk/process/pmg36
Official HTA manual outlining evidence standards for UK reimbursement.
8. IQWiG General Methods.
https://www.iqwig.de/en/about-us/methods/methods-paper/
Germany’s HTA methods framework, including benefit assessment principles.