How to Secure a NICE Early Value Assessment

by Odelle Technology

Early Value Assessments are not shortcuts to reimbursement. They are tests of whether a technology deserves the NHS’s attention while uncertainty is still unresolved.

In recent years, NICE has been under growing pressure to reconcile two competing realities: the pace of technological innovation, particularly in diagnostics, digital health, and AI, and the slow, deliberate machinery of evidence-based decision-making. Early Value Assessments (EVAs) emerged as a response to this tension: not as a compromise on scientific standards, but as a recognition that waiting for perfect evidence often means waiting too long.

Yet despite their increasing visibility, EVAs remain widely misunderstood. Many companies approach them as abbreviated Health Technology Assessments. Others treat them as early endorsements. Both interpretations are wrong — and costly.

To understand how to secure a NICE Early Value Assessment, it helps first to understand what EVAs are actually designed to do.

What an Early Value Assessment Is and Is Not

Crucially, EVAs are not designed to accelerate reimbursement or guarantee adoption. They are risk-management instruments: a way for the NHS to allow limited exposure to new technologies while containing uncertainty, operational burden, and downstream cost. In that sense, an EVA is as much about protecting the system as about enabling innovation.

An EVA is not a mini-HTA, and it is not a pricing negotiation. It is a structured, time-limited assessment, typically completed within 8 to 10 weeks, designed to answer a narrower, more pragmatic question:

Is there sufficient early evidence and system-level plausibility to justify conditional NHS use while further evidence is generated?

This framing is explicit in NICE’s interim EVA methods and reinforced by the way Evidence Assessment Groups (EAGs) actually operate. EVAs exist to identify evidence, explore unmet need, map uncertainty, and direct future data collection, not to settle questions of definitive clinical or cost-effectiveness.

Technologies emerging through EVAs may be:

  • conditionally recommended for early NHS use,
  • recommended only in research,
  • or not recommended at that time.

Crucially, EVAs also generate evidence generation plans, which shape what data NICE expects next.

How NICE Frames the Decision Problem

One of the clearest findings from completed EVAs is that NICE does not evaluate technologies in isolation. The decision problem is framed around care pathways, not products.

In practice, this means:

  • comparators are often broad and numerous;
  • outcomes span clinical, operational, and economic domains;
  • subgroups and equity considerations are identified early, even if data are sparse.

Some EVAs have assessed up to fourteen interventions and more than twenty comparators within a single scope. Clinical outcomes frequently exceed twenty per evaluation, with multiple economic outcomes specified.

This breadth is intentional. EVAs are designed to locate a technology within the system, not merely measure its performance in controlled conditions.

Clinical Evidence: Why Signal Matters More Than Certainty

The clinical evidence base available to EVAs is, by design, immature.

Across completed EVAs:

  • all assessments searched MEDLINE and Embase;
  • most supplemented these with trial registries, websites, and grey literature;
  • searches were often conducted jointly for clinical and economic evidence to save time.

Single-reviewer screening and data extraction were common, with partial or unclear checking by a second reviewer. Meta-analysis was sometimes planned, but almost always abandoned due to heterogeneity.

The result is that narrative synthesis dominates: not as a methodological failure, but as a rational response to fragmented evidence. EVAs are built to detect directional signals, assess biological and clinical plausibility, and identify where uncertainty resides, rather than to generate pooled effect sizes that cannot be defended.

Importantly, the absence of reported methodological detail does not imply poor practice. Reviews of completed EVAs are explicit that gaps in reporting reflect time pressure, not lack of rigour.

Critical Appraisal: Used Selectively, Not Dogmatically

NICE does not require formal risk-of-bias appraisal at EVA stage. Even so, seven of seventeen EVAs undertook or planned structured critical appraisal using established tools such as ROB-2, ROBINS-I, QUADAS-2, and PROBAST.

Where appraisal was conducted, it functioned less as a gatekeeping exercise and more as risk mapping:

  • identifying fragile findings,
  • highlighting sources of optimism bias,
  • contextualising transferability to NHS settings.

This reflects the EVA philosophy: early assessment under uncertainty, not evidentiary closure.

Economic Evaluation: Exploration, Not Verdict

Economic analysis within EVAs is frequently misunderstood, particularly by companies accustomed to traditional HTA thresholds.

Across completed EVAs, economic approaches included:

  • cost-utility analyses,
  • cost-effectiveness analyses,
  • cost-consequence analyses,
  • cost comparisons,
  • and, in several cases, no executable model at all due to insufficient data.

Decision trees were the most common model structure, followed by Markov models. Conceptual models were presented in the majority of EVAs, often without full parameterisation. This is consistent with Decision Support Unit guidance, which explicitly recommends conceptual modelling when data are insufficient.
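To illustrate why decision trees suit early-stage data, here is a minimal two-arm sketch in Python. Every probability and cost is a hypothetical placeholder, not a figure from any EVA:

```python
# Minimal two-arm decision tree: expected cost per patient.
# All probabilities and costs below are hypothetical placeholders.

def expected_cost(p_detect, cost_test, cost_managed, cost_missed):
    """One arm of a simple decision tree: test, then detected or missed."""
    return cost_test + p_detect * cost_managed + (1 - p_detect) * cost_missed

# Hypothetical new point-of-care test vs the standard pathway.
new_test = expected_cost(p_detect=0.85, cost_test=30.0,
                         cost_managed=200.0, cost_missed=900.0)
standard = expected_cost(p_detect=0.70, cost_test=5.0,
                         cost_managed=200.0, cost_missed=900.0)

incremental = new_test - standard   # negative => expected saving per patient
print(f"New: {new_test:.0f}  Standard: {standard:.0f}  Incremental: {incremental:+.0f}")
```

The transparency is the point: every branch, probability, and cost is visible and easy to interrogate, which is exactly what an assessment group needs under a compressed timeline.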

The purpose of EVA economics is not to prove cost-effectiveness at a willingness-to-pay threshold. It is to explore:

  • key cost and outcome drivers,
  • sensitivity to assumptions,
  • implementation burden,
  • and the conditions under which value collapses.

False precision is avoided deliberately.
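A deliberately simple sketch of that exploration: a one-way sweep to locate where a hypothetical value proposition collapses. All inputs are invented for illustration:

```python
# One-way sweep: find where a hypothetical value proposition collapses,
# i.e. where the incremental cost of a new test turns positive.

def incremental_cost(p_detect):
    """New diagnostic vs standard care; all costs and rates are invented."""
    new = 30.0 + (1 - p_detect) * 900.0   # test price + cost of missed cases
    std = 5.0 + (1 - 0.70) * 900.0        # cheaper test, lower detection
    return new - std

for p in [x / 100 for x in range(60, 96, 5)]:
    verdict = "saves money" if incremental_cost(p) < 0 else "costs more"
    print(f"p_detect={p:.2f}: {incremental_cost(p):+7.1f} ({verdict})")
```

Read this way, the output is not an estimate but a question for the evidence generation plan: can detection above roughly 0.73, the break-even point under these invented inputs, be demonstrated in NHS settings?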

Uncertainty as the Core Analytical Output

If EVAs have a methodological centre of gravity, it is uncertainty analysis.

Scenario analysis was the most commonly reported approach, followed by deterministic and probabilistic sensitivity analysis. Only two EVAs used formal value-of-information methods, and one explored an economically justifiable price.

What matters is not the sophistication of the technique, but the question it answers:

What evidence would actually change the decision?

This is why EVA conclusions often read as conditional and provisional. That is not weakness — it is precision about uncertainty.

Data Inputs: Pragmatism Over Purism

Reviews of completed EVAs are unequivocal about data sources. Where published evidence was lacking, EAGs relied heavily on:

  • manufacturer submissions,
  • unpublished pilot data,
  • expert clinical opinion,
  • prior NICE evaluations.

Implementation costs, training, IT infrastructure, and workflow change frequently dominate models, especially for digital and AI technologies. Carer costs appeared in only one EVA. No EVA applied a severity modifier.

These choices reflect the realities of early-stage assessment, not methodological shortcuts.

Equity and Patient Involvement: Identified, Rarely Quantified

Equity considerations were listed in nearly all scopes, but only eight EVAs described methods for assessing them. Subgroup analyses were usually planned but infeasible.

Patient and public involvement was even rarer: only two EVAs incorporated it meaningfully. This is striking, particularly given the relevance of usability, acceptability, and behavioural response for digital health technologies.

This is explicitly noted as a missed opportunity: not a failure of intent, but of structure and time.

What an EVA Outcome Really Means

Perhaps the most important misunderstanding to correct is this:
an EVA outcome is not a judgement on a technology’s ultimate value.

It is a decision about:

  • whether early NHS use is justified,
  • whether further evidence generation should be supported,
  • and where future research should focus.

Some EVA-supported technologies have already gone on to receive substantial public funding for real-world evidence generation. Others have been redirected or paused.

EVAs are decision tools, not endorsements.

Why EVAs Reward Preparation, Not Confidence

The companies that succeed in EVAs are rarely those with the most polished decks. They are the ones who:

  • understand NHS system pressures,
  • frame value at the pathway level,
  • acknowledge uncertainty openly,
  • and treat evidence gaps as design inputs rather than liabilities.

Early Value Assessments do not reward certainty.
They reward credibility under uncertainty.

And in a system as cautious and consequential as the NHS, that distinction matters.

References

National Institute for Health and Care Excellence (NICE). Early value assessment (EVA) for medtech. NICE; updated 2024.
Available at: https://www.nice.org.uk/about/what-we-do/eva-for-medtech

What this is:
NICE’s official programme page defining EVAs — their purpose, scope, eligible technologies, timelines, and how they differ from full HTAs.
Why it matters:
This is the authoritative source for what an EVA is and is not. Any serious discussion of EVAs should cite this first.

National Institute for Health and Care Excellence (NICE). Interim process and methods for early value assessment. NICE; 2022.
Available at: https://www.nice.org.uk/process/pmg39/chapter/interim-process-and-methods-for-early-value-assessment

What this is:
The methodological backbone of EVAs — covering evidence standards, uncertainty handling, economic reasoning, and decision rules.
Why it matters:
This document explains how NICE tolerates uncertainty in EVAs without abandoning rigour.

National Institute for Health and Care Excellence (NICE). NICE health technology evaluations: the manual (PMG36). NICE; updated October 2023.
Available at: https://www.nice.org.uk/process/pmg36/chapter/introduction-to-health-technology-evaluation

What this is:
The master manual governing NICE evaluations, including EVAs, HTEs, and technology appraisals.
Why it matters:
EVAs sit inside this framework — understanding PMG36 prevents treating EVAs as a “shortcut HTA”.

National Institute for Health and Care Excellence (NICE). Involvement and participation in health technology evaluations. NICE; updated October 2023.
Available at: https://www.nice.org.uk/process/pmg36/chapter/involvement-and-participation-2

What this is:
Formal guidance on patient, clinician, and system stakeholder involvement in NICE evaluations.
Why it matters:
EVAs explicitly assess system readiness and acceptability, not just clinical performance.

National Health Service (NHS). The NHS Long Term Plan. NHS England; ongoing policy framework.
Available at: https://www.longtermplan.nhs.uk

What this is:
The strategic policy context for prevention, digital health, diagnostics, and service redesign in England.
Why it matters:
EVAs are aligned to delivery priorities, not just evidence maturity.

NICE EVA Case Examples

National Institute for Health and Care Excellence (NICE). Point-of-care tests for urinary tract infections to improve antimicrobial prescribing: Early value assessment (HTE7). NICE; 2023.
Available at: https://www.nice.org.uk/guidance/hte7

What this is:
An EVA assessing diagnostics under real-world antimicrobial stewardship constraints.
Why it matters:
A textbook example of system-level value under uncertainty — highly relevant to diagnostics and AMR.

National Institute for Health and Care Excellence (NICE). Artificial intelligence-derived software to analyse chest X-rays for suspected lung cancer: Early value assessment (HTE12). NICE; 2023.
Available at: https://www.nice.org.uk/guidance/hte12

What this is:
An EVA of AI diagnostic software in primary care referral pathways.
Why it matters:
Shows how NICE evaluates workflow impact, safety, and downstream consequences, not just accuracy.

National Institute for Health and Care Excellence (NICE). Digitally enabled therapies for adults with depression: Early value assessment (HTE8). NICE; updated 2024.
Available at: https://www.nice.org.uk/guidance/hte8

What this is:
An EVA assessing digital therapeutics alongside existing care pathways.
Why it matters:
Illustrates how NICE handles comparators, adherence, and scalability in early digital evidence.

Economic & Methodological References

Yee, M.M., Tappenden, P. & Wailoo, A. Economic evaluation in NICE early value assessments. Sheffield: SCHARR, University of Sheffield; 2023.

What this is:
A Decision Support Unit (DSU) report explaining how economics is used in EVAs.
Why it matters:
Clarifies that EVA economics is directional and decision-informing, not cost-effectiveness verdicts.

Husereau, D., Drummond, M., Augustovski, F., et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS 2022). BMC Medicine. 2022;20:23.
Available at: https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-021-02204-6

What this is:
Global reporting standards for economic evaluations.
Why it matters:
Often cited to explain why full CHEERS compliance is not expected in EVAs.

Johnson, E.E., Onwuelazu Uteh, C., Belilios, E., Pearson, F. Reporting of patient and public involvement in technology appraisal and assessment reports: a rapid scoping review. The Patient. 2025;18(2):109–114.
Available at: https://link.springer.com/article/10.1007/s40271-024-00721-7

What this is:
A contemporary review of how patient involvement is reported in HTA.
Why it matters:
Supports NICE’s emphasis on participation and acceptability in early evaluations.

NICE Early Value Assessment (EVA): Scientific & Health-Economic FAQ


1. What economic question is an EVA actually designed to answer?

An EVA does not ask whether a technology is cost-effective at a fixed willingness-to-pay threshold.

The core economic question is:

Is there a plausible value proposition under realistic NHS assumptions that justifies conditional use and further evidence generation?

This reframes economic analysis away from decision confirmation and toward decision exploration. The emphasis is on:

  • structural plausibility,
  • sensitivity to assumptions,
  • identification of dominant cost drivers,
  • and mapping where uncertainty is decision-critical.

This is why EVAs tolerate incomplete models and emphasise scenario analysis.


2. Why are cost-utility analyses (CUAs) used at all if QALYs are immature?

CUAs appear in EVAs not because QALYs are robust, but because they provide a common economic language for comparison.

In practice:

  • QALYs are often derived from proxy outcomes, short-term data, or mapped utilities;
  • time horizons are frequently constrained or exploratory;
  • results are interpreted qualitatively, not as definitive ICERs.

CUAs in EVAs function as stress-tests, not verdicts. When QALY estimation becomes too speculative, NICE accepts cost-consequence or cost-comparison approaches instead.
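To make the "stress-test" reading concrete, here is an exploratory ICER from proxy utilities over a short horizon. All inputs are hypothetical, and the point is the fragility, not the number:

```python
# Exploratory cost-utility sketch: ICER from proxy utilities over a short,
# undiscounted horizon. All inputs are hypothetical; read directionally.

def qalys(utility, years):
    """QALYs from a (possibly mapped) utility over a short horizon."""
    return utility * years

dc = 1200.0                                   # incremental cost
dq = qalys(0.78, 2.0) - qalys(0.74, 2.0)      # proxy utilities, 2-year horizon
icer = dc / dq

# A 0.01 shift in one proxy utility moves the result by a third:
fragile = dc / (qalys(0.77, 2.0) - qalys(0.74, 2.0))
print(f"ICER: {icer:,.0f}  after 0.01 utility shift: {fragile:,.0f}")
```

With utility differences this small, no single ICER deserves weight; what matters is how violently it responds to assumptions, which is exactly what an EVA reports.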


3. Under what conditions is cost-consequence analysis (CCA) preferable to CUA?

CCA becomes methodologically preferable when:

  • multiple outcomes matter simultaneously (clinical, operational, behavioural);
  • causal links between intervention and long-term health outcomes are weak or delayed;
  • implementation costs dominate early value;
  • collapsing outcomes into a single metric would obscure trade-offs.

Many EVAs explicitly adopt CCA to preserve informational richness at early stages, especially for digital and AI technologies where adoption behaviour strongly mediates outcomes.
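A CCA presentation can be as simple as a side-by-side report. The sketch below uses entirely hypothetical figures to show the format rather than any real result:

```python
# Cost-consequence sketch: costs and outcomes reported side by side,
# never collapsed into a single index. All figures are hypothetical.

rows = [
    ("Incremental cost per patient (GBP)",   +35.0),
    ("Additional cases detected per 1,000",  +48.0),
    ("GP appointments avoided per 1,000",   +120.0),
    ("Staff training hours per site",         +6.0),
]

for label, value in rows:
    print(f"{label:<40} {value:+8.1f}")
```

The trade-offs stay visible: a committee can weigh a modest cost increase against operational gains without a contestable aggregation step.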


4. Why do EVAs rely so heavily on conceptual models?

Conceptual models are not placeholders; they are core scientific outputs of EVAs.

They serve to:

  • formalise causal assumptions;
  • map dependencies between evidence gaps and outcomes;
  • define the architecture of future empirical studies;
  • prevent premature parameterisation that would introduce false precision.

In most EVAs, conceptual models are used to:

  • test logical coherence,
  • guide scenario construction,
  • and inform evidence generation plans.

This aligns with DSU guidance recommending conceptual modelling when full parameterisation is infeasible.


5. Why is implementation cost so dominant in EVA economics?

Because early value is often operational before it is clinical.

Across EVAs, implementation costs frequently include:

  • training and staff time,
  • IT infrastructure and data storage,
  • hardware acquisition,
  • workflow redesign,
  • onboarding and maintenance.

For digital technologies especially, these costs can exceed marginal clinical costs, meaning early economic value is driven by system efficiency, not health gain.

Ignoring implementation costs is one of the fastest ways to undermine EVA credibility.
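To see how implementation can dominate, consider a toy comparison per site per year. All figures are invented:

```python
# Toy comparison: implementation vs marginal clinical cost for a digital
# technology, per site per year. All figures are hypothetical.

implementation = {
    "training and staff time":            18_000.0,
    "IT infrastructure and data storage": 12_500.0,
    "workflow redesign":                   9_000.0,
    "onboarding and maintenance":          6_500.0,
}
marginal_clinical = 15_000.0

total_implementation = sum(implementation.values())
print(f"Implementation: {total_implementation:,.0f}  "
      f"Marginal clinical: {marginal_clinical:,.0f}")
```

Under these invented figures, implementation is roughly three times the marginal clinical cost; a model that omits it would overstate early value accordingly.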


6. How does NICE treat uncertainty differently in EVAs compared with full HTAs?

In EVAs, uncertainty is not something to be “reduced away”; it is something to be characterised.

Common approaches include:

  • scenario analysis to test structural assumptions,
  • deterministic sensitivity analysis to identify dominant parameters,
  • probabilistic sensitivity analysis where feasible,
  • occasional value-of-information analysis to guide research priorities.

The analytical goal is not precision, but decision sensitivity:

Which uncertainties actually matter for NHS decision-making?
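As a sketch of the probabilistic end of that spectrum, here is a toy PSA using only the Python standard library. The model, distributions, and parameters are all hypothetical:

```python
import random

# Toy probabilistic sensitivity analysis: sample uncertain parameters from
# distributions and report how often a new test is cost-saving.
# The model, distributions, and all values are hypothetical.

random.seed(42)

def incremental_cost(p_detect, cost_missed):
    """New diagnostic vs standard care under sampled parameters."""
    new = 30.0 + (1 - p_detect) * cost_missed
    std = 5.0 + (1 - 0.70) * cost_missed
    return new - std

runs = 10_000
cost_saving = 0
for _ in range(runs):
    p = random.betavariate(85, 15)      # detection rate centred near 0.85
    c = random.gauss(900.0, 150.0)      # cost of a missed case
    if incremental_cost(p, c) < 0:
        cost_saving += 1

print(f"Cost-saving in {100 * cost_saving / runs:.0f}% of simulations")
```

The decision-relevant output is the proportion, not any single run: it tells a committee how robust the direction of effect is to joint parameter uncertainty.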


7. Why is value-of-information (VOI) analysis rare but important?

VOI analysis appears in only a small number of EVAs, not because it lacks value, but because:

  • it requires additional modelling effort,
  • parameter uncertainty must already be reasonably structured.

When used, VOI helps NICE answer:

  • whether further research is worth funding,
  • which parameters should be prioritised,
  • whether uncertainty is reducible or structural.

Its presence signals a mature EVA, but its absence does not imply poor practice.
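The core VOI idea can be shown with a toy expected-value-of-perfect-information (EVPI) calculation over two equally likely scenarios. The net-monetary-benefit figures are invented:

```python
# Toy EVPI: the value of resolving uncertainty between two equally likely
# scenarios before choosing 'adopt' vs 'standard care'.
# Net monetary benefits (per patient) are hypothetical.

scenarios = [0.5, 0.5]                 # probability of each scenario
nmb = {                                # net monetary benefit by option/scenario
    "adopt":    [500.0, -300.0],
    "standard": [0.0,    0.0],
}

# Decide now: pick the option with the best expected NMB.
ev_current = max(
    sum(p * b for p, b in zip(scenarios, benefits))
    for benefits in nmb.values()
)

# Decide with perfect information: pick the best option in each scenario.
ev_perfect = sum(
    p * max(nmb[option][i] for option in nmb)
    for i, p in enumerate(scenarios)
)

evpi = ev_perfect - ev_current
print(f"EVPI per patient: {evpi:.0f}")
```

If further research costs less than the EVPI scaled across the affected population, it is worth funding; if not, the residual uncertainty is cheaper to live with.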


8. How are expert opinion and company data treated scientifically?

Expert elicitation and company data are treated as necessary inputs, not methodological weaknesses.

In EVAs:

  • empirical data are often insufficient;
  • experts validate assumptions, pathways, and plausibility;
  • company data frequently populate early models.

The scientific standard is transparency, not independence:

  • assumptions must be explicit,
  • sources must be declared,
  • sensitivity must be explored.

Attempts to disguise expert judgement as “objective data” are viewed negatively.


9. Why are long-term outcomes often modelled with weak evidence?

Because EVAs assess potential trajectory, not realised impact.

When long-term outcomes are included:

  • they are typically extrapolated cautiously,
  • assumptions are stress-tested,
  • and conclusions are framed conditionally.

NICE accepts this because the alternative — excluding long-term outcomes entirely — would systematically disadvantage preventive, diagnostic, and early-intervention technologies.


10. Why are severity modifiers absent in EVAs?

Severity modifiers are absent because:

  • disease severity is often poorly characterised at early stages,
  • evidence linking intervention to severity-adjusted benefit is lacking,
  • and EVAs aim to avoid premature normative weighting.

Severity considerations may re-enter at later HTA stages once evidence matures.


11. How should equity be treated economically in an EVA?

Equity in EVAs is primarily identificatory, not quantitative.

Common approaches include:

  • qualitative assessment of digital exclusion,
  • subgroup identification using PROGRESS-Plus factors,
  • flagging potential inequities for future study.

Quantitative equity modelling is rare due to data constraints. NICE expects risks to be acknowledged, not fully resolved, at EVA stage.


12. Why is patient and public involvement so limited — and why does it matter economically?

Only a small minority of EVAs include patient input, largely due to:

  • compressed timelines,
  • lack of prescriptive guidance,
  • legacy HTA practices.

This matters economically because:

  • adherence affects effectiveness,
  • usability affects uptake,
  • behavioural response affects realised value.

From an economic perspective, excluding patient insight risks systematic overestimation of value.


13. What distinguishes a strong EVA submission from a weak one economically?

Strong EVA submissions:

  • frame value at the pathway level;
  • model uncertainty explicitly;
  • acknowledge implementation burden;
  • use economics to ask “what would change the decision?”

Weak submissions:

  • over-engineer ICERs;
  • hide assumptions;
  • underplay system costs;
  • treat early modelling as a defence rather than an exploration.

14. How should companies think about pricing in an EVA context?

Pricing in EVAs is provisional by necessity.

Economically, EVAs may explore:

  • break-even pricing,
  • economically justifiable price ceilings,
  • sensitivity to uptake and scale.

Prices are not locked in. EVAs inform pricing strategy, not final reimbursement.
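A hedged sketch of how break-even pricing and uptake sensitivity might be explored, with every figure hypothetical:

```python
# Break-even sketch: the price at which expected savings are exhausted,
# and its sensitivity to uptake. Every figure is hypothetical.

savings_per_patient = 80.0            # downstream cost offsets
variable_cost_per_patient = 22.0      # per-use consumables, support

break_even_price = savings_per_patient - variable_cost_per_patient
print(f"Break-even price per use: {break_even_price:.2f}")

# Fixed implementation costs spread over annual volume shift the ceiling:
fixed_costs = 50_000.0
for patients in (1_000, 5_000, 20_000):
    ceiling = break_even_price - fixed_costs / patients
    print(f"{patients:>6} patients/year -> justifiable price ~{ceiling:.2f}")
```

The uptake sensitivity is the strategically useful part: a price that looks justifiable at scale may be indefensible during early, low-volume adoption.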


15. What is the single most common economic mistake in EVAs?

Mistaking precision for credibility.

Over-specified models with fragile assumptions are less persuasive than:

  • simple models,
  • transparent uncertainty,
  • and honest limitations.

In early assessment, epistemic humility is an economic strength.

Early Value Assessments are not about proving value.

They are about demonstrating that value is worth investigating further.

From a health-economic standpoint, success in an EVA comes from:

  • structural coherence,
  • transparent uncertainty,
  • and system-level relevance.

Glossary of Terms — NICE Early Value Assessments & Health Economics


Early Value Assessment (EVA)

A NICE assessment pathway designed to evaluate early-stage medical technologies under conditions of uncertainty. EVAs aim to determine whether a technology shows sufficient potential value to justify conditional NHS use and further evidence generation, rather than definitive adoption or reimbursement.


Evidence Assessment Group (EAG)

An independent academic group commissioned by NICE to conduct the technical assessment for an EVA, including clinical evidence synthesis, economic modelling, and uncertainty analysis. EAGs operate under compressed timelines and apply pragmatic, proportionate methods.


Health Technology Assessment (HTA)

A multidisciplinary process evaluating the clinical effectiveness, safety, cost-effectiveness, and wider impact of health technologies. EVAs sit upstream of full HTA and are not intended to deliver definitive reimbursement decisions.


Decision Problem

The structured definition of what an assessment seeks to answer, including population, intervention, comparators, outcomes, and setting. In EVAs, the decision problem is typically framed around care pathways and system impact, not isolated product performance.


Narrative Synthesis

A qualitative method of evidence synthesis used when quantitative pooling (meta-analysis) is infeasible due to heterogeneity or limited data. Narrative synthesis in EVAs focuses on direction of effect, plausibility, and uncertainty rather than precise effect estimates.


Meta-analysis

A statistical technique that combines results from multiple studies to produce a pooled effect estimate. Meta-analysis is rarely feasible in EVAs due to heterogeneous outcomes, comparators, and immature evidence bases.


Conceptual Model

A formal representation of the causal pathways, assumptions, and relationships linking an intervention to outcomes and costs. In EVAs, conceptual models are often used in place of fully parameterised economic models and serve as core scientific outputs.


Decision Tree Model

A simple economic model structure used to represent short-term decisions and outcomes. Decision trees are commonly used in EVAs due to their transparency and suitability for early-stage data.


Markov Model

An economic modelling approach that represents disease progression over time using defined health states and transition probabilities. Markov models are used selectively in EVAs where longitudinal progression can be plausibly described, despite limited data.


Cost-Utility Analysis (CUA)

An economic evaluation method comparing costs and outcomes measured in quality-adjusted life years (QALYs). In EVAs, CUAs are exploratory and often rely on proxy or mapped utilities, with results interpreted cautiously.


Cost-Effectiveness Analysis (CEA)

An economic evaluation comparing costs and outcomes measured in natural units (e.g. cases detected, events avoided). CEAs are used in EVAs when QALY estimation is not appropriate or feasible.


Cost-Consequence Analysis (CCA)

An economic approach presenting multiple costs and outcomes separately, without aggregating them into a single metric. CCAs are particularly useful in EVAs where outcomes span clinical, operational, and behavioural domains.


Cost Comparison

An economic analysis comparing costs under the assumption of broadly equivalent outcomes. Cost comparisons are used in EVAs when outcome data are insufficient to support more complex modelling.


Quality-Adjusted Life Year (QALY)

A composite measure combining length of life and health-related quality of life. QALYs provide a common unit for comparing health benefits across interventions but are often immature or proxy-based in EVAs.


Implementation Costs

Costs associated with introducing a technology into routine practice, including training, IT infrastructure, workflow redesign, onboarding, and maintenance. Implementation costs frequently dominate EVA economic models, especially for digital and AI technologies.


Scenario Analysis

An uncertainty analysis method exploring how results change under alternative structural or behavioural assumptions. Scenario analysis is the most commonly used uncertainty approach in EVAs.


Sensitivity Analysis

An analytical technique testing how results respond to changes in input parameters. EVAs may include deterministic (one-way or multi-way) and probabilistic sensitivity analyses, depending on data availability.


Probabilistic Sensitivity Analysis (PSA)

A method of uncertainty analysis in which model parameters are assigned probability distributions and sampled repeatedly. PSA is used selectively in EVAs due to data and time constraints.


Value-of-Information (VOI) Analysis

A set of methods estimating the value of reducing uncertainty through further research. VOI analysis helps identify research priorities and is used sparingly but strategically in EVAs.


Economically Justifiable Price (EJP)

A price ceiling derived from economic modelling that indicates the maximum price at which an intervention could be considered to offer value under specified assumptions. EJP is exploratory and non-binding in EVAs.


Expert Elicitation

The structured use of clinical or technical expert judgement to inform model assumptions or parameter values where empirical data are lacking. Expert elicitation is a legitimate and necessary method in EVAs when used transparently.


Equity Considerations

Assessment of whether a technology may differentially affect groups defined by characteristics such as age, sex, ethnicity, socioeconomic status, disability, or digital access. In EVAs, equity is usually identified qualitatively rather than modelled quantitatively.


Digital Exclusion

The risk that individuals or groups are unable to access or benefit from digital technologies due to lack of connectivity, devices, skills, or infrastructure. Digital exclusion is a recurrent equity concern in EVAs of digital health technologies.


Patient and Public Involvement (PPI)

The involvement of patients, carers, or the public in shaping assessment questions, outcomes, or interpretation. PPI is under-utilised in EVAs despite its relevance to adherence, usability, and real-world effectiveness.


Evidence Generation Plan

A forward-looking framework identifying which uncertainties matter most, what data should be collected next, and how future studies should be designed. Evidence generation planning is a central but often implicit output of EVAs.


Conditional Recommendation

An EVA outcome indicating that a technology may be used within the NHS under defined conditions, often linked to further evidence collection or managed access arrangements.


Epistemic Uncertainty

Uncertainty arising from limited knowledge or data, as opposed to inherent randomness. EVAs explicitly engage with epistemic uncertainty rather than attempting to eliminate it prematurely.


False Precision

The appearance of accuracy or certainty in modelling results that is not supported by the underlying data. EVAs deliberately avoid false precision through simplified models and explicit uncertainty. Anything else is premature optimisation.
