Cost-Effectiveness Thresholds and Expert Elicitation: A Bridge Too Far?

Article by: Isobel Firth, Chris Sampson, Adrian Towse

A team at the University of York recently published a study in which they sought to generate more reliable evidence to inform the estimation of marginal productivity in the NHS. In our response, published in Medical Decision Making, we suggest that, unfortunately, the estimates provided by the ‘experts’ in the study are of little practical use.

Allocation of funding for new health care technologies (e.g. drugs and medical devices) in many health systems is informed by cost-effectiveness analysis, which compares the incremental gains from a new technology against the best alternative uses of that money. In England, if the price and effectiveness of a new technology reviewed by the National Institute for Health and Care Excellence (NICE) mean that the NHS pays less than a threshold value of £20,000-£30,000 per quality-adjusted life year (QALY) gained (£20-30k/QALY), the technology will likely be made available within the NHS. Lowering this threshold would mean that fewer health technologies would be available in the NHS.
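The decision rule described above can be sketched with hypothetical numbers. The function names and figures below are illustrative only; the threshold values are the NICE range discussed in this article.

```python
# Illustrative sketch of the threshold decision rule described above.
# The technology's cost per QALY gained (its incremental
# cost-effectiveness ratio, or ICER) is compared against a threshold.
# All specific numbers here are hypothetical.

def icer(incremental_cost, incremental_qalys):
    """Cost per QALY gained for a new technology vs. current care."""
    return incremental_cost / incremental_qalys

def likely_recommended(incremental_cost, incremental_qalys, threshold=30_000):
    """A technology is likely made available if its ICER is below the threshold."""
    return icer(incremental_cost, incremental_qalys) < threshold

# A hypothetical drug costing an extra £25,000 and yielding 1.25 extra QALYs:
print(icer(25_000, 1.25))                                  # 20000.0 (£20k/QALY)
print(likely_recommended(25_000, 1.25, threshold=30_000))  # True
print(likely_recommended(25_000, 1.25, threshold=13_000))  # False
```

Note how the same technology that passes at the £20-30k/QALY range fails at a threshold near the Claxton et al. (2015) estimate, which is exactly why lowering the threshold reduces the number of technologies made available.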

Many researchers believe that NICE’s cost-effectiveness threshold should reflect marginal productivity within the NHS, measured by how much it currently costs the NHS to produce one additional QALY. Work by Claxton et al. (2015) sought to find a threshold that reflected the marginal productivity of expenditure in the NHS in England. They estimated a threshold of £12,936 per QALY for 2008, a value much lower than the £20-30k/QALY NICE threshold. That work relied on a variety of assumptions, which OHE critiqued in a response. Subsequently, Lomas et al. (2019) repeated the Claxton et al. approach using the same key assumptions, producing similar estimates of between £5,000 and £15,000 per QALY for 2003 to 2012. Recently, Soares et al. (2020) published a study that attempted to refine the original Claxton et al. (2015) assumptions and identify a more robust estimate than the £12,936 per QALY figure.

Soares et al. (2020) used a promising technique for addressing uncertainty called expert elicitation. Expert elicitation provides a framework in which people ‘in the know’ can provide estimates for unknown quantities and estimate their own uncertainty about those quantities. For example, a practising neurologist may be able to observe – and accurately describe the probability of – different clinical endpoints for a specific neurological intervention. In their study, Soares et al asked clinical and policy experts to estimate values relating to the association between expenditure and health outcomes in the NHS.
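Elicited judgements from multiple experts are typically combined into a single distribution. One common aggregation method in expert elicitation (not necessarily the specific procedure Soares et al. used) is linear opinion pooling, sketched below with invented numbers:

```python
# A minimal sketch of linear opinion pooling, a common way to combine
# elicited probability distributions across experts. The experts,
# weights, and probabilities below are hypothetical, not from the study.

def linear_pool(distributions, weights=None):
    """Weighted average of experts' probabilities for each outcome."""
    n = len(distributions)
    if weights is None:
        weights = [1.0 / n] * n  # equal weights by default
    outcomes = distributions[0].keys()
    return {o: sum(w * d[o] for w, d in zip(weights, distributions))
            for o in outcomes}

# Two experts give probabilities for low/mid/high values of an unknown quantity:
expert_a = {"low": 0.2, "mid": 0.5, "high": 0.3}
expert_b = {"low": 0.4, "mid": 0.4, "high": 0.2}
pooled = linear_pool([expert_a, expert_b])
print({k: round(v, 2) for k, v in pooled.items()})
# {'low': 0.3, 'mid': 0.45, 'high': 0.25}
```

The pooled distribution inherits whatever the individual experts put in, which is why the validity of the exercise rests entirely on whether the experts can actually observe the quantities being elicited.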

The main conclusion of Soares et al. (2020) is that the previous work by Claxton et al. overestimated the shadow price of a QALY in the NHS, implying that the threshold should be even lower than £12,936. In our commentary on the study, we set out three reasons why the evidence generated in the expert elicitation exercise is not a valid basis for estimating (or adjusting an estimate of) marginal productivity in the NHS.

1. There are no experts for this task

Expert elicitation is used to estimate values based on experts’ observations and experience. However, the quantities of interest in Soares et al. (2020) relate to associations between mortality, morbidity, and expenditures in health care at the system level. These quantities are not observable in clinical practice. For instance, a clinician cannot observe how changes in mortality rates in oncology might relate to changes in mortality rates in mental health.

The authors also asked clinicians to assess values for a range of diseases within budgeting categories. A quarter of clinical experts stated either that they were not experts at all or that they were expert in only a single clinical field, and therefore lacked the relevant knowledge.

2. There is significant uncertainty in the responses

Participants were very unsure about their responses. Only 14% of clinical experts were confident that their answers represented their own views; not confident that the answers were true, merely that they reflected what the experts themselves believed. One respondent stated that they were “not sure what I have based my estimates on,” while others described the problems of comparing budgeting categories and disease areas. One respondent said that there was “too much to aggregate.”

3. The quantities estimated are not meaningful

The researchers estimated quantities of interest that characterise the proportional relationship between two other quantities. For example: following an increase in expenditure, how does the reduction in mortality in the second year compare with the reduction observed in the first year? The problem here is that there is no unique value to identify. The answer is very likely to differ across clinical areas and to vary from one year to the next.
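A small numerical illustration makes the objection concrete. All figures below are invented for illustration; none come from the study.

```python
# Hypothetical illustration of the kind of proportional quantity elicited:
# the year-2 mortality reduction expressed as a proportion of the year-1
# reduction that follows an expenditure increase. All numbers invented.

def relative_reduction(year1_reduction, year2_reduction):
    """Year-2 effect as a proportion of the year-1 effect."""
    return year2_reduction / year1_reduction

# In one clinical area the effect might persist almost fully...
print(relative_reduction(100, 90))   # 0.9
# ...in another it might decay quickly...
print(relative_reduction(100, 40))   # 0.4
# ...and in a third it might even grow as treatment effects accumulate:
print(relative_reduction(50, 60))    # 1.2
```

Since the ratio plausibly ranges from well below one to above one depending on the clinical area and the year, there is no single system-wide value for an expert to report, which is the core of the third objection.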

The authors’ response to our commentary argued that “mounting empirical estimates” supported the results of the original Claxton et al. estimate. However, the three studies they cite involve the same authors and use fundamentally the same approach (as they acknowledge). This indicates a degree of circularity: concerns about the uncertainty of the Claxton et al. (2015) estimate were precisely what motivated the expert elicitation study carried out by Soares et al. (2020).

The ‘experts’ in this expert elicitation were given an impossible task. As a result, the authors’ main conclusions are invalid. The estimates from the experts tell us very little about the relationship between expenditure and outcomes in the NHS. Rather, they serve to highlight the difficulty in estimating the marginal productivity of a health service based on the assumption of a system-wide health production function, and the fundamental uncertainty of the resulting estimates.

You can read the original study, our commentary, and the authors’ response in Medical Decision Making.

Related Research

Sampson, C., Firth, I. and Towse, A., 2021. Health Opportunity Costs and Expert Elicitation: A Comment on Soares et al. Medical Decision Making, 41(3), pp.255–257. 10.1177/0272989X20987211.

Barnsley, P., Towse, A., Karlsberg Schaffer, S. and Sussex, J., 2013. Critique of CHE Research Paper 81: Methods for the Estimation of the NICE Cost Effectiveness Threshold. [Occasional Paper] Office of Health Economics. Available at: [Accessed 28 Apr. 2021].

Cubi-Molla, P., Mott, D., Henderson, N., Zamora, B., Grobler, M. and Garau, M., 2021. Resource Allocation in Public Sector Programmes: Does the Value of a Life Differ Between Governmental Departments? [OHE Research Paper] Office of Health Economics. Available at: [Accessed 28 Apr. 2021].

Hernandez-Villafuerte, K., Zamora, B., Feng, Y., Parkin, D. and Towse, A., 2019. Exploring Variations in the Opportunity Cost Cost-effectiveness Threshold by Clinical Area: Results from a Feasibility Study in England. [OHE Research Paper] Office of Health Economics. Available at: [Accessed 28 Apr. 2021].

Hernandez-Villafuerte, K., Zamora, B. and Towse, A., 2018. Issues Surrounding the Estimation of the Opportunity Cost of Adopting a New Health Care Technology: Areas for Further Research. [OHE Research Paper] Office of Health Economics. Available at: [Accessed 28 Apr. 2021].

Karlsberg Schaffer, S., Cubi-Molla, P., Devlin, N. and Towse, A., 2016. Shaping the Research Agenda to Estimate Cost-effectiveness Thresholds for Decision Making. [OHE Consulting Report] Office of Health Economics. Available at: [Accessed 28 Apr. 2021].

Karlsberg Schaffer, S., Sussex, J., Devlin, N. and Walker, A., 2015. Local health care expenditure plans and their opportunity costs. Health Policy (Amsterdam, Netherlands), 119(9), pp.1237–1244. 10.1016/j.healthpol.2015.07.007.

Karlsberg Schaffer, S., Sussex, J., Hughes, D. and Devlin, N., 2016. Opportunity costs and local health service spending decisions: a qualitative study from Wales. BMC health services research, 16, p.103. 10.1186/s12913-016-1354-1.
