Handling Uncertainty in the Results of Economic Evaluation

Briggs, A.

Briefing
September 1995

The recent increase in the number of published economic evaluations has been considerable [Wellcome, 1992; Udvarhelyi et al, 1992]. It is of some concern, however, that reviews of economic evaluations have highlighted methodological shortcomings in many studies [Adams et al, 1992; Gerard, 1992]. Furthermore, the situation does not appear to have improved over time [Udvarhelyi et al, 1992]. In particular, the importance of dealing systematically and comprehensively with uncertainty appears to have been overlooked by many analysts. Udvarhelyi and colleagues note that, although authors frequently mentioned the limitations in their underlying assumptions, only 30% of studies used sensitivity analysis to explore the effect of changes in those assumptions [Udvarhelyi et al, 1992]. Adams and colleagues found that only 16% of studies had utilised sensitivity analysis in their review of economic analyses alongside randomised trials [Adams et al, 1992]. By contrast, Gerard found that 79% of the cost utility analyses reviewed had conducted a sensitivity analysis, although just over half of these were judged to be limited in scope [Gerard, 1992]. In a recent review of economic evaluations focusing on the methods employed to handle uncertainty, the concerns raised by the more general methodological reviews were found to be justified [Briggs & Sculpher, forthcoming].

The increasing use of the randomised controlled trial (RCT) as a vehicle for economic evaluation presents the opportunity to sample economic as well as clinical data, and offers the potential for uncertainty to be quantified through conventional statistical techniques [O'Brien et al, 1994]. However, most economic evaluations are based largely on deterministic data (ie, point estimates that have been taken from the literature or provided by experts) which have no intrinsic measures of variance, and therefore statistical analysis is impossible. Even where stochastic data (ie, data which have been sampled, allowing estimation of both average values and associated variance) are collected from a clinical trial, there is a continuing role for sensitivity analysis in dealing with those parameters where uncertainty is not related to sampling error [Briggs et al, 1994].
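The distinction between deterministic and stochastic data can be illustrated with a minimal sketch. Where patient-level cost data have been sampled alongside a trial, both a mean and its sampling variance can be estimated, so a conventional confidence interval is possible; a deterministic point estimate offers no such measure. All figures below are purely illustrative assumptions, not data from any study cited here.

```python
import math
import statistics

# Hypothetical per-patient treatment costs sampled alongside a trial
# (illustrative values only).
costs = [410.0, 395.0, 520.0, 480.0, 455.0, 430.0, 505.0, 470.0]

n = len(costs)
mean_cost = statistics.mean(costs)
se = statistics.stdev(costs) / math.sqrt(n)  # standard error of the mean

# Approximate 95% confidence interval (normal approximation)
ci_low = mean_cost - 1.96 * se
ci_high = mean_cost + 1.96 * se
print(f"mean cost = {mean_cost:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

A deterministic estimate taken from the literature would supply only the single figure `mean_cost`, with no basis for the interval; this is why sensitivity analysis remains necessary for such parameters.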

Many commentators in the economic evaluation methodology literature stress the importance of using sensitivity analysis to test the robustness of a study's conclusions [Weinstein et al, 1980; Weinstein, 1981; Drummond et al, 1987; Eisenberg, 1989; Luce & Elixhauser, 1990]. Perhaps more significantly, recent guidelines for conducting economic evaluation drawn up between the Department of Health and the Association of the British Pharmaceutical Industry [ABPI, 1994] stress not only that economic evaluation should include sensitivity analysis, but that the results of that analysis should be quantitatively reported. The failure of many studies to use any sensitivity analysis, or to present only a limited analysis, highlights the significance of these guidelines. Despite the many recommendations to conduct sensitivity analysis, few details are offered as to how exactly the analysis should be carried out and how the results should be presented. Sensitivity analysis is not a single technique but encompasses a range of approaches designed to examine the effect of changing the underlying assumptions of a study. Many of the terms employed, such as 'robustness' and 'plausible range', are ill-defined and open to a good deal of interpretation.

The purpose of this paper is to examine uncertainty in economic evaluation and how sensitivity analysis can be employed to represent that uncertainty. The paper should be of interest to all those intending to undertake economic evaluations, as well as to those considering applying the results of completed evaluation studies.