How many lives do our CMAM programmes save? Statistical commentary
By Max O Bachmann
Max Bachmann is Professor of Health Services Research at Norwich Medical School, University of East Anglia, UK
The authors of this study aimed to answer this question by estimating how many children who received CMAM in Nigeria died, then estimating how many of those children would have died if they had not received CMAM, and subtracting the first estimate from the second. They then went on to estimate how much CMAM cost for every death averted.
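The arithmetic at the heart of the study is simple subtraction and division. The sketch below uses entirely hypothetical numbers (none of them come from the study) purely to show the shape of the calculation:

```python
# Minimal sketch of the deaths-averted and cost-effectiveness calculation.
# All values here are hypothetical placeholders, not figures from the study.

deaths_with_cmam = 2000        # hypothetical: estimated deaths among treated children
deaths_without_cmam = 12000    # hypothetical: estimated deaths had they not been treated
programme_cost = 1_000_000     # hypothetical: total programme cost

deaths_averted = deaths_without_cmam - deaths_with_cmam
cost_per_death_averted = programme_cost / deaths_averted

print(deaths_averted)           # → 10000
print(cost_per_death_averted)   # → 100.0
```

The complexity of the paper lies not in this subtraction but in estimating the two death counts, and in quantifying how uncertain they are.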
Some readers may be put off by the equations and graphs in the article, but the basic principles are quite simple. The calculations are all based on just seven sets of numbers (called “inputs” or “parameters” in this type of modelling study). The six parameters used to estimate deaths averted are listed in the first column of Table 2. The seventh parameter is the total cost of the CMAM programme.
The results of such a study are uncertain, partly because of uncertainty about the true values of the parameters. Much of the paper, in the section headed “Accounting for uncertainty – A sampling-based approach”, is about how the authors dealt with this “parameter uncertainty”. Their method is what health economists call a “probabilistic sensitivity analysis”. In other words, if we don’t know the true value of some parameters, we change their values many times, repeating the calculations each time, and then see how much the final results vary across the repeated samples. We get a computer to do this, each time sampling from a frequency distribution that we have assumed for each parameter. We can then report the uncertainty around each estimate, conventionally using the 5th and 95th percentiles.
The third column of Table 2, and Figures 2-4, describe the frequency distributions used in this study. As the figures illustrate, the computer will mostly sample numbers near the middle of the distribution, which is where the average or most likely value of the parameter lies. Values further away from the middle are sampled less often, because there is reason to believe they are less likely to be true. The only exception is the proportion of defaulters cured, which for some reason the authors assume has an equal probability of being anywhere between 5% and 20%, but zero probability of being less than 5% or more than 20%. There is no statistical uncertainty about the number treated. The statistical uncertainty about the proportions discharged as cured or defaulting is tiny because of the enormous sample size (almost a million children). I don’t understand the reasoning behind the uncertainties for background mortality and mortality in untreated cases (called σ in Table 2), which do not seem to be based on any statistical principle that I am aware of.1 In any case, although there is little or no uncertainty about six of the seven parameters used to estimate the numbers of deaths averted, when they are all combined through the various calculations in the model, the confidence interval around the number of deaths averted is quite wide, that is, plus or minus 20% of the central estimate.
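The sampling procedure described above can be sketched in a few lines of code. The example below is not the study’s model: the distributions, parameter values, and the simple mortality formula are all hypothetical, chosen only to illustrate how repeated sampling turns uncertain inputs into a percentile interval around deaths averted. The one feature taken from the commentary is the uniform distribution between 5% and 20% for the proportion of defaulters cured:

```python
import random
import statistics

random.seed(1)

N_TREATED = 950_000   # roughly the number treated; assumed fixed (no sampling uncertainty)
N_RUNS = 10_000       # number of Monte Carlo repetitions

def deaths_averted_once():
    # Hypothetical distributions, for illustration only; the study's
    # actual parameter values and model structure differ.
    mort_untreated = random.gauss(0.10, 0.01)    # case fatality if untreated
    mort_treated = random.gauss(0.02, 0.002)     # case fatality if treated
    prop_default = random.gauss(0.15, 0.01)      # proportion defaulting
    # Equal probability anywhere between 5% and 20%, zero outside that range,
    # mirroring the assumption described for the proportion of defaulters cured
    prop_default_cured = random.uniform(0.05, 0.20)

    deaths_if_untreated = N_TREATED * mort_untreated
    # Illustrative assumption: defaulters who are not cured face untreated mortality
    deaths_with_cmam = N_TREATED * (
        (1 - prop_default) * mort_treated
        + prop_default * (1 - prop_default_cured) * mort_untreated
    )
    return deaths_if_untreated - deaths_with_cmam

results = [deaths_averted_once() for _ in range(N_RUNS)]

median = statistics.median(results)
cuts = statistics.quantiles(results, n=100)
low, high = cuts[4], cuts[94]   # 5th and 95th percentiles
print(f"deaths averted: {median:.0f} (5th-95th percentiles: {low:.0f} to {high:.0f})")
```

Each repetition draws a fresh value for every uncertain parameter, recalculates deaths averted, and the spread of the 10,000 results gives the 5th and 95th percentiles reported as the uncertainty interval.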
Commercially available software packages commonly used by health economists, such as TreeAge, make it easy to do probabilistic sensitivity analyses like this, with the assistance of graphics and menus. However, they can be expensive and are more than is really needed for an analysis such as this. The advantages of using open-access software such as R are that it is free2 and, as Box 1 shows, quite simple and transparent to anyone who is not put off by simple programming language.
1A footnote marked ** was subsequently added to Table 2 of the article by the authors to explain the basis of their reasoning.
2R is a programming language and software environment for statistical computing and graphics. R is a GNU project (a free software, mass collaboration project). http://en.wikipedia.org/wiki/GNU_Project