
Payment by results schemes – advice for the Government and advice for contractors

Advice for the Government
I heard on the radio today that the Government will establish a payment by results scheme for a service to reduce recidivism among offenders after a short (less than one year) jail sentence. Currently, approximately 60 per cent [1] of such offenders are re-incarcerated within one year – the so-called ‘revolving door’ [2]. Contractors will help released offenders find their way in life, for example by making deals with housing associations to provide accommodation and by providing other sources of support. These contractors will be remunerated in proportion to their success in reducing reoffending. However, the scientific evidence that this will work is not strong [3], and there are a number of potential challenges to implementing such a scheme [4]. These include: the potential for gaming of the system and ‘cherry-picking’ certain cases to maximise returns; the difficulty in measuring outcomes that cannot easily be defined or evaluated; where to obtain the payments from, since not all savings made from a reduction in crime would be realised as money, and the savings that are would accrue to both the public and private sectors; and the limited scale of change possible, as most successful interventions have produced only small changes in outcomes [4,5].

More important, from the point of view of remuneration, is that the extent to which it could work – the effect size – is poorly calibrated, because insufficient head-to-head trials of different interventions to reduce recidivism have been conducted. This places the taxpayer at considerable risk of either under- or over-paying for the service. The corollary is that payment by results schemes should only be introduced where there is a good way of calibrating the cause-and-effect consequences of the service. I know whereof I speak, since I chair the scientific advisory committee for the payment by results scheme for multiple sclerosis drugs. The idea there is that the drug companies will repay some of the cost of the drugs if they underperform, or the Treasury will provide a retrospective enhanced payment if the drugs work better than expected. The problem is that the effect of the drugs has been properly calibrated only over two years of use, whereas the scheme runs for ten years and is concerned with longer-term outcomes. So we have to try to work out whether the drugs are working better or worse than expected, not by means of a proper experiment (a head-to-head trial), but simply by observing how well people do on the medicines and comparing this with a retrospective cohort of patients. This is a very tricky and uncertain business. This problem, of working out how effective interventions are, leads me to my advice for contractors.
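The taxpayer’s exposure can be made concrete with a toy calculation (every figure below is hypothetical and chosen only for illustration): if the price per unit of ‘result’ is negotiated against an assumed effect size, any gap between the assumed and true effect translates directly into over- or under-payment.

```python
# Toy illustration (all figures hypothetical) of why a poorly calibrated
# effect size exposes the taxpayer: the price is negotiated against an
# assumed effect, so the gap between assumed and true effect becomes
# over- or under-payment.

price_per_point = 1_000_000   # £ paid per percentage-point reduction in reoffending
assumed_effect = 10           # effect size (points) the contract was priced against

for true_effect in (5, 10, 15):   # plausible range when calibration is poor
    payment_gap = price_per_point * (assumed_effect - true_effect)
    if payment_gap > 0:
        label = "over-payment"
    elif payment_gap < 0:
        label = "under-payment"
    else:
        label = "fair"
    print(f"true effect {true_effect:2d} points: gap £{payment_gap:+,} ({label})")
```

With a ten-point assumed effect priced in, a true effect of five points leaves the taxpayer £5m out of pocket per year, and a fifteen-point effect short-changes the contractor by the same amount – hence the need for proper calibration before the price is set.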

Advice for contractors
As a contractor, I would choose my ground very carefully. I would try to provide services in situations where there is likely to be a positive underlying trend. In that case, the underlying trend would contribute to my ‘results’. With the wind behind me, I would have a very good chance of making a sturdy profit.
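The arithmetic behind this tactic is simple. Assuming (hypothetically) a background recidivism rate that is already falling by a few percentage points a year, a contractor paid on the raw year-on-year drop is credited with the secular trend as well as any genuine effect of the service:

```python
# Toy illustration (all numbers hypothetical): how a favourable
# underlying trend inflates a contractor's apparent 'results'.

baseline_rate = 0.60      # recidivism rate at contract start
secular_trend = -0.03     # rate already falling 3 points/year with no intervention
true_effect = -0.02       # genuine effect of the contractor's service

observed_rate = baseline_rate + secular_trend + true_effect

apparent_result = baseline_rate - observed_rate   # what the contractor is paid on
genuine_result = -true_effect                     # what the service actually achieved

print(f"apparent reduction: {apparent_result:.2f}")  # 0.05
print(f"genuine reduction:  {genuine_result:.2f}")   # 0.02
```

Here the contractor is paid for a five-point reduction while delivering only two points of it; without a concurrent control group there is no way to tell the two apart.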

A third way
There is of course an alternative proposition: to bring in the payment by results scheme as part of a prospective, carefully designed study. For example, the intervention (payment by results) could be rolled out sequentially across different parts of the country, with the order determined at random – a so-called cluster stepped wedge design [6]. Such a study, if large enough, could be used not only to tell whether the general idea of payment by results works, but also to determine which type of scheme is most effective. In other words, it would be possible to get a handle on which types of service provide the best outcomes. The Cabinet Office has advocated such experimental approaches to public policy [7]. I strongly urge the Government to follow its own excellent plan of making policy on the basis of empirical evidence.
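For readers unfamiliar with the design, the essential idea can be sketched in a few lines of code: every cluster (here, four hypothetical regions, not a real trial plan) starts in the control condition, and the step at which each crosses over to the intervention is assigned at random; once crossed over, a cluster never reverts.

```python
import random

# Minimal sketch of a cluster stepped wedge rollout: all regions begin
# as controls (C) and cross over to the intervention (I) in a randomly
# determined order, one region per step, never reverting.

regions = ["North", "South", "East", "West"]   # hypothetical clusters
random.seed(42)                                # reproducible example only
rollout_order = random.sample(regions, k=len(regions))

n_steps = len(regions) + 1                     # baseline period + one step per region
for step in range(n_steps):
    treated = set(rollout_order[:step])        # regions that have crossed over so far
    row = ["I" if r in treated else "C" for r in regions]
    print(f"step {step}: " + " ".join(row))
```

Because every region spends time in both conditions, and the crossover order is random, the design lets the analysis separate the effect of the intervention from any underlying secular trend – exactly the confounding that bedevils simple before-and-after comparisons.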

References
1. Ministry of Justice. Table 19a: Adult proven re-offending data, by custodial sentence length, 2000, 2002 to March 2011. In: Early estimates of proven re-offending: results from April 2011 to March 2012. 2012. Available from: http://www.justice.gov.uk/downloads/statistics/reoffending/proven-reoffending-apr10-mar11-tables.xls (accessed 9 May 2013).

2. Cuthbertson P. The failure of revolving door community sentencing. Centre for Crime Prevention. 2013. Available from: https://docs.google.com/file/d/0B25IaOtJKlvwYjkxVENsbi1TbTg/edit?usp=sharing (accessed 9 May 2013).

3. Nicholson C. Rehabilitation Works: Ensuring payment by results cuts reoffending. London: Centre Forum; 2011.

4. Fox C, Albertson K. Is payment by results the most efficient way to address the challenges faced by the criminal justice sector? Probation Journal. 2012; 59(4):355-73.

5. Fox C, Albertson K. Payment by results and social impact bonds in the criminal justice sector: New challenges for the concept of evidence-based policy? Criminology & Criminal Justice. 2011;11:395.

6. Brown CA, Lilford RJ. The stepped wedge trial design: a systematic review. BMC Medical Research Methodology. 2006;6:54.

7. Haynes L, Service O, Goldacre B, Torgerson D. Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials. Cabinet Office. Available from: https://www.gov.uk/government/publications/test-learn-adapt-developing-public-policy-with-randomised-controlled-trials (accessed 9 May 2013).


Public inquiries versus systematic collection of the evidence

The Francis Report [1] has had a great influence on British public life – from the Cabinet, through the boardroom, down to the shop floor. The report will be widely quoted for many years to come. It is 1,782 pages long and contains no fewer than 290 recommendations. But how much can one really learn from such an in-depth analysis of just one site? Contrast the Francis Report with a recent systematic overview of the evidence on quality improvement from the Agency for Healthcare Research and Quality (AHRQ) in Washington, summarised in Annals of Internal Medicine [2]. This AHRQ study is based on a systematic and intellectually grounded analysis of the entire high-quality world literature. It builds on a similar review conducted on behalf of AHRQ by the Stanford Evidence-based Practice Center over a decade ago – and a very interesting and active decade it has been, with an exponential increase in research on the quality and safety of healthcare.

Service delivery interventions to improve quality and safety can be divided, from a methodological point of view, into two classes [3]. Interventions applied close to the patient, with a specific objective in mind, are ‘targeted interventions’. Interventions applied further upstream of the patient, with multiple objectives in mind, are ‘generic interventions’; these have much broader, more diffuse effects on quality. An example of a targeted intervention is the use of ultrasound to guide the placement of intravenous cannulae. Examples of generic interventions include improving the nurse-to-patient ratio or changing human resources policy.

Targeted interventions are much easier to study – for example, they are much more amenable to evaluation through randomised trials. The AHRQ report shows that a number of targeted interventions are effective, including the use of perioperative checklists, outlawing hazardous abbreviations, medication reconciliation, and various types of guideline, such as those concerned with ventilator-associated pneumonia, prolonged use of urinary catheters, and thromboembolism prophylaxis.

Generic interventions, with their diffuse effects, are more difficult to study than targeted interventions. Nevertheless, a compelling case for or against a generic intervention can often be built systematically by triangulating various sorts of evidence between and within studies [3]. It is in this way, for example, that the authors of the overview conclude that improving the nurse-to-patient ratio leads to better outcomes (including hospital mortality). The report also produces reasonably convincing evidence in favour of rapid response teams, which can be called out from the intensive care unit to attend patients who are deteriorating on the wards. There is very strong evidence for simulation training, especially for complicated technical procedures, but the case for specific team training (as opposed to training in teams) was somewhat less convincing. There is evidence that surgical ‘score cards’ – that is to say, a system where surgeons collect detailed data on their cases – lead to improved care when they are owned by the surgical societies and individual hospitals are put in charge of improvement efforts. This result would seem to vindicate my recent post on how the outcomes of surgical procedures should influence practice.

One ‘old chestnut’ is the question of top-down cultural change. The evidence that cultural change can be produced from the top down through ‘heroic’ leadership is extremely unconvincing; a dispersed model of leadership, combined with bottom-up specific improvement practices, seems to be the way to go.

The report does not treat safety interventions as a black box, but seeks to understand what makes an intervention work or fail. For instance, rapid response teams depend on both good monitoring of patients’ conditions on the ward (the afferent arm) and a rapid, efficient response (the efferent arm). And many guidelines, such as checklists, will merely elicit ritualistic displays of compliance unless practitioners have first been convinced of their rationale.

The above are just a small sample of the extensive evidence in the overview. It is a rich source of high quality evidence, based, wherever possible, on comparative studies. It should be essential reading for clinicians and health service managers.
References

1. Francis R. Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry. Available from http://www.midstaffspublicinquiry.com/report. Accessed 14 March 2013.

2. Shekelle PG, Pronovost PJ, Wachter RM, Taylor SL, Dy SM, Foy R, Hempel S, McDonald KM, Ovretveit J, Rubenstein LV, Adams AS, Angood PB, Bates DW, Bickman L, Carayon P, Donaldson L, Duan N, Farley DO, Greenhalgh T, Haughom J, Lake ET, Lilford R, Lohr KN, Meyer GS, Miller MR, Neuhauser DV, Ryan G, Saint S, Shojania KG, Shortell SM, Stevens DP, Walshe K. Advancing the Science of Patient Safety. Ann Intern Med. 2011;154(10):693-696.

3. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ 2010; 341 doi: http://dx.doi.org/10.1136/bmj.