Today Francis Maude will claim that thousands of patients have moved GP practice as a result of the publication of performance data. When Williams asked the Cabinet Office to clarify, the Department of Health cited a document called Making Open Data Real: A Public Consultation, in particular section A1.16:
A1.16 Seminal research in the early 1990s showed the impact of public reporting of mortality rates in New York; those physicians and hospitals publishing better health outcomes subsequently saw their market share grow.[43] In the UK, NHS Barnsley has shown the effect of simple kite marking information on choice. Fourteen GP practices, serving 40% of the local population, were accredited with Barnsley's own "Green Tick" professional standards kite mark. Between the launch of the scheme in 2008 and April 2011, 4,500 patients have so far chosen to switch to one of the "Green Tick" practices. On finding their GP practice had been validated, one patient said: "I have been a patient [here] for many years but seeing that the practice has received the award assures me that I am in the right place to receive the care I need when I need it."

The reference [43] is to work carried out by the RAND Corporation and published by the Health Foundation. This report compares the results of papers published since 2000 on the effects of publishing performance data on hospital improvement. The New York example interested me:
research in the early 1990s showed the impact of public reporting of mortality rates in New York; those physicians and hospitals publishing better health outcomes subsequently saw their market share grow

This seems to contradict the statement that Prof Gwyn Bevan made to the Health Select Committee in November 2010:
There are systematic reviews in the United States for putting information out on a hospital's performance. They consistently find that people do not switch from poor to high-performing hospitals. One of the paradoxes about the New York study where they issued data on risk-adjusted mortality rates for cardiac surgery is that patients continued to go to hospitals with high mortality rates. But by publishing the information, the hospitals got better. The most famous case is Bill Clinton, who had his quadruple bypass in a hospital that the information said at the time was one of the two worst outliers in the whole of New York State he could have gone to.

The Making Open Data Real document says that the physicians and hospitals that published the better data saw their market share increase, so if Prof Bevan is right, that would imply that the overall number of patients must have increased, since those with the worst results saw no change in their patient numbers.
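To spell out that arithmetic (a sketch using notation of my own, not drawn from either source): let $n_g$ and $n_b$ be the numbers of patients treated by the better- and worse-performing providers, so the total is $N = n_g + n_b$ and the better performers' market share is

$$ s_g = \frac{n_g}{N} = \frac{n_g}{n_g + n_b}. $$

With $n_b$ held constant, $s_g$ increases only if $n_g$ increases. So if $s_g$ rose while, as Bevan reports, the worse performers kept their patients, then $n_g$ must have risen, and with it the total $N$. The two accounts can only both be true if the overall patient population grew.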
To find out who is right, I read the Health Foundation document. On pp. 12-13 it shows the effect of publishing mortality data on patients' choice of hospitals and surgeons.
On the New York State Cardiac Surgery Reporting System, it says the published reports "provide conflicting results". Here are summaries of those reports, first for hospitals:
Mukamel and Mushlin (1998): "providers with better outcomes had higher growth rates in market share".
Hannan et al (1994): "did not find a change in hospital surgical volume during roughly the same time period".
Chassin (2002): compared the market share of hospitals identified as statistical outliers in the year before they were named as outliers with the year after (1989 to 1995). Changes were small; fewer than half the hospitals saw an increased market share for high performance or a decreased share for poor performance.
Jha and Epstein (2006): "found no evidence" that the data "had a meaningful impact on hospitals' or surgeons' market share".
Cutler et al (2004): reported an initial decline in volume after a hospital was designated a 'poor performer', but "this decline was not statistically significant one year after the initial report". They did not find a corresponding increase in volume among low-mortality hospitals.
As you can see, there is no overwhelming evidence that publishing mortality data increases the number of patients using a hospital.
Now for surgeons (in the US surgeons are typically independent contractors).
Mukamel et al (2004/05): Medicare enrolees were less likely to select a surgeon with a higher mortality rate.
Mukamel and Mushlin (1998): suggested that physicians with better outcomes had higher growth rates in their charges.
Hannan et al (1994): did not find a change in individual providers' volume of surgery.
Hannan et al (1995): did not find that patients switched, but found that the worst surgeons stopped practising.
Jha and Epstein (2006): reported that surgeons performing in the bottom quartile were more likely to cease practising.
Mukamel et al (2000): said that only 20% of managed care organisations in New York indicated that the reports were a major factor in their contracting decisions, but analysis of actual contracting patterns showed that mortality scores did not affect their choice of surgeon.
Mukamel et al (2002): found that only in parts of New York state did higher reported quality make it more likely that an MCO would contract with a surgeon.
Again, the results are mixed, and there is no clear finding that publishing the data increased the business of the better performers.
Clearly, in its selective reading of the Health Foundation report, the government chose the results that agreed with its point of view and ignored the other studies.