Below are some strong arguments for and against publishing the adjusted mortalities of child heart surgery units, with certain caveats. Personally, I’m in favour of publishing. Mortality ratios are not perfect, but I think they can help spot problems. They pointed to the problems of child heart surgery in Bristol and Oxford, and to the high death rates in Mid Staffordshire. The problem was that the medical and political establishments sought to discredit the data, rather than investigate swiftly to see if there was a problem and so prevent patients suffering avoidable harm.
Mortality ratios for child heart surgery have been published in New York since 1997 and the world hasn’t come to an end. So it can be done. The latest report was published in October 2011 (see http://www.health.ny.gov/statistics/diseases/cardiovascular/index.htm, and scroll down to ‘Pediatric Congenital Cardiac Surgery in New York State’ near the bottom).
What seems to have caused most offence is my statement that I would choose a unit with a below average mortality ratio for my child. There are clearly other complex factors involved but if my local unit had a high mortality ratio, I would want a good explanation as to why before I proceeded. The Bristol heart scandal taught me to look at the data, however imperfect, and ask awkward questions.
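[For readers new to the jargon, here is a rough sketch of what an adjusted (or standardised) mortality ratio is. The details of any particular model vary, and the models themselves are exactly what is disputed in the letters below:

    SMR = O / E

where O is the number of observed deaths at a unit and E is the number expected, i.e. the sum over that unit’s patients of a risk model’s predicted probability of death given each patient’s diagnosis and procedure. An SMR near 1 means a unit’s deaths match what its case-mix predicts; well above 1 invites awkward questions. Whether any current model produces trustworthy predictions for congenital heart disease is the nub of the argument.]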
Please enter the debate below and let me know which side you come out on.
Thanks
Phil Hammond
How to choose a child heart surgeon
Sir
MD makes a bold statement that he would choose where his child with congenital heart disease went for treatment based upon Brian Jarman’s (Dr Foster’s) latest website data on standardised mortality ratios. Would he? Really? There’s little doubt that most of us, public or professional, would choose a centre with the lowest mortality for the procedure our child needed. It doesn’t necessarily follow that a centre good at one thing is as good at another.
It would be wonderfully convenient for all if there was a valid way of “scoring” a congenital cardiac centre’s “quality”, but there isn’t. Just isn’t. Not anywhere in the world, despite years of international effort and numerous abandoned attempts to do so. The problem lies in the very nature of congenital heart disease, both in diagnosis and in treatment. Many things can go wrong with many parts of the heart, and it’s very common to have more than one thing wrong. When there is, there are few fixed patterns of abnormality – there are literally hundreds of permutations. Often a procedure doesn’t address all the abnormalities at once, and repeated procedures of the same kind or a different kind may be required over many years, even into adulthood. This huge diversity of diagnoses and treatments makes it impossible to put many patients into convenient, neat groups to compare outcomes. Risk adjustment, therefore, whilst desirable, is hugely complex. To date, nobody in the world has come up with a validated and complete model for adjusting risk so that all patients can be pooled together to produce a nice, tidy “quality score” for overall outcomes. That’s not an excuse, it’s statistical reality.
The methodology used by Brian Jarman (Dr Foster) and cited by MD was based upon that used for the Bristol Inquiry. It is crude, outdated and invalid. As far as one can see from his website it did not involve any clinical congenital cardiac input – bizarre in such a clinical minefield. In correspondence with MD, and between the interested parties following MD’s piece in the Eye, it was suggested that the government-funded national audit, congenital CCAD (the congenital arm of the Central Cardiac Audit Database, part of NICOR, the National Institute for Cardiovascular Outcomes Research), had refused to collaborate with Dr Foster (DFI).
Parents and patients over 16 give consent for their data to be sent to congenital CCAD, and it goes without saying that we should do our best to ensure their data is used responsibly. Dr Foster applied for CCAD’s data with the aim of comparing it with NHS HES (Hospital Episode Statistics) data using opinion-based (not data-based) risk adjustment. CCAD rejected the application on the basis of flawed methodology, and the application was also rejected by HQIP (the Healthcare Quality Improvement Partnership) because of the lack of demonstrable benefit to patients.
Prior to Dr Foster’s request, CCAD and HQIP had approved a research application (which had been through the usual independent rigours of research funding and ethical review) from the Clinical Operational Research Unit (CORU) at UCL, for a new approach to risk adjustment on real data. Their team have expert statisticians with direct input from congenital heart disease specialists. The first stage of this peer-reviewed, collaborative work is due to be published shortly.
In emotive situations it’s very easy to get the public to pay attention to sensational statistics, even when the stats are wrong. The importance of being fair to the public and being fair to doctors in such a complex field can’t be overemphasised. In the UK we (congenital CCAD) put more data on treatment for congenital heart disease in children and adults into the public domain than any other country in the world. Risk stratification still hasn’t been worked out by anyone, anywhere, so it’s not proper or fair to use it in any of its current forms.
MD says on his website “It seems churlish to complain that there isn’t a robust method of risk adjustment for PCS if you aren’t enabling your best statisticians to collaborate on it”. We are collaborating with an independent expert statistical research group and are disappointed to hear from MD that they are not the best. So who are the best and the worst statisticians? I think we should be told. How about an Eye article on “how to choose your statistician”? A league table with quality scores would be handy.
John Gibbs FRCP
Lead clinician for congenital heart disease
Central Cardiac Audit Database
NICOR
170 Tottenham Court Road
London W1T 7HA
Dear Phil,
Thank you for sending me your correspondence with John Gibbs, and also that with Leslie Hamilton on Monday.
The data that I put on my website is not a “quality score” for overall outcomes and I did not claim that it was. The data represents adjusted mortalities for PCS units. I acknowledged on my website that the case-mix used is outdated because it is based on that used for the Bristol Inquiry. I also mentioned that we had applied for the more up-to-date CCAD data but had been turned down by HQIP. John confirms that our application was partly “rejected by HQIP (the Healthcare Quality Improvement Partnership) because of the lack of demonstrable benefit to patients.” As John probably knows, there was very extensive clinical congenital cardiac input into the methodology used for the analyses that our unit did for the Bristol Inquiry. They were approved for the Inquiry report, including by the congenital cardiac clinicians (http://www.bristol-inquiry.org.uk/).
I said in my reply to the email from Leslie Hamilton that you sent me on Monday:
“Thank you for copying me into your email exchange with Leslie Hamilton. He says that “[i]t would seem obvious to use outcome data to assess the quality of care when deciding which units to designate as the surgical units for the future. However the clinical Steering Group (who advised the JCPCT) was very clear that the data should not be used in this way.” I agree that it seems obvious that outcome measures should be one of the factors used when deciding which units to designate as the surgical units for the future. I disagree with the clinical Steering Group that they should not be considered. As Leslie knows, at Bristol the adjusted mortality for open-heart PCS for children under one was high for about a decade, significantly high for several years, and it was not until the results were published and an external investigation was carried out by Hunter and de Leval in 1995 that the extensive problems at that unit were confirmed. The internal report of the Regional Cardiac Strategy, Cardiology and Cardiac Surgery Medical Advisory Subcommittee, dated 1 November 1988 and chaired by Dr Deirdre Hine (Bristol Inquiry reference UBHT 0174 0057), had come to conclusions similar to those that we reached in the Bristol Inquiry report regarding the problems at Bristol, and had made recommendations similar to ours, but little action seemed to have been taken to implement them. Once action was taken after the Hunter and de Leval external investigation, the mortality ratio that I have referred to dropped at Bristol from 29% to 3.5% within three years.
I believe that the hundreds of parents who took their children to Bristol for PCS should have been able to see the significant differences between the adjusted mortality ratios of the English PCS units, to allow them to make decisions as to where to take their child, just as much as that information should be one of the factors in deciding which units to designate for the future. I mentioned on my website the limitations of the data that we used and the need for more up-to-date information on case-mix, together with the fact that we (Paul Aylin) had applied for the CCAD data. Our application was turned down by HQIP. David Cunningham said in his email to me on 31 October: “I think HQIP’s point was there seemed little in the application that demonstrated any value to patients.” I find HQIP’s attitude appalling: it doesn’t augur well for a re-application.
Surely it should be possible to share the CCAD data (our unit is not without experience in calculating adjusted hospital mortality ratios, including those for PCS). There could be advantages for both the public and those making decisions about designating units.”
That is still my view. I believe, on balance, that it would be beneficial to get ‘good enough’ measures of adjusted mortalities for PCS units, using the best data available and being aware of the caveats.
Brian Jarman.
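[For the statistically curious: a minimal sketch of how a ‘good enough’ adjusted mortality ratio with an uncertainty interval might be computed. This is not the Dr Foster, CCAD or CORU method; the figures are invented, and the exact Poisson-based interval (Ulm’s method, a standard choice for indirectly standardised ratios) is for illustration only.]

```python
# Illustrative only: standardised mortality ratio (SMR) with an exact
# Poisson-based 95% interval (Ulm's method). Not any audit body's code.
from scipy.stats import chi2

def smr_with_ci(observed, expected, alpha=0.05):
    """SMR = observed/expected deaths, with an exact Poisson CI.
    'expected' is the sum of model-predicted death risks for the unit."""
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return observed / expected, lower, upper

# Invented example: 30 deaths at a unit whose case-mix predicts 18.5.
smr, lo, hi = smr_with_ci(30, 18.5)
print(f"SMR = {smr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # interval sits above 1
```

[A unit whose whole interval sits above 1, like the invented one above, is exactly the kind of signal Jarman argues should prompt investigation rather than dismissal.]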
From: John Gibbs
Subject: Re: MD’s piece on choosing a child’s heart surgeon
Date: Thu, 8 Nov 2012 16:10:23 +0000
To: hamm82@msn.com
Hmm… it would redress the balance a bit if you published this in the Eye rather than burying it on the website.
You really don’t need to persuade me (or my colleagues at CCAD) of the benefits of putting good data in the public domain. I’ve worked my nuts off at CCAD for over 16 years trying to do that and trying to persuade surgeons that it isn’t just a witch hunt. And I didn’t say that calculating SMRs couldn’t be done. I said there was no validated way of doing it properly at present. If it can’t be done properly it’s not fair to the public or doctors to do it. You keep citing New York State. Yeah – nobody else in the world has adopted their risk adjustment, have they? Wonder why that is (it’s totally unfathomable on their website).
CCAD are very much in favour of sharing data, but only if data requests pass the usual reviews by CCAD and HQIP. We have shared data with a wide variety of researchers, clinicians and commissioners over the years. We’ve only rejected two applications ever – and one of those was because the data requested was already available on our public website.
Brian Jarman says he’s appalled at our (and HQIP’s) decision to reject Dr Foster’s application for data. I think it would have been appalling and irresponsible for us to agree to a collaborative study using outdated methodology which has been abandoned by everyone else in the field. The request was rejected because of its poor design, which meant it was highly unlikely to come up with information of benefit to patients.
Come on Phil, be objective. Brian actually admits that his SMR methodology is outdated, that it may not be suitable for today’s case-mix, and that he has not had up-to-date input from specialists in the field (he just cites clinical input from the Bristol Inquiry). Is that reasonable? Does it sound like “the best” statistical approach? Does it really sound fair and in the public interest?
It’s worth bearing in mind that paediatric cardiac surgeons (and cardiologists) do incredibly complex and stressful stuff with crazy working hours and bugger all home life. Even the ones in the highest general regard feel hounded by repeated analyses of mortalities done by different bodies, using different methods and coming up with different results, and yet another “scandal” baby-murdering story in the ST. In the last few years at least four paediatric cardiac surgeons have either left the country or left the specialty prematurely – and it’s increasingly hard to find trainees who want to come into a specialty under such relentless criticism. If it carries on like this there won’t be anything like enough surgeons to cope with the workload, and then the patients will be really screwed. I’m not suggesting one should be soppy or give them any special treatment – just that we should all be fair. And I can’t see any reason why it’s not possible to be fair to doctors and patients at the same time.
We’ve argued about the Safe & Sustainable decision-making before, so prob not much point revisiting that, but wow, does that need some simple, objective thinking injected into it! (Not from you, judging from your previous logic-free comments!!) Looks in a right mess just now – what a tragic waste of an opportunity!
Pip pip
John
Thanks John
Jarman’s analysis picked up Oxford as a significant outlier. As you know, surgery at Oxford was suspended in 2010, but largely because of a whistleblower. Did CCAD flag it up too? I know Paul Aylin from the DFU flagged up Oxford as an outlier in the BMJ in 2004, and got referred to the GMC for his troubles. Mortality ratios might be crude but they do point to where the outliers might be, and I’d rather have them published than not. I’d also much prefer it if you and the DFU collaborated on case-mix, coding etc to get as fair an analysis as you can. If you don’t, another analysis will come out through the FOI cracks (eg Spiegelhalter’s a few years ago).
First you tell me no one in the world has published mortality ratios for PCS, then I discover New York has been doing it since 1997, and you dismiss it as unintelligible. It would be more interesting to ask them how they did it, what the reaction has been, and whether it has increased pressure on surgeons or improved outcomes as it did in adults.
With Tim Kelsey on the NHS CB, I would imagine all outcomes – for GPs, nurses, orthopaedic surgeons – will be published, all imperfect but working towards transparency and accountability. You might as well stay ahead of the curve and work with the DFU.
Phil
On 9 Nov 2012, at 00:18, Jarman, Brian wrote:
Phil,
I saw on your website John’s response to my reply to his letter below. Neither he nor Leslie has corresponded directly with me or copied me into their emails to you about my publication.
I wish John would quote me correctly. It is not correct to say “Brian Jarman says he’s appalled at our (and HQIP’s) decision to reject Dr Foster’s application for data.” I said that I was appalled that HQIP rejected our application “because of the lack of demonstrable benefit to patients.” That is very different.
Brian.
From: John Gibbs
Sent: 09 November 2012 11:23
To: Jarman, Brian
Subject: Re: MD’s piece on choosing a child’s heart surgeon
To be fair to HQIP, Brian, I think we share the responsibility for the rejection. HQIP made their comments having seen ours, so my interpretation was that they couldn’t see any benefit to patients from using outdated methodology. HQIP also knew (having approved the data request) of CORU’s ongoing research into risk adjustment. Clearly if there was validated risk adjustment then putting SMRs into the public domain would be in the public interest. If our collaborative research with CORU (or with any other group) comes up with validated risk adjustment that gets past the scrutiny of peer review, the SCTS and the BCCA, we would be mad not to publish SMRs. But we won’t until the methodology is right. It’s not fair to patients or doctors to do otherwise.
John
John Gibbs
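[A note on what “validated risk adjustment” means in practice. One standard test a risk model must pass before its predictions can be trusted is discrimination: does it assign higher predicted risks to the patients who die than to those who survive? A hypothetical sketch, not CORU’s or NICOR’s actual validation code, with invented data:]

```python
# Hypothetical illustration of a discrimination check (the C statistic),
# one standard component of validating a risk-adjustment model.
from itertools import product

def c_statistic(predicted_risks, died):
    """Probability that a randomly chosen patient who died had a higher
    predicted risk than a randomly chosen survivor (ties score 0.5)."""
    deaths = [p for p, d in zip(predicted_risks, died) if d]
    survivors = [p for p, d in zip(predicted_risks, died) if not d]
    scores = [(pd > ps) + 0.5 * (pd == ps) for pd, ps in product(deaths, survivors)]
    return sum(scores) / len(scores)

# Invented data: model-predicted risks vs actual outcomes for six patients.
risks = [0.30, 0.05, 0.60, 0.10, 0.20, 0.08]
died = [True, False, True, False, False, False]
print(c_statistic(risks, died))  # 1.0 here; ~0.5 would mean no better than chance
```

[Calibration – whether predicted risks match observed death rates across risk bands – is the other usual hurdle; a model that fails either is what Gibbs means by unvalidated.]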
November 9. From Brian Jarman to John Gibbs
Thanks John.
I’m grateful that you have corresponded with me directly.
I think you know that we have a fair amount of experience in calculating risk-adjusted mortality ratios and would have used whatever we found was the best method, once we had the data to do so. The CCAD data has been available for years. I do believe that ‘good enough’ analyses of PCS data for units (as we produced for Bristol: the results will never be perfect, but caveats can be given) should have been made available to the public since the days of the Bristol Inquiry. I find the current situation quite indefensible when one considers the reduction in death rates that occurred at Bristol once Private Eye had published its articles in 1992 and Hunter and de Leval had done their inspection and made their recommendations in 1995 – and that was using poorer data than is currently available.
Are you of the school that considers that only perfect adjusted data for units should be available to the public? In a somewhat analogous situation, the Care Quality Commission criticised the year-long investigation of Mid Staffs done by its predecessor organisation, the Healthcare Commission, for roughly similar reasons. I think it’s generally agreed that in fact the HCC did a pretty good job: it might have taken a bit too long, and they might have been more proactive after their 23 May 2008 letter, but it was certainly good enough for action to be taken for patient safety. (The matter is on my mind at the moment because I went to a lecture about the Mid Staffs Independent Inquiry by Robert Francis at the Medical Society of London last night.) Even with adult cardiac surgery, fully adjusted data is not published, as far as I know. I have been involved with both the Bristol Inquiry and the Mid Staffs Public Inquiry, and I am not totally convinced that the doubts of the HQIP experts about our ability to analyse the CCAD data were the sole reason for their rejecting our application.
I think it should be possible for those with considerable expertise to collaborate to produce the best results for patients.
I’ll copy this to Phil – he has been involved in the debate.
Brian.
Update from Brian Jarman, May 13, 2015
Dear Phil,
I should mention that although we use HES data and the methodology and case-mix that were used for the Bristol Royal Infirmary Inquiry (hence the case-mix recommended by the cardiac surgeons is now out of date), there could be advantages for our unit in working with NICOR and CCAD data to update our case-mix to match theirs, because analyses based on HES data have advantages of their own:
1. Our HES-based analyses are done monthly and are available two or three months after the month of the surgery – probably at least a year more timely than the NICOR CCAD-based results.
2. Our analyses use HES administrative data, which is complete and does not depend on variables such as the patient’s weight (used by NICOR – I understand the Leeds unit had missing values for weight in one of the NICOR analyses). At Bristol we decided to use HES data and not the cardiothoracic surgical register (the precursor of CCAD), partly for that reason.
Regards,
Brian.