
Archive - Month: November 2012

November 14, 2012

Medicine Balls 1326
Filed under: Private Eye — Dr. Phil @ 1:05 pm

Show Me the Data

 

On October 17th, Tim Kelsey, the National Director for Patients and Information at the NHS Commissioning Board and founder of Dr Foster, said he ‘should be sacked’ if the NHS doesn’t undergo ‘a data revolution’ under his leadership. Both Kelsey and David Cameron are fond of citing the publication of outcome data for adult heart surgeons in England as proof of a more transparent, accountable NHS. Alas, as the Telegraph spotted, the scheme has stalled due to a lack of funding.

 

The publication of comparative clinical outcomes was one of the key recommendations of the Bristol heart inquiry and in 2004, heart surgeon Sir Bruce Keogh – now medical director of the NHS – managed to persuade his 240 colleagues to publish the results of adult heart surgery. Dramatic improvements in survival rates followed. As Kelsey puts it: ‘In some procedures, more than a third of patients are living when they might previously have died and adult heart surgery in England is measurably, demonstrably and statistically better than anywhere else in Europe.’ Or at least it was until they stopped publishing the data.

 

Mortality ratios don’t give the full picture of how a surgeon, unit or hospital is performing, but if they’re high they warrant proper investigation and questions from patients and relatives. This methodology not only helped spot the Bristol heart scandal but guided the unit’s eventual turnaround. When an external investigation finally took place at Bristol, changes were implemented that saw the mortality ratio drop from 29% to 3.5% within three years. Mid Staffordshire hospital had a significantly high mortality ratio from 1998. The Francis report (due in January 2013) might finally tell us, 15 years later, why mortality ratios and other statistical alerts were ignored for so long. Julie Bailey, founder of Cure the NHS, has published a book – ‘From Ward to Whitehall’ – that shows how patients and relatives were ignored too. The truth of the statistics is often revealed by a visit to the ward.
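As a rough illustration of the arithmetic behind these ratios, here is a minimal sketch of the standard observed-over-expected calculation with a Byar’s-approximation confidence interval. The function names and per-patient risks are invented for illustration; this is not the Dr Foster Unit’s actual code or casemix model.

```python
# Minimal sketch of a standardised mortality ratio (SMR): 100 times
# observed deaths over the deaths 'expected' under a casemix model.
# All numbers here are hypothetical.
from math import sqrt

def smr(observed_deaths, expected_risks):
    """SMR = 100 * observed / expected; 100 means 'as expected'."""
    return 100.0 * observed_deaths / sum(expected_risks)

def smr_95ci(observed_deaths, expected_risks):
    """Approximate 95% CI for the SMR via Byar's Poisson approximation."""
    o, e = observed_deaths, sum(expected_risks)
    low = o * (1 - 1 / (9 * o) - 1.96 / (3 * sqrt(o))) ** 3
    high = (o + 1) * (1 - 1 / (9 * (o + 1)) + 1.96 / (3 * sqrt(o + 1))) ** 3
    return 100 * low / e, 100 * high / e

# Toy unit: 50 operations, each with a modelled 10% risk of death,
# so 5 deaths are expected; 10 actually occurred.
risks = [0.10] * 50
print(smr(10, risks))       # 200.0 - double the expected deaths
print(smr_95ci(10, risks))  # roughly (96, 368): wide, because numbers are small
```

Note that even a doubled death rate is barely statistically significant on 50 cases – the small-numbers problem that dogs all of these comparisons.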

Professor Brian Jarman’s analysis of child heart surgery mortality (Eye last) is not perfect, but it still identified Oxford as a significant outlier. The official Central Cardiac Audit Database (CCAD) did not, and it was left to a whistleblower to get surgery suspended there after four deaths in 2010 (Eyes passim). The Dr Foster Unit (DFU) and CCAD both want to find a fair way of comparing child heart surgery units in England, so it would make sense for them to share data and expertise. But when the DFU applied to have access to data and coding from CCAD, the request was declined because it didn’t ‘demonstrate value to patients.’

 

CCAD argues that risk adjusted mortality for child heart surgery isn’t yet perfect enough to publish comparative outcomes, but they’ve done it in New York since 1997. In the UK and Ireland, paediatric intensive care has used risk adjusted mortality predictions for over ten years, constantly modified to keep it up to date, and valued and trusted by all the units (www.picanet.org.uk).

 

Child heart surgery teams are under huge pressure in their understaffed, over-stretched units awaiting the outcome of Jeremy Hunt’s review of the Safe and Sustainable reorganisation. Bristol Children’s Hospital has now been put on an official warning by the Care Quality Commission after a series of deaths was linked to understaffing on the cardiac ward. The sooner resources and expertise are pooled in fewer centres, the better. MD’s guess is that when the reorganisation finally happens, official mortality ratios will start to be published.

The NHS needs not just to measure clinical outcomes in real time, but to act swiftly on them if they give cause for concern. Breast surgeon Ian Paterson is alleged to have performed over a thousand inappropriate cancer procedures since 1994. Who was monitoring his outcomes? The National Joint Registry has data on the comparative results of individual orthopaedic surgeons, and the success rates vary widely. Why can’t patients see them? And who has a clue how their GP compares? Good data costs money, but not nearly as much as the avoidable harm of secretive, substandard care. Kelsey knows this, but he’ll have a job persuading much of the NHS.





November 8, 2012

How to choose a child heart surgeon (continued)
Filed under: Private Eye — Dr. Phil @ 12:18 pm

Below are some strong arguments for and against publishing the adjusted mortalities of child heart surgery units, with certain caveats. Personally, I’m in favour of publishing. Mortality ratios are not perfect, but I think they can help spot problems. They pointed to the problems of child heart surgery in Bristol and Oxford, and the high death rates in Mid Staffordshire. The problem was that the medical and political establishments sought to discredit the data, rather than investigate swiftly to see if there was a problem and so prevent patients suffering avoidable harm.

Mortality ratios for child heart surgery have been published in New York since 1997 and the world hasn’t come to an end. So it can be done. The latest report was published in October 2011 (see http://www.health.ny.gov/statistics/diseases/cardiovascular/index.htm, and scroll down to ‘Pediatric Congenital Cardiac Surgery in New York State’ near the bottom).

What seems to have caused most offence is my statement that I would choose a unit with a below average mortality ratio for my child. There are clearly other complex factors involved but if my local unit had a high mortality ratio, I would want a good explanation as to why before I proceeded. The Bristol heart scandal taught me to look at the data, however imperfect, and ask awkward questions.

Please enter the debate below and let me know which side you come out on.

Thanks

Phil Hammond

 

How to choose a child heart surgeon

Sir

MD makes a bold statement that he would choose where his child with congenital heart disease went for treatment based upon Brian Jarman’s (Dr Foster’s) latest website data on standardised mortality ratios. Would he? Really? There’s little doubt that most of us, public or professional, would choose a centre with the lowest mortality for the procedure our child needed. It doesn’t necessarily follow that a centre good at one thing is as good at another.

It would be wonderfully convenient for all if there was a valid way of “scoring” a congenital cardiac centre’s “quality” but there isn’t. Just isn’t. Not anywhere. Not anywhere in the world, despite years of international effort and numerous abandoned attempts to do so. The problem lies in the very nature of congenital heart disease, both in diagnosis and in treatment. Many things can go wrong with many parts of the heart, and it’s very common to have more than one thing wrong. When there is more than one thing wrong there are few fixed patterns of abnormality – there are literally hundreds of permutations of abnormalities. Often a procedure doesn’t address all the abnormalities at once, and repeated procedures of the same kind or a different kind may be required over many years, even into adulthood. This huge diversity of diagnoses and treatments makes it impossible to put many patients into convenient, neat groups to compare outcomes. Risk adjustment, therefore, whilst desirable, is hugely complex. To date, nobody in the world has come up with a validated and complete model for adjusting risk so that all patients can be pooled together to produce a nice, tidy “quality score” for overall outcomes. That’s not an excuse, it’s statistical reality.

The methodology used by Brian Jarman (Dr Foster) and cited by MD was based upon that used for the Bristol Inquiry. It is crude, outdated and invalid. As far as one can see from his website it did not involve any clinical congenital cardiac input – bizarre in such a clinical minefield. In correspondence with MD and between the interested parties following MD’s piece in the Eye, it was suggested that the government funded national audit, Congenital CCAD (Congenital Cardiac Audit Database, part of NICOR, National Institute of Cardiovascular Outcomes Research) had refused to collaborate with Dr Foster (DFI).

Parents and patients over 16 give consent for their data to be sent to congenital CCAD and it goes without saying that we should do our best to ensure their data is used responsibly. Dr Foster applied for CCAD’s data with the aim of comparing it with NHS HES (Hospital Episode Statistics) data using opinion (not data) based risk adjustment. CCAD rejected the application on the basis of flawed methodology and the application was also rejected by HQIP (the Health Quality Improvement Partnership) because of the lack of demonstrable benefit to patients.

Prior to Dr Foster’s request, CCAD and HQIP had approved a research application (which had been through the usual independent rigours of research funding and ethical review) from the Clinical Operational Research Unit (CORU) at UCL, for a new approach to risk adjustment on real data. Their team have expert statisticians with direct input from congenital heart disease specialists. The first stage of this peer reviewed, collaborative work is due to be published shortly.

In emotive situations it’s very easy to get the public to pay attention to sensational statistics, even when the stats are wrong. The importance of being fair to the public and being fair to doctors in such a complex field can’t be overemphasised. In the UK we (congenital CCAD) put more data on treatment for congenital heart disease in children and adults into the public domain than any other country in the world. Risk stratification still hasn’t been worked out by anyone, anywhere, so it’s not proper or fair to use it in any of its current forms.

MD says on his website “It seems churlish to complain that there isn’t a robust method of risk adjustment for PCS if you aren’t enabling your best statisticians to collaborate on it”. We are collaborating with an independent expert statistical research group and are disappointed to hear from MD that they are not the best. So who are the best and the worst statisticians? I think we should be told. How about an Eye article on “how to choose your statistician”? A league table with quality scores would be handy.

John Gibbs FRCP

Lead clinician for congenital heart disease

Central Cardiac Audit Database

NICOR

170 Tottenham Court Road

London W1T 7HA

Dear Phil,

Thank you for sending me your correspondence with John Gibbs, and also that with Leslie Hamilton on Monday.

The data that I put on my website is not a “quality score” for overall outcomes and I did not claim that it was. The data represents adjusted mortalities for PCS units. I acknowledged on my website that the casemix used is outdated because it is based on that used for the Bristol Inquiry. I also mentioned that we had applied for the more up-to-date CCAD data but been turned down by HQIP. John confirms that our application was partly “rejected by HQIP (the Health Quality Improvement Partnership) because of the lack of demonstrable benefit to patients.” As John probably knows, there was very extensive clinical congenital cardiac input into the methodology used for the analyses that our unit did for the Bristol Inquiry. They were approved for the Inquiry report, including by the congenital cardiac clinicians (http://www.bristol-inquiry.org.uk/).

I said in my reply to the email from Leslie Hamilton that you sent me on Monday:

“Thank you for copying me into your email exchange with Leslie Hamilton. He says that “[i]t would seem obvious to use outcome data to assess the quality of care when deciding which units to designate as the surgical units for the future. However the clinical Steering Group (who advised the JCPCT) was very clear that the data should not be used in this way.” I agree that it seems obvious that outcome measures should be one of the factors that should be used when deciding which units to designate as the surgical units for the future. I disagree with the clinical Steering Group that they should not be considered. As Leslie knows, at Bristol the adjusted mortality for open heart PCS for children under one was high for about a decade, significantly high for several years, and it was not until the results were published and an external investigation was carried out by Hunter and de Leval in 1995 that the extensive problems at that unit were confirmed. The 1988 internal report of the Regional Cardiac Strategy, Cardiology and Cardiac Surgery Medical Advisory Subcommittee, dated 1 November 1988, chaired by Dr Deirdre Hine (Bristol Inquiry reference UBHT 0174 0057) had come to conclusions similar to those that we reached in the Bristol Inquiry report regarding the problems at Bristol and had made some similar recommendations to those that we made in the Bristol Inquiry report, but little action seemed to have been taken to implement the Subcommittee’s recommendations. Once action was taken after the Hunter and de Leval external investigation the mortality ratio that I have referred to dropped at Bristol from 29% to 3.5% within three years.

I believe that the hundreds of parents who had taken their children to Bristol for PCS should have been able to see the significant differences between the adjusted mortality ratios of the English PCS units in order to allow them to make decisions as to where to take their child, just as much as that information should be one of the factors in deciding which units to designate for the future. I mentioned on my website the limitations of the data that we used and the need for more up-to-date information on case-mix, together with the fact that we (Paul Aylin) had applied for the CCAD data. Our application was turned down by HQIP. David Cunningham said in his email to me on 31 October “I think HQIP’s point was there seemed little in the application that demonstrated any value to patients.” I find HQIP’s attitude appalling: it doesn’t augur well for a re-application.

Surely it should be possible to share the CCAD data (our unit is not without experience regarding calculating adjusted hospital mortality ratios, including those for PCS). There could possibly be advantages for both the public and those making decisions about designating units.”

That is still my view. I believe, on balance, that it would be beneficial to get ‘good enough’ measures of adjusted mortalities for PCS units, using the best data available and being aware of the caveats.

Brian Jarman.

 


From: John Gibbs
Subject: Re: MD’s piece on choosing a child’s heart surgeon
Date: Thu, 8 Nov 2012 16:10:23 +0000
To: hamm82@msn.com

Hmm… would redress the balance a bit if you published in the Eye rather than bury it on the website.

 

You really don’t need to persuade me (or my colleagues at CCAD) of the benefits of putting good data in the public domain. I’ve worked my nuts off at CCAD for over 16 years trying to do that and trying to persuade surgeons that it isn’t just a witch hunt. And I didn’t say that calculating SMRs couldn’t be done. I said there was no validated way of doing it properly at present. If it can’t be done properly it’s not fair to the public or doctors to do it. You keep citing New York State. Yeah – nobody else in the world has adopted their risk adjustment, have they? Wonder why that is (it’s totally unfathomable on their website).

 

CCAD are very much in favour of sharing data, but only if data requests pass the usual reviews by CCAD and HQIP. We have shared data with a wide variety of researchers, clinicians and commissioners over the years. We’ve only rejected two applications ever – and one of those was because the data requested was already available on our public website.

 

Brian Jarman says he’s appalled at our (and HQIP’s) decision to reject Dr Foster’s application for data. I think it would have been appalling and irresponsible for us to agree to a collaborative study using outdated methodology which has been abandoned by everyone else in the field. The request was rejected because of its poor design which dictated that it was highly unlikely to come up with information of benefit to patients.

 

Come on Phil, be objective. Brian actually admits his SMR methodology is outdated, that it may not be suitable for today’s case mix, and that he has not had up to date input from specialists in the field (he just cites clinical input from the Bristol Inquiry). Is that reasonable? Does it sound like “the best” statistical approach? Does it really sound fair and in the public interest?

 

It’s worth bearing in mind that paediatric cardiac surgeons (and cardiologists) do incredibly complex and stressful stuff with crazy working hours and bugger all home life. Even the ones in the highest general regard feel hounded by repeated analyses of mortalities done by different bodies using different methods and coming up with different results and yet another “scandal” baby murdering story in the ST. In the last few years at least four paediatric cardiac surgeons have either left the country or left the specialty prematurely – and it’s increasingly hard to find trainees who want to come into a specialty under such relentless criticism. If it carries on like this there won’t be anything like enough surgeons to cope with the workload, and then the patients will be really screwed. I’m not suggesting one should be soppy or should give them any special treatment – just that we should all be fair. And I can’t see any reason why it’s not possible to be fair to doctors and patients at the same time.

 

We’ve argued about the Safe & Sustainable decision making before, so prob not much point revisiting that but wow, does that need some simple, objective thinking injected into it! (Not from you, judging from your previous logic free comments!!). Looks in a right mess just now – what a tragic waste of an opportunity!

 

Pip pip

John

 

Thanks John

Jarman’s analysis picked up Oxford as a significant outlier. As you know, surgery at Oxford was suspended in 2010, but largely because of a whistleblower. Did CCAD flag it up too? I know Paul Aylin from DFU flagged up Oxford in the BMJ in 2004 as an outlier and got referred to the GMC for his troubles. Mortality ratios might be crude but they do point to where the outliers might be, and I’d rather have them published than not published. I’d also much prefer it if you and DFU collaborated on casemix, coding etc to get as fair an analysis as you can. If you don’t, another analysis will come out through the FOI cracks (eg Spiegelhalter’s a few years ago).

First you tell me no one in the world has published mortality ratios for PCS, then I discover New York has been doing it since 1997, and you dismiss it as unintelligible. Would be more interesting to ask them how they did it, what the reaction has been, whether it has increased pressure on surgeons or improved outcomes as it did in adults.

With Tim Kelsey on the NHS CB I would imagine all outcomes – for GPs, nurses, orthopaedic surgeons – will be published, all imperfect but working towards transparency and accountability. You might as well stay ahead of the curve and work with DFU.

Phil


On 9 Nov 2012, at 00:18, Jarman, Brian wrote:

 

Phil,

 

I saw on your website John’s response to my reply to his letter below. Neither he nor Leslie has corresponded directly with me or copied me into their emails to you about my publication.

 

I wish John would quote me correctly. It is not correct to say “Brian Jarman says he’s appalled at our (and HQIP’s) decision to reject Dr Foster’s application for data.” I said that I was appalled that HQIP rejected our application “because of the lack of demonstrable benefit to patients.” That is very different.

 

Brian.

 

From: John Gibbs
Sent: 09 November 2012 11:23
To: Jarman, Brian
Subject: Re: MD’s piece on choosing a child’s heart surgeon

 

To be fair to HQIP, Brian, I think we share the responsibility for the rejection. HQIP made their comments having seen ours, so my interpretation was that they couldn’t see any benefit to patients from using outdated methodology. HQIP also knew (having approved the data request) of CORU’s ongoing research into risk adjustment. Clearly if there was validated risk adjustment then putting SMRs into the public domain would be in the public interest. If our collaborative research with CORU (or with any other group) comes up with validated risk adjustment that gets past the scrutiny of peer review, the SCTS and the BCCA, we would be mad not to publish SMRs. But we won’t until the methodology is right. It’s not fair to patients or doctors to do otherwise.

John

 

John Gibbs

 

November 9. From Brian Jarman to John Gibbs

Thanks John.

I’m grateful that you have corresponded with me directly.

 

I think you know that we have a fair amount of experience in calculating risk adjusted mortality ratios and would have used whatever we found was the best method, once we had the data to do so. The CCAD data has been available for years. I do believe that ‘good enough’ analyses of PCS data for units (as we produced for Bristol: the results will never be perfect but caveats can be given) should have been made available to the public since the days of the Bristol Inquiry. I find the current situation quite indefensible when one considers the reduction of the death rates that occurred at Bristol once Private Eye had published its articles in 1992 and Hunter and de Leval had done their inspection and made their recommendations in 1995, and that was using poorer data than is currently available.

 

Are you of the school that considers that only perfect adjusted data for units should be available to the public? In a somewhat analogous situation, the Care Quality Commission criticised the year long investigation of Mid Staffs done by its predecessor organisation, the Healthcare Commission, for what were roughly similar reasons. I think it’s generally agreed that in fact the HCC did a pretty good job, but it might have taken a bit too long and they might have been more proactive after their 23 May 2008 letter; it was certainly good enough for action to be taken for patient safety (the matter is on my mind at the moment because I went to a lecture about the Mid Staffs Independent Inquiry by Robert Francis at the Medical Society of London last night). Even with adult cardiac surgery, fully adjusted data is not published, as far as I know. I have been involved with both the Bristol Inquiry and the Mid Staffs Public Inquiry and I am not totally convinced that the doubts of the HQIP experts about our ability to analyse the CCAD data were the sole reason for their rejecting our application.

 

I think it should be possible for those with considerable expertise to collaborate to produce the best results for patients.

 

I’ll copy this to Phil –  he has been involved in the debate.

 

Brian.

Update from Brian Jarman, May 13, 2015

Dear Phil,

I should mention that although we use HES data and the methodology and case-mix that was used for the Bristol Royal Infirmary Inquiry (and hence the case-mix recommended by the cardiac surgeons is now out of date), there could be advantages for our unit in working with NICOR and CCAD data to update our case-mix to match theirs, because analyses based on HES data have advantages:

1. Our HES-based analyses are done monthly and are available two or three months after the month of the surgery – probably at least a year more timely than the NICOR CCAD-based results.

2. Our analyses use HES administrative data, which is complete and does not depend on variables such as the patient’s weight (used by NICOR – I understand the Leeds unit had missing values for weight in one of the NICOR analyses). At Bristol we decided to use HES data and not the cardiothoracic surgical register (the precursor of CCAD), partly for that reason.

Regards,

Brian.





November 5, 2012

RESPONSE TO ‘HOW TO CHOOSE A CHILD HEART SURGEON’
Filed under: Private Eye — Dr. Phil @ 10:48 pm

FROM

Leslie Hamilton

Cardiac (adult and transplant) surgeon Freeman Hospital, Newcastle

Past President SCTS

Vice chair Safe+Sustainable Steering Group

 


Subject: “Safe and Sustainable”: the Jarman data
Date: Mon, 5 Nov 2012 20:40:58 +0000
From: Leslie.Hamilton

Dear Phil

 

We met at the very first national “Stakeholders” meeting when the review started. I have always enjoyed your MD column in Private Eye and have very much appreciated your support for the principles of the review. Your column in the current issue (Eye 1326) is no exception.

Nonetheless, I have significant reservations about Professor Jarman’s analysis of the outcome (mortality) data. I know he was a member of the panel for the Bristol Inquiry and am aware of his work through the Dr Foster organisation.

 

It would seem obvious to use outcome data to assess the quality of care when deciding which units to designate as the surgical units for the future. However the clinical Steering Group (who advised the JCPCT) was very clear that the data should not be used in this way.

 

In adult cardiac surgery we have a well established, internationally accepted risk stratification system (EuroSCORE) which was developed using a robust statistical analysis of a database containing tens of thousands of patients undergoing a small range of operations – this enables us to allow for patient and operation factors to give a predicted mortality so that outcomes can be compared in a fair manner. Indeed we have shown that mortality has fallen steadily over the years so that the score has had to be recalibrated – EuroSCORE 2 was presented at the annual meeting of the European Association for Cardiothoracic Surgery last week in Barcelona (http://www.euroscore.org/calc.html).
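The sketch below shows the general mechanism of a logistic risk model of the EuroSCORE type; the risk factors and coefficients are invented placeholders, not the published EuroSCORE weights (see euroscore.org for the real calculator). Each factor adds to a score that is transformed into a predicted mortality, and summing those predictions over a unit’s patients gives the ‘expected’ deaths for a fair observed-versus-expected comparison.

```python
# Sketch of a EuroSCORE-style logistic risk model. The coefficients
# below are hypothetical placeholders, not published EuroSCORE weights.
from math import exp

INTERCEPT = -5.0
COEFFS = {"age_over_75": 0.9, "renal_impairment": 0.6, "redo_surgery": 1.1}

def predicted_mortality(factors):
    """Logistic transform of the patient's summed risk score."""
    score = INTERCEPT + sum(COEFFS[f] for f in factors)
    return 1.0 / (1.0 + exp(-score))

# Expected deaths for a unit = the sum of its patients' predicted risks,
# i.e. the denominator of the observed/expected comparison.
patients = [["age_over_75"], [], ["redo_surgery", "renal_impairment"]]
print(round(sum(predicted_mortality(p) for p in patients), 3))
```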

In paediatric cardiac surgery, we have long recognised that we needed a similar process to enable outcomes to be compared. However we have the opposite situation – a small number of children undergoing a large range (149 on the database but only the 50 commonest shown on CCAD) of procedures (with sometimes several procedures combined in an individual child). Also some of the higher risk procedures currently performed do not have a specific code in HES data.

 

Several international groups (including the one at UCL / NICOR) have been working to develop a scoring system which would allow for both case complexity and patient factors in paediatric cardiac surgery.

Ultimately, though, these are empirical scores based on professional opinion. It was for this reason that the Steering Group advised the JCPCT not to use outcome data to assess units. We are currently seeing challenges to the decision making process used by the JCPCT – if outcome data had been used, dissatisfied units would have had endless ammunition with which to challenge the decision! It was felt better to ask Sir Ian Kennedy to assess the units against their ability to meet the standards proposed for the future (which he and his team did).

 

We know that there is significant variation in the complexity of cases undertaken by the current units. We know that neonates have the more complex operations and have the highest mortality – so using age up to 5 years (as Prof Jarman has done) does not make sense. We also know that primary repair at a young age is usually better but an initial palliative operation (a shunt?) will have a lower mortality risk initially – so initial outcome may look better. We also know that the hazard risk for surgery probably goes out to 90 days so using 30 day mortality can be misleading.

 

One of the strong arguments for having bigger centres with a higher volume of cases is that they will do a similar range of cases and the statistical analysis will be more robust. And we can look at outcomes other than mortality (parents are especially interested in neurological injury). Work needs to continue on producing a score which is helpful to parents and fair to surgical teams.

 

My real concern about the Jarman analysis and your conclusion is that parents will be misled – if my grandchild needed heart surgery I would have no hesitation in going to Birmingham or Guys/St Thomas’s. So I think the JCPCT got it right. I can say this with no vested interest – I stopped doing paediatric cardiac surgery 5 years ago when the physical, emotional and mental pressures of a 1 in 2 rota got to me. Which is why I feel so strongly about seeing this through.

 

Yours sincerely

 

Leslie Hamilton

 

Cardiac (adult and transplant) surgeon Freeman Hospital, Newcastle

Past President SCTS

Vice chair S+S Steering Group
MY RESPONSE

Thanks Leslie

 

Ever since the Bristol Inquiry, UK experts have told me that it is too difficult to collect properly risk-adjusted mortality data in paediatric cardiac patients, that the heterogeneity of the population makes it not worthwhile, that the numbers are too small, that there is no validated model and so on and so forth.

Odd then that New York State has been collecting and publishing this pooled data in 3 yearly batches since 1997, with the latest report published in October 2011 (see http://www.health.ny.gov/statistics/diseases/cardiovascular/index.htm, and scroll down to ‘Pediatric Congenital Cardiac Surgery in New York State’ near the bottom (where paediatrics always is)).

 

I have already corresponded with David Cunningham on this, and put the correspondence up on my website where I shall put your response too. My own view is that first we need to get the data, models and analysis as good as they possibly can be – and that means sharing CCAD data with Brian Jarman, Paul Aylin and their team to allow them to contribute to this process. I think it is appalling that their request for the data has been denied – had it not been, there may well have been a more complete analysis, and Guys and Birmingham may have had a lower mortality ratio. But they can only work with the data and codings that they have. Professor Jarman has sent his data to Bruce Keogh and the DH twice, with no response. Time to converse and share, please.

 

Although the final analysis of pooled data may not be perfect and there may be mitigating factors, I believe it should be in the public domain – as per New York – over an agreed period of time, say three years. How else will you spot another Bristol? Or Oxford? And be sure to act on it? In Bristol and Oxford, the data simply was not acted on – it took whistleblowers to expose the scandals. You could argue that publication of data alone is not enough – the Mid Staffs scandal tells us that. You may have good arguments for Birmingham and Guys being high – and you should share them with Professor Jarman. I believe parents have a right to see this data and have it explained to them. But if CCAD won’t share the data with DFI, you shouldn’t be surprised if the analysis isn’t as complete as it could be.

 

I understand why the Steering Group advised the JCPCT not to use outcome data to assess units for the Safe and Sustainable review, but I won’t agree with it until I’ve seen the best statistical analysis of the pooled data. Do we have any idea how many excess deaths there have been in the 11 years since the Bristol inquiry? Have there been any significant statistical outliers over that time? HSMRs aren’t perfect either, but I feel a lot safer knowing they’re out in the public domain.

 

Yours sincerely

 

Phil Hammond





November 3, 2012

Medicine Balls 1325
Filed under: Private Eye — Dr. Phil @ 2:28 pm

How to choose a child heart surgeon

One key recommendation of the Bristol Inquiry 11 years ago was that ‘patients must be able to obtain information as to the relative performance of the trust and the services and consultant units within the trust.’ The Inquiry concluded that between 30 and 35 more children had died after heart surgery between 1991 and 1995 in Bristol compared to a typical unit in England at that time. So how are England’s child heart surgeons performing now?

The official figures for 2010-2011 look superficially reassuring. Over fifty procedures are listed and if you know which one your child is having, you can look at the results for your hospital and compare them with other units on a graph. If you look at, say, the results for the arterial shunt operation, you’ll see that half the units are above average, half are below average but none of them breach the ‘significant statistical outlier’ line that would trigger an investigation. The same is true of pretty much all of the procedures, but the problem is that for many operations, the numbers per hospital are so small as to be statistically meaningless and you’d have to be truly shocking to trigger an investigation.
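To see why small numbers defang that outlier line, here is a rough sketch of funnel-plot control limits – my own construction, assuming a hypothetical 5% national mortality rate and a simple normal approximation rather than whatever exact binomial method CCAD uses. The limits sit roughly three standard deviations either side of the national rate and balloon outwards as a unit’s caseload shrinks, which is also the arithmetic behind the ‘critical mass’ argument below.

```python
# Rough sketch of funnel-plot control limits for a mortality proportion.
# The 5% baseline and the normal approximation are assumptions for
# illustration; real plots use the pooled national rate and often
# exact binomial limits.
from math import sqrt

def outlier_limit(n_ops, p_national=0.05, z=3.0):
    """Upper ~99.8% (3 SD) control limit for a unit doing n_ops operations."""
    return p_national + z * sqrt(p_national * (1 - p_national) / n_ops)

for n in (10, 50, 400):
    print(f"{n:4d} ops: only mortality above {outlier_limit(n):.1%} triggers alarm")
# 10 ops: ~25.7% - a unit could lose one child in five without breaching it
# 400 ops: ~8.3% - the line finally has teeth
```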

Generally, the more of an operation you do, the better you get at doing it and teaching it, and the easier it is to statistically prove your competence, which is why the Bristol Inquiry recommended a reduction in centres performing surgery. The fact that we still haven’t managed this 11 years after the inquiry and over 20 years since the Eye broke the Bristol scandal is, in the words of NHS Medical Director Bruce Keogh, ‘a stain on the soul of the specialty.’

Health secretary Jeremy Hunt has just ordered a review into the current review. So the Independent Reconfiguration Panel (IRP) will now decide if the Joint Committee of Primary Care Trusts (JCPCT) has made the correct choices in its proposed reduction in the number of surgical centres from 11 to 7, each with a critical mass of 400 operations a year (Eyes passim). Just about everyone agrees that child heart surgery in England would be safer if expertise, resources, research and training were concentrated in fewer centres, but no-one wants their local centre to close.

Meanwhile, expertise is spread too thinly and units are buckling under the strain. The Care Quality Commission (CQC) investigated after a recent child cardiac death and other safety concerns in Bristol, issued a formal warning to University Hospitals Bristol FT and concluded that the post-operative ward is understaffed.

The JCPCT chose not to use any outcome data in reaching its decisions because the numbers were too small. But if you add together the data for four or more years you get a much better idea of which units are performing best. Last year, Professor Sir Brian Jarman, who heads the Imperial College Dr Foster Intelligence Unit, did a comparative analysis using the best data available to him and the same techniques that uncovered Bristol. He sent it to Keogh but got no response. He has just repeated the analysis for 2006-2012, sent it to Keogh and put it on his website. For open heart surgery in under-5s over the last 6 years, six units have mortality ratios lower than the expected 100: Leicester (45), Bristol (47), Brompton (65), Southampton (67), Newcastle (68) and Great Ormond Street (73). Leicester and the Brompton are marked for closure despite being in the top 3 for overall survival. Of the others chosen by the JCPCT, Birmingham has a mortality ratio of 110, Alder Hey 120 and Guys and St Thomas’s 128. Leeds (120) has not been chosen and Oxford (160) has been stopped from operating. The JCPCT argues that when (if) their proposals go through in 2014, the amalgamated expertise in the 7 units should ensure excellent results whatever the hospital site (if the staff work together and put their rivalries behind them). But if I could choose where my child went for open heart surgery tomorrow, I’d check out the staffing levels in Bristol but otherwise opt for any of the top six.

Email correspondence post publication

From: david.cunningham@ucl.ac.uk
To: hamm82@msn.com
Subject: RE: Outcomes of heart surgery in children
Date: Wed, 31 Oct 2012 15:43:01 +0000

Hi Phil,

Enjoyed your article in Eye 1326 about Children’s Heart Surgery. In the CCAD Portal you can, as you say, observe data for each year. However the funnel-plot graphs (which are just about to be updated) show a THREE year view of the data, not just a single year – because, as you say, numbers get just too small. We thought three years was a good compromise between avoiding small numbers and using data that was too old to reflect current practice. Brian Jarman has chosen two cuts of the HES data, 6-year and 4-year. The casemix adjustment, or lack of it due to the inadequacies of HES codings, may explain some of the apparent differences. For instance Birmingham and Guys do most of the Norwood procedures for hypoplastic left heart syndrome, which may explain their relatively high SMRs. In our non-casemix adjusted analysis of the same 6 year period, inclusion or exclusion of Norwood ops for HLHS made a difference of 20 to the SMRs of Guys and Birmingham.
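To illustrate David Cunningham’s point about the Norwood operations, here is a toy sketch of how casemix drives an SMR; all numbers are invented and exaggerated for clarity, not drawn from the actual CCAD or HES analyses.

```python
# Toy illustration of casemix sensitivity: if the risk model under-credits
# a genuinely high-risk procedure, the units that perform it accrue
# inflated SMRs until those cases are excluded or properly weighted.
def smr(rows):
    """rows = (died, modelled_risk) pairs; SMR = 100 * observed / expected."""
    return 100 * sum(d for d, _ in rows) / sum(r for _, r in rows)

norwoods = [(1, 0.08), (1, 0.08)]         # 2 deaths; model expects only ~8% each
routine = [(0, 0.01)] * 50 + [(1, 0.02)]  # 51 low-risk operations, 1 death

print(round(smr(norwoods + routine)))  # ~441 with the Norwoods included
print(round(smr(routine)))             # ~192 with the Norwoods excluded
```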

It is extremely difficult to risk adjust properly for congenital heart surgery – there is no adequate risk model existing in the world – but we are actively working on that with the Clinical Operational Research Unit (CORU) at UCL. Early results look fairly promising and the work is ongoing. In the meantime that is why we display results for over 50 relatively common procedures.

Brian mentions that they have applied for CCAD data, which is correct. The audit commissioning body, HQIP, rejected the application and offered DFI advice on how to reapply successfully for the data. We currently await the revised application. I would personally like to work with DFI in achieving an optimal analysis and would welcome discussions with them. It is really important this is done correctly – since Bristol there have been some false trails which have taken a long time to sort out and which, in my view, could perhaps have been avoided without compromising the public interest.

I cannot comment on the wisdom of the choices made by the JCPCT but I understand at least some of the recommendations are being reviewed by the new Health Secretary. It is always going to be difficult, and highly emotive for so many people, to make these choices. I hope this makes some sense.

Rgds, David Cunningham

Dr A D Cunningham Senior Strategist for National Cardiac Audits 07753 682686 . david.cunningham@ucl.ac.uk NICOR . Centre for Cardiovascular Prevention and Outcomes . University College London 3rd Floor . 170 Tottenham Court Road . London W1T 7NU

From: Jarman, Brian [mailto:b.jarman@imperial.ac.uk]
Sent: 31 October 2012 17:43
To: Cunningham, A
Cc: Phil Hammond
Subject: RE: Outcomes of heart surgery in children

Dear David, Phil copied me into his email to you. I am pleased to make contact. It has always seemed strange to me that, with my having been on the Bristol Inquiry panel, and Paul Aylin and our Imperial College unit having done the calculations for the Bristol Inquiry, we have never been able to get the CCAD data, and yet there is no risk adjusted overall model published for PCS units (I don’t mean for individual procedures, where the number of cases and deaths is not likely to lead to significant results that the public could use for making decisions about the units).

As you will see on my website, I say that our data is risk adjusted but we still use the case mix of procedures that we used for Bristol and they will have changed now. Paul has tried over the years to have access to the CCAD but HQIP has rejected the application and Paul says that they keep coming back to him saying he needs ethical approval, or ECC clearance. As you probably know, we have full clearance to receive HES data from the National Information Governance Board (NIGB), previously its predecessor the Patient Information Advisory Group (PIAG).

I think it could add to the information available if we were allowed to do our normal risk adjustments, which I describe on my website, using the CCAD data with the up-to-date case-mix. We would like to work with you to achieve an optimal analysis. At the Bristol Inquiry the parents of the children who died asked us why they weren’t told that the Bristol PCS, under 1 year, open heart surgery mortality was 29% (over 3 years). They could have driven an hour or two up the motorway and got a third of that mortality. We didn’t have an answer for them then and, as far as I know, we don’t have an answer now. We concluded then (Bristol report, 2001, Chapter 12, “Concerns” paragraph 6): “In short, there was no effective national system for monitoring outcomes. This situation was compounded by the assumption by a number of the respective organisations that it was not their responsibility but that of some other body. This meant, in turn, that the absence of, and need for, a national system was not recognised nor acknowledged at the time.” Also: “It would be reassuring to believe that it could not happen again. We cannot give that reassurance. Unless lessons are learned, it certainly could happen again, if not in the area of paediatric cardiac surgery, then in some other area of care.”

Best wishes, Brian.

 

From: Cunningham, A [mailto:david.cunningham@ucl.ac.uk]
Sent: 31 October 2012 19:13
To: Jarman, Brian
Cc: Phil Hammond
Subject: RE: Outcomes of heart surgery in children

Dear Brian, Paul’s application to us for data has been rumbling on for a while now (not ‘years’, but that’s by the by). I think HQIP’s point was there seemed little in the application that demonstrated any value to patients. I have no comment about that. Our point was that the application suggested the use of RACHS-1 methodology, which is quite definitely outdated and just doesn’t work. I do not personally see why Paul can’t word his application to keep HQIP happy. I would lose the reference to RACHS-1 and instead suggest that you/he will use or generate your own casemix adjustment. The adjustments you used in your analysis of 2006-12 data were necessarily limited due to you not having access to enough information in HES.

I don’t personally think valve disease is the way to go for risk adjustment in this very young group of patients – but here’s the thing, YOU may not agree with that and you may find that you can build a risk model that does work, which would of course be of value to everyone. I do not know about the ECC’s position – simplest way would be to email Claire Edgeworth (Claire.Edgeworth@nhs.net) – she is the Deputy Approvals Manager and could advise. Certainly we always have to have specific ECC approval when we seek to receive new data even if we have similar approvals already in place. I would refute any suggestion that all-cause, all-procedure mortality is a valid comparator UNTIL we have a properly working casemix adjustment tool.

My mentor Tony Rickards always said if he was going for an op he would look for the centre with the HIGHEST overall mortality and the lowest for his risk group – in other words they weren’t afraid to take on high risk patients. Not everyone would agree, naturally! It will be a condition of releasing data, should you/we convince HQIP that this is in everyone’s interest, that one of our Steering Group (probably me) works with you on the project and that publications and presentations are cleared with HQIP. I know the latter might be seen as restrictive, but no “veto” has ever been applied to any of the research projects we have collaborated with so far. I hope this is helpful, and perhaps signals the beginning of a long overdue collaborative effort,

Rgds, David

Dr A D Cunningham Senior Strategist for National Cardiac Audits 07753 682686 . david.cunningham@ucl.ac.uk NICOR . Centre for Cardiovascular Prevention and Outcomes . University College London 3rd Floor . 170 Tottenham Court Road . London W1T 7NU

From: Jarman, Brian [mailto:b.jarman@imperial.ac.uk]
Sent: 31 October 2012
To: Cunningham, A
Cc: Phil Hammond
Subject: RE: Outcomes of heart surgery in children

Thanks David, I love their comment “there seemed little in the application that demonstrated any value to patients.” Tell that to the Bristol parents. Such a typical attitude, one that I find appalling, simply appalling. If you don’t mind I’ll email your comments to Paul. I’d love to be able to work with you on this data – I think there’s a chance that we’d produce something that would be of “value to patients”. Congratulations to Phil and Private Eye – there’s a chance that they will again have done something valuable for those parents. Best wishes, Brian.

From: James.Ford@grayling.com
To: hamm82@msn.com
Subject: the Jarman data
Date: Fri, 2 Nov 2012 10:49:29 +0000

Dear Phil

I suspect you may get a few letters about this week’s column. The Jarman data paints a partial picture (eg complexity of caseload) and is therefore seen as misleading. Can I suggest we set up a briefing (face to face or phone) for you to speak to one of the experts? I would suggest Prof Martin Elliott. If not: Leslie Hamilton, Shak Qureshi or Roger Boyle?

Best wishes, James

Managing Director – Public Sector, Grayling
29-35 Lexington Street, London W1F 9AH
020 7025 7523
www.grayling.com

From: hamm82@msn.com
To: james.ford@grayling.com
Subject: RE: the Jarman data

Date: Fri, 2 Nov 2012 11:48:16 +0000

Thanks James

Have already had a helpful reply from David Cunningham (attached) which I’ll be using in my next column. Let me know if there’s anything you’d like to add.

I will also point out that Brian Jarman has now sent his data to Bruce Keogh and the DH twice, without any response, and that Paul Aylin applied for the CCAD data to improve the risk adjustment of their analyses but was turned down by HQIP. I find it inexcusable that statistical experts aren’t sharing data to come up with the best combined analysis to guide patients and spot avoidable harm. As David Cunningham says, he’d like to work with Professor Jarman and DFI, and if the Eye column facilitates this collaboration and gets HQIP to approve the data share, I believe it will be a good thing. It seems churlish to complain that there isn’t a robust method of risk adjustment for PCS if you aren’t enabling your best statisticians to collaborate on it.

Phil




