The BMJ Mid Staffordshire Inquiry


They then calculated, for a hypothetical centre performing the same kinds of operation as Bristol, what the expected mortality would be.

According to this analysis, both the UKCSR and the HES showed evidence of excess mortality from to March in open-heart operations on children under one year old. In this period, for these children, the mortality rate in Bristol was around double that of other centres. The estimated excess mortality was 19 out of the 43 deaths reported to the CSR and 24 out of the 41 deaths recorded in the HES. The main analysis used a Bayesian approach to statistics, that is to say one in which data are interpreted as updating our prior beliefs about the world rather than being analysed as if they were all that we knew.

In the Bristol case, a key specification was that the calculation of the risk associated with, for example, one of the types of operation performed should not assume that the risk would be the same in each centre. The approach would have been unfamiliar to many of the doctors reading the report of the Bristol inquiry or the various journal articles in which the work was subsequently published. After the Bristol inquiry, statisticians were granted regular updates of HES data, Kennedy having recommended that there should be more openness and transparency.

In , Paul Aylin and others, including Brian Jarman, an emeritus professor of primary care at Imperial College who had been on the panel of the Bristol inquiry, published figures in the BMJ comparing the performance of different units doing open-heart surgery on children under one year old during three different time periods.

The analysis was based on HES data. On the charts they produced (see below), each unit is represented by a red dot; the position of the dot is the best estimate of the mortality rate; the bars through the dot mark the 95 per cent confidence interval: if the data are accurate and the method unbiased, 95 per cent of the time the true rate will lie somewhere in this range.
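The construction of those interval bars can be sketched with a normal-approximation confidence interval for a binomial proportion. This is a simplified illustration, not the authors' actual method (they used more sophisticated techniques), and the death and operation counts below are invented:

```python
import math

def mortality_ci(deaths, operations, z=1.96):
    """Approximate 95% confidence interval for a unit's mortality rate,
    using the normal approximation to the binomial."""
    p = deaths / operations
    se = math.sqrt(p * (1 - p) / operations)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Two hypothetical units with the same observed rate of 8 per cent:
print(mortality_ci(4, 50))    # small unit: a wide interval
print(mortality_ci(40, 500))  # larger unit: a much narrower interval
```

Smaller units get wider bars, which is why a small unit with an alarming point estimate may still be statistically unremarkable.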

Hospital units are arranged in order of the number of operations performed, with smaller units to the right. The extent to which the performance at Bristol was divergent during the period covered by the inquiry is shown clearly in the chart for Epoch 3. Looking at all three charts, we can see that Oxford too seems something of an outlier. In fact, doctors there were so outraged that their professional competence had been publicly undermined on the basis of something as ropy as HES data that they reported Aylin to the General Medical Council.

Their case was dismissed. The Oxford unit is no longer allowed to perform open-heart surgery on children under one.

The data on actual deaths are taken from HES and restricted to in-hospital deaths, leaving out patients who were admitted for palliative care. The mix of these risk factors in each hospital is used to calculate the expected death rate. If the expected death rate is the same as the actual death rate, the trust scores 100; actual scores tend to range between 75 and 125. In all but two years the lower end of the 95 per cent confidence interval was above 100. Although the data and the statistical techniques used to calculate the HSMR are similar to those used by Spiegelhalter and Aylin at Bristol, DFI wanted to assess not the performance of a single unit performing a dangerous operation requiring a great deal of technical skill, but the performance of an entire hospital carrying out a wide variety of procedures of differing degrees of complexity and risk, and to do so for every hospital in the NHS.

Many were unconvinced. His reasons can be simply illustrated. Imagine a table of figures presented as the percentage mortality rates for paediatric cardiac surgery in ten different hospitals. You would be pretty unhappy to be told that your child was to be treated at hospital A, given the difference between A and G. But in fact I generated the table using a computer program that assumes the probability of mortality in any given operation, in any hospital, is exactly 4 per cent.

The more often the program is run, simulating large numbers of operations, the closer the mortality rate in every hospital will approach that figure; but if you run it just a few times, approximating real sample sizes (I set it to as many operations per hospital as took place at Bristol over the period covered by the inquiry), you will see a spread like the one described. League tables, whether applied to surgeons, hospitals or schools, do not give the person reading them any sense of how much variation would be expected for a given sample size, and instead impose a rank that the data may not support.
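The simulation described above is easy to reproduce. A sketch, assuming (the article's exact figure is not given here) 100 operations per hospital:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def simulate_unit(n_ops, p=0.04):
    """Observed percentage mortality for one unit when every operation,
    in every hospital, carries exactly a 4 per cent risk of death."""
    deaths = sum(random.random() < p for _ in range(n_ops))
    return 100 * deaths / n_ops

# Ten identical hospitals, 100 operations each: the apparent rates
# vary widely even though the true rate is 4 per cent everywhere.
print(sorted(simulate_unit(100) for _ in range(10)))

# With a very large sample the observed rate converges on 4 per cent:
print(simulate_unit(200_000))
```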

Governments and regulators tend to make drastic interventions with the worst-performing institutions in mind, but the more effective strategy is probably to aim at improving average hospitals, since the ideas developed along the way will work almost anywhere. The HSMR shares the weaknesses of the HES data on which it is based, but Lilford and Mohammed had a more fundamental criticism: mortality is unlikely to be a good measure of the quality of care in a hospital. Typically, 98 per cent of in-patients survive their visit, so nearly all the data about what happens in a given hospital are ignored if we concentrate on mortality rates.

Lilford and Mohammed carried out a set of simulations which showed that the HSMR is a reliable measure of quality of care only if the proportion of avoidable deaths is at least 15 per cent of the total. In other words, in a typical hospital with, say, 10,000 admissions a year, you might expect around 200 patients to die.
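A back-of-the-envelope calculation, assuming a hospital of 10,000 admissions and the roughly 2 per cent in-hospital mortality implied above, shows why the threshold matters: the year-to-year binomial noise in the death count is of the same order as the avoidable excess the HSMR is trying to detect.

```python
import math

admissions = 10_000
mortality = 0.02                        # ~98 per cent of in-patients survive
deaths = admissions * mortality         # about 200 deaths a year

# Random year-to-year variation around that figure (binomial sd):
sd = math.sqrt(admissions * mortality * (1 - mortality))  # ~14 deaths

# Lilford and Mohammed's threshold: avoidable deaths must be at least
# 15 per cent of the total for the HSMR to be a reliable signal.
avoidable = 0.15 * deaths               # 30 deaths

print(deaths, round(sd, 1), avoidable)  # 200.0 14.0 30.0
```

An avoidable excess of 30 deaths is only about twice the random noise, so smaller excesses disappear into the scatter.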

Further, the HSMR was susceptible to bias: the measure would consistently favour some hospitals and penalise others. Say two hospitals are equally good but hospital A has a high proportion of emergency admissions compared to hospital B. One might expect A to have a higher death rate than B. Jarman and Aylin would say they can still make a fair comparison because their measure takes the proportion of emergency admissions into account.

Lilford and Mohammed counter that while emergency admissions are of course more dangerous than other sorts, the degree of danger will vary from hospital to hospital, so no adjustment can be exact. Similar distortions appear, for example, in the case of hospitals that discharge an unusually high proportion of patients into hospices, since the mortality rate used in the HSMR considers only patients who die in hospital, not those who die immediately after discharge. By increasing the proportion of patients it coded as receiving palliative care, Medway lowered its HSMR dramatically.
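The Medway effect can be sketched numerically. Assuming, purely for illustration, that the case-mix model assigns palliative-care admissions a very high predicted risk of death (the figures below are invented), recoding some of the patients who died changes the score without a single additional life being saved:

```python
def hsmr(observed_deaths, expected_risks):
    """HSMR: 100 x observed deaths / expected deaths (the sum of the
    per-admission predicted risks)."""
    return 100 * observed_deaths / sum(expected_risks)

# 1,000 admissions, 30 deaths, average predicted risk 2.5 per cent:
baseline = [0.025] * 1000
print(round(hsmr(30, baseline)))  # 120 -- apparently a poor hospital

# Recode 20 of the dying patients as palliative care, to which the model
# assigns (say) an 80 per cent predicted risk of death:
recoded = [0.8] * 20 + [0.025] * 980
print(round(hsmr(30, recoded)))   # 74 -- apparently a good one
```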

Mid-Staffs was not a client of this consultancy, and there is no evidence that it was influenced by CHKS, but a year later its use of the Z code for palliative care increased sharply: the proportion of deaths coded as palliative care rose from zero in the last quarter of one year to 34 per cent in the third quarter of the next. None of these explanations seems quite to account for the transformation. As Jarman put it, the hospital seemed to have turned itself overnight into a specialist in terminal care. Jarman and Aylin rebutted the criticisms of their measure, arguing that it correlated with other indicators of quality of care and that the biases identified by Lilford and Mohammed made little difference.

That the HSMR is able to detect failing hospitals at all is perhaps because those hospitals are even worse than we imagined they could be. Over the period, the proportion of deaths at Mid-Staffs classed as avoidable (the difference between the actual and the expected number of deaths) was 18 per cent of the actual total, higher than the 15 per cent Lilford and Mohammed thought implausible. The hospital was chronically understaffed, initial assessments were carried out by receptionists with no medical training, and essential equipment (cardiac monitors, for example) was missing or broken.

Problems at Ely Hospital, a former workhouse with around six hundred patients, some of them mentally ill, most with learning difficulties, came to light when the News of the World sent Kenneth Robinson, then minister for health, a statement from a nursing assistant describing vulnerable patients being beaten, bullied, starved and robbed by nurses.

Robinson ordered an independent inquiry, to be chaired by Geoffrey Howe, then a QC. One nurse, who carried a stick with which to threaten patients, was said to have bathed his patients by ordering them to strip in the hospital yard and turning a hose of cold water onto them. Howe described the inadequacy of systems for reporting incidents; Francis observed the same problem at Mid-Staffs. Howe described the nurses at Ely as having formed a close-knit, inward-looking community; Francis was surprised that in Mid-Staffs the trust and its staff seemed to have few outside contacts.


The principal responsibility for the shortcomings at Ely lay with the hospital management board; the deficient care at Mid-Staffs was the result of a collective failure on the part of the trust. It would be easy to conclude from this that nothing was learned after Ely, but the Howe Report did have an effect. Mark Drakeford, a member of the Welsh Assembly and an academic who has written about the case, argues that the report was a significant boost to the movement to close hospitals such as Ely in favour of caring for their patients in the community.

Unless lessons are learned, it certainly could happen again. Yet they do still say it: they said it, in fact, many times in the evidence presented to the Mid-Staffs inquiry. Hospitals will have to get at least 15 per cent of patients to answer this question, and have been told that commissioners will be able to reward high-performing trusts.

The media are now turning their attention to five other trusts with a high HSMR. The police were, however, sufficiently concerned to contact the GMC about practices on the ward. While the GMC took the matters at Gosport very seriously, it suspended its investigations on more than one occasion to accommodate the police and various inquests. This had the effect of grossly prolonging the GMC investigation.

During this time, Dr Barton continued to work unrestricted as a GP. In the meantime, the police kept referring new information to the GMC at irregular intervals. The final IOC restricted her ability to prescribe and use opiates and other medications. The fitness to practise panel found that Dr Barton was indeed guilty of serious professional misconduct in her practice at Gosport. However, it took into account her subsequent nine years of unblemished practice as a GP, and accepted her mitigation that she was working in a stressful, under-resourced environment; instead of erasing her from the register as expected, it placed conditions on her practice for three years.

In effect, the FTP panels were run independently within the structure of the GMC, and so whilst they had to give weight to the evidence provided by the investigation department and their opinion that erasure was the appropriate sanction, they were not bound by it. To avoid a theoretical abuse of process, the GMC did not at that time have any legal right to appeal a sanction placed upon a doctor by a panel that it felt was inappropriately lenient.

We will be carefully reviewing the decision before deciding what further action, if any, may be necessary. The CHRE declined to appeal, on the grounds that the punishment handed to Dr Barton was lenient but within the law. Separate actions against the ward consultants were closed without action. The nursing staff were also reported to their regulator, the NMC, by various relatives. The NMC had at this time no formal investigation arm, instead farming out cases to specialist solicitors. It was acknowledged that the solicitors had probably not had the full facts or specialist knowledge at this time.

It was legally and logistically difficult for the NMC to re-open these cases, even if it were inclined to do so. It is the Barton case that prompted the GMC to petition the government to change the law, allowing it to appeal disciplinary decisions taken by a panel that were, in its view, insufficient to ensure public safety and confidence in the profession and the regulator. The Gosport report is a clear and coherent document, produced after years of campaigning by the relatives.

While most of the patients referred to in the report were not in the best of health, the majority were not expected to die soon, and the report clearly concludes that many were placed on palliative care plans inappropriately, with excessive doses of medication being used. Such a scandal feeds on the public concern and political fallout of other such occurrences, such as the Mid Staffordshire scandal [6], the Shipman murder trial [7] and the Barrow-in-Furness obstetric scandal [8], undermining the enormous amount of good work the NHS does.

And yet, the lessons to be learnt from each of these cases are depressingly similar. That is, local governance must be more robust. Concerns raised locally must be addressed properly, even if they turn out to be groundless. Hospital managers and politicians need to be more concerned with safety as the main priority rather than reputation. Regional and national mechanisms for ensuring quality need to be more specific, and more reactive.

In particular, peaks in mortality should be picked up by outside bodies such as the coronial service, and addressed more quickly, something which the new role of medical examiner (in fact recommended in the Shipman report) is supposed to facilitate [9]. The Department of Health should ensure that appropriate regulatory mechanisms are in place, that they function, and that staff will be supported, not punished, for raising concerns and errors.

In addition, regulators need to be more stringent in their approach to such allegations, taking swift action where public safety could be at risk. As the political fallout of these cases continues, what are the actions that individuals can take? Clearly, they must place the patient at the centre of everything, and make everything as safe as possible. Concerns raised locally need to be properly investigated, and any lessons learnt identified and applied. Silo working, which fosters secrecy and poor communication, needs to be eliminated, and proper supervision and accountability at both local and managerial level need to be implemented.

Ultimately, all medical practitioners need to challenge what they see as poor processes or instances of patient care. This needs to be done in a supportive environment to allow individuals and organisations to learn.