BMJ · Article · 2008 · Peer-reviewed

How to improve surgical outcomes

Authors: Peter J E Holt, Jan D Poloniecki, Matt M Thompson

Abstract

Surgical outcomes are increasingly being scrutinised through national audit and publication of unadjusted league tables.1 Two accompanying studies report different ways of measuring surgical outcomes and performance—one in groin hernia repair and the other in percutaneous coronary intervention.2 3 Public scrutiny of surgical outcomes should be encouraged, but the data and statistical analysis should be robust, meaningful, and accurate. Unadjusted league tables are often misleading because they take insufficient account of the patients’ risk factors. Commercial organisations can also produce in-depth analyses of NHS data, but many clinicians argue that the accuracy of the raw data is questionable and that such analyses are expensive and of unknown utility. Encouraging clinicians to take responsibility for data analysis at local and national levels could improve our understanding of surgical results and help develop ways to improve outcomes.

The outcomes studied should be important and easy to measure—for example, postoperative death or disease specific recurrence rates. Studies on “benefit” need further development before risk-benefit analyses can be used to plan health services. Healthcare organisations in North America have suggested that variability in surgical outcomes is caused by factors other than cost and investment. Similar observations have been reported in the United Kingdom; this suggests that the identification and modification of risk factors at hospital level is important for improving patient outcomes.

Several specialties, including arterial and hepatopancreatobiliary surgery, have focused on the relation between hospital annual workload (volume) and outcome.4 5 6 7 The results show that units doing a higher volume of work produce significantly better outcomes.
This association must be acknowledged when services are commissioned, and complex surgery should not be performed in low volume centres but should be centralised to larger units.8 Similar associations between volume and outcome are apparent for procedures with a low surgical risk, such as groin hernia repair. In the first of the accompanying papers, Nordin and colleagues report that in Sweden re-operation rates were significantly higher for surgeons who performed fewer than five procedures each year.2 They also report that almost half of hernia surgeons in Sweden are low volume operators and they performed only 8% of hernia repairs.

Service reconfiguration might also be facilitated by the publication of “safety charts” in complex procedures where mortality is an appropriate outcome measure.9 This method allows the comparison of individual procedures on a national level. The technique provides a graphical output that distinguishes between hospitals with statistical evidence of safety, those with evidence of danger, and those with insufficient evidence of either. Using this technique, low volume centres were often unable to provide evidence of safety because of low case volume and consequent lack of statistical power. Therefore, not only are low volume centres associated with a worse outcome, but the appropriateness of performing high risk surgery in such centres is questionable, because outcomes cannot be assessed in terms of safety.

In a second accompanying paper, Kunadian and colleagues used funnel plots to show risk adjusted adverse outcome rates for percutaneous coronary intervention for individual operators and the overall unit.3 The plots allowed the concurrent representation of observed and expected adverse outcome rates. They showed that the overall in-hospital rates of major adverse cardiovascular and cerebrovascular events were lower than the predicted event rate.
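To illustrate the funnel plot idea described above (this is a generic sketch, not the method used by Kunadian and colleagues), control limits for an event rate can be drawn as a function of case volume, so that a unit is only flagged if its observed rate falls outside the limits appropriate to its own volume. The function name, the normal approximation to the binomial, and the 5% target rate below are all assumptions for illustration.

```python
import math

def funnel_limits(volumes, target_rate, z=1.96):
    """Approximate 95% control limits for a funnel plot of event
    rates against case volume, using the normal approximation to
    the binomial distribution (illustrative sketch only)."""
    limits = []
    for n in volumes:
        # Standard error of a proportion shrinks as volume grows,
        # so the "funnel" narrows for high volume units.
        se = math.sqrt(target_rate * (1 - target_rate) / n)
        lower = max(0.0, target_rate - z * se)
        upper = min(1.0, target_rate + z * se)
        limits.append((n, lower, upper))
    return limits
```

Because the limits are wide at low volumes, a low volume centre with an unremarkable observed rate still cannot demonstrate statistical evidence of safety, which is the point made about safety charts above.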
The authors suggest that the plots could be used for internal monitoring and that individual operators could monitor their own performance in a way that is compatible with benchmarking against colleagues.

Analyses of national data have an important role in planning the delivery of services and in comparing peers, but they may be less useful at a local level. Local data must be used to understand individual unit outcomes, identify areas for improvement, and guide local commissioning. Local monitoring is of immediate importance to patients because divergent results can be identified and investigated. One way that this can be achieved is through the formation of mortality monitoring groups that meet on a monthly basis.10 These groups use local data and statistical techniques, including cumulative sum techniques such as cumulative risk adjusted mortality charts or moving average charts, to detect change.9 11 Individual death reports should be produced for every death to try to identify problems with care. It is through robust local monitoring that the greatest improvements in outcomes may be seen.

Too many clinicians and trusts defer responsibility for assuring outcomes to the analysis of the minimum data sets required by national bodies. This is not ideal because regular and prompt processing of local data encourages an early reaction to divergence. Finally, analyses of local data will be of interest to healthcare commissioners. Access to these data may help commissioners to decide where surgical procedures should be performed. Centres could be selected on the basis of the demonstration of safety, a sufficient case volume, and a commitment to the ongoing assessment of local outcomes.

The analysis of surgical outcomes must remain a priority at both local and national levels. Trusts and clinicians have an obligation to be aware of local outcomes and to detect and investigate changes promptly.
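The cumulative risk adjusted monitoring mentioned above can be sketched as a simple observed-minus-expected chart with a reset floor, one common form of risk-adjusted CUSUM. This is a minimal illustration under assumed inputs (a per-patient predicted mortality from some risk model, and an arbitrary signalling threshold), not the specific charts cited in references 9 and 11.

```python
def risk_adjusted_cusum(outcomes, predicted_risks, threshold=4.0):
    """Running observed-minus-expected mortality (a simple
    risk-adjusted CUSUM sketch). outcomes: 1 = death, 0 = survival;
    predicted_risks: per-patient predicted probability of death.
    Returns the running totals and whether the threshold was crossed."""
    running, totals, signalled = 0.0, [], False
    for died, risk in zip(outcomes, predicted_risks):
        running += died - risk        # excess deaths over expectation
        running = max(running, 0.0)   # reset floor, as in a standard CUSUM
        totals.append(running)
        if running >= threshold:
            signalled = True
    return totals, signalled
```

A monthly mortality monitoring group could recompute such a chart as each death report arrives, which is what makes divergence visible early rather than at the next national audit.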
National data must be analysed with relevant clinical input and be of a high enough standard to provide evidence to facilitate service reconfiguration.

Keywords

Data Collection; Surgical Procedures, Operative; Outcome Assessment, Health Care; Humans; State Medicine; United Kingdom
