“You cannot improve what you cannot measure.” – W. Edwards Deming
It is generally accepted that measurement of performance is the first step toward quality improvement, whether in business or medicine. This has been the fundamental concept driving the attention insurers and governmental and regulatory agencies place on development of quality and performance measures in health care. As our population ages, more health care will be consumed and even greater expense incurred. One of the hopes for the future is that we will find ways to maximize the benefit of our care, minimizing waste or harm – thus creating an optimal value (high quality at low cost) or “best practice”-based approach. While this is obviously not a near-term goal in an all-encompassing system, the longest journeys begin with the first step, and we can see the path before us. “Value-based purchasing” is coming. The public simply won’t accept that it takes many years or perhaps decades for widespread adoption of a “best practice.” We must accelerate the incorporation of our knowledge into everyday care.
Measurement to Improvement: Communication Strategies
“Information is not knowledge.”
– Albert Einstein
An area that has received too little emphasis is how we take data from measurements of quality and convert that into actual, real-world improvement. In fact, the manner in which quality measures can best be translated into improvements in practice “is not currently supported by a coherent body of literature.”1
The penultimate step in the process of improvement is feedback – communication of the status quo to all parties involved. There are many opportunities to diminish the benefit if this is not handled in a clear and sensitive manner. Dr. Varughese and colleagues at Cincinnati Children’s Hospital Medical Center (CCHMC) have been at the forefront of this process in pediatric anesthesia, but their experience translates well to anesthesia practice broadly. They indicate that this communication must be “consistent, clear and frequent.” This is accomplished at CCHMC by quarterly presentations of information to staff and hospital leadership in standardized, easily understood formats such as scorecards and dashboards.2,3 The outputs can be related to an entire department, to sections or, ideally, to individual physicians. Often, variability between practitioners, when clearly demonstrated, allows us to learn from one another what works best in the measured parameter and in our local practice. We have discarded the “bad apple” concept in favor of Deming’s system concept – believing optimal results come from great systems as opposed to great individuals. In this way, successful or improved practices spread.
In a highly recommended review, Benn and colleagues note, “Metrics collected during the immediate post-anaesthetic recovery period, such as patient temperature, patient-reported quality of recovery, and pain and nausea, provide potentially useful information for the anaesthetist, yet this information is not routinely fed back.” This individual approach is most important and is supported by the requirements of such data for maintenance of certification and re-credentialing.
Implementing Improvements Into Practice
Traditionally, intermittent and discontinuous data from periodic audits yield only snapshot views that cannot reveal underlying trends (much as a single blood pressure measurement cannot, in our clinical experience). Such potentially flawed systems inspire little confidence and leave much room for denial. In contrast, Continuous Quality Improvement (CQI) theory, also known as industrial process control, provides a data-rich environment for assessment of practice over time. It can be applied to almost any desired data set. CQI describes two types of variation – random or common cause variation (which can also be termed “background noise”) and special cause variation. Special cause variation stands out by its magnitude and indicates where improvement efforts are likely to offer the greatest benefit. Run charts and control charts are used to graphically represent clinical data (e.g., proportion of patients presenting to PACU with hypothermia) over time, and special cause signals become readily apparent when the data are displayed this way.1-3
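To make the control chart idea concrete, the following is a minimal sketch of how special cause variation might be flagged in monthly PACU hypothermia proportions. It uses the standard p-chart convention of three-sigma limits around the overall proportion; the function names and the monthly data are hypothetical illustrations, not drawn from the sources cited here.

```python
# Minimal p-chart sketch: flag special cause variation in monthly
# proportions (e.g., PACU hypothermia). Illustrative data only.
import math

def p_chart_limits(counts, sizes):
    """Center line and per-month 3-sigma control limits for a p-chart."""
    pbar = sum(counts) / sum(sizes)  # overall proportion = center line
    limits = []
    for n in sizes:
        sigma = math.sqrt(pbar * (1 - pbar) / n)
        limits.append((max(0.0, pbar - 3 * sigma), pbar + 3 * sigma))
    return pbar, limits

def special_cause_points(counts, sizes):
    """Indices of months whose proportion falls outside the control limits."""
    _, limits = p_chart_limits(counts, sizes)
    flagged = []
    for i, (c, n) in enumerate(zip(counts, sizes)):
        p = c / n
        lcl, ucl = limits[i]
        if p < lcl or p > ucl:
            flagged.append(i)
    return flagged

# Hypothetical monthly data: hypothermic patients / PACU admissions
hypothermic = [12, 10, 11, 13, 9, 27, 11, 10]
admissions  = [200, 195, 210, 205, 190, 200, 198, 202]
print(special_cause_points(hypothermic, admissions))  # → [5]
```

Months within the limits represent common cause “background noise”; the flagged month stands out by its magnitude and is where an improvement investigation would likely offer the greatest benefit.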
Process measures (“use of multimodal prophylaxis for PONV in high-risk patients”) differ from outcome measures (“PONV rates among different providers or groups”). CMS and others greatly prefer outcome measures because improved outcomes are the direct goal. One of the major problems with outcome measures is risk adjustment, because there are so many confounding variables in a clinical practice. Process measures offer the chance to reduce variability and streamline care. “Balancing measures” may also be needed when one measure may impact another area. For example, emphasis on minimizing pain in the PACU may beget more respiratory events, so a paired assessment of the two might detect whether improving the first exacerbates the second.2
Additionally, “process of care measures may be more sensitive to data feedback initiatives than outcomes” measurement, partly because they avoid the risk adjustment issue that leaves room for denial. In Benn’s report, feedback “success factors included sufficient timeliness (time between data collection and the forthcoming feedback report), dissemination of information, trust in data quality, and having a confidential or non-judgmental tone.”1
Other Ways Measurement Helps
Perhaps the most important benefit of quality measurement is, indirectly and ironically, difficult to measure. The mere act of measurement has a Hawthorne effect, in that individuals who know their performance will be examined make greater efforts to comply with guidelines and standards and to follow checklists (e.g., timely administration of prophylactic antibiotics). In a larger sphere, measurement also alters the culture of a department or industry by the manifest expression that quality is important to the community of providers. “Feedback from operational experience over time is an important mechanism of organizational learning, resulting in both incremental and large-scale modification to care systems and processes.”1
This culture change is likely the major benefit of even small efforts to improve quality. Once frontline caregivers participate in a focused project, they tend to see many other opportunities in other phases of care. This momentum propels individuals and departments to further strive for quality, increasing pride and a sense of accomplishment along the way. This can offset what I see as the greatest obstacle to consistent quality in anesthesia: the human tendency toward complacency in everyday delivery of anesthesia care. Especially in a field with a large proportion of repetitive steps, it is in our nature to fail to appreciate how valuable every small step is in each case, and to then accept shortcuts and “it’ll probably be okay” thinking. It is not enough to exhort ourselves to treat every patient as if he or she were our own loved one. This results in emotional fatigue. What we can do is develop a systems-based approach that supports best practice at every turn, following the Deming/Toyota model that quality flows not from individuals alone but from robust systems. Knowing where we are via measurement and feedback is the foundation of this process – for individuals, group practices and the specialty overall.
1. Benn J, Arnold G, Wei I, Riley C, Aleva F. Using quality indicators in anaesthesia: feeding back data to improve care. Br J Anaesth. 2012;109(1):80-91.
2. Varughese AM, Hagerman NS, Kurth CD. Quality in pediatric anesthesia. Paediatr Anaesth. 2010;20(8):684-696.
3. Varughese AM, Rampersad SE, Whitney GM, Flick RP, Anton B, Heitmiller ES. Quality and safety in pediatric anesthesia. Anesth Analg. 2013;117(6):1408-1418.