Are cognitive aids just the latest trend, or will their widespread integration truly transform the level of safe patient care we deliver? In the past decade, there has been an enormous increase in the use of cognitive aids in anesthesiology, particularly for perioperative emergency situations.1 While the idea has gained popularity recently, checklists in the O.R. date back at least to 1924, when Dr. Babcock asked, “Have you a plan of action so developed so that the right thing is always done ... and time is not frittered away?” and suggested that “a fixed emergency routine” be “posted on the walls of every operating room and drilled into every member of the staff.”2 This now-landmark manuscript is referenced in many modern discussions of emergency cognitive aids. Nearly a century later, robust development of such aids, as well as iterative testing and implementation strategies, is being widely disseminated in top journals. In a recent editorial, Dr. Augoustides and colleagues raised important questions: What is the ideal therapeutic dose of cognitive aids in perioperative practice? Is there a risk of “checklist toxicity” (checklist apathy with deleterious consequences) if this dose is exceeded?3 This article will describe some of the currently available cognitive aids, summarize published implementation strategies and lessons learned, and discuss the potential downsides of checklist toxicity as well as overreliance on or misuse of cognitive aids.
Perioperative cognitive aids have been developed from a variety of perspectives. A leading example of a specific emergency cognitive aid, developed by the Malignant Hyperthermia Association of the United States (MHAUS), incorporates an emergency hotline for decision support in addition to published guidelines, laminated cards and posters, and a mobile device app (www.mhaus.org/healthcare-professionals/managing-a-crisis). Aids have also been developed to address broader sets of perioperative emergencies. The Stanford Emergency Manual has been adopted and implemented widely and includes 23 critical events as well as broad Crisis Resource Management principles and reminders (emergencymanual.stanford.edu). (Stanford picture-based checklists, such as those on page 17, can be accessed at www.cognitiveaids.org). Its authors cite several good reasons to use emergency manuals, including the adverse impact of stress on memory and the repetitive practice required to achieve expert status (and thus, few or no clinicians are true experts in every emergency). Recently, the Emergency Manual Implementation Collaborative (emergencymanuals.org) was formed with the mission “to foster the dissemination and effective use of emergency manuals to enhance our patients’ safety.” Goldhaber-Fiebert and colleagues have studied critical implementation steps, emphasizing that provision of manuals is insufficient without training and buy-in from team members.4
It has been argued that perioperative events are distinct from prehospital or community environments (for which traditional life support algorithms should be followed) because O.R. crises are “usually witnessed, frequently anticipated, and involve a rescuer physician with knowledge of the patient’s comorbidities and coexisting anesthetic or surgically related pathophysiology.”5 Moitra and colleagues propose modification to standard ACLS algorithms intended to provide specific, etiology-based resuscitation responses. This provides even more impetus for developing checklists specifically for the O.R., rather than relying on and adapting existing aids from other contexts.
The Society for Pediatric Anesthesia has developed a set of free critical event checklists (pedsanesthesia.org) covering emergencies such as “Air Embolism,” “Anaphylaxis,” “Difficult Airway,” “Fire: Airway and O.R.,” “Local Anesthetic Toxicity,” “Loss of Evoked Potentials,” “Malignant Hyperthermia,” “Transfusion & Reactions” and “Trauma” as well as respiratory and hemodynamic emergencies. Importantly, they have included weight- and age-based dosing, as well as references for normal values, in an effort to reduce the ever-present potential for and widespread danger of pediatric dosing errors.6
Arriaga and colleagues recently published a randomized controlled trial in the New England Journal of Medicine offering support for the effectiveness of their Operating Room Crisis Checklists (projectcheck.org); when checklists were used during simulated O.R. crises, teams missed only 6 percent of critical clinical processes compared to 23 percent when checklists were not used.7 They concluded that every team performed better when the crisis checklists were available. Their set of checklists addressed a dozen scenarios that the authors perceived to be the most common operating room crises and included air embolism, anaphylaxis, asystolic cardiac arrest, hemorrhage followed by ventricular fibrillation, malignant hyperthermia, unexplained hypotension and hypoxemia followed by unstable bradycardia, and unstable tachycardia. Interestingly, actual surgeons were not part of these teams in most of the study sessions (the surgeon role was played by a resident who did not participate in decision-making), and the authors perceived that the critical actions were in the domain of the anesthesiologist and in some cases, the nurse.
Although many successes for checklists and cognitive aids exist, there are also some important considerations for adoption and research related to checklists. Psychologists know that excessive cognitive load can impair performance during a crisis, and it then follows that emergency manuals can unburden the team leader to step back, organize, diagnose and better manage critical events. Whether this really happens when cognitive aids are used in the operating room remains unknown. Thus far, the most compelling studies supporting beneficial effects of emergency manuals have important limitations: restricted team composition and context specificity.
Team composition is a potentially confounding issue because most of the teams studied thus far have been composed of residents and other trainees, with little participation by attending surgeons.7 Might more experienced clinicians be less inclined to use the emergency manual? Could their usage patterns differ from trainees’? And can ad hoc teams (as are typically formed during clinical care) be expected to perform in the same way that the study teams performed?
Context specificity refers to the process of validating the manual on a simulated crisis for which the cognitive aid is specifically designed. For example, the “anaphylaxis” algorithm is tested with a simulated anaphylaxis scenario, and so on. In specific contexts, performance improvements have been observed with cognitive aids, particularly when a “reader” is assigned.8-10 But what about situations that do not exactly match a diagnostic consideration or respond as expected to a therapeutic algorithm? Might teams become fixated on using the aid and “force” an existing chapter to fit the crisis? Could clinical outcomes be sub-optimal despite a high adherence to the algorithm, checklist or emergency manual?
Many of the checklist sets include diagnostic features and possible causes along with treatment algorithms. However, most of the cognitive aids focus on the treatment algorithm and at least partially assume accuracy of the working diagnosis or context. And yet, diagnostic errors (including inaccurate, delayed or missed diagnoses) are leading causes of medical error.11 Many cognitive aids reference classic “pertinent positives” (e.g., Air Embolus: ↓ETCO2 ↓SaO2 ↓BP) at the top of each checklist. While this may be beneficial, such descriptions may also lead clinicians astray by introducing a framing effect that results in premature closure. Confirmation bias may also play a role, as clinicians look to verify that, indeed, the suggested items on the checklist are present, rather than seeking any disconfirming evidence to support an alternate diagnosis. Phenomena such as framing effects, premature closure, and confirmation bias are among a larger group of decision factors that might contribute to error and are reviewed in detail elsewhere.12
Face validity arguments for adapting emergency manuals to medicine often include analogies to other safety-conscious industries. However, a careful examination of aviation data does not always support the common-sense view that emergency manuals improve safety. Consider the famous ditching of Flight 1549 on the Hudson River by Captain “Sully” Sullenberger (NTSB report, 2010, www.ntsb.gov/doclib/reports/2010/AAR1003.pdf). Within seconds after birds struck the aircraft, the co-pilot began reading aloud from the “Dual Engine Failure” page of the emergency manual. Some actions from this manual, such as re-starting the engines, did not work. Other actions from the manual, like “windmilling” (i.e., pointing the aircraft downwards to facilitate re-ignition), were rejected because the airplane was too low (the algorithm was designed for a context of engine failure at 20,000 feet, not for the low-altitude [3,000 feet] failure that actually occurred). At some point, Captain Sullenberger disregarded the manual and landed the plane using a combination of experience and educated guesswork. This complex cognition occurred in a very short span of only 208 seconds, and most certainly under stressful conditions with high-stakes outcomes. Did the manual help land the plane safely? On balance, the answer is “maybe.” On one hand, the checklist helped the crew focus on useful tasks (e.g., restarting the engines). On the other hand, the checklist did not match the crisis (i.e., high- versus low-altitude failure), and nothing in the manual seems to have directly facilitated the landing. Moreover, the cockpit recording suggests that perseveration on the engine re-start protocol was beginning to occur, and potentially helpful information (i.e., setting the wing-flaps correctly for a water landing) was not utilized.
Widespread adoption of emergency manuals may indeed be a much-needed step forward in patient safety, bringing the operating room into line with other safety-oriented industries. However, in parallel with implementation, it seems prudent to continue testing and quantifying their limitations so as to understand how to maximize their benefit. Specifically, these manuals should be studied during crises that do not exactly match a particular section of the emergency manual, and by teams composed of seasoned clinicians and attending surgeons.
Marjorie P. Stiegler, M.D. is Assistant Professor of Anesthesiology and Director of Anesthesiology Simulation, University of North Carolina, Chapel Hill.
David A. August, M.D., Ph.D. is an anesthesiologist at Massachusetts General Hospital, Department of Anesthesiology, Critical Care and Pain Medicine, Division of Pediatric Anesthesia, Boston.
1. Mulroy M. Emergency manuals: the time has come. APSF Newsl. 2013;28(1):1, 9-10.
2. Babcock W. Resuscitation during anesthesia. Anesth Analg. 1924;3(6):208-213.
3. Augoustides JG, Atkins J, Kofke WA. Much ado about checklists: who says I need them and who moved my cheese? Anesth Analg. 2013;117(5):1037-1038.
4. Goldhaber-Fiebert SN, Howard SK. Implementing emergency manuals: can cognitive aids help translate best practices for patient care during acute events? Anesth Analg. 2013;117(5):1149-1161.
5. Moitra VK, Gabrielli A, Maccioli GA, O’Connor MF. Anesthesia advanced circulatory life support. Can J Anaesth. 2012;59(6):586-603.
6. Merry AF, Anderson BJ. Medication errors--new approaches to prevention. Paediatr Anaesth. 2011;21(7):743-753.
7. Arriaga AF, Bader AM, Wong JM, et al. Simulation-based trial of surgical-crisis checklists. N Engl J Med. 2013;368(3):246-253.
8. Harrison TK, Manser T, Howard SK, Gaba DM. Use of cognitive aids in a simulated anesthetic crisis. Anesth Analg. 2006;103(3):551-556.
9. Neal JM, Hsiung RL, Mulroy MF, Halpern BB, Dragnich AD, Slee AE. ASRA checklist improves trainee performance during a simulated episode of local anesthetic systemic toxicity. Reg Anesth Pain Med. 2012;37(1):8-15.
10. Burden AR, Carr ZJ, Staman GW, Littman JJ, Torjman MC. Does every code need a “reader?” improvement of rare event management with a cognitive aid “reader” during a simulated emergency: a pilot study. Simul Healthc. 2012;7(1):1-9.
11. Patient safety primer: diagnostic error. PSNet, AHRQ website. http://psnet.ahrq.gov/primer.aspx?primerID=12. Updated October 2012. Accessed March 14, 2014.
12. Stiegler MP, Tung A. Cognitive processes in anesthesiology decision making. Anesthesiology. 2014;120(1):204-217.