Letter from the Editor

Parrying the Pressure to Proceed


I love non-fiction books about disasters of epic proportions.

 

Now, when I say disasters, I not only have the classic definition in mind—such as the Chernobyl nuclear accident and the Challenger space shuttle explosion—but also slow-burn ones like Theranos. In fact, Bad Blood, about that saga, is one of the most fascinating books I’ve ever read.

 

I’m thinking of disasters because I just purchased my second book about the Challenger incident. I read one a few years ago, and when I was recently looking for a new tome, another caught my eye. I’m only a few chapters in, and it got me thinking about the commonalities among such incidents—which led me to today’s healthcare and the pressure to implement AI.

 

But before we go into that, let’s go around the horn and review two of the aforementioned incidents.

 

The Chernobyl disaster unfolded in a climate where deference to authority and external pressures overrode whatever limited safety culture existed. A much-delayed test had been planned for that day—to be executed by the more experienced day crew. But grid controllers pressed the plant to keep generating power to meet regional demand, pushing the exercise late into the night and onto a less-experienced shift. Then, with senior managers intent on “getting the test done,” operators were urged to continue despite conditions that violated procedures rather than abandon the test window. Under the forceful direction of a supervising engineer, automatic protection systems were disabled or bypassed so they would not abort the test, alarms were normalized as nuisances, and too many control rods were withdrawn to claw back power. The resulting configuration—low power, poor coolant flow, high positive reactivity, and crippled safety interlocks—left the reactor in an exceptionally fragile state.

 

Those decisions directly set the stage for the accident. In short, hierarchical pressure and external production demands narrowed the operators’ choices to those that looked compliant in the moment but were systematically unsafe. By prioritizing schedule and appearances over conservative operations, the organization stripped away layers of defense that should have made a severe error recoverable; once those layers were gone, a single test sequence proved catastrophic.

 

The Challenger disaster unfolded amid institutional pressures that rewarded schedule adherence and public spectacle over conservative engineering judgment. NASA faced a growing launch backlog, political scrutiny of its shuttle program, and a highly publicized “Teacher in Space” mission timed for live classroom broadcasts. On the eve of launch, Morton Thiokol engineers warned that unusually cold temperatures would compromise the solid rocket booster O-ring seals and recommended a delay. During a late-night teleconference, NASA managers pressed for a rationale to proceed, and Thiokol management reversed the engineers’ “no-go,” shifting the burden of proof from demonstrating safety to disproving risk. Ice concerns on the pad were similarly reframed as hurdles to be managed rather than reasons to stand down, reflecting a culture of normalized deviance born of prior flights that had survived warning signs.

 

Those dynamics directly shaped the failure sequence. At liftoff, the frigid O-rings failed to seal the right booster’s field joint, allowing hot combustion gases to escape and erode the seal as the joint flexed under load. A flame plume developed, impinging on the external tank and aft strut; seventy-three seconds into flight it breached the liquid hydrogen tank, triggering structural breakup of the stack. What should have been caught by conservative launch criteria and deference to engineering evidence became a preventable catastrophe once layers of procedural safeguards were weakened by schedule pressure and managerial override.

 

Now, if the “solution” to such disasters were merely to avoid risk, that would not be sustainable in the real world. The only way to move things forward is to be innovative, and innovation involves taking risk. The question, as always, is how much risk and when.

 

As part of our recent Special Report on AI, Dr. John Halamka makes the point that taking a risk with supply chain—and perhaps overordering masks—is far different from taking a risk that could have direct and immediate clinical impact on a patient. The question is: What’s the risk and what’s the payoff?

 

The pressure on health system IT executives to move the ball forward today—specifically, to promote (or at least not inhibit) the use of AI to improve operations in every way—is tremendous. And I wonder if sometimes it doesn’t feel like my efforts to keep my lawn weed-free—an impossible task. But that is the job—that is the name of the game.

 

How do you function as an executive who lets a thousand AI flowers bloom while keeping a keen, discerning eye to immediately rip out the most dangerous weeds—for they are not all equally troubling? To overreact here is as problematic as being too passive. And therein lies the art of being a healthcare IT executive.

 

Those who rush to the scene of a disaster are rightly recognized as heroes. But there are others—unsung and unnoticed—perhaps even chided as impediments to progress, alarmists, or, worse, cowards—who demand the brakes be pressed, the train slowed, the weed pulled. History teaches us they are (or, when they fail to act, could have been) every bit as much the hero, even if we never know their names.

 

Related Reading


 Find this column online

Thoughts on this piece? Drop me a line at aguerra@healthsystemCIO.com

Special Report – AI

Health System Leaders Developing the Frameworks to Let AI Flourish

Health systems are moving beyond AI pilots to an operating model centered on measurable outcomes, fit-for-purpose governance, and disciplined change management. In this special report, healthsystemCIO Editor-in-Chief Anthony Guerra spoke with leaders from UCI Health, Duke, Mercy, Mayo Clinic Platform, John Muir Health, Cedars-Sinai, and Mount Sinai who described single-front-door intake, risk tiering, time-boxed pilots, and human-in-the-loop oversight. Early value concentrates in administrative work and call-center automation, while clinical uses advance carefully. Data quality, workforce activation, and swap-ready architectures enable sustained enterprise scale.


Experts Behind the Report

Featured Interview of the Week

Geisinger Refines AI Governance and Workforce Literacy

Morgan Jeffries, MD, Medical Director for AI, Geisinger, outlines a shift from model-building to product-driven AI operations with risk-tiered governance and oversight for platform tools. He describes centralized policy with distributed accountability, vendor vetting, and monitoring signals to prevent vigilance decay. The program emphasizes workforce literacy—covering hallucinations, bias, and safe prompting—and measured adoption of ambient documentation where value is proven. Priorities are set with executives, enabling capacity to focus on high-impact clinical and business outcomes.

News, Articles & Columns

Epic Art In Basket: Style vs. Substance

Stephon Proctor, PhD, Associate Chief Health Informatics Officer for Platform Innovation, Children’s Hospital of Philadelphia, assesses Epic’s Art for In Basket messaging. He reviews two persistent critiques—voice authenticity and clinical awareness—and outlines Epic’s responses: style learning from prior replies, mini-Insights patient context, Assessment and Plan inclusion, and organizational protocols. Proctor credits meaningful progress yet expects modest adoption gains, arguing trust depends on medical specificity rather than merely feeding GPT more patient and clinician data.

Partner Perspective: Artera’s de Zwirek Says IT Leaders Must Commit to Becoming AI Experts; Outlines Three Paths to Deploying Agents

Guillaume de Zwirek, CEO, Artera, says that health-system IT leaders must become fluent in AI to remain trusted advisors as adoption accelerates. In a wide-ranging interview, he outlines three paths—build, buy, or partner—while pointing to call-center automation as a near-term win with measurable ROI. de Zwirek emphasizes rigorous governance, including SOC 2/HITRUST, MCP-guarded integrations, and “LLM-as-judge” testing, plus executive dashboards tracking accuracy, handoffs, and latency. Starting with appointment verification, he says, builds muscle to scale responsibly enterprise-wide.

Hospital Trends in the Use, Evaluation, and Governance of Predictive AI, 2023-2024

U.S. hospitals are accelerating adoption of predictive AI, rising from 66% in 2023 to 71% in 2024, mostly integrated with EHRs, according to a data brief from ASTP. Uptake remains uneven—small, rural, independent, government-owned, and critical access hospitals lag. Fastest growth centers on billing simplification and scheduling, while inpatient risk prediction remains common. Hospitals favor EHR-supplied models but increasingly use third-party and self-developed tools. Most evaluate accuracy and bias, and monitor models post-deployment via multi-entity governance, yet gaps persist across hospital types.

The AMA AI Toolkit: Nice Definitions, Missing Implementation

In this review, Sarah Gebauer, MD, Senior Physician Researcher at RAND, assesses the AMA/Manatt “Governance for Augmented Intelligence” toolkit. She credits its clear definitions, risk taxonomy, and a model policy, plus an eight-step framework outlining roles and oversight. Yet she argues it lacks practical guidance—on committee operations, vendor evaluation, performance monitoring, incident response, and resourcing—and omits truly rigorous validation methods. The takeaway: a legitimizing blueprint that organizes governance, not a construction manual for running it.

Your Epic Vendors are Building the Wrong Things Too

John Lee, MD, Emergency Physician at Edward Hospital Naperville and an informaticist/Epic consultant, argues vendors replicate dysfunction by building bolt-ons that pull data out of Epic, fragmenting truth, creating latency, and diverting clinicians from workflows. He urges procurement to demand transformation, not automation: extend Epic’s capabilities (e.g., SmartData Elements, registries, CDS), embed tools, align with Epic’s architecture, and design to outcomes. The vendors that matter will reduce technical debt by amplifying Epic, not bypassing it.

Gartner Says That in the Age of GenAI, Preemptive Capabilities, Not Detection and Response, Are the Future of Cybersecurity

Preemptive cybersecurity—predictive intelligence, deception, and moving target defense powered by AI and ML—will surpass detection and response, reaching over 50% of IT security spending by 2030 (under 5% in 2024), according to Gartner. The expanding global attack surface and a projected one million CVEs by 2030 propel this shift. Gartner emphasizes early Autonomous Cyber Immune System concepts, agentic AI and DSLMs, and a move toward specialized, interoperable solutions requiring partnerships, standardized APIs, and common standards.

Sponsor Updates

KLAS Research Spotlight Report: Heidi Health Earns High Scores for Reducing Clinician Documentation Burden


Why HITRUST Isn’t Enough for Agentic AI Systems: Insights from Artera’s SVP of Technical Operations

Brilliant Bites

Top 5 Posts of the Week

Partner Perspective: Artera’s de Zwirek Says IT Leaders Must Commit to Becoming AI Experts; Outlines Three Paths to Deploying Agents -- 9/15/2025


Partner Perspective: Drawing on Global Experience, Heidi Health’s Kelly Offers Advice on Optimizing Clinician AI Adoption -- 9/03/2025


Special Report: Health System Leaders Developing the Frameworks to Let AI Flourish -- 9/15/2025


AI Necessitates New Approach to Clinician Training, Advises U Maryland Medical Center’s Kuebler -- 8/26/2025


Providence’s Shah Focused on Uptake of Digital Tools; Wendt’s Study Indicates Progress -- 9/03/2025

Upcoming Webinars

Click Here to Register for Any of the Webinars Below


Coordinating IT Training to Improve Usability and Reduce Burnout (10/2)

  • Gretchen Britt, Liberty Market VP Information and Technology, CIO, The University of Kansas Health System
  • Clara Lin, MD, VP/CMIO, Seattle Children's
  • Dirk Stanley, MD, CMIO, UConn Health 


From Conversation to Contract: Keys to Getting off on the Right Foot with Startups & Other Vendors (10/7)

  • Ryan Cameron, VP, Technology & Innovation, Children's Nebraska 
  • Michelle Stansbury, Associate Chief Innovation Officer & VP, IT Applications, Houston Methodist
  • Nick Culbertson, Managing Director, Techstars


Optimizing Ambient AI Adoption Across the Care Team (10/14)

  • Zafar Chaudry, MD, SVP – Chief Digital Officer & Chief AI and Information Officer, Seattle Children's 
  • Nancy Cibotti-Granof, MD, Associate CMIO, Beth Israel Lahey Health
  • Dr. Thomas Kelly, Co-Founder & CEO, Heidi Health 

On-Demand Webinars

Click Here to View Any of the Webinars Below


Strategic Transitions: The Do’s and Don’ts of Executive Career Moves

  • Chuck Christian, VP of Technology/CTO, Franciscan Health
  • Joy Oh, Chief Information & Digital Transformation Officer, The Christ Hospital Health Network
  • Chuck Podesta, CIO, Renown Health


Keys to Effective IT Capacity Management — Aligning Resources, Transparency & Communication to Meet Demand


  • Naomi Rapoza Lenane, CIO/VP of Information Services, Dana-Farber Cancer Institute
  • Muhammad Siddiqui, Chief Digital & Information Officer, Reid Health 
  • Rich Temple, Former VP/CIO, Deborah Heart and Lung Center 


Leading Through Today's Talent Crunch: Techniques for Attracting & Retaining Top Teammates

  • Michael Carr, CIO, Health First
  • Steve Stanic, CIO, Lake Charles Memorial Health System
  • Brian Sterud, VP/CIO, Faith Regional Health Services
