Willingness to Seek Bad News: A Key Leadership Skill for Crisis Management
The suppression of bad news, when consequential, is easy to decry. Yet it is more common than we care to admit. Understanding its sources and overcoming it are key to enterprise dynamics that support agility, resilience, and reliability, and ultimately great performance.
The New York Times reports “China Created a Fail-Safe System to Track Contagions. It Failed” (March 29, 2020). After getting ambushed by the SARS contagion in 2002, China planned never to get pandemic-bushwhacked again. The country built a reporting system to quickly and easily pull disease reports from localities and make them visible, and so actionable, at regional and national levels. Sounds great in theory: see and solve small local problems while the cost to fix is low and lead times are generous.
So, what happened? Not wanting to be bearers of bad news, local officials suppressed reports until the deluge was uncontainable. It’s a tragedy. Given everything the public has learned about “flattening the curve” and exponential growth/spread rates, one can assume that even a little more lead time would have had a significant effect on the disease’s spread in China and beyond.
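To make the lead-time arithmetic concrete, here’s a minimal back-of-the-envelope sketch in Python. The five-day doubling time and the case counts are assumed purely for illustration; they are not taken from the Times story or from any epidemiological model.

# Back-of-the-envelope only: constant doubling time, no interventions,
# no real epidemiology. All numbers below are illustrative assumptions.
DOUBLING_TIME_DAYS = 5.0   # assumed doubling time
INITIAL_CASES = 100.0      # assumed cases at day 0

def cases_after(days: float) -> float:
    """Cases after `days` of unchecked exponential growth."""
    return INITIAL_CASES * 2 ** (days / DOUBLING_TIME_DAYS)

late = cases_after(30)   # reports suppressed; act on day 30
early = cases_after(16)  # reports surfaced; act two weeks sooner
print(f"Act on day 30: ~{late:,.0f} cases")
print(f"Act on day 16: ~{early:,.0f} cases")
print(f"Two weeks of lead time: ~{late / early:.0f}x fewer cases at the moment of action")

Under those assumed numbers, two weeks of extra lead time means confronting an outbreak roughly a seventh the size. Small, honest reports early beat heroic responses late.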
Before taking chip shots at “the Chinese” for being peculiarly unwilling to surface bad tidings, let’s recognize that such an aversion is all too typical. HBS doctoral graduate Marcelo Pancotto did a study across plants that had “andon cords,” simple devices for shop-floor associates to call out problems that were making work difficult. In one plant, associates were pulling the cord regularly, more than once an hour, all day, every day. In the other, hardly a cord was pulled and hardly a problem was reported. You’ve guessed the irony. The plant with the frenzied cord pulling was the high-quality, high-productivity one. The plant with the least cord activity? Awful by about every measure.
Dr. Pancotto asked why.
In the high-performing plant, associates knew that when they called attention to themselves, a cascade of help was triggered, the first and typical responses being: “What’s difficult, and what can we do to help?” In contrast, in the dregs plant, associates realized that _at best_ there was no response. At worst, someone showed up with the accusatory “What’s _your_ problem!” followed by the insistent “Let’s get back to work.”
Another colleague, working a summer job years earlier in the same system, was tasked with putting “A OK” stickers on rear windows after final inspection unless he had problems to call out. When he let a few cars pass without stickers because of visible issues, he got called out by his supervisor for making trouble. When another car came by without a sticker and he got chewed out again, he had to point out that the car had no rear window onto which to affix the sticker. Finally, this colleague realized his job wasn’t calling out problems; it was putting stickers on windows. So he found where in receiving the window crates came in, opened them, stickered all the available windows, and spent the remainder of the week catching up on his reading.
Why does this problem suppression and reporting aversion occur? Here’s a theory.
When we first start an undertaking, we’re full of unanswered questions, problems that have yet to be resolved: What value are we actually trying to create? Whose needs are we trying to meet? What combination of science, technology, and routines will be effective and efficient?
We're happily in an exploratory and experimental mindset.
Once we converge on reasonable answers to those problems, our challenge is less to discover our way out of the darkness and more to ensure consistency in action, so we stay reliable and predictable. And once the institutional norms shift to reliability and predictability, rewards accrue to those who “get it right.”
All well and good until our operating environment changes and we need to reengage those withered muscles once tuned for seeing what’s wrong as a precursor to making it better.
What's the real problem? With all systems, brand new and "experimental" or long-standing and "operational," things go wrong all the time. Most of those "everything, all the time" problems are small, distributed, and largely inconsequential. Those are exactly the ones we should pay attention to, since correcting them takes less effort and there's ample lead time to act before they matter.
What happens instead? In the operational "keep it predictable" mindset, the little problems, the slips and mistakes and close calls, get swept under the rug. It's not pathological. It's wanting to prove that things are still stable and reliable, despite the aberration.
What's the leadership call to action? Make it safe for people to call out problems when and where they are seen, and respond with "What went wrong? What can we do?" rather than a "keep it to yourself" approach.
All the best!
Steve Spear DBA MS MS
For more on leading for high-speed crisis recovery, see chapter 10 of The High Velocity Edge. For the Navy's experience building its nuclear propulsion program, please see chapter 5.