AI is here—in life, in work, in education. So much so that, I confess, I get tired of hearing about it everywhere. And while we may not be as close as some of the most extraordinary predictions suggest, it's hard to deny we are in a new chapter.
There are many who understand AI much better than we do and can guide you on how and where to use it. Nevertheless, we are interested in some of the bigger questions AI provokes about the capabilities and success of our minds, our organizations, and society more broadly. Decisions about AI are not “set it and forget it”—they require an ongoing commitment to actively shaping the future for ourselves and for future generations. As leaders, we are not just exploring a new technology and questioning “business as usual,” we are re-examining our humanness, how we learn, and what matters most.
This Blueprint explores AI in school and at work—and offers questions to consider and ways to cross-pollinate these insights as you make wise decisions for yourself and your organization.
- Stephanie
As we engage more broadly with AI, one decision framework that helps us think critically about why and how to use it might be:
- What can be outsourced to AI?
- What is best co-created with AI?
- What work remains uniquely human?
Offloading certain tasks to AI quickly and efficiently can be highly beneficial—unless the goal is human learning. Recent studies show that AI use has the potential to negatively affect learning and cognitive development. The recent paper “Generative AI Without Guardrails Can Harm Learning” illustrates that students with unstructured “out-of-the-box” access to GPT-4 performed significantly worse on an unassisted math exam than students who only had access to their course book and class notes when studying for the test.
The authors of another paper, “GenAI as an Exoskeleton,” examine the results of training management consultants at BCG to code in Python and perform data science tasks. Half of the nearly 1,000 consultants were allowed to use ChatGPT. Although this access substantially improved their performance, the skill advantage disappeared when they tried to answer data science questions without the use of ChatGPT.
In “Using Generative AI to Learn Is Like Odysseus Untying Himself from the Mast,” David Deming examines these two studies and the implications for learning. “Learning is hard work. And there is now lots of evidence that people will offload it if given the chance, even if it isn’t in their long-run interest,” writes Deming. To avoid that temptation, he says, we must, like Odysseus, find a way to implement constraints:
“The Sirens offer Odysseus the promise of unlimited knowledge and wisdom without effort. He survives not by resisting his curiosity, but by restricting its scope and constraining his own ability to operate. The Sirens possess all the knowledge that Odysseus seeks, but he realizes he must earn it. There are no shortcuts. This is the perfect metaphor for learning in the age of superintelligence.”

Deming points to a potential way forward, highlighting a study where AI assistance becomes available only after you submit a solution: “AI can help with long-run learning if you first do the work yourself.”

First Do the Work Yourself
So let's expand our decision-making criteria with constraints to guide how and when to use AI by asking:
- When is it best to wait and not allow AI use?
- When is it best to provide access to AI that is structured or constrained (such as requiring the user to do the work first)?
- What (if any) skills or knowledge are not important to develop and are best outsourced to AI right away?
Answers will be informed both by an awareness of the strengths/weaknesses of AI and by an examination of factors that support human cognitive development. Some of the answers might be surprising! For example, in “The Case for Sharpening Your Math Skills in the Age of AI,” Harsha Misra provides a helpful analysis of a skill that many think is perfect for outsourcing to AI: math. He argues that being fluent in math is more important than ever for business leaders:
“AI is particularly good at finding exact answers to exactly stated questions, an ability Sanjoy Mahajan calls ‘academic’ math. Business math, however, is different. It requires practical, approximate, adaptable solutions to the fuzzy, fluid, squishy problems real-life actually hands out. Such problems expose AI’s weak points, and reveal the strength of human reasoning, creativity, and common-sense.”
Similar to Deming’s advice to “do the work first,” Misra recommends building numerical intuition by using an “unusual calculator which ‘thinks only if you think too.’ When you enter a calculation, the calculator first asks for your rough, best-guess answer. If your guesstimate is in the approximately right ballpark, it will oblige with the exact solution. Otherwise: Think harder, try again.”
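For readers who like to see a mechanism spelled out, the guess-first behavior of that calculator can be sketched in a few lines of code. This is only an illustrative sketch, not Misra's actual tool; the function name and the 25% "ballpark" tolerance are our own assumptions:

```python
def guess_first_calculator(expression: str, guess: float, tolerance: float = 0.25) -> str:
    """Reveal the exact answer only if the user's guesstimate is in the right ballpark."""
    exact = eval(expression, {"__builtins__": {}})  # toy evaluator; never eval untrusted input
    # Compare the guess to the exact value in relative terms (absolute terms near zero).
    if exact == 0:
        close = abs(guess) <= tolerance
    else:
        close = abs(guess - exact) / abs(exact) <= tolerance
    return f"{expression} = {exact}" if close else "Think harder, try again."
```

For example, a guess of 4,000 for 347 × 12 is within the ballpark, so the exact answer (4,164) is revealed; a guess of 100 is sent back with “Think harder, try again.”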
To develop the math fluency managers need to “structure and solve real-world problems in practical and flexible ways,” he offers three recommendations in the format of TRY, DO, WIN.
- TRY = Think and Reason for Yourself: “To borrow from a famous saying: It is better to be approximately right (using your own mind) than precisely wrong (using flawed model outputs).” A well-known example is the dot-com boom of the late 1990s, when many focused on measuring “eyeballs” rather than attending to cash flow.
- DO = Decisions vs. Outcomes: “We make decisions. The world observes outcomes. These two things are related. But they are not the same. … [U]sing probabilistic logic, like decision-trees, with the best inputs possible remains the best way to make good decisions.”
- WIN = When It’s Non-Linear: “In some situations, the theoretical average outcome over a hypothetical population (an 'ensemble'), may be very different than the actual expected outcome for any individual person in that same population.” This is where building numerical intuition is helpful.
In the process of asking, “What skills shouldn’t be outsourced to AI?” you will often uncover additional questions. For example, reading the article about math skills also reveals a key question: are schools actively helping students develop the math skills needed to solve real-life problems or are we overly focused on teaching the kind of “academic math” that AI excels at?
In addition to being able to make clear decisions based on imperfect information, thriving in the age of AI also requires the human edge that may be diminished or entirely at risk when AI is used without guardrails: emotional IQ. Consider the ability to “read the room,” to pick up on nuance, to inspire others to pursue an ambitious goal, to create conditions where others can succeed, or to debate and engage with different perspectives when emotions run high and the context is complicated.
One of the greatest potential strengths of AI for assisting learning is also one of its greatest risks to learning: its capacity for personalization. As Deming notes, unlike previous technologies, with AI both the inputs and the outputs are personalized. AI tailors itself to your preferences in ways that remove the friction we experience when interacting with other humans in real time, and in the messiness that is part of life. Escaping into that frictionless world may be our own “siren song”: how can we instead challenge ourselves to embrace the complexity of interacting and working with real people?
Similarly, while we can enhance our use of AI by prompting it to question our thinking, we should not be too quick to agree with what AI provides us. As “How AI Can Help Managers Think Through Problems” advises:
“Challenge the AI: As in a dialogue with a human, some friction in the thinking process is valuable. Ask the AI to provide different perspectives, ideas, or overlooked options. Don’t stop at the first generated output or conform too quickly to what the AI gives you.”

Paradoxically, the changes that result from AI can make human skills even more important, even as the prevalence of AI can make those skills harder to develop. As noted in “Building Leaders in the Age of AI,” there is a shift from “command and control” leadership structures to leading through influence and networks:

“Recent McKinsey Global Institute research on skill partnerships in the age of AI suggests that people, agents, and robots will increasingly be working side by side to facilitate workflows. In this environment, CEOs and other C-suite leaders will not always be the smartest people in the room. As a result, traditional command-and-control approaches are likely to fall flat. It will be much more important, instead, for these leaders to create the context in which their teams can successfully navigate AI-informed process changes, role changes, and other internal and external business disruptions.”

In a world with AI, learning must be intentionally designed to ensure we are cultivating emotional IQ and practicing our enduring human skills: recognizing and managing our own emotions and constructively responding to others. This HBR article emphasizes that empathy is a non-negotiable leadership skill, and that it’s essential to learn and practice.

“Choosing to bypass empathy can feel like an efficient shortcut, but when it comes to navigating problems and building engagement, research tells us that empathy is essential to effective connection, communication, and collaboration in the workplace. … Particularly in times of uncertainty and crisis, empathy is proven to be crucial for leading and safeguarding organizations out of them.”
So how can leaders and organizations develop and practice empathy? The author encourages developing an empathy protocol:
- Discuss what empathy means in your particular context.
- Identify how empathy will be expressed in behavior.
- Define how empathy, performance standards, and accountability work together.
- Commit to empathy for one another.
In the October Blueprint, we highlighted the importance of building the systems so that your team can excel. Practicing empathy is one of the ways you create the circumstances that help people thrive. It's messy and necessary work, and it doesn't happen automatically in the way AI-generated solutions might.
“Our physiology and psychology don’t change as quickly as technology does. As leaders, we need to focus on what stays constant—authenticity, trust, and human needs,” notes Tim Brown, Chair of IDEO & Leading for Creativity Instructor.
As you shape AI use within your organization, how might you consider not just the capabilities of the technology but also human needs and the kind of culture and policies that will support both good work and wellbeing? In education, we often hear of teachers questioning whether the work submitted by students was actually done by AI. The authors of the Brookings article “AI’s Future for Students Is in Our Hands” note that the flip side also needs consideration: “Many teachers distrust the authenticity of student work, while students increasingly question whether their teachers’ materials and feedback are genuinely their own.”
What is the cost to learning when trust breaks down? What are the parallel implications for managers and assessment processes at work? The abilities to learn, debate, create, and collaborate are all built upon environments with psychological safety and trust. In a world with AI (and amid societal fragmentation), recommitting to human relationships will become even more crucial for cultivating trust.
Portrait of an AI-Ready Learner

Many schools have created frameworks to clarify the skills students should have when they graduate. In “Profile of an AI-Ready Graduate,” Richard Culatta uses this approach to identify six core roles students should be comfortable with that go beyond AI-literacy skills, and frankly, these are the same skills professionals need:
- Learner: Students know how to use AI to set learning goals, create plans for learning new skills, identify strategies to get unstuck, and seek targeted feedback to improve performance and understanding.
- Researcher: Students know how to use AI to investigate and analyze topics, evaluate claims, and compare sources of information.
- Synthesizer: Students know how to use AI to synthesize, remix, and refine information into formats and levels of complexity that best meet their unique needs and capabilities.
- Problem Solver: Students know how to use AI as a brainstorming partner to generate new ideas and explore a wide range of possibilities.
- Connector: Students know how to use AI to increase human collaboration, including overcoming language barriers and finding common ground among divergent perspectives.
- Storyteller: Students know how to use AI to present and communicate complex ideas through text, image, audio, video, and other media.
Culatta emphasizes, “In addition to just knowing about AI, students need to practice using AI to think deeper, create better, and solve problems more efficiently than they could on their own.”
Though graduates certainly need to be AI-ready, the studies described above and Deming’s metaphor of Odysseus caution against being overhasty in how AI is used when the primary objective is learning. We appreciate the title of this HBR article because it hints at a key mindset: “When Working with AI, Act Like a Decision-Maker—Not a Tool-User.” When you shift beyond seeing AI as a “tool,” it opens up a broader perspective and lets you put at the center what matters most: helping your students (or employees) and yourself become critical thinkers and problem solvers both with and without AI. And, like Odysseus, we must impose constraints: to use AI well, in ways that amplify rather than block learning, careful consideration must be given to when, where, and why AI is incorporated. In a world permeated by AI, we need to double down on how well we think, collaborate, and create, skills that can easily atrophy or remain underdeveloped.
Being a critical thinker and a great learner has always been vitally important, and we need to protect and amplify our human potential to solve novel problems. As AI accelerates the pace of change, raises new questions of ethics, and can change an individual’s life in a moment (such as by a job loss), we all need to be better at the core human skills—not just at how to use AI.
What does this mean for leadership and governance? Your role as a leader, board member, educator, CEO, or head of school is more important than ever—and there is likely no context where consideration of AI can be ignored. We don't have answers, but we do think the right questions are the first step!
So before asking, “How should we use AI in our organization?” it is worth taking a step back and instead asking: What are the implications of AI for our people and for our purpose? Do we have a North Star (mission) that guides our work and helps us discern where and how AI fits? Do we have a clear strategy that guides the use of AI? Do we have the right people on the team at the board level and at the executive leadership level to shape policy and decisions in this new and unknown territory? Purposefully deciding how, when, and why to use AI in service of your mission and strategy will safeguard your values as you stay flexible and forward-thinking, helping you adapt to, and even capitalize on, the challenges and opportunities ahead.
Please feel free to forward The Blueprint to friends and colleagues who may find it helpful. And if you're not already on our mailing list and you'd like to receive future editions of The Blueprint, click the button below to subscribe!