The AI Productivity Puzzle: A Shared Challenge for Our Industry
As leaders in the pulp and paper industry, we're familiar with technological evolution. Yet as we embrace the era of artificial intelligence, many of us are encountering a curious puzzle: we are investing heavily in AI, but are we seeing the expected returns? The experience is becoming a common narrative across the industry. You may have seen the headlines from a widely discussed MIT study, which suggested that a high percentage of enterprise AI pilot projects, perhaps as many as 95%, don't deliver a measurable return on investment within their first six months.
Now, it's fair to point out that this specific figure has been questioned for its narrow definition of success, which focuses only on immediate profit-and-loss impact. The broader sentiment it captures, however, is one many of us can likely relate to. More established analyses from firms like Gartner and McKinsey echo the trend, showing that 40–85% of AI projects fall short of their goals. One McKinsey report even found that nearly 80% of companies using generative AI have yet to see a significant bottom-line impact. Does this mean the technology is failing? Or could it be that our approach needs a new perspective?
This widespread sense of unmet potential is a well-understood phenomenon known as the "J-Curve" of technology adoption. For established firms with legacy systems, the introduction of a truly transformative technology like AI often leads to a temporary dip in productivity before the long-term gains are realized. This initial dip represents the natural friction that occurs when a new way of working meets an established one.

The reasons for this friction are likely familiar. Our AI projects can be hampered by inconsistent data quality and siloed information, a challenge many companies report. They can be constrained when we try to fit powerful AI tools into rigid, pre-existing workflows rather than rethinking the workflows themselves. And sometimes they are launched without the foundational architecture needed to support them, prioritizing short-term savings over long-term growth. Perhaps, then, the high rate of disappointment isn't an indictment of AI's potential, but an invitation to think differently about our implementation strategy. And for that, our own industry's rich history offers a helpful guide.
A Lesson from Our Past: The Two-Phase Electrification of the Paper Mill
To understand our future with AI, it helps to look to our past with power. Before the Industrial Revolution, paper mills relied solely on water wheels, which tied them to riverbanks. The first great technological leap, the steam engine, broke these geographical bonds in the late 18th century: mills could be built anywhere and operate continuously. This was Phase 1 of a new industrial age.
However, we implemented this revolutionary power source through simple substitution. A single, massive steam engine took the place of the water wheel, but the method of distributing that power remained the same. A complex network of iron shafts, pulleys, and leather belts snaked through the factory, driving every machine. This system was mechanically rigid. The entire mill was locked into a single speed, dictated by the central engine. It was impossible to optimize one part of the process without affecting all the others. The factory layout itself had to follow the logic of the drive shaft, not the logic of an efficient workflow. The gains were real, but they were fundamentally limited by this inherited architecture.
The true transformation—what we can call Phase 2—arrived not with a more powerful engine, but with a new architecture for distributing power. The development of the individual electric motor drive, pioneered for paper machines around 1919 by companies like Westinghouse, was the breakthrough. Instead of one central motor, smaller, independent motors were placed on each section of the paper machine. Decentralizing power was transformative. It broke the rigid mechanical links of the past.
The impact was immediate and dramatic. For the first time, each section of the machine could be controlled with precision, independent of the others. This flexibility boosted production speeds, from around 5 meters per minute in the early 19th century to over 500 meters per minute by 1930. Paper quality improved, and factories could be arranged for efficiency. What this story reveals is that the greatest productivity gains were unlocked not by simply replacing the power source, but by completely redesigning the factory's operating architecture to take full advantage of the new technology's unique capabilities. The bottleneck was never the engine; it was the drivetrain.
Our Current AI Journey: Electric Motors on a Steam-Powered Architecture?
This historical parallel offers an insightful lens through which to view our current journey with AI. Are today's AI applications stuck in Phase 1? Are we attaching powerful new "electric motors" to the old, rigid "transmission belts" of our older systems and processes?
Let's consider some common AI use cases. We apply predictive maintenance algorithms to an existing machine to optimize its service schedule. This helps one part but leaves the overall process unchanged. We install sophisticated AI vision systems to detect quality defects. This is a powerful enhancement, but it's a downstream fix that doesn't alter the upstream processes that may have created the defects. These are important, incremental improvements, but they are often constrained by the architecture they are connected to, which may prevent the exponential ROI we seek.
The "transmission losses" of today are not friction and slipping belts, but the limitations of our IT and process architectures. A perfect illustration of this is the traditional nightly planning run in our ERP systems. For many, this is a familiar process: data from the entire day is collected and processed in one large, monolithic batch overnight. This is the very definition of a central, rigid system. Just like the steam engine's drive shaft set the pace for the entire factory, this nightly run dictates the rhythm for the business. Decisions are based on data that is hours old, causing delays in reacting to events. In a dynamic market, this inefficiency means we can't respond to a sudden supply chain delay or a spike in customer demand until the next cycle is complete, resulting in missed chances.
Now, imagine the Phase 2 alternative, which mirrors the shift to individual electric motors. Instead of a single nightly run, we have intelligent, AI-powered software agents operating in real-time. Each agent acts like an independent motor, focused on a specific task. One agent reacts instantly to a new customer order, another flags a supplier delay the moment it's reported, and a third adjusts a production schedule based on a machine's sensor data. They work in parallel, driven by events as they happen. This flexible setup cuts delays and enables smart decisions. These agents don't just react; they can be proactive—predicting demand, managing inventory, and running at market speed.
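To make the "one independent motor per task" idea tangible, here is a minimal sketch of an event-driven agent setup. It is not a real product API; the `EventBus` class, event names, and agent behaviors are all hypothetical stand-ins for whatever event fabric and agents an actual landscape would use:

```python
# Minimal publish/subscribe sketch (hypothetical names): each "agent"
# subscribes to one event type and reacts the moment the event arrives,
# in contrast to waiting for a nightly batch run.
from collections import defaultdict
from typing import Callable

class EventBus:
    """Tiny dispatcher standing in for a real event-streaming fabric."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable) -> None:
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> list[str]:
        # Every subscribed agent reacts immediately and independently.
        return [handler(payload) for handler in self._handlers[event_type]]

bus = EventBus()
# One "motor" per task, mirroring the individual electric drives.
bus.subscribe("new_order", lambda e: f"schedule order {e['id']}")
bus.subscribe("supplier_delay", lambda e: f"re-plan around {e['supplier']}")
bus.subscribe("sensor_alert", lambda e: f"adjust speed on {e['machine']}")

print(bus.publish("supplier_delay", {"supplier": "S-42"}))
```

Because each handler is independent, one agent can be added, removed, or improved without stopping the others, which is exactly the decoupling the individual electric motor brought to the paper machine.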
This powerful example shows why our focus shouldn't just be on finding discrete "use cases" for AI. That keeps us in Phase 1. The real opportunity is in building a foundational architecture that allows these intelligent agents to orchestrate entirely new, more efficient end-to-end processes. End of Part 1.
For Part 2, look out for next month's newsletter, where we will discuss the next step in this journey: advocating for a modern, flexible foundation built on cloud-based ERP, composable innovation platforms, and unified data fabrics, enabling SAP Business AI to orchestrate intelligent, end-to-end processes and unlock the full potential of AI.
For more about how SAP supports the mill products industries, including paper and packaging, go here.