Volume 7 Issue 11 November 2025

A Paperitalo Publication
In this Issue

Welcome to Industree 4.0 for November 2025, exclusively sponsored by SAP.

SAP

By Kai Aldinger, Global Lead, Forest Products, Paper, Packaging, SAP AG

Is Your AI Strategy Stuck in the 19th Century? A Lesson from Our Industry's Past (Part 1 of 2)

The AI Productivity Puzzle: A Shared Challenge for Our Industry


As leaders in the pulp and paper industry, we are familiar with technological evolution. Yet, as we embrace the era of artificial intelligence, many of us are encountering a curious puzzle. We are investing heavily in AI, but are we seeing the expected returns? This experience is becoming a common narrative across the industry. You may have seen the headlines from a widely discussed MIT study, which suggested that a high percentage of enterprise AI pilot projects—perhaps as many as 95%—don't deliver a measurable return on investment within their initial six months.


Now, it's fair to point out that this specific figure has been examined for its narrow definition of success, focusing only on immediate profit-and-loss impact. However, the broader sentiment it captures is one that many of us can likely relate to. Broader analyses from established firms like Gartner and McKinsey echo this trend, showing that 40–85% of AI projects fall short. One McKinsey report even found that nearly 80% of companies using generative AI have yet to see a significant bottom-line impact. Does this mean the technology is failing? Or could it be that our approach needs a new perspective?


This widespread sense of unmet potential is a well-understood phenomenon known as the "J-Curve" of technology adoption. For established firms with older systems, the introduction of a truly transformative technology like AI often leads to a temporary dip in productivity before the long-term gains can be realized. This initial dip represents the natural friction that occurs when a new way of working meets an established one. The reasons for this friction are likely familiar. Our AI projects can be hampered by inconsistent data quality and siloed information, a challenge noted by many companies. They can be constrained when we try to fit powerful AI tools into rigid, pre-existing workflows, rather than rethinking the workflows themselves. And sometimes, they are launched without the foundational architecture needed to support them, focused on short-term savings over long-term growth. Perhaps, then, the high rate of disappointment isn't an indictment of AI's potential, but rather an invitation to think differently about our implementation strategy. And for that, our own industry's rich history offers a helpful guide.


A Lesson from Our Past: The Two-Phase Electrification of the Paper Mill


To understand our future with AI, it can be helpful to look to our past with power. Before the Industrial Revolution, paper mills relied solely on water wheels. The first great technological leap, the steam engine, broke these geographical bonds in the late 18th century. Mills could be built anywhere and operate continuously. This was Phase 1 of a new industrial age.


However, we implemented this revolutionary power source through simple substitution. A single, massive steam engine took the place of the water wheel, but the method of distributing that power remained the same. A complex network of iron shafts, pulleys, and leather belts snaked through the factory, driving every machine. This system was mechanically rigid. The entire mill was locked into a single speed, dictated by the central engine. It was impossible to optimize one part of the process without affecting all the others. The factory layout itself had to follow the logic of the drive shaft, not the logic of an efficient workflow. The gains were real, but they were fundamentally limited by this inherited architecture.


The true transformation—what we can call Phase 2—arrived not with a more powerful engine, but with a new architecture for distributing power. The development of the individual electric motor drive, pioneered for paper machines around 1919 by companies like Westinghouse, was the breakthrough. Instead of one central motor, smaller, independent motors were placed on each section of the paper machine. Decentralizing power was transformative. It broke the rigid mechanical links of the past.


The impact was immediate and dramatic. For the first time, each section of the machine could be controlled with precision, independent of the others. This flexibility boosted production speeds, from around 5 meters per minute in the early 19th century to over 500 meters per minute by 1930. Paper quality improved, and factories could be arranged for efficiency. What this story reveals is that the greatest productivity gains were unlocked not by simply replacing the power source, but by completely redesigning the factory's operating architecture to take full advantage of the new technology's unique capabilities. The bottleneck was never the engine; it was the drivetrain.


Our Current AI Journey: Electric Motors on a Steam-Powered Architecture?


This historical parallel offers an insightful lens through which to view our current journey with AI. Are today's AI applications stuck in Phase 1? Are we attaching powerful new "electric motors" to the old, rigid "transmission belts" of our older systems and processes?


Let's consider some common AI use cases. We apply predictive maintenance algorithms to an existing machine to optimize its service schedule. This helps one part but leaves the overall process unchanged. We install sophisticated AI vision systems to detect quality defects. This is a powerful enhancement, but it's a downstream fix that doesn't alter the upstream processes that may have created the defects. These are important, incremental improvements, but they are often constrained by the architecture they are connected to, which may prevent the exponential ROI we seek.


The "transmission losses" of today are not friction and slipping belts, but the limitations of our IT and process architectures. A perfect illustration of this is the traditional nightly planning run in our ERP systems. For many, this is a familiar process: data from the entire day is collected and processed in one large, monolithic batch overnight. This is the very definition of a central, rigid system. Just like the steam engine's drive shaft set the pace for the entire factory, this nightly run dictates the rhythm for the business. Decisions are based on data that is hours old, causing delays in reacting to events. In a dynamic market, this inefficiency means we can't respond to a sudden supply chain delay or a spike in customer demand until the next cycle is complete, resulting in missed chances.


Now, imagine the Phase 2 alternative, which mirrors the shift to individual electric motors. Instead of a single nightly run, we have intelligent, AI-powered software agents operating in real-time. Each agent acts like an independent motor, focused on a specific task. One agent reacts instantly to a new customer order, another flags a supplier delay the moment it's reported, and a third adjusts a production schedule based on a machine's sensor data. They work in parallel, driven by events as they happen. This flexible setup cuts delays and enables decisions at the moment they matter. These agents don't just react; they can be proactive—predicting demand, managing inventory, and operating at the speed of the market.
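
To make the contrast concrete, here is a minimal sketch of that event-driven pattern in plain Python. The event names, payloads, and agent behaviors are assumptions invented for this illustration, not a description of any SAP product:

    from collections import defaultdict
    from typing import Callable

    class EventBus:
        """Each subscribed 'agent' is like an independent motor on one
        section of the machine: it reacts only to the events it cares about."""

        def __init__(self) -> None:
            self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
            self._handlers[event_type].append(handler)

        def publish(self, event_type: str, payload: dict) -> None:
            # Handlers run as events arrive -- no waiting for a nightly batch.
            for handler in self._handlers[event_type]:
                handler(payload)

    bus = EventBus()
    bus.subscribe("order.created",
                  lambda e: print(f"Order agent: reserve stock for {e['order_id']}"))
    bus.subscribe("supplier.delayed",
                  lambda e: print(f"Supply agent: replan around {e['supplier']}"))

    bus.publish("order.created", {"order_id": "SO-1001"})
    bus.publish("supplier.delayed", {"supplier": "PulpCo"})

Each subscriber plays the role of an independent motor: it acts the moment its event is published, instead of waiting for a central nightly batch to turn over.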


This powerful example shows why our focus shouldn't just be on finding discrete "use cases" for AI. That keeps us in Phase 1. The real opportunity is in building a foundational architecture that allows these intelligent agents to orchestrate entirely new, more efficient end-to-end processes. End of Part 1.


For Part 2, look out for next month's newsletter, where we will take the next step in this journey: advocating for a modern, flexible foundation built on cloud-based ERP, composable innovation platforms, and unified data fabrics, enabling SAP Business AI to orchestrate intelligent, end-to-end processes and unlock the full potential of AI.


For more about how SAP supports the mill products industries including paper and packaging go here.

The End of my Career?

By Pat Dixon, PE, PMP


President of DPAS (DPAS-INC.com)

A friend shared a paper from some people at ABB entitled “Spec2Control: Automating PLC/DCS Control-Logic Engineering from Natural Language Requirements with LLMs - A Multi-Plant Evaluation”. After consuming its content, I might have to consider whether Walmart needs more greeters.


The paper described an application named Spec2Control developed by ABB that can produce control system logic using large language models (LLMs). The LLM converts natural language to logic that can be translated into files for the desired control system platform. The paper says Spec2Control “can generate 98.6% of correct control strategy connections autonomously, and can save between 94-96% of human labor.”
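
To make that pipeline tangible, here is a minimal sketch of the general pattern the paper describes: a natural-language control narrative goes in, structured control-strategy connections come out. Nothing below comes from ABB's actual implementation; the prompt, the placeholder call_llm function, and the tag names are all assumptions for illustration:

    import json

    PROMPT_TEMPLATE = """You are a control engineer. Convert the control
    narrative below into a JSON list of control strategy connections, each
    with "source", "destination", and "signal_type" fields.

    Narrative:
    {narrative}
    """

    def call_llm(prompt: str) -> str:
        # Placeholder for any LLM completion call (e.g., an HTTP API).
        # Returns a canned response here so the sketch runs standalone.
        return json.dumps([
            {"source": "LT-101", "destination": "LIC-101", "signal_type": "PV"},
            {"source": "LIC-101", "destination": "LV-101", "signal_type": "OUT"},
        ])

    def narrative_to_connections(narrative: str) -> list[dict]:
        raw = call_llm(PROMPT_TEMPLATE.format(narrative=narrative))
        return json.loads(raw)  # a real pipeline would validate the schema here

    narrative = ("Level transmitter LT-101 feeds level controller LIC-101, "
                 "which drives level valve LV-101.")
    for c in narrative_to_connections(narrative):
        print(f"{c['source']} -> {c['destination']} ({c['signal_type']})")

The last step in the real tool, translating validated logic into files for a specific control platform, is exactly where the platform knowledge lives; it is deliberately left out of this sketch.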


If this is true, have I lost 96% of my value in the marketplace?


I will start by saying I am very impressed by this work, but being an engineer I am skeptical until I use it. It appears I will have an opportunity to do so because ABB is making Spec2Control available “as an open-source variant for independent validation.” When I get the means of accessing Spec2Control and the free time, it will be fun to play with.


On the one hand, if it reduces the mundane effort of coding, it can make my life much more enjoyable. I have taken advantage of ChatGPT to code for me, and it handles most of the syntax issues that cause me to tear my hair out. I hate having to figure out how Pascal differs from Java, which differs from C++, which differs from Python. I am one of many who have saved a lot of time by having an LLM produce code that compiles.


However, this is not a foolproof method. For example, right now I am recertifying my Ignition Gold credentials. Ignition is an HMI platform from Inductive Automation, and the Gold test is rather daunting. It presents very challenging requirements for an application that you have to build, and there is a lot of scripting involved. ChatGPT can be helpful, but often it simply does not know enough about Ignition to give the right answer. Some of the scripts may look like correct Python syntax, but the code it produces may call functions that don't exist in Ignition, or simply not do what you want. This can be very frustrating, and if a skilled automation engineer doesn't catch the problem, it could lead to an HMI that doesn't work or that dangerously misleads an operator.
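
This is one reason LLM-generated scripts need a verification step before they get near an operator. As a minimal sketch, assuming you can maintain an allow-list of functions known to exist in your target environment, Python's standard ast module can flag suspect calls for human review. The Ignition-style names below are illustrative, and real Ignition scripts are Jython, so treat this as a rough pre-check, not a guarantee:

    import ast

    # A deliberately tiny allow-list of functions known to exist in the
    # target environment; a real one would be built from documentation.
    KNOWN_FUNCTIONS = {
        "system.tag.readBlocking",
        "system.tag.writeBlocking",
    }

    def dotted_name(node: ast.AST) -> str | None:
        # Reconstruct a dotted call name like "system.tag.readBlocking".
        parts = []
        while isinstance(node, ast.Attribute):
            parts.append(node.attr)
            node = node.value
        if isinstance(node, ast.Name):
            parts.append(node.id)
            return ".".join(reversed(parts))
        return None

    def unknown_calls(script: str) -> list[str]:
        # Return dotted call names in the script not on the allow-list.
        flagged = []
        for node in ast.walk(ast.parse(script)):
            if isinstance(node, ast.Call):
                name = dotted_name(node.func)
                if name and name not in KNOWN_FUNCTIONS:
                    flagged.append(name)
        return flagged

    # A plausible-looking generated line calling a function that doesn't exist:
    generated = 'value = system.tag.getValue("[default]Tank/Level")'
    print(unknown_calls(generated))  # ['system.tag.getValue']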


Of course, as LLMs improve they can become more accurate, but since their training data is whatever is found on the internet, that accuracy depends on what information has been publicly disclosed.


As for my career, I do not regard coding as my unique value add. A lot of the work in automation is in the design. Spec2Control requires a control narrative to produce the desired code, and that control narrative is a design effort that requires experience and expertise. Deciding which logic belongs in which PLC, and which functionality belongs in a supervisory application or in the HMI, is also a design effort. Advanced control strategies require consideration of the functionality available in the control platform as well as an understanding of the process and equipment. Data analytics requires knowing what data should be used, how to pre-process it into a usable dataset, and enough first-principles understanding to produce models that are sensible. Current LLMs do not replace that design experience and expertise, and they may never do so.
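
As a small illustration of the pre-processing judgment that last point describes, here is a minimal sketch using pandas. The tag names, physical limits, and steady-state rule are assumptions made up for the example; the point is that each line encodes process knowledge rather than statistics:

    import pandas as pd

    def prepare_dataset(raw: pd.DataFrame) -> pd.DataFrame:
        # Expects raw sensor data indexed by timestamp (a DatetimeIndex).
        df = raw.copy()
        # Process knowledge: a stock consistency reading outside its physically
        # possible range is a failed measurement, not a statistical outlier.
        df = df[(df["consistency_pct"] > 0) & (df["consistency_pct"] < 8)]
        # Align mixed-rate sensors on a common time base before modeling.
        df = df.resample("1min").mean()
        # First-principles filter: keep only steady-state operation, defined here
        # as machine speed varying less than 1% over a rolling 10-minute window.
        speed_var = df["machine_speed"].rolling("10min").std() / df["machine_speed"]
        return df[speed_var < 0.01].dropna()

No model, however sophisticated, can recover these decisions from the raw data alone.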


I will continue to try growing my automation business. I don’t expect that my value will be replaced anytime soon. But if you see me at Walmart, I will try to give you a kind greeting.

AI evolution will be fast and long

When I was a boy, my Dad owned a small machine shop. It was caught in the technology transition from line shaft drives to discrete electric motors. Since it was all I knew (most eight-year-olds don't spend much time in machine shops), I thought it was state of the art. Kai's description is spot on.


I think many people today think that AI will be "one and done." That is not what Kai is saying and I fully agree with him.


When it comes to AI, we are early in the first inning. Once we capture the coarse, large wins in AI, we will expose the next layer and the next layer and the next layer. We don't know how many layers there will be because we cannot see that far nor can we imagine what else can be "AI'd." It will be exciting to watch.

Real-Time Data Plumbing: The Hidden Cost of Scaling Predictive Analytics

By IIoT World

When manufacturers discuss predictive analytics, the focus often jumps straight to machine learning models or AI.

IIoT Sensors

By David Greenfield

Find out how manufacturers are using IIoT-enabled sensors to reduce downtime, optimize energy consumption, and achieve measurable ROI through improved diagnostics and real-time monitoring.

Power Management System Market to Achieve 8.72% CAGR Through 2035

By Industry Today

Increasing demand due to digitalization, grid automation, and energy-efficient industrial control.

What's the Value of Combining IIoT with Blockchain?

By Henry Costa

I’ve been around manufacturing tech for a while — long enough to see a lot of “next big things” come and go. Lately, I keep hearing more people ask: “What if we combine IIoT with Blockchain?” Is it just hype, or is there real value here? Let’s break it down, using real-world examples and what I’ve actually seen in plants, not just what’s in the brochures.

Industree 4.0 is exclusively sponsored by SAP