The evolution of digital twins: Current capabilities
We discussed the design origins of digital twins in an earlier blog, which captured the “as-designed” state of the equipment. It then moved into discussing the low-volume unit testing phase of design—with sensors capturing the performance—and finally to the “as-built” state for each item manufactured from the design.
Let’s first step back to design. The equivalent of the crash test is a time-consuming and costly exercise that still needs to be performed to understand how the system works as a whole. However, tweaks to the design need to be evaluated, and it is difficult to predict how changes to a component will affect the performance of the system.
All the telemetry captured facilitated the development of detailed digital simulation models. While simulation had been used to some extent during the design phase, the collection of real performance telemetry represented an incredible increase in the fidelity of the digital simulations.
As you can imagine, it is a lot less expensive to test a design using digital simulation than a physical simulation, such as a crash test. However, early digital simulations were limited by computing power and software to component analysis, usually of the physical characteristics, rather than system analysis, which is more about how the components interact.
Each of the components has to operate independently as expected. However, in something as complex as a manufacturing line, it is just as important to understand how the different components interact.
As an example, I once came across a case in the ceramics industry in which a molding machine was followed by an oven. The issue was that the molding machine operated in batch mode, meaning a lot of material appeared at once with long intervals between batches, while the oven operated in continuous mode, meaning a small number of formed shapes were fed in at short intervals. On paper, the molding machine and oven should have worked perfectly together because their hourly throughput was the same, but their behavior was very different. No one thought to put a buffer between the molding machine and the oven, leading to absolute chaos.
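The mismatch is easy to see in even a toy discrete-event sketch. The numbers below are hypothetical, purely for illustration: a molding machine releases a batch of 60 parts once an hour, and an oven consumes one part per minute, so hourly throughput matches exactly, yet the buffer between them must absorb an entire batch.

```python
# Hypothetical batch-vs-continuous simulation (illustrative numbers only).
def simulate(minutes, batch_size=60, batch_interval=60, oven_rate=1):
    buffer = 0
    starved = 0          # minutes the oven had nothing to process
    peak = 0             # largest buffer level observed
    for t in range(minutes):
        if t % batch_interval == 0:   # the molding machine dumps a whole batch
            buffer += batch_size
            peak = max(peak, buffer)
        if buffer >= oven_rate:       # the oven pulls continuously
            buffer -= oven_rate
        else:
            starved += 1              # no buffer stock: the oven sits idle
    return peak, starved

peak, starved = simulate(8 * 60)     # one 8-hour shift
print(f"peak buffer: {peak} parts, oven starved: {starved} min")
```

Even though both machines average 60 parts per hour, the buffer must hold a full batch (60 parts here) at the moment each batch lands. Shrink the buffer below that and the material has nowhere to go; remove the batch entirely and the oven starves. A spreadsheet comparison of hourly rates would never surface this.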
Digital simulation is even more important in supply chains because supply chains are very complex systems. The more important point is that supply chains are highly dynamic, given the dynamic nature of demand (volume, mix, location), the performance of equipment (rate, quality, up-time), the product portfolio (new products, end-of-life products, launch dates), inventory (warehouse capacity, inventory turnover) and suppliers (lead times, costs, …). While each of these aspects of a supply chain is dynamic in itself, the more complex issue is how each influences the others.
From physical to policy digital twins
Once we get to complex systems made up of many components, such as a supply chain, we need to design and capture policies: operating modes, inventory levels, preferred suppliers, market segmentation, revenue targets, etc. These policies are just as important to achieving operational goals as the physical nature of the supply chain.
The important point is that the supply chain design, specifically the policies, needs to be under constant review based upon actual performance. The physical supply chain is far more difficult to change in a short period of time. Internal policies, however, can be changed in seconds, while external policies, such as customer delivery lead time or supplier delivery lead time, require more time. And some policies require the approval of external bodies, such as EMA or FDA approval in pharmaceutical manufacturing and distribution.
Simulation is of huge value in evaluating the impact of policies. Of course, there is always the question of the fidelity of the simulation model. Fidelity can be split into the 'physics' of the model and the input values. In other words, are the policies and operational capabilities of the supply chain represented correctly from a mathematical perspective, and do the input parameter values represent realistic values?
All too often simulations are based on the design values. They should be based on demonstrated performance.
I want to point out that supply chain planning is itself a form of simulation. Unfortunately, planning systems use static, single-value master data extracted from ERPs as input parameters. Master data is design data, and, all too often, many of these values are best guesses, almost never based on demonstrated performance. And yet it is clear that the input parameters are in fact variable, best represented by a distribution rather than a single value.
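A minimal sketch of why the distribution matters, using made-up numbers: suppose the ERP master data records a supplier lead time of 10 days, and the reorder point is set from that single value. If the demonstrated lead times average 10 days but actually vary, sampling that variability exposes a stockout risk the single-value plan hides entirely.

```python
# Illustrative only: single-value master data vs. a demonstrated distribution.
import random

random.seed(42)
daily_demand = 20                                  # units/day, held constant here
master_lead_time = 10                              # the single value in the ERP
reorder_point = daily_demand * master_lead_time    # 200 units

# Demonstrated lead times: same mean of 10 days, but variable in practice.
def demonstrated_lead_time():
    return random.choice([7, 8, 9, 10, 11, 12, 13])

trials = 10_000
stockouts = sum(
    1 for _ in range(trials)
    if demonstrated_lead_time() * daily_demand > reorder_point
)
print(f"stockout risk hidden by the single-value plan: {stockouts / trials:.0%}")
```

Planned against the single value, the reorder point covers demand exactly on an average cycle, yet roughly three lead times in seven run long and exhaust the stock before replenishment arrives. Only by representing the parameter as a distribution does the risk become visible, which is the argument for feeding demonstrated performance, not design values, into planning.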
So far we have established the use of a digital twin to capture both the physical design and operating characteristics, and the feeding of these into simulation models to test the efficacy of the design and operating characteristics, supply chain planning being a form of simulation.
In the next blog we will discuss using demonstrated performance in the planning/simulations and the importance of including variability in planning.