Posted 24 October 2023 by
Koen Cobbaert
Lead Solution Scientist for Axon Technology

Explainable AI: Busting the black box, starting with AI transparency

In parts one and two of our three-part blog series on Explainable Artificial Intelligence (AI), I’ve explained the promise of AI in supply chain management business processes and how to build trust in the technology. I’ve explored the shortfalls and limitations of today’s AI modeling techniques and made a case for the promise of explainable AI. Now, in the final installment of the series, I want to delve further into the explainability factor—from techniques to make an AI model explainable and the critical factor of AI transparency to the role that a digital supply chain twin like Axon plays in all of this.

How explainable should AI be?

You might be wondering exactly how far one needs to go in pursuing this goal of explainability when it comes to AI. In scenarios in which AI is leveraged to tailor advertising toward consumers, facilitate investment choices or propose policies to optimize your supply chain, the necessary degree of interpretability will undoubtedly differ. We maintain that three pivotal factors warrant consideration when delineating the contexts demanding interpretability and the specific extent thereof:

  • Type of AI being used: With a simple rule-based AI system, explainability is by definition very high. With machine learning techniques, it can range from high interpretability (e.g., decision trees) to very low (e.g., artificial neural networks).
  • Type of interpretability: One can distinguish between explainability (explaining how the model came to a particular decision) and transparency (explaining how the model works).
  • Type of impact: Low-impact decisions, like a recommendation for what TV show to watch, do not require high levels of interpretability. Redefining the supplier for one of your key components requires far more explanation. To gauge the impact of a decision, consider its potential revenue impact, the frequency with which the decision is made, the regulatory context around it and the overall risk associated with it.
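The first bullet can be made concrete with a minimal sketch of a rule-based system. The policy and its thresholds below are purely hypothetical, but they show why such systems are explainable by construction: every recommendation traces back to an explicit, human-readable rule.

```python
# Hypothetical (s, S) reorder policy; thresholds are invented for illustration.
# Every decision comes with the rule that produced it, so explainability
# requires no extra machinery at all.

def reorder_decision(on_hand: int, reorder_point: int = 100, order_up_to: int = 250):
    """Return (order_quantity, explanation) for a simple rule-based policy."""
    if on_hand <= reorder_point:
        qty = order_up_to - on_hand
        return qty, (f"on_hand={on_hand} <= reorder_point={reorder_point}, "
                     f"so order up to {order_up_to}: order {qty} units")
    return 0, f"on_hand={on_hand} > reorder_point={reorder_point}, so do nothing"

qty, why = reorder_decision(80)
print(qty, "-", why)  # orders 170 units, with the triggering rule spelled out
```

Contrast this with a neural network, where no single rule can be pointed to as the reason for a given output.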

How to make your model explainable

Explainable AI introduces the potential to unveil the inner workings of the “black box” and highlight elements within the decision-making process that offer insights to humans. Achieving this, however, requires additional software components and careful attention to application design.

As with most engineering undertakings, the capabilities your system needs should be considered in the early phases of the design process. Explainability demands forethought and must be woven into the very fabric of the AI application’s design. It profoundly influences the selection of the machine learning algorithm and can even affect how data is pre-processed. Often, this involves navigating a series of design compromises.

Systems designed for explainability typically involve the integration of a model interpreter. In essence, interpretation can be visualized as a technique that maps a conceptual notion (e.g., “cat”) to discernible input features for human comprehension (e.g., a cluster of pixels depicting whiskers). The explanation encompasses a collection of intelligible features contributing to a decision (e.g., whiskers + tail = cat).

Consequently, the realm of explainability is intrinsically linked to model interpretability: the capacity of the interpreter to assign intelligible features to a model’s predictions. When looking purely at numeric values, as is often the case in supply chain decision-making, this can be far more difficult. Why does a model suggest increasing inventory in a specific location? There may be a very direct reason (e.g., high variability that must be countered), but the reason can also be far more sophisticated (e.g., it is cheaper to increase inventory at this specific node in the network to counter variability in another part of the system).

Interpretability is a quality commonly viewed as entailing trade-offs. A general guideline is that the more intricate a model, the more accurate it tends to be, yet simultaneously, the less interpretable.

Certain models, such as decision trees, lend themselves well to explanation. It is conceivable to construct commercially valuable models in which the entire decision-making process can be visually represented in diagrammatic form.

If a model surpasses practical graphical representation due to its size, the tree structure inherent to the model enables interpreter software to delineate clear decision pathways and extract the important determinants of a prediction. Conversely, neural networks, though amenable to graphical analysis, encompass a multitude of connections and intricate properties regarding node interactions that inherently pose interpretability challenges. Also, data preprocessing can make explanations far more difficult—a typical example being principal component analysis (PCA) that transforms the feature space completely.
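The idea of an interpreter delineating a decision pathway through a tree can be sketched in a few lines. The tree, features and thresholds below are invented for illustration; a real system would walk a trained model (for instance via scikit-learn's tree export utilities) rather than a hand-coded one.

```python
# Sketch: extracting the decision pathway behind one prediction from a
# (hypothetical, hand-coded) decision tree. Each internal node tests one
# feature; the interpreter records every comparison on the way to a leaf.

TREE = {
    "feature": "demand_variability",
    "threshold": 0.3,
    "low": {"label": "keep inventory"},           # variability <= 0.3
    "high": {                                     # variability > 0.3
        "feature": "supplier_lead_time_days",
        "threshold": 14,
        "low": {"label": "keep inventory"},
        "high": {"label": "increase inventory"},
    },
}

def explain(node, sample, path=()):
    """Walk the tree for one sample, recording each decision made."""
    if "label" in node:                           # reached a leaf
        return node["label"], list(path)
    value = sample[node["feature"]]
    branch = "low" if value <= node["threshold"] else "high"
    op = "<=" if branch == "low" else ">"
    step = f"{node['feature']}={value} {op} {node['threshold']}"
    return explain(node[branch], sample, path + (step,))

label, steps = explain(TREE, {"demand_variability": 0.5,
                              "supplier_lead_time_days": 21})
print(label)   # the prediction
print(steps)   # the human-readable conditions that led to it
```

The returned `steps` list is exactly the kind of local explanation an interpreter produces; for trees too large to draw, this pathway remains compact even when the full diagram does not.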

What techniques can be used to explain a model?

Sensitivity analysis is an approach that can be used for any model, whatever its complexity. It essentially comes down to slightly modifying a single input to the model and observing the change in output. The advantages of employing this approach lie in its straightforwardness and intuitive interpretation. This method shines particularly in cases of simple models exhibiting gradual changes in behavior and well-defined, distinct features. However, its straightforward nature implies that its applicability extends even to intricate models such as Deep Neural Networks (DNNs), where it proves effective in delineating the significance of pixels within image recognition explanations.
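A minimal one-at-a-time sensitivity analysis can be sketched as follows. The "model" here is an invented linear stand-in for a trained predictor; the technique itself treats the model as a black box and applies regardless of what is inside.

```python
# Sketch of one-at-a-time sensitivity analysis. The model below is a
# hypothetical stand-in for a trained black box that predicts a stock level.

def model(features):
    # Invented relationship, purely for illustration.
    return 50 + 120 * features["demand_variability"] + 2 * features["lead_time_days"]

def sensitivity(model, baseline, delta=0.01):
    """Perturb each feature slightly (one at a time) and report the
    resulting change in output, scaled to an approximate local slope."""
    base_out = model(baseline)
    impact = {}
    for name, value in baseline.items():
        bumped = dict(baseline)
        bumped[name] = value + delta
        impact[name] = (model(bumped) - base_out) / delta
    return impact

impact = sensitivity(model, {"demand_variability": 0.4, "lead_time_days": 10})
print(impact)  # demand_variability dominates the prediction locally
```

Because each feature is perturbed in isolation, the method is simple and intuitive, but, as noted below, it cannot see interactions between features.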

Nevertheless, this approach harbors certain limitations. It doesn’t inherently capture interactions between features, and the simplicity of sensitivity metrics can yield approximations that may not be precise enough. This drawback can pose challenges, particularly in instances involving discontinuous features like categorical information and the frequently employed one-hot encoding.

Surrogate models offer a more sophisticated type of sensitivity analysis in which not only individual changes to features are evaluated but also the interaction between simultaneous changes to multiple features. The results of these perturbations are fitted to a surrogate model to gain deeper insights into the original model’s sensitivity.
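A minimal version of this idea, in the spirit of LIME-style local surrogates, looks like the sketch below. The black-box model is invented (note its interaction term, which one-at-a-time analysis would miss); several features are perturbed jointly, and a simple linear surrogate is fitted to the perturbed predictions by least squares.

```python
import random

# Sketch of a local surrogate explanation. The black box is hypothetical;
# the point is that joint perturbations plus a simple fitted surrogate
# recover interpretable local slopes even in the presence of interactions.

def black_box(x1, x2):
    # Invented non-linear model with a feature interaction (4 * x1 * x2).
    return 3 * x1 + 0.5 * x2 + 4 * x1 * x2

def fit_linear_surrogate(f, x1, x2, n=200, scale=0.1, seed=0):
    """Fit y ~ b0 + b1*dx1 + b2*dx2 around (x1, x2) via least squares."""
    rng = random.Random(seed)
    rows, ys = [], []
    for _ in range(n):
        d1, d2 = rng.gauss(0, scale), rng.gauss(0, scale)  # joint perturbation
        rows.append([1.0, d1, d2])
        ys.append(f(x1 + d1, x2 + d2))
    # Normal equations (X^T X) b = X^T y, solved by Gauss-Jordan elimination.
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    for i in range(3):
        pivot = xtx[i][i]
        for j in range(3):
            if j != i:
                ratio = xtx[j][i] / pivot
                for k in range(3):
                    xtx[j][k] -= ratio * xtx[i][k]
                xty[j] -= ratio * xty[i]
    return [xty[i] / xtx[i][i] for i in range(3)]

b0, b1, b2 = fit_linear_surrogate(black_box, x1=1.0, x2=2.0)
print(f"local slopes: x1 -> {b1:.2f}, x2 -> {b2:.2f}")
```

Around the point (1, 2), the true local slopes are 3 + 4·2 = 11 for x1 and 0.5 + 4·1 = 4.5 for x2; the fitted surrogate coefficients approximate these, giving an interpretable local explanation of the black box.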

Interpreters can play a pivotal role in deciphering complex models. When it comes to decision trees, interpreters excel in revealing the rationale behind these models’ decisions, offering both a global and local perspective. Random Forest, an ensemble of decision trees, marries high interpretability with impressive accuracy, making it a preferred choice in various commercial applications. On the other hand, Neural Network Interpreters, particularly for intricate Deep Neural Networks (DNNs), confront the challenge of complexity. While not inherently inscrutable, comprehending DNNs demands a case-specific approach and a more substantial investment of expertise and effort compared to interpreting models like random forests.


Model transparency as the minimum

Model transparency is about understanding a model’s inner workings, along with information about its training data and the evaluation metrics that provide insight into its likely behavior. Regardless of whether there is a specific demand for explainable AI, this information constitutes fundamental knowledge essential for making informed use of a machine learning algorithm.

When engineers are tasked with rendering another party’s model explainable, such as a commercially available off-the-shelf model, they should first consider the level of transparency it offers. Choosing the most suitable approach for explainable AI becomes considerably simpler when you have a clear understanding of the model’s internal mechanisms, as opposed to dealing with an opaque black box.

The role of a digital supply chain twin

In an era in which businesses rely heavily on data-driven decision-making, the introduction of Explainable AI has emerged as a game-changer. Explainable AI bridges the gap between complex machine learning models and human understanding, ensuring that AI-powered systems can not only provide valuable insights but also justify their recommendations in a transparent and interpretable manner. This newfound transparency is not only transforming the way we perceive AI but also revolutionizing industries like supply chain management.

One pioneering solution that exemplifies the synergy between explainable AI and supply chain management is Axon, a cutting-edge digital twin solution. Axon doesn’t just mimic your supply chain; it empowers you with unparalleled insights into its performance, both historical and predictive. What sets Axon apart is its ability to leverage Explainable AI to demystify the decision-making processes within the supply chain. It takes the complexities of AI-driven predictions and simulations and translates them into clear, comprehensible explanations.

This means that supply chain professionals can now not only see what decisions Axon recommends but also understand why it makes those recommendations.

Imagine having the ability to replicate your entire supply chain, gain deep insights into its past and future performance and conduct simulations to assess the impact of various decisions—all while having the power of explainable AI to make every recommendation transparent and comprehensible. Axon makes this vision a reality, offering supply chain professionals the confidence to make informed decisions based on AI-driven insights, backed by clear explanations.

With Axon, the supply chain becomes more than just a series of processes; it becomes a well-informed, adaptive and resilient ecosystem that thrives on the synergy between human expertise and the capabilities of explainable AI. This transformative approach isn’t just about optimizing the supply chain; it’s about reinventing it for a data-driven future.

Embracing the future with explainable AI

The integration of explainable AI in supply chain management represents a pivotal shift towards achieving a harmonious blend of human intuition and machine-driven insights. By making the intricacies of AI models transparent and interpretable, explainable AI empowers supply chain professionals to harness the full potential of AI innovations while maintaining confidence in the decisions made by these systems. The level of explainability required depends on the complexity of the AI model, the impact of the decision at hand and the type of interpretability desired.

Techniques such as sensitivity analysis and surrogate models, as well as interpreters, are vital tools in rendering complex models understandable to humans. In this regard, Axon serves as a quintessential example of how a digital supply chain twin can utilize explainable AI to not only mirror the supply chain but also enhance it with insightful, transparent and interpretable recommendations. This level of clarity in decision-making processes is not merely a convenience; it is an essential component in building trust in AI systems, ensuring compliance with regulations, and ultimately, paving the way for a future where human expertise and AI innovation work in tandem to revolutionize industries like supply chain management.

And, if you missed the first two parts of our three-part blog series on Explainable AI, check them out below:

Explainable AI: Building trust in artificial intelligence

Explainable AI: Today’s AI modeling shortfalls

Embark on the journey to AI transparency and reliability.

Request a Demo