Posted 21 September 2023 by
Koen Cobbaert
Lead Solution Scientist for Axon Technology

Explainable AI: Building trust in Artificial Intelligence

Welcome to the first installment of a three-part blog series on Explainable Artificial Intelligence (AI). In today’s fast-paced, data-driven world, AI is no longer confined to academic corridors; it is being integrated into industries of every kind and changing the way we do business. This mainstream adoption, however, comes with challenges that need addressing: chiefly, building trust in artificial intelligence and making it understandable. In this blog, I’ll set the stage by discussing the promise of AI in business processes, particularly in supply chain management, and then look back at the history of decision-making. Subsequent blogs in this series will explore the limitations of current models, understandable AI, methods to make a model explainable and the role of a digital supply chain twin.

The promise of AI

It feels like a long time ago that AI first appeared to be on an unstoppable winning streak. In 2011, IBM’s Watson system beat two long-time human champions to win Jeopardy!, and in 2016, Google DeepMind’s AlphaGo system defeated the 18-time world champion Go player Lee Sedol.

And yet, setting aside the advances in OpenAI’s ChatGPT chatbot, concrete examples of AI solutions deployed in business remain somewhat scarce. There has been some automation of very simple, standard processes, mostly through simple if-then-else rules, and some automation of help desks. Undoubtedly, these have had positive effects on business. But it is in more complex decision processes, like those found in supply chains, that AI offers the greatest promise.

We humans have severe limitations in multidimensional space. Studies suggest that, at best, we can process four variables at a time; other studies indicate that when analyzing a single variable, we can classify information into at most seven categories. This is a serious handicap in many business functions, particularly in supply chain management, with its many customers, local regulations, items, warehouses, manufacturing facilities, manufacturing lines, suppliers and logistics partners. Hence the attraction of AI, and of machine learning in particular, which can find patterns and correlations in data across many dimensions.

As the integration of AI into businesses becomes more widespread, stakeholders are asking more and more questions about its implications: how to use its potential effectively, and how to address the associated risks. At the core of these concerns lies the issue of building trust in artificial intelligence and earning it from a diverse array of stakeholders, ranging from customers and employees to the broader society.


Over the past three decades, there have been several AI “winters,” mostly triggered by the technology’s inability to meet the extravagant expectations set by hype. Today, however, the technology finally appears to be living up to its promises. This raises the question of whether another AI winter could emerge from technologists’ narrow focus on building exceptionally powerful tools without adequately considering how to cultivate trust within our broader societal framework.

The imperative for comprehensibility

This line of thought leads to a fascinating question: must AI be explainable (or at the very least, understandable) before it can truly become an integral part of mainstream culture? And if such explainability is indeed necessary, what exactly does it entail?

Later on, I’ll delve further into the notion of explainability within operations research and machine learning, the fastest-growing area of real-world AI. For now, I’d like to point out a discernible pattern: the significance of the use case determines the appetite for, and consequently the necessity of, explainability. Most users of recommender systems, for instance, trust their outcomes without feeling compelled to look inside the “black box.” This is primarily because the underlying approach to generating recommendations is easy to grasp (“you might enjoy this based on your interest in that”), and the consequences of an erroneous recommendation are minor: a small expenditure on a disappointing movie, or half an hour lost to a bad show.
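
To make that intuition concrete, here is a minimal, purely illustrative Python sketch of one common way such recommendations can be produced: item-to-item cosine similarity over a made-up ratings matrix. Both the data and the choice of technique are assumptions for illustration, not a description of any particular product.

    import numpy as np

    # Made-up user-by-item ratings matrix (rows: users, columns: items; 0 = not rated).
    ratings = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
    ], dtype=float)

    # Cosine similarity between item columns: how alike two items' rating patterns are.
    norms = np.linalg.norm(ratings, axis=0)
    similarity = (ratings.T @ ratings) / np.outer(norms, norms)

    # "You might enjoy this based on your interest in that": rank items by
    # similarity to an item the user liked (item 0 here), excluding itself.
    print("Items most similar to item 0:", np.argsort(-similarity[0])[1:])

The logic is transparent enough that no one demands an audit of it; that transparency is exactly what evaporates as models grow more complex.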

Nevertheless, as complexity and impact grow, this inherent trust fades rapidly. How many people would trust an AI algorithm’s diagnosis over a doctor’s, absent any insight into the algorithm’s reasoning process? Although the AI diagnosis could potentially be more precise, the absence of explainability may result in a deficit of trust. The same applies in our supply chains.

When an AI system dictates a policy to apply or a plan to follow, it may not carry the same credibility as one created by a human, which is why building trust in artificial intelligence is so vital. Over time, widespread acceptance may come from broad adoption of the technology and the accumulating evidence of its superiority over human capacities. Until that point is reached, however, algorithmic explainability will very likely remain a prerequisite.

A brief history of decision-making

Throughout history, humanity has pursued the art of foreseeing the future, much like an ancient seafarer navigating the seas. In ancient times, we depended on skilled navigators who studied the stars, winds and currents to predict safe passage or impending storms. Over time, our tools and methods have become more sophisticated, evolving into advanced navigational instruments. The same has happened in other fields where decisions must be made under uncertain outcomes. But the core principle remains constant: relying on experts, now known as data scientists, engineers and computer scientists, to chart our course and interpret the signs.

It was only in the mid-1600s that Pascal and Fermat developed the theory of probability, and only in the 1800s that Gauss developed the bell curve. Since then, great advances have been made using mathematical modeling, operations research, statistics and machine learning in biology, physics, economics and engineering. With these advances has come a strong belief in the efficacy and universality of mathematical models.

However, making decisions remains an interplay between rationality, instinct and uncertainty. The concepts of “marginal utility” and the “rational man” have played prominent roles in economics, and yet Kahneman and Tversky showed that many decisions are made on instinct (a model based on experience) because there is insufficient evidence to make a rational decision (a model based on new evidence). In fact, in many cases the available evidence is simply ignored; Kahneman and Tversky used the term “bias” to describe such systematic departures from the rational use of available data.

The publication of Newton’s Philosophiæ Naturalis Principia Mathematica in 1687 was a major turning point in model building, because it described an abstract quantity, the force of gravity, which governs the motion of large bodies. This mechanical perspective has dominated science and engineering ever since. However, Einstein’s work on relativity, and later quantum mechanics, showed that Newton’s laws are approximations: they hold for large bodies at everyday speeds but fail at the atomic and sub-atomic levels.


An important takeaway is that a model is only accurate at the level at which the data was collected and the hypothesis formulated. In addition, uncertainty plays no part in Newtonian or mechanistic systems: parameter values are assumed to be known with certainty, and the calculation or formula provides a single answer with no measure of accuracy, since confidence is implicitly 100%. This also makes it easier to explain the observed results.

While Newton focused on the “natural” world, there have been many attempts to provide mathematical models for human behavior, typified by the game theory of von Neumann and Morgenstern, dating from 1944. As mentioned above, a key assumption in game theory and mathematical economics is that of rational actors, individuals who use rational analysis to make choices and achieve outcomes aligned with their objectives. While Kahneman and Tversky have shown that people do not make purely rational choices, their results have not disproven the optimality of marginal utility. 

Since the 1960s, when the first industrial computers were built, mechanistic models have been used to analyze many systems, including manufacturing and supply chains. However, because of the human limitation in considering and understanding models with many variables, and the limited computing power available to calculate results in a reasonable time, these models were simplified, which affects both their fidelity and universality.

In addition, whether based on optimization techniques or heuristics, mechanistic models do not consider variability and uncertainty. 
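
To illustrate the difference, here is a minimal Python sketch, with invented lead-time figures, contrasting the single answer of a mechanistic calculation with a simple Monte Carlo simulation that treats the same parameters as uncertain:

    import random

    # Mechanistic view: parameters are fixed, and the answer is a single value.
    PROCESS_DAYS = 5.0    # assumed production time (invented figure)
    TRANSPORT_DAYS = 3.0  # assumed shipping time (invented figure)
    print("Deterministic lead time:", PROCESS_DAYS + TRANSPORT_DAYS, "days")

    # Stochastic view: the same parameters treated as uncertain quantities.
    random.seed(42)
    samples = sorted(
        random.gauss(5.0, 1.0) + random.gauss(3.0, 0.8) for _ in range(10_000)
    )
    mean = sum(samples) / len(samples)
    p95 = samples[int(0.95 * len(samples))]
    print(f"Simulated mean: {mean:.1f} days, 95th percentile: {p95:.1f} days")

The deterministic calculation returns one number; the simulation returns a distribution, from which service-level questions (“how often will we exceed ten days?”) can actually be answered.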

Machine learning in supply chain management

Manufacturing and supply chains are a mixture of the mechanistic world (the manufacturing, storage and distribution of goods), the human world (the negotiation of delivery dates, quantities and pricing), uncertainty (on a micro scale, such as actual sales versus forecast on the demand side, but also on a macro scale, such as the uncertainty created by climate change, evolutions in ESG and socio-political events) and variability (design/master data values versus demonstrated performance on the supply side). Traditional methods of calculating plans, and other predictive models in manufacturing and supply chains, have required simplification both to create the models and to predict with them.

This blog series focuses on machine learning (ML) as part of the AI field. In the context of this series, we define ML as a category of learning algorithms typified by techniques such as regression models, decision trees, random forests, support vector machines, artificial neural networks, Q-learning and more. These algorithms can learn from examples and progressively improve their performance as additional data accumulates.
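
As a small, hedged illustration of that last point, the sketch below trains one of the techniques named above, a random forest, on a synthetic demand signal (the relationship between price, promotions, seasonality and sales is entirely invented) and shows the prediction error shrinking as more examples accumulate:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)

    def make_demand(n):
        # Invented relationship: price, promotion and seasonality drive sales, plus noise.
        X = rng.uniform(0, 1, size=(n, 3))
        y = (100 - 40 * X[:, 0] + 25 * X[:, 1]
             + 10 * np.sin(6.28 * X[:, 2]) + rng.normal(0, 5, n))
        return X, y

    X_test, y_test = make_demand(1_000)
    for n in (50, 500, 5_000):
        X_train, y_train = make_demand(n)
        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
        mae = mean_absolute_error(y_test, model.predict(X_test))
        print(f"{n:>5} training examples -> mean absolute error {mae:.2f}")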

Through machine learning, diverse forms of “unstructured” data (images, spoken language and so on) have found use in informing medical diagnoses, powering recommender systems, facilitating investment choices and enabling autonomous vehicles to recognize stop signs. Now they are also starting to find applications in supply chain management.

Our central focus is on optimization techniques and machine learning, the latter being a specific class of AI algorithms. This emphasis is grounded in three key factors:

  1. Machine learning is the catalyst behind the lion’s share of recent advances and the renewed interest in AI.
  2. Machine learning represents a (semi-)statistical approach to AI that is inherently difficult to interpret and validate. While the interpretability of a decision tree is, all in all, still quite high, interpretation is extremely difficult, if not impossible, for an artificial neural network model.
  3. In contrast to large language models, supply chain decision-making is highly quantitative, focused on finding optimal trade-offs. This decision-making is often facilitated through optimization techniques, which are frequently complex in nature; a minimal sketch of such a trade-off follows this list.
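
As a minimal sketch of the kind of quantitative trade-off referred to in point 3, the following linear program, with invented costs, capacities and demand, allocates production across two plants at minimum cost:

    from scipy.optimize import linprog

    # Decision variables: units produced at plant A and plant B (all figures invented).
    cost = [4.0, 5.5]                   # unit production cost at each plant
    A_ub = [[-1, -1],                   # -qA - qB <= -demand  (i.e. qA + qB >= 1000)
            [1, 0],                     #  qA <= capacity of plant A
            [0, 1]]                     #  qB <= capacity of plant B
    b_ub = [-1000, 700, 600]
    res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
    print("Optimal allocation:", res.x, "total cost:", res.fun)

Real supply chain models involve thousands of such variables and constraints, which is precisely where both their power and their opacity come from.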

Charting AI’s next frontier

As AI continues to make strides in the supply chain world, the call for more explainable AI has never been louder. While AI can significantly optimize supply chain management, its integration is still hampered by issues of trust and comprehensibility. As we move forward, explainability is not just a “nice-to-have” but a “must-have” feature. Building trust in artificial intelligence is paramount.

Stay tuned for the upcoming parts of this blog series, in which we’ll continue to explore various aspects of explainable AI, including the role of a digital supply chain twin like Bluecrux’s Axon technology. 

Take the next step and join us in revolutionizing AI reliability and transparency.

Request a Demo