Using Large Language Models and Knowledge Graphs to Improve the Interpretability of Machine Learning Models in Manufacturing
Thomas Bayer, Alexander Lohr, Sarah Weiß, Bernd Michelberger, Wolfram Höpken
TLDR
A novel method combines LLMs and Knowledge Graphs to generate user-friendly, domain-specific explanations for ML models, enhancing interpretability in manufacturing.
Key contributions
- Introduces a method using Knowledge Graphs to structure domain data, ML results, and explanations.
- Designs a selective retrieval process where LLMs generate user-friendly explanations from KG triplets.
- Empirically validates the approach in a manufacturing setting, showing improved decision-making and interpretability.
Why it matters
This paper tackles the challenge of making ML models interpretable, which is crucial for adoption in manufacturing. It offers a novel, scalable solution by dynamically combining LLMs with Knowledge Graphs to generate context-rich, user-friendly explanations. This approach fosters greater trust in AI and supports more informed decision-making in real-world industrial applications.
Original Abstract
Explaining Machine Learning (ML) results in a transparent and user-friendly manner remains a challenging task of Explainable Artificial Intelligence (XAI). In this paper, we present a method to enhance the interpretability of ML models by using a Knowledge Graph (KG). We store domain-specific data along with ML results and their corresponding explanations, establishing a structured connection between domain knowledge and ML insights. To make these insights accessible to users, we designed a selective retrieval method in which relevant triplets are extracted from the KG and processed by a Large Language Model (LLM) to generate user-friendly explanations of ML results. We evaluated our method in a manufacturing environment using the XAI Question Bank. Beyond standard questions, we introduce more complex, tailored questions that highlight the strengths of our approach. We evaluated 33 questions, analyzing responses using quantitative metrics such as accuracy and consistency, as well as qualitative ones such as clarity and usefulness. Our contribution is both theoretical and practical: from a theoretical perspective, we present a novel approach for effectively enabling LLMs to dynamically access a KG in order to improve the explainability of ML results. From a practical perspective, we provide empirical evidence showing that such explanations can be successfully applied in real-world manufacturing environments, supporting better decision-making in manufacturing processes.
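The selective retrieval step described in the abstract — extracting relevant triplets from the KG and passing them to an LLM — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the triplet schema, the keyword-based matching with one-hop expansion, and all entity names (`machine_7`, `tool_wear_high`, etc.) are assumptions for the example, and the actual LLM call is omitted in favor of prompt construction.

```python
# Hypothetical sketch of selective KG triplet retrieval for LLM explanations.
# The KG schema, matching strategy, and entity names are illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Triplet:
    subject: str
    predicate: str
    obj: str


# Toy knowledge graph linking domain data to an ML prediction and its explanation.
KG = [
    Triplet("machine_7", "has_prediction", "tool_wear_high"),
    Triplet("tool_wear_high", "explained_by", "spindle_vibration"),
    Triplet("spindle_vibration", "measured_in", "mm_per_s"),
    Triplet("machine_3", "has_prediction", "quality_ok"),
]


def retrieve_triplets(question: str, kg: list[Triplet]) -> list[Triplet]:
    """Select triplets whose subject or object appears in the question,
    then expand one hop so linked explanation triplets are included."""
    terms = set(question.lower().replace("?", "").split())
    seeds = [t for t in kg if t.subject in terms or t.obj in terms]
    entities = {t.subject for t in seeds} | {t.obj for t in seeds}
    expanded = [t for t in kg if t.subject in entities or t.obj in entities]
    return list(dict.fromkeys(expanded))  # deduplicate, keep KG order


def build_prompt(question: str, triplets: list[Triplet]) -> str:
    """Serialize retrieved triplets into an LLM prompt (LLM call omitted)."""
    facts = "\n".join(f"({t.subject}, {t.predicate}, {t.obj})" for t in triplets)
    return (
        f"Explain the ML result using only these facts:\n{facts}\n\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    q = "Why is tool wear predicted to be high for machine_7?"
    print(build_prompt(q, retrieve_triplets(q, KG)))
```

Grounding the prompt in retrieved triplets, rather than the whole KG, is what keeps the LLM's explanation both concise and tied to verifiable domain facts.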