Towards Intrinsic Interpretability of Large Language Models: A Survey of Design Principles and Architectures
Yutong Gao, Qinglin Meng, Yuan Zhou, Liangming Pan
TLDR
This survey reviews intrinsic interpretability for LLMs, categorizing approaches into five design paradigms to build transparent models.
Key contributions
- Systematically surveys intrinsic interpretability methods for LLMs.
- Categorizes approaches into five design paradigms, including functional transparency and concept alignment.
- Contrasts intrinsic interpretability with traditional post-hoc explanation methods.
- Discusses open challenges and outlines future research directions in the field.
Why it matters
This survey matters for trustworthy and safe LLM deployment because it shifts the focus from post-hoc explanations toward building transparency directly into model architectures. By organizing the field into clear design paradigms, it gives researchers a foundational map of intrinsic interpretability and guidance for future work in this critical area.
Original Abstract
While Large Language Models (LLMs) have achieved strong performance across many NLP tasks, their opaque internal mechanisms hinder trustworthiness and safe deployment. Existing surveys in explainable AI largely focus on post-hoc explanation methods that interpret trained models through external approximations. In contrast, intrinsic interpretability, which builds transparency directly into model architectures and computations, has recently emerged as a promising alternative. This paper presents a systematic review of the recent advances in intrinsic interpretability for LLMs, categorizing existing approaches into five design paradigms: functional transparency, concept alignment, representational decomposability, explicit modularization, and latent sparsity induction. We further discuss open challenges and outline future research directions in this emerging field. The paper list is available at: https://github.com/PKU-PILLAR-Group/Survey-Intrinsic-Interpretability-of-LLMs.