EXTree: Towards Supporting Explainability in Attribute-based Access Control
Shanampudi Pranaya Chowdary, Shamik Sural
TLDR
EXTree provides explainable and efficient attribute-based access control by structuring policies as trees, bridging complex logic and human understanding.
Key contributions
- Introduces EXTree, a tree-based representation for Attribute-based Access Control (ABAC) policies.
- Optimizes ABAC policies for both fast evaluation (efficiency) and human-centric feedback (explainability).
- Investigates Feedback Evaluation Strategies to craft actionable explanations for access denials.
- Explores Tree Construction Strategies that structure policy trees for efficient yet interpretable decisions.
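To make the tree-based idea concrete, here is a minimal, hypothetical sketch of evaluating an access request against a policy tree while recording the path taken, so a denial can be explained rather than returned silently. The `Node` class, `evaluate` function, and the sample policy are illustrative assumptions, not the paper's actual EXTree data structure or feedback strategy.

```python
# Hypothetical sketch of a tree-structured ABAC policy (NOT the paper's
# actual EXTree design). Each internal node tests one attribute of the
# request; leaves hold "ALLOW" or "DENY". On denial, the recorded trail
# of tested attributes serves as human-readable feedback.

class Node:
    def __init__(self, attribute=None, branches=None, decision=None):
        self.attribute = attribute      # attribute tested at this node
        self.branches = branches or {}  # attribute value -> child Node
        self.decision = decision        # "ALLOW"/"DENY" at a leaf, else None

def evaluate(node, request, trail=None):
    """Walk the tree; return (decision, feedback trail)."""
    trail = trail or []
    if node.decision is not None:       # reached a leaf
        return node.decision, trail
    value = request.get(node.attribute)
    child = node.branches.get(value)
    if child is None:                   # no rule covers this value
        trail.append(f"attribute '{node.attribute}' = {value!r} "
                     "matches no policy branch")
        return "DENY", trail
    trail.append(f"{node.attribute} = {value!r}")
    return evaluate(child, request, trail)

# Tiny illustrative policy: managers may access, interns may not.
policy = Node("role", {
    "manager": Node(decision="ALLOW"),
    "intern":  Node(decision="DENY"),
})

decision, why = evaluate(policy, {"role": "intern"})
```

A denied user can then be shown the `why` trail (which attribute tests led to the denial) instead of a bare rejection.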
Why it matters
This paper addresses the critical need for transparency in digital governance by enabling explainable access control. It helps users understand why their access requests are denied, a significant improvement over traditional silent denials. EXTree bridges the gap between complex authorization logic and human understanding.
Original Abstract
With increasing emphasis on transparency in digital governance, users expect more than silence when their access requests are denied by a system. However, authorization methods are notorious for their inability to provide any form of meaningful feedback under such situations. This paper shows a direction towards how the problem of explainability can be mitigated in the context of Attribute-based Access Control (ABAC), arguably the most researched topic in access control in recent years. We introduce EXTree, which represents ABAC policies optimized for both fast evaluation (Efficiency) and human-centric feedback (Explainability) in the form of a tree. Two strategic dimensions are investigated, namely, Feedback Evaluation Strategies - how to craft actionable explanations when access is denied, and Tree Construction Strategies - how the policy trees should be structured for efficient yet interpretable decisions. Through extensive experiments, we compare entropy-based, changeability-based, and randomly generated trees across multiple configurations. Our results demonstrate that EXTree, built for efficiency and interpretability, can bridge the gap between complex authorization logic and human understanding.
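The abstract compares entropy-based, changeability-based, and random tree constructions. As background for the entropy-based variant, the sketch below shows one common way an attribute could be chosen for a split: pick the attribute whose partition of the policy rules minimizes the weighted entropy of their decisions (i.e., maximizes information gain, as in classic decision-tree induction). The `rules` format and `best_split_attribute` helper are assumptions for illustration; the paper's actual construction algorithm may differ.

```python
# Illustrative entropy-based split selection (decision-tree style);
# an assumption about how an entropy-based policy tree might be built,
# not the paper's published algorithm.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of ALLOW/DENY decisions."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def best_split_attribute(rules, attributes):
    """Choose the attribute whose split gives the lowest weighted
    entropy over the rules' decisions (highest information gain)."""
    def weighted_entropy(attr):
        groups = {}
        for r in rules:
            groups.setdefault(r[attr], []).append(r["decision"])
        return sum(len(g) / len(rules) * entropy(g)
                   for g in groups.values())
    return min(attributes, key=weighted_entropy)

# Toy rule set: "role" perfectly separates the decisions, "dept" does not,
# so "role" is selected as the split attribute.
rules = [
    {"role": "manager", "dept": "hr", "decision": "ALLOW"},
    {"role": "manager", "dept": "it", "decision": "ALLOW"},
    {"role": "intern",  "dept": "hr", "decision": "DENY"},
    {"role": "intern",  "dept": "it", "decision": "DENY"},
]
```

Splitting on the most informative attribute first keeps trees shallow, which serves both fast evaluation and short, interpretable explanation paths.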