Aitchison Embeddings for Learning Compositional Graph Representations
Nikolaos Nakis, Chrysoula Kosma, Panagiotis Promponas, Michail Chatzianastasis, Giannis Nikolentzos
TLDR
Aitchison embeddings represent graph nodes as compositions on the simplex, using Aitchison geometry and isometric log-ratio (ILR) coordinates to yield interpretable embeddings whose geometry reflects trade-offs among latent archetypes.
Key contributions
- Introduces a compositional graph embedding framework grounded in Aitchison geometry, the canonical geometry for comparing mixtures.
- Represents nodes as simplex-valued compositions embedded via isometric log-ratio (ILR) coordinates, which preserve Aitchison distances while enabling unconstrained optimization in Euclidean space (a minimal sketch follows this list).
- Yields intrinsically interpretable embeddings reflecting relative trade-offs among archetypes.
- Supports subcompositional coherence, enabling principled component restriction and analysis.
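The ILR mapping at the core of this framework is straightforward to reproduce. Below is a minimal NumPy sketch (not the authors' code; all function and variable names are illustrative) of the centered log-ratio (clr) and ILR transforms with one fixed Helmert-style contrast basis. The assertions check the two properties the paper relies on: ILR is an isometry with respect to the Aitchison distance, and it is invertible back onto the simplex.

```python
import numpy as np

def helmert_basis(d):
    """Orthonormal (d-1) x d contrast matrix V with V @ V.T = I and V @ 1 = 0.
    Any such basis defines valid ILR coordinates; this is one fixed choice."""
    V = np.zeros((d - 1, d))
    for i in range(1, d):
        V[i - 1, :i] = 1.0 / i
        V[i - 1, i] = -1.0
        V[i - 1] /= np.linalg.norm(V[i - 1])
    return V

def clr(x):
    """Centered log-ratio: log(x) centered to sum to zero over components."""
    lx = np.log(x)
    return lx - lx.mean(axis=-1, keepdims=True)

def ilr(x, V):
    """Isometric log-ratio coordinates: project clr(x) onto the contrast basis."""
    return clr(x) @ V.T

def ilr_inv(z, V):
    """Inverse ILR: lift back to clr space, then softmax onto the simplex."""
    y = z @ V
    e = np.exp(y - y.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Isometry check: Euclidean distance in ILR space equals the Aitchison distance.
x = np.array([0.5, 0.3, 0.2])   # a node as a mixture over 3 archetypes
y = np.array([0.2, 0.5, 0.3])
V = helmert_basis(3)
assert np.isclose(np.linalg.norm(clr(x) - clr(y)),
                  np.linalg.norm(ilr(x, V) - ilr(y, V)))
assert np.allclose(ilr_inv(ilr(x, V), V), x)  # round trip back to the simplex
```

A learnable ILR basis, which the paper also considers, would amount to parameterizing an orthonormal V of the same form rather than fixing it.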
Why it matters
This paper addresses the critical need for interpretable graph embeddings, offering a novel framework grounded in Aitchison geometry. It provides a principled way to understand how learned features relate to graph structure, moving beyond black-box models. This approach not only maintains competitive performance but also allows for deeper insights into network roles.
Original Abstract
Representation learning is central to graph machine learning, powering tasks such as link prediction and node classification. However, most graph embeddings are hard to interpret, offering limited insight into how learned features relate to graph structure. Many networks naturally admit a role-mixture view, where nodes are best described as mixtures over latent archetypal factors. Motivated by this structure, we propose a compositional graph embedding framework grounded in Aitchison geometry, the canonical geometry for comparing mixtures. Nodes are represented as simplex-valued compositions and embedded via isometric log-ratio (ILR) coordinates, which preserve Aitchison distances while enabling unconstrained optimization in Euclidean space. This yields intrinsically interpretable embeddings whose geometry reflects relative trade-offs among archetypes and supports coherent behavior under component restriction; we consider both fixed and learnable ILR bases. Across node classification and link prediction, our method achieves competitive performance with strong baselines while providing explainability by construction rather than post-hoc. Finally, subcompositional coherence enables principled component restriction: removing and renormalizing subsets preserves a well-defined geometry, which we exploit via subcompositional dimensionality removal to probe how archetype groups influence representations and predictions.
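To make "subcompositional coherence" concrete: restricting a composition to a subset of parts and renormalizing (the closure operation) leaves all log-ratios among the kept parts unchanged, and Aitchison distances can only shrink under restriction (subcompositional dominance). A hedged NumPy sketch of this, with illustrative names and a toy 4-archetype example not taken from the paper:

```python
import numpy as np

def closure(x):
    """Renormalize parts so they sum to one (the closure operation)."""
    return x / x.sum(axis=-1, keepdims=True)

def aitchison_dist(x, y):
    """Aitchison distance: Euclidean distance between clr vectors."""
    def clr(v):
        lv = np.log(v)
        return lv - lv.mean(axis=-1, keepdims=True)
    return np.linalg.norm(clr(x) - clr(y))

x = closure(np.array([4.0, 3.0, 2.0, 1.0]))  # mixtures over 4 archetypes
y = closure(np.array([1.0, 2.0, 3.0, 4.0]))

keep = [0, 1, 2]                             # drop the last archetype ...
xs, ys = closure(x[keep]), closure(y[keep])  # ... and renormalize

# Log-ratios among kept parts are unchanged by restriction + closure,
# and distances never grow (subcompositional dominance).
assert np.isclose(np.log(xs[0] / xs[1]), np.log(x[0] / x[1]))
assert aitchison_dist(xs, ys) <= aitchison_dist(x, y)
```

This well-defined geometry under restriction is what the abstract's "subcompositional dimensionality removal" exploits to probe how groups of archetypes influence representations and predictions.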