ArXiv TLDR

Toward a Functional Geometric Algebra for Natural Language Semantics

arXiv:2604.25902

James Pustejovsky

cs.CL · cs.AI · cs.LG

TLDR

This paper proposes Functional Geometric Algebra (FGA) as a mathematically superior foundation for natural language semantics, addressing structural limitations of conventional linear algebra in compositionality, type sensitivity, and interpretability.

Key contributions

  • Introduces Functional Geometric Algebra (FGA) for typed, compositional NLP semantics.
  • Addresses structural limitations of conventional linear algebra in interpretability and compositionality.
  • Expands an n-dimensional embedding space into a 2^n-dimensional multivector algebra for richer semantic organization (see the sketch after this list).
  • Shows how GA-based operations, implicit in transformers, can be made explicit and extended.
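The 2^n expansion in the third bullet can be made concrete with a minimal sketch (plain Python, not code from the paper): each subset of the n basis vectors e1, ..., en indexes one basis blade of the Clifford algebra, so an n-dimensional embedding space yields 2^n blades spanning scalars, vectors, bivectors, and higher grades.

```python
from itertools import combinations

def basis_blades(n):
    """Enumerate the 2^n basis blades of the Clifford algebra Cl(n)."""
    blades = []
    for grade in range(n + 1):
        for idx in combinations(range(1, n + 1), grade):
            # The empty subset is the grade-0 (scalar) blade "1".
            blades.append("e" + "".join(map(str, idx)) if idx else "1")
    return blades

for n in (2, 3):
    b = basis_blades(n)
    print(f"n={n}: {len(b)} = 2^{n} blades -> {b}")
# n=2: 4 = 2^2 blades -> ['1', 'e1', 'e2', 'e12']
# n=3: 8 = 2^3 blades -> ['1', 'e1', 'e2', 'e3', 'e12', 'e13', 'e23', 'e123']
```

This only enumerates the representation space; the paper's FGA framework additionally defines typed, compositional operations over these blades.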

Why it matters

This paper introduces a novel mathematical framework, Functional Geometric Algebra, to address fundamental structural limitations in current NLP semantic models. By offering a more organized and expressive representation, it paves the way for more interpretable and compositionally robust AI systems.

Original Abstract

Distributional and neural approaches to natural language semantics have been built almost exclusively on conventional linear algebra: vectors, matrices, tensors, and the operations that accompany them. These methods have achieved remarkable empirical success, yet they face persistent structural limitations in compositional semantics, type sensitivity, and interpretability. I argue in this paper that geometric algebra (GA) -- specifically, Clifford algebras -- provides a mathematically superior foundation for semantic representation, and that a Functional Geometric Algebra (FGA) framework extends GA toward a typed, compositional semantics capable of supporting inference, transformation, and interpretability while retaining full compatibility with distributional learning and modern neural architectures. I develop the formal foundations, identify three core capabilities that GA provides and linear algebra does not, present a detailed worked example illustrating operator-level semantic contrasts, and show how GA-based operations already implicit in current transformer architectures can be made explicit and extended. The central claim is not merely increased dimensionality but increased structural organization: GA expands an $n$-dimensional embedding space into a $2^n$ multivector algebra where base semantic concepts and their higher-order interactions are represented within a single, principled algebraic framework.
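One standard GA identity helps unpack the abstract's claim that GA-based operations are already implicit in transformers: for two vectors u and v, the geometric product uv splits into a symmetric grade-0 part, the inner product u·v = (uv + vu)/2, and an antisymmetric grade-2 part, the wedge product u∧v = (uv − vu)/2. Dot-product attention logits use only the symmetric part; GA also retains the bivector part. The sketch below (plain NumPy, an illustration rather than the paper's construction) computes both parts componentwise.

```python
import numpy as np

def geometric_product_parts(u, v):
    """Split the geometric product of two vectors into its
    grade-0 (scalar) and grade-2 (bivector) parts."""
    inner = float(np.dot(u, v))   # symmetric part: u.v = (uv + vu)/2
    outer = np.outer(u, v)
    wedge = outer - outer.T       # antisymmetric part: entry (i,j) is u_i v_j - u_j v_i
    return inner, wedge

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
inner, wedge = geometric_product_parts(u, v)
print("u.v =", inner)                           # 2.0, the attention-logit ingredient
print("u^v =", wedge[np.triu_indices(3, k=1)])  # [1. 3. 6.], independent bivector coords
```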
