ArXiv TLDR

NSFL: A Post-Training Neuro-Symbolic Fuzzy Logic Framework for Boolean Operators in Neural Embeddings

arXiv: 2604.10604

Vladi Vexler, Ofer Idan, Gil Lederman, Dima Sivov

cs.IR · cs.AI · cs.CL · cs.LG

TLDR

NSFL is a post-training neuro-symbolic fuzzy logic framework that enables boolean operations in neural embeddings, significantly boosting retrieval performance.

Key contributions

  • Introduces NSFL, a post-training framework for boolean logic in neural embeddings.
  • Uses Neuro-Symbolic Deltas to prevent representation collapse and capture domain reliance.
  • Employs Spherical Query Optimization for scalable, real-time fuzzy formula projection.
  • Achieves up to +81% mAP improvement, and boosts even fine-tuned models by up to 47% (20% on average).

Why it matters

Dense retrievers lack native boolean logic. NSFL adds it without retraining, integrating fuzzy logical operations directly into neural embedding spaces. It significantly boosts retrieval accuracy, even for models already fine-tuned for logical reasoning, and lays the foundation for dynamic scaling and learned manifold logic.

Original Abstract

Standard dense retrievers lack a native calculus for multi-atom logical constraints. We introduce Neuro-Symbolic Fuzzy Logic (NSFL), a framework that adapts formal t-norms and t-conorms to neural embedding spaces without requiring retraining. NSFL operates as a first-order hybrid calculus: it anchors logical operations on isolated zero-order similarity scores while actively steering representations using Neuro-Symbolic Deltas (NS-Delta) -- the first-order marginal differences derived from contextual fusion. This preserves pure atomic meaning while capturing domain reliance, preventing the representation collapse and manifold escape endemic to traditional geometric baselines. For scalable real-time retrieval, Spherical Query Optimization (SQO) leverages Riemannian optimization to project these fuzzy formulas into manifold-stable query vectors. Validated across six distinct encoder configurations and two modalities (including zero-shot and SOTA fine-tuned models), NSFL yields mAP improvements up to +81%. Notably, NSFL provides an additive 20% average and up to 47% boost even when applied to encoders explicitly fine-tuned for logical reasoning. By establishing a training-free, order-aware calculus for high-dimensional spaces, this framework lays the foundation for future dynamic scaling and learned manifold logic.
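To make the core idea concrete: a fuzzy boolean calculus over embeddings treats each query atom's similarity to a document as a membership degree in [0, 1], then combines degrees with a t-norm (AND), a t-conorm (OR), and standard negation (NOT). The sketch below is an illustrative zero-order baseline under assumed choices (product t-norm, probabilistic t-conorm, cosine similarity rescaled to [0, 1]); it does not implement the paper's NS-Delta steering or Spherical Query Optimization, and all function names are hypothetical.

```python
# Minimal fuzzy-logic scoring over dense embeddings (illustrative only;
# not the NSFL implementation). Each query atom yields a membership
# degree per document; degrees are fused with fuzzy connectives.
import numpy as np

def membership(query_vec, doc_vecs):
    """Cosine similarity mapped from [-1, 1] to [0, 1]."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return (d @ q + 1.0) / 2.0

def t_and(a, b):  # product t-norm
    return a * b

def t_or(a, b):   # probabilistic t-conorm (dual of the product t-norm)
    return a + b - a * b

def t_not(a):     # standard fuzzy negation
    return 1.0 - a

# Toy example: rank documents for the formula (A AND B) AND NOT C,
# where A, B, C are embedded query atoms (random vectors here).
rng = np.random.default_rng(0)
docs = rng.normal(size=(5, 8))
A, B, C = rng.normal(size=(3, 8))

score = t_and(t_and(membership(A, docs), membership(B, docs)),
              t_not(membership(C, docs)))
ranking = np.argsort(-score)  # highest fuzzy score first
```

A purely score-level fusion like this is what the abstract calls a "zero-order" baseline; NSFL's contribution is to additionally steer the representations themselves (via NS-Deltas) and to project the fuzzy formula back into a single manifold-stable query vector (via SQO) so standard ANN retrieval still applies.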
