Improving Robustness of Tabular Retrieval via Representational Stability
Kushal Raj Bhandari, Adarsh Singh, Jianxi Gao, Soham Dan, Vivek Gupta
TLDR
This paper addresses serialization sensitivity in transformer-based table retrieval by using centroid-aligned representations and a lightweight adapter to produce robust, format-invariant retrieval.
Key contributions
- Demonstrates that table serialization choices significantly impact embeddings and retrieval performance.
- Proposes using centroid averaging of multiple serialization embeddings to create robust, canonical representations.
- Shows centroid representations outperform individual formats across MPNet, BGE-M3, ReasonIR, and SPLADE.
- Introduces a residual bottleneck adapter to map single-serialization embeddings towards centroid targets.
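The centroid idea can be illustrated with a minimal NumPy sketch. Here the per-format embeddings are simulated as noisy copies of a shared "semantic" vector; the actual paper derives them from real encoders (MPNet, BGE-M3, etc.), so the names and noise model below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

# Toy stand-in: each serialization format yields a noisy view of the same
# underlying semantic signal for one table (assumed model, not the paper's).
rng = np.random.default_rng(0)
dim = 8
base = rng.normal(size=dim)  # shared semantic signal
formats = ["csv", "tsv", "html", "markdown", "ddl"]
views = [base + 0.3 * rng.normal(size=dim) for _ in formats]  # format noise

def centroid(embeddings):
    """Average the per-format embeddings and L2-normalize the result."""
    c = np.mean(np.stack(embeddings), axis=0)
    return c / np.linalg.norm(c)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

c = centroid(views)
# Averaging shrinks the format-specific noise, so the centroid sits
# closer to the shared signal than a typical single-format view.
print(cosine(c, base))
```

With independent format noise, averaging five views reduces the noise magnitude by roughly a factor of √5, which is the intuition behind using the centroid as a canonical target.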
Why it matters
This paper tackles a critical robustness issue in tabular retrieval, where semantically identical tables yield different results due to serialization format. By introducing centroid-aligned representations and an adapter, it significantly enhances the reliability and format-invariance of retrieval systems. This is vital for practical applications relying on accurate table understanding.
Original Abstract
Transformer-based table retrieval systems flatten structured tables into token sequences, making retrieval sensitive to the choice of serialization even when table semantics remain unchanged. We show that semantically equivalent serializations, such as $\texttt{csv}$, $\texttt{tsv}$, $\texttt{html}$, $\texttt{markdown}$, and $\texttt{ddl}$, can produce substantially different embeddings and retrieval results across multiple benchmarks and retriever families. To address this instability, we treat serialization embeddings as noisy views of a shared semantic signal and use their centroid as a canonical target representation. We show that centroid averaging suppresses format-specific variation and can recover the semantic content common to different serializations when format-induced shifts differ across tables. Empirically, centroid representations outrank individual formats in aggregate pairwise comparisons across $\texttt{MPNet}$, $\texttt{BGE-M3}$, $\texttt{ReasonIR}$, and $\texttt{SPLADE}$. We further introduce a lightweight residual bottleneck adapter on top of a frozen encoder that maps single-serialization embeddings towards centroid targets while preserving variance and enforcing covariance regularization. The adapter improves robustness for several dense retrievers, though gains are model-dependent and weaker for sparse lexical retrieval. These results identify serialization sensitivity as a major source of retrieval variance and show the promise of post hoc geometric correction for serialization-invariant table retrieval. Our code, datasets, and models are available at $\href{https://github.com/KBhandari11/Centroid-Aligned-Table-Retrieval}{https://github.com/KBhandari11/Centroid-Aligned-Table-Retrieval}$.
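The residual bottleneck adapter described in the abstract can be sketched as a small down-project/ReLU/up-project correction added back to the frozen embedding. The dimensions, initialization, and final normalization below are assumptions for illustration; the paper's actual adapter is trained against centroid targets with variance and covariance regularizers, which this untrained sketch omits.

```python
import numpy as np

def make_adapter(dim, bottleneck, rng):
    """Randomly initialized adapter weights (hypothetical sizes/init)."""
    w_down = rng.normal(scale=0.1, size=(dim, bottleneck))
    w_up = rng.normal(scale=0.1, size=(bottleneck, dim))
    return w_down, w_up

def adapter_forward(z, w_down, w_up):
    """Residual bottleneck: z + up(relu(down(z))), re-normalized."""
    h = np.maximum(z @ w_down, 0.0)   # down-project to bottleneck, ReLU
    out = z + h @ w_up                # residual correction toward target
    return out / np.linalg.norm(out)  # unit norm for cosine retrieval

rng = np.random.default_rng(1)
dim, bottleneck = 16, 4
w_down, w_up = make_adapter(dim, bottleneck, rng)

z = rng.normal(size=dim)
z = z / np.linalg.norm(z)             # frozen-encoder embedding (simulated)
z_adapted = adapter_forward(z, w_down, w_up)
```

Because the correction is residual, a zero-initialized adapter would leave embeddings unchanged, so training can only move single-serialization embeddings as far toward the centroid as the loss demands.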