Decoding Alignment without Encoding Alignment: A critique of similarity analysis in neuroscience
Johannes Bertram, Luciano Dyballa, T. Anderson Keller, Savik Kinger, Steven W. Zucker
TLDR
Decoding alignment metrics can be misleading, since similar representations may arise from a small, non-representative subset of neurons; encoding analysis offers a more robust comparison.
Key contributions
- Decoding alignment can arise from small, non-representative neural subpopulations (see the sketch after this list).
- Alignment metrics are insensitive to encoding manifold topology, a key functional signature.
- A controlled MNIST experiment provides causal evidence: decoding metrics remain unchanged even when encoding topology is manipulated via the training loss.
- The paper advocates encoding manifolds as a complementary tool for comparing neural systems.
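To make the first contribution concrete, here is a minimal NumPy sketch (not the authors' code) of an RSA-style comparison. The toy populations, the `rdm` and `rsa_score` helpers, and the choice of Pearson correlation on RDM upper triangles are all illustrative assumptions; the point is that alignment between two 500-neuron populations can be carried almost entirely by a 10-neuron signal-bearing subset.

```python
import numpy as np

rng = np.random.default_rng(0)

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between response patterns for every pair of stimuli.
    responses: (n_stimuli, n_neurons)."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(a, b):
    """RSA alignment: correlation between the upper triangles of two
    RDMs (plain Pearson here for brevity; rank correlation is common)."""
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

# Toy setup: 20 stimuli, 500 neurons, but only 10 neurons carry the
# stimulus-dependent signal; the remaining 490 are low-amplitude noise.
n_stim, n_signal, n_noise = 20, 10, 490
signal = rng.normal(size=(n_stim, n_signal))
pop1 = np.hstack([signal, 0.1 * rng.normal(size=(n_stim, n_noise))])

# A second "system" sharing only the small signal-carrying subset.
pop2 = np.hstack([signal + 0.05 * rng.normal(size=signal.shape),
                  0.1 * rng.normal(size=(n_stim, n_noise))])

print("RSA, full populations:   %.3f" % rsa_score(rdm(pop1), rdm(pop2)))
print("RSA, signal subset only: %.3f" % rsa_score(rdm(pop1[:, :n_signal]),
                                                  rdm(pop2[:, :n_signal])))
```

Dropping the 490 noise neurons leaves the RSA score essentially unchanged, which is exactly the failure mode the paper highlights: the representational geometry of the full population is not representative of the population as a whole.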
Why it matters
This paper challenges common decoding alignment metrics in neuroscience and AI, showing they can mislead by reflecting only small neural subsets and ignoring crucial functional organization. It proposes encoding analysis as a more robust complementary tool, offering deeper, more accurate insights into how systems implement computations.
Original Abstract
Decoding approaches are widely used in neuroscience and machine learning to compare stimulus representations across neural systems, such as different brain regions, organisms, and deep learning models. Popular methods include decoding (perceptual) manifolds and alignment metrics such as Representational Similarity Analysis (RSA) and Dynamic Similarity Analysis (DSA), where similarity in decoding representations is interpreted as evidence for similar computation. This paper demonstrates a fundamental weakness behind this approach: it is misleading to assume that representational geometry is representative of a neuronal population as a whole, when such representations may actually be shaped by a very small subset of neurons. We show that the complementary encoding paradigm addresses this issue directly: it characterizes how neurons are organized globally in terms of their responses to a set of data, providing insight into how the decoding representation is implemented by neurons within a population. We demonstrate across experiments in biological systems and deep learning models that (i) surprisingly, similar decoding behavior and high representational alignment can arise from small, non-representative subpopulations of neurons; and critically, (ii) alignment metrics are insensitive to encoding manifold topology (how function is distributed across neurons), despite this being a key signature of differentiation across biological systems. A controlled MNIST experiment provides causal evidence: decoding metrics remain unchanged even when encoding topology is causally manipulated via the training loss. Overall, similarity in decoding behavior, as measured by classic alignment metrics, does not imply similarity in function or computation, motivating the use of encoding manifolds as a complementary tool for comparing neural systems.
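As an illustration of the encoding paradigm the abstract contrasts with decoding, here is a hedged sketch, assuming a toy population of cosine-tuned neurons and using scikit-learn's SpectralEmbedding as a stand-in for the manifold-inference machinery (the paper's actual pipeline is not reproduced here). In the encoding view, each neuron becomes a point whose features are its responses across stimuli; the embedding then exposes the topology of the encoding manifold, the kind of structure alignment metrics are shown to miss.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(1)

# Toy data: each NEURON is a point and its response profile across
# stimuli is its feature vector (the transpose of the decoding view,
# where each STIMULUS is a point in neural-activity space).
n_neurons, n_stim = 300, 100
preferred = rng.uniform(0, 2 * np.pi, size=n_neurons)      # tuning peaks
stimuli = np.linspace(0, 2 * np.pi, n_stim, endpoint=False)
responses = np.cos(stimuli[None, :] - preferred[:, None])  # (neurons, stimuli)

# Embed neurons by their response profiles. With circular tuning, the
# encoding manifold is a ring: its presence or absence is a topological
# signature that decoding-alignment metrics do not see.
embedding = SpectralEmbedding(n_components=2,
                              affinity="nearest_neighbors",
                              n_neighbors=10).fit_transform(responses)
print(embedding.shape)  # (300, 2): one 2-D coordinate per neuron
```

Plotting the two embedding coordinates against each other should reveal a closed loop, with neurons ordered by their preferred phase; breaking that loop (e.g., via a modified training loss, as in the paper's MNIST experiment) need not change decoding metrics at all.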