ArXiv TLDR

A multi-scale information geometry reveals the structure of mutual information in neural populations

2605.06304

Simone Azeglio, Steeve Laquitaine, Ulisse Ferrari, Matthew Chalk

q-bio.NC

TLDR

A multi-scale information geometry framework reveals how neural populations encode sensory information, linking representational geometry directly to mutual information.

Key contributions

  • Develops a multi-scale information geometry from first principles, extending the Fisher information metric.
  • Directly links this geometry to mutual information, showing how well-encoded stimulus directions are expanded.
  • Enables practical estimation using diffusion models for large neural populations and high-dimensional stimuli.
  • Identifies interpretable stimulus features contributing most to information transmission in visual cortex.
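To make the core objects concrete, here is a minimal sketch of the classical Fisher information metric that the paper extends, computed for a hypothetical population of independent Poisson neurons with Gaussian tuning curves over a 2-D stimulus. All names, tuning-curve shapes, and parameters are illustrative assumptions, not the paper's model; the paper's multi-scale construction and diffusion-model estimator are not reproduced here.

```python
import numpy as np

# Hypothetical setup (not from the paper): 50 Poisson neurons with
# Gaussian tuning curves tiling a 2-D stimulus space.
rng = np.random.default_rng(0)
n_neurons, stim_dim = 50, 2
centers = rng.uniform(-1, 1, size=(n_neurons, stim_dim))
width = 0.5

def rates(s):
    # Tuning curves: f_i(s) = exp(-||s - c_i||^2 / (2 w^2))
    d2 = np.sum((s - centers) ** 2, axis=1)
    return np.exp(-d2 / (2 * width ** 2))

def fisher_metric(s):
    # For independent Poisson neurons the Fisher information metric is
    # G(s) = sum_i grad f_i(s) grad f_i(s)^T / f_i(s).
    f = rates(s)
    # Gradient of each Gaussian tuning curve: -f_i(s) (s - c_i) / w^2
    grads = -(f[:, None] * (s - centers)) / width ** 2
    return grads.T @ (grads / f[:, None])

G = fisher_metric(np.zeros(stim_dim))
eigvals, eigvecs = np.linalg.eigh(G)
# The leading eigenvector is the stimulus direction the population
# discriminates most reliably at this point in stimulus space.
best_direction = eigvecs[:, -1]
```

In this picture, the paper's eigenvector analysis corresponds to diagonalizing `G` at each stimulus: large-eigenvalue directions are the "well encoded" ones that the multi-scale geometry expands.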

Why it matters

Existing representational geometry methods can yield conflicting conclusions. This paper offers a principled, information-theoretic framework for characterizing neural population codes. It provides a robust and interpretable way to understand how sensory information is encoded across multiple scales.

Original Abstract

Understanding how neural population responses represent sensory information is a central problem in systems neuroscience. One approach is to define a representational geometry on stimulus space in which distances reflect how reliably stimuli can be distinguished from neural activity. However, different constructions of these distances can lead to qualitatively different conclusions about the neural code. Here, we show that a unique Riemannian representational geometry emerges from first principles governing how distances contract as stimulus resolution is lost through coarse-graining. This results in a multi-scale extension of the Fisher information metric, capturing encoding structure from fine stimulus details to coarse global distinctions. The resulting geometry is exactly related to the mutual information encoded by the population: well encoded stimulus directions - those contributing more to mutual information - are expanded, whereas poorly encoded directions are contracted. The metric tensor can be estimated using diffusion models, making the framework practical for large neural populations and high-dimensional stimuli. Applied to visual cortical responses to natural images, the eigenvectors of the metric tensor identify stimulus variations that contribute most to information transmission, yielding interpretable features that are robust to modelling choices. Together, these results provide a principled, information-theoretic framework for characterising neural population codes.
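For readers unfamiliar with the baseline object, the standard Fisher information metric that the abstract's multi-scale construction extends is (textbook definition, not the paper's extension):

```latex
G_{ij}(s) \;=\; \mathbb{E}_{r \sim p(r \mid s)}\!\left[
  \frac{\partial \log p(r \mid s)}{\partial s_i}\,
  \frac{\partial \log p(r \mid s)}{\partial s_j}
\right],
\qquad
ds^{2} \;=\; \sum_{ij} G_{ij}(s)\, ds_i\, ds_j .
```

Here $p(r \mid s)$ is the neural response distribution given stimulus $s$, and the quadratic form $ds^{2}$ measures local stimulus discriminability. The paper's contribution, per the abstract, is a coarse-graining principle that yields a unique multi-scale version of this metric and relates it exactly to the mutual information between stimulus and response.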
