ArXiv TLDR

Inverting Foundation Models of Brain Function with Simulation-Based Inference

arXiv:2604.23865

Niels Bracher, Xavier Intes, Stefan T. Radev

cs.LG · cs.AI · stat.ML

TLDR

This paper demonstrates inverting a brain foundation model with simulation-based inference, recovering latent stimulus properties from synthetic brain activity.

Key contributions

  • Pairs the TRIBEv2 brain emulator with LLMs that generate news-headline stimuli from linguistic parameters (valence, arousal, dominance).
  • Uses simulation-based inference to learn a probabilistic mapping from synthetic brain maps back to latent stimulus parameters.
  • Recovers the linguistic parameters from predicted brain maps, validating the quality of the neural encodings.
  • Demonstrates that LLMs can serve as controllable stimulus generators for simulated brain experiments.
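The inversion idea in the contributions above can be illustrated with a minimal, self-contained sketch. The paper does not specify its SBI algorithm here, so this toy uses rejection ABC, a classic simulation-based inference method: sample a latent "valence" parameter from a prior, run it through a hypothetical stand-in for the LLM + TRIBEv2 pipeline to get a synthetic brain map, and keep parameters whose maps land near the observed map. The simulator, weights, and dimensions below are all illustrative assumptions, not the paper's setup.

```python
import random
import math

random.seed(0)

# Hypothetical stand-in for the LLM + brain-emulator pipeline:
# maps a scalar "valence" parameter to a 3-dim synthetic brain map.
WEIGHTS = [0.8, -0.5, 1.2]

def simulate(valence, noise=0.05):
    """Toy forward model: linear response per 'voxel' plus Gaussian noise."""
    return [w * valence + random.gauss(0.0, noise) for w in WEIGHTS]

def distance(x, y):
    """Euclidean distance between two brain maps."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def rejection_abc(x_obs, n_sims=5000, eps=0.2):
    """Simulation-based inference via rejection ABC: draw parameters from
    the prior, simulate brain maps, and accept parameters whose simulated
    map falls within eps of the observed map."""
    accepted = []
    for _ in range(n_sims):
        theta = random.uniform(-1.0, 1.0)  # uniform prior over valence
        if distance(simulate(theta), x_obs) < eps:
            accepted.append(theta)
    return accepted

true_valence = 0.4
x_obs = simulate(true_valence)            # "observed" synthetic brain map
posterior = rejection_abc(x_obs)          # approximate posterior samples
post_mean = sum(posterior) / len(posterior)
print(f"posterior mean valence: {post_mean:.2f}")  # concentrates near true_valence
```

The accepted samples approximate the posterior over the stimulus parameter given the brain map, which is the "inversion" step; the paper replaces both the toy simulator (with LLM-generated headlines fed through TRIBEv2) and the crude rejection rule (with a learned probabilistic mapping).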

Why it matters

This work provides a crucial step towards decoding and inverse design with foundation brain models. It validates the quality of neural encodings and demonstrates LLMs as versatile tools for controlled simulated neuroscience experiments.

Original Abstract

Foundation models of brain activity promise a new frontier for in silico neuroscience by emulating neural responses to complex stimuli across tasks and modalities. A natural next step is to ask whether these models can also be used in reverse. Can we recover a stimulus or its properties from synthetic brain activity? We study this question in a proof-of-concept setting using TRIBEv2. We pair the brain emulator with large language models (LLMs) that generate news headlines from linguistic parameters such as valence, arousal, and dominance. We then use simulation-based inference to learn a probabilistic mapping from brain maps to latent stimulus parameters. Our results show that these parameters can be recovered from predicted brain maps, validating the quality of neural encodings. They also show that LLMs can serve as controllable stimulus generators for simulated experiments. Together, these findings provide a step toward decoding and inverse design with foundation brain models.
