ArXiv TLDR

Topological Sensitivity in Connectome-Constrained Neural Networks

arXiv: 2604.04033

Nalin Dhiman

q-bio.NC, cs.LG

TLDR

This paper shows that reported advantages of biological connectome topology in neural networks often disappear under fair initialization and degree-preserving controls.

Key contributions

  • Re-evaluates claims that biological connectome topology improves neural network learning.
  • Finds the connectome's early-loss advantage disappears when both graphs are trained from a shared random initialization.
  • Shows degree-preserving null models remove apparent activity benefits of connectomes.
  • Concludes reported topology benefits often stem from initialization and null-model confounds.
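The degree-preserving null mentioned above is a standard graph-randomization idea: rewire edges so that every node keeps its exact in- and out-degree, destroying higher-order topology while holding local connectivity statistics fixed. The sketch below is a minimal, self-contained illustration of directed double-edge swaps; it is not the paper's flyvis pipeline, and the function name is hypothetical.

```python
import random

def degree_preserving_rewire(edges, n_swaps, seed=0):
    """Return a rewired directed edge list with identical in- and
    out-degree sequences, via repeated double-edge swaps.
    Illustrative sketch only, not the paper's implementation."""
    rng = random.Random(seed)
    edges = list(edges)
    edge_set = set(edges)
    swaps = attempts = 0
    while swaps < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        # Swap targets: (a,b),(c,d) -> (a,d),(c,b); this preserves
        # every node's in-degree and out-degree by construction.
        # Reject swaps that would create a self-loop or duplicate edge.
        if a == d or c == b:
            continue
        if (a, d) in edge_set or (c, b) in edge_set:
            continue
        edge_set.discard((a, b)); edge_set.discard((c, d))
        edge_set.add((a, d)); edge_set.add((c, b))
        edges[i], edges[j] = (a, d), (c, b)
        swaps += 1
    return edges
```

Comparing a connectome against an ensemble of such rewired graphs (the paper uses five samples) asks whether any benefit comes from the specific wiring pattern rather than from the degree sequence alone.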

Why it matters

Previous research suggested that biological connectome topology inherently improves neural-network learning. This paper critically re-examines those claims: by demonstrating that the reported advantages often vanish under stricter controls (shared initialization and degree-preserving nulls), it highlights the importance of rigorous baselines in neuroscience-inspired AI.

Original Abstract

Connectome-constrained neural networks are often evaluated against sparse random controls and then interpreted as evidence that biological graph topology improves learning efficiency. We revisit that claim in a controlled flyvis-based study using a Drosophila connectome, a naive self-loop-matched random graph, and a degree-preserving rewired null. Under weak controls, in which both models were recovered from a connectome-trained checkpoint and the null matched only global graph counts, the connectome appeared substantially better in early loss, mean activity, and runtime. That picture changed under stricter controls. Training both graphs from a shared random initialization removed the early loss advantage, and replacing the naive null by a degree-preserving null removed the apparent activity advantage. A five-sample degree-preserving ensemble and a pre-training activity-scale diagnostic further strengthened this revised interpretation. We also report a descriptive mechanism analysis of the earlier weak-control comparison, but we treat it as behavioral characterization rather than proof of causal superiority. We show that previously reported topology advantages in connectome-constrained neural networks can arise from initialization and null-model confounds, and largely disappear under fair from-scratch initialization and degree-preserving controls.
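The "shared random initialization" control in the abstract can be made concrete: draw one set of initial weights and let topology enter only through each graph's connectivity mask, so any early-loss gap cannot be an initialization artifact. The helpers below are hypothetical illustrations of this idea, not the paper's flyvis code.

```python
import random

def shared_init_weights(n_weights, seed=42):
    """Draw a single shared weight vector; both the connectome-masked
    and the null-masked network start from these exact values.
    Hypothetical helper for illustration."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 0.1) for _ in range(n_weights)]

def masked_weights(weights, mask):
    """Zero out weights where the graph has no edge (mask 0), so the
    two models differ only in topology, not in initial values."""
    return [w if m else 0.0 for w, m in zip(weights, mask)]
```

Under this control, training curves for the connectome and the degree-preserving null start from the same point, which is what makes a vanishing early-loss gap interpretable.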
