ArXiv TLDR

Causal Learning with Neural Assemblies

2604.26919

Evangelia Kopadi, Dimitris Kalles

cs.LG, cs.AI, cs.NE

TLDR

This paper shows that neural assemblies can learn the direction of causal influence through DIRECT, a mechanism built on purely local plasticity, yielding an auditable, explainable-by-design causal framework.

Key contributions

  • Demonstrates that the inherent operations of neural assemblies (projection, local plasticity control, and sparse winner selection) are sufficient for learning causal directionality.
  • Introduces DIRECT (DIRectional Edge Coupling/Training), a novel mechanism that internalizes directed relations using only local plasticity.
  • Validates learning through a dual readout: synaptic-strength asymmetry and functional propagation overlap.
  • Achieves perfect structural recovery in a supervised, known-structure setting, providing an explainable-by-design causal framework.
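The paper's code is not reproduced here, but the core idea behind the first readout, synaptic-strength asymmetry, can be illustrated with a toy NumPy sketch. This is a rough, hypothetical stand-in, not DIRECT itself: the fixed gain `beta` replaces DIRECT's adaptive gain schedule, and all names (`winners`, `W_fwd`, `W_rev`) are invented for illustration. Because only the source-to-target direction is ever co-activated, a purely local Hebbian rule leaves the forward weights stronger than the reverse ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 10   # neurons per area, winners kept per projection
beta = 0.1      # fixed plasticity gain (stand-in for DIRECT's adaptive schedule)

def winners(inp, k):
    """Sparse winner selection: keep the k most strongly driven units."""
    mask = np.zeros_like(inp)
    mask[np.argsort(inp)[-k:]] = 1.0
    return mask

W_fwd = rng.random((n, n)) * 0.01  # source area X -> target area Y
W_rev = rng.random((n, n)) * 0.01  # Y -> X: never co-activated in this ordering

x = winners(rng.random(n), k)      # fixed source assembly in area X
for _ in range(20):
    # project X into Y and keep the k winners
    y = winners(W_fwd.T @ x + rng.random(n) * 0.001, k)
    # local Hebbian rule: multiplicatively strengthen active-pre -> winner-post synapses
    W_fwd *= 1.0 + beta * np.outer(x, y)

# readout (i): emergent weight gap between forward and reverse links
xb, yb = x > 0, y > 0
fwd_strength = W_fwd[np.ix_(xb, yb)].mean()
rev_strength = W_rev[np.ix_(yb, xb)].mean()
print(f"forward {fwd_strength:.4f} vs reverse {rev_strength:.4f}")
```

The asymmetry emerges without any global error signal: every update touches only a presynaptic activity, a postsynaptic winner, and the synapse between them, which is what makes the resulting directional claim inspectable synapse by synapse.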

Why it matters

This work bridges biologically plausible neural dynamics with formal causal models, offering a novel "explainable by design" framework. Unlike backpropagation-based methods, DIRECT relies solely on local plasticity, which makes its causal claims auditable at the mechanism level and enhances transparency and trust in AI systems.

Original Abstract

Can Neural Assemblies -- groups of neurons that fire together and strengthen through co-activation -- learn the direction of causal influence between variables? While established as a computationally general substrate for classification, parsing, and planning, neural assemblies have not yet been shown to internalize causal directionality. We demonstrate that the inherent operations of neural assemblies -- projection, local plasticity control, and sparse winner selection -- are sufficient for directional learning. We introduce DIRECT (DIRectional Edge Coupling/Training), a mechanism that co-activates source and target assemblies under an adaptive gain schedule to internalize directed relations. Unlike backpropagation-based methods, DIRECT relies solely on local plasticity, making the resulting causal claims auditable at the mechanism level. Our findings are verified through a dual-readout validation strategy: (i) synaptic-strength asymmetry, measuring the emergent weight gap between forward and reverse links, and (ii) functional propagation overlap, quantifying the reliability of directional signal flow. Across multiple domains, the framework achieves perfect structural recovery under a supervised, known-structure setting. These results establish neural assemblies as an auditable bridge between biologically plausible dynamics and formal causal models, offering an "explainable by design" framework where causal claims are traceable to specific neural winners and synaptic asymmetries.
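The abstract's second readout, functional propagation overlap, quantifies how reliably a signal flows in the trained direction. A minimal sketch of such a metric, under assumptions of my own (a hand-built asymmetric weight state standing in for a trained DIRECT model, and a hypothetical `propagation_overlap` helper): fire the source assembly, project it forward under noise, and measure how much of the stored target assembly the winners recover.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 10  # neurons per area, winners kept per projection

def winners(inp, k):
    """Sparse winner selection: keep the k most strongly driven units."""
    mask = np.zeros_like(inp)
    mask[np.argsort(inp)[-k:]] = 1.0
    return mask

# Hypothetical post-training state: stored source assembly x, target assembly y,
# strong forward synapses along the x -> y mapping, unstructured reverse synapses.
x = winners(rng.random(n), k)
y = winners(rng.random(n), k)
W_fwd = rng.random((n, n)) * 0.1 + np.outer(x, y)
W_rev = rng.random((n, n)) * 0.1

def propagation_overlap(W, source, target, k, trials=100):
    """Mean fraction of the target assembly recovered when the source fires."""
    hits = 0.0
    for _ in range(trials):
        out = winners(W.T @ source + rng.random(n) * 0.5, k)  # noisy projection
        hits += (out * target).sum() / k
    return hits / trials

fwd = propagation_overlap(W_fwd, x, y, k)  # should approach 1.0
rev = propagation_overlap(W_rev, y, x, k)  # should hover near chance, k/n
print(f"forward overlap {fwd:.2f}, reverse overlap {rev:.2f}")
```

A large gap between forward and reverse overlap is the functional counterpart of the weight asymmetry: the direction is read out from behavior, not just from inspecting synapses.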
