Neural & Evolutionary Computing
Research on neural network architectures, evolutionary algorithms, and bio-inspired computing.
cs.NE · 188 papers

Sparsity Moves Computation: How FFN Architecture Reshapes Attention in Small Transformers
FFN architecture, especially sparsity, significantly reshapes how Transformers compute, shifting work from FFNs to attention mechanisms.
Evolutionary Ensemble of Agents
EvE is a decentralized framework that co-evolves coding agents and their guidance to discover algorithms, demonstrating superior adaptation and performance.
Drain-Vortex Optimization: A Population-Based Metaheuristic Inspired by Multi-Drain Free-Vortex Flow
Drain-Vortex Optimization (DVO) is a new metaheuristic inspired by multi-drain free-vortex flow, excelling in complex continuous optimization.
AHD Agent: Agentic Reinforcement Learning for Automatic Heuristic Design
AHD Agent introduces an agentic RL framework enabling LLMs to proactively design heuristics for combinatorial optimization, outperforming larger models with fewer evaluations.
Globally Optimal Training of Spiking Neural Networks via Parameter Reconstruction
This paper introduces a globally optimal parameter reconstruction algorithm for training Spiking Neural Networks, overcoming surrogate gradient limitations.
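For background on what the paper moves beyond: a minimal sketch of standard surrogate-gradient training for a single leaky integrate-and-fire neuron. All names and parameters here are illustrative assumptions, not the paper's reconstruction algorithm.

```python
# Minimal sketch of surrogate-gradient training for one leaky
# integrate-and-fire (LIF) neuron (illustrative background only).

def heaviside(x):
    # Forward pass: non-differentiable spike function.
    return 1.0 if x >= 0.0 else 0.0

def surrogate_grad(x, beta=2.0):
    # Backward pass: smooth stand-in for the Heaviside derivative
    # (a fast-sigmoid-style surrogate).
    return beta / (2.0 * (1.0 + beta * abs(x)) ** 2)

def lif_step(v, inp, w, tau=0.9, threshold=1.0):
    # One membrane update: leak, integrate weighted input, spike, reset.
    v = tau * v + w * inp
    spike = heaviside(v - threshold)
    if spike:
        v = 0.0  # hard reset after a spike
    return v, spike

def train_step(w, inputs, target_rate, lr=0.1):
    # Run the neuron over a spike train and nudge the weight so the
    # firing rate approaches target_rate, using the surrogate gradient
    # in place of the true (zero-almost-everywhere) derivative.
    v, spikes, grads = 0.0, 0.0, 0.0
    for inp in inputs:
        pre_v = 0.9 * v + w * inp  # membrane potential before thresholding
        grads += surrogate_grad(pre_v - 1.0) * inp
        v, s = lif_step(v, inp, w)
        spikes += s
    rate_error = spikes / len(inputs) - target_rate
    return w - lr * rate_error * grads
```

Because the surrogate only approximates the true gradient, such training is a local heuristic; the paper's point is that a reconstruction-based formulation can sidestep this.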
Broken-symmetry shape discrimination on a driven Duffing ring
This paper explores shape discrimination on a driven Duffing ring, identifying a broken-symmetry observable for robust signal processing.
Discovering Ordinary Differential Equations with LLM-Based Qualitative and Quantitative Evaluation
DoLQ uses an LLM-based multi-agent system to discover ordinary differential equations from data, incorporating both qualitative and quantitative evaluation.
Same Brain, Different Prediction: How Preprocessing Choices Undermine EEG Decoding Reliability
EEG decoding reliability is undermined by preprocessing choices, with up to 42% of predictions flipping, necessitating new tools for stability.
Direct-to-Event Spiking Neural Network Transfer
This paper investigates converting direct-coded Spiking Neural Networks to more energy-efficient event-based representations while preserving performance.
Every Feedforward Neural Network Definable in an o-Minimal Structure Has Finite Sample Complexity
Feedforward neural networks definable in o-minimal structures, including MLPs, CNNs, and transformers, possess finite PAC sample complexity.
A Unified Measure-Theoretic View of Diffusion, Score-Based, and Flow Matching Generative Models
This paper unifies diffusion, score-based, and flow matching generative models under a measure-theoretic framework, clarifying their shared structure.
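The shared structure such unifications build on is standard background: a forward noising SDE, its time reversal driven by the score, and the deterministic probability-flow ODE that flow matching regresses directly. (This is the textbook formulation, not the paper's specific measure-theoretic framework.)

```latex
% Forward noising SDE:
\[
  \mathrm{d}x_t = f(x_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t
\]
% Time-reversed generative SDE, driven by the score \nabla_x \log p_t:
\[
  \mathrm{d}x_t = \bigl[f(x_t, t) - g(t)^2 \nabla_x \log p_t(x_t)\bigr]\,\mathrm{d}t
                  + g(t)\,\mathrm{d}\bar{W}_t
\]
% Probability-flow ODE (deterministic counterpart):
\[
  \frac{\mathrm{d}x_t}{\mathrm{d}t} = f(x_t, t)
    - \tfrac{1}{2}\, g(t)^2 \nabla_x \log p_t(x_t)
\]
```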
The Causally Emergent Alignment Hypothesis: Causal Emergence Aligns with and Predicts Final Reward in Reinforcement Learning Agents
This paper proposes the Causally Emergent Alignment Hypothesis, showing that causal emergence in RL agents predicts final reward and aligns with learning.
CoupleEvo: Evolving Heuristics for Coupled Optimization Problems Using Large Language Models
CoupleEvo introduces LLM-driven evolutionary strategies to design heuristics for complex, coupled optimization problems, showing that decomposition-based strategies perform best.
Efficient event-driven retrieval in high-capacity kernel Hopfield networks
This paper shows that asynchronous KLR Hopfield networks achieve high capacity and efficient event-driven retrieval, suitable for neuromorphic hardware.
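For context, a minimal classic binary Hopfield network with Hebbian weights and asynchronous updates, the baseline the kernel (KLR) variant generalizes; this sketch is standard background, not the paper's method.

```python
# Classic binary Hopfield network: Hebbian storage plus
# asynchronous (one-unit-at-a-time) retrieval.
import random

def hebbian_weights(patterns):
    # W[i][j] = (1/N) * sum over stored patterns of x_i * x_j, zero diagonal.
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def retrieve(w, state, sweeps=10, seed=0):
    # Asynchronous retrieval: update one randomly chosen unit at a time;
    # each flip can only lower the network energy, so dynamics settle
    # into a stored attractor.
    rng = random.Random(seed)
    state = list(state)
    n = len(state)
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        h = sum(w[i][j] * state[j] for j in range(n))
        state[i] = 1 if h >= 0 else -1
    return state
```

The asynchronous update rule is what makes an event-driven realization natural: only units whose local field changes sign need to fire.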
MDN: Parallelizing Stepwise Momentum for Delta Linear Attention
MDN introduces a parallel stepwise momentum algorithm for Linear Attention, improving LLM performance and stability for long sequences.
Graph Normalization: Fast Binarizing Dynamics for Differentiable MWIS
Graph Normalization (GN) is a differentiable dynamical system that quickly approximates the NP-hard Maximum Weight Independent Set (MWIS) problem.
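For reference, the classic discrete baseline that differentiable MWIS solvers are measured against is the greedy heuristic below (illustrative background; GN itself is a continuous dynamical system, not this rule).

```python
# Greedy baseline for Maximum Weight Independent Set (MWIS):
# repeatedly take the heaviest remaining vertex and delete its neighbors.

def greedy_mwis(weights, edges):
    # weights: dict vertex -> weight; edges: iterable of (u, v) pairs.
    adj = {v: set() for v in weights}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(weights)
    chosen = set()
    while remaining:
        v = max(remaining, key=lambda x: weights[x])
        chosen.add(v)
        remaining -= adj[v] | {v}  # v and its neighbors leave the pool
    return chosen
```

Greedy is fast but can be arbitrarily far from optimal on adversarial graphs, which is why relaxation-based approaches are of interest.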
S-LCG: Structured Linear Congruential Generator-Based Deterministic Algorithm for Search and Optimization
S-LCG is a novel deterministic optimization algorithm using a structured Linear Congruential Generator, outperforming competitors on benchmarks.
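The building block here is the standard LCG recurrence x_{n+1} = (a·x_n + c) mod m. Below is a plain LCG driving a naive deterministic sampler, purely to illustrate reproducible search; the structured variant (S-LCG) is the paper's contribution and is not shown.

```python
# A plain linear congruential generator (LCG) and a toy deterministic
# search loop built on it (illustrative; not the S-LCG algorithm).

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # Widely used LCG parameters; yields a deterministic sequence in [0, 1).
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def deterministic_search(f, lo, hi, seed=1, n=1000):
    # Evaluate f at n LCG-generated points in [lo, hi]; the same seed
    # always reproduces the exact same search trajectory.
    gen = lcg(seed)
    best_x, best_f = lo, f(lo)
    for _ in range(n):
        x = lo + (hi - lo) * next(gen)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

Determinism is the selling point: unlike stochastic metaheuristics, every run with the same seed is bit-for-bit repeatable.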
Direct From Darwin: Deriving Advanced Optimizers From Evolutionary First Principles
This paper unifies Fisher's and Wright's evolutionary theories to derive advanced gradient optimizers, showing many existing algorithms are evolutionarily compliant.
On the Influence of the Feature Computation Budget on Per-Instance Algorithm Selection for Black-Box Optimization
Per-instance algorithm selection (PIAS) for black-box optimization remains viable even when a significant share of the evaluation budget is spent on feature computation, though the optimal split varies.
DALight-3D: A Lightweight 3D U-Net for Brain Tumor Segmentation from Multi-Modal MRI
DALight-3D is a lightweight 3D U-Net for brain tumor segmentation from multi-modal MRI, achieving a better accuracy-efficiency trade-off than baseline models.