ArXiv TLDR

Neural & Evolutionary Computing

Research on neural network architectures, evolutionary algorithms, and bio-inspired computing.

cs.NE · 188 papers

Sparsity Moves Computation: How FFN Architecture Reshapes Attention in Small Transformers

FFN architecture, especially sparsity, significantly reshapes how Transformers compute, shifting work from FFNs to attention mechanisms.

2605.09403 · May 10, 2026 · Gabriel Smithline, Chris Mascioli
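The paper's specific FFN variants aren't reproduced here; for orientation, a standard transformer FFN block looks like the minimal PyTorch sketch below, where the ReLU-style activation is the usual source of activation sparsity (illustrative only, not the paper's architectures).

```python
import torch
import torch.nn as nn

class FFN(nn.Module):
    """Standard transformer feed-forward block (illustrative baseline)."""
    def __init__(self, d_model: int = 256, d_hidden: int = 1024):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)    # expand to hidden width
        self.act = nn.ReLU()                      # zeroed activations = sparsity
        self.down = nn.Linear(d_hidden, d_model)  # project back to model width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(self.act(self.up(x)))
```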

Evolutionary Ensemble of Agents

EvE is a decentralized framework that co-evolves coding agents and their guidance to discover algorithms, demonstrating superior adaptation and performance.

2605.09018 · May 9, 2026 · Zongmin Yu, Liu Yang

Drain-Vortex Optimization: A Population-Based Metaheuristic Inspired by Multi-Drain Free-Vortex Flow

Drain-Vortex Optimization (DVO) is a new metaheuristic inspired by multi-drain free-vortex flow, excelling in complex continuous optimization.

2605.08883 · May 9, 2026 · Mohsen Omidi, Brian Vaughan
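DVO's vortex-flow update rules aren't given here; as context, population-based metaheuristics of this kind share the generic loop below (a hypothetical skeleton, not the paper's dynamics).

```python
import numpy as np

def population_metaheuristic(f, dim, pop_size=30, iters=200, step=0.1, seed=0):
    """Generic population-based minimization loop (illustrative skeleton only)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))  # random initial population
    fitness = np.apply_along_axis(f, 1, pop)
    best = pop[fitness.argmin()].copy()
    for _ in range(iters):
        # Move candidates toward the current best plus random exploration;
        # DVO would replace this line with its multi-drain free-vortex dynamics.
        pop += step * (best - pop) + step * rng.normal(size=pop.shape)
        fitness = np.apply_along_axis(f, 1, pop)
        if fitness.min() < f(best):
            best = pop[fitness.argmin()].copy()
    return best

# Example: minimize the sphere function.
print(population_metaheuristic(lambda x: float(np.sum(x**2)), dim=5))
```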

AHD Agent: Agentic Reinforcement Learning for Automatic Heuristic Design

AHD Agent introduces an agentic RL framework enabling LLMs to proactively design heuristics for combinatorial optimization, outperforming larger models with fewer evaluations.

2605.08756 · May 9, 2026 · Haoze Lv, Ning Lu, Ziang Zhou +1

Globally Optimal Training of Spiking Neural Networks via Parameter Reconstruction

This paper introduces a globally optimal parameter reconstruction algorithm for training Spiking Neural Networks, overcoming surrogate gradient limitations.

2605.08022 · May 8, 2026 · Himanshu Udupi, Xiaocong Yang, ChengXiang Zhai
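The paper's reconstruction algorithm is not shown here; for context, the surrogate-gradient baseline it aims to replace is the textbook trick below, where the non-differentiable spike keeps its Heaviside forward pass but borrows a smooth derivative in the backward pass.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a surrogate gradient (the standard SNN baseline)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                 # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Replace the Heaviside's zero-almost-everywhere derivative
        # with the derivative of a steep sigmoid.
        sig = torch.sigmoid(4.0 * v)
        return grad_out * 4.0 * sig * (1.0 - sig)

spike = SurrogateSpike.apply
v = torch.randn(8, requires_grad=True)
spike(v).sum().backward()                      # gradients flow via the surrogate
print(v.grad)
```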

Broken-symmetry shape discrimination on a driven Duffing ring

This paper explores shape discrimination on a driven Duffing ring, identifying a broken-symmetry observable for robust signal processing.

2605.07475 · May 8, 2026 · Kaspar Anton Schindler

Discovering Ordinary Differential Equations with LLM-Based Qualitative and Quantitative Evaluation

DoLQ uses an LLM-based multi-agent system to discover ordinary differential equations from data, incorporating both qualitative and quantitative evaluation.

2605.07323 · May 8, 2026 · Sum Kyun Song, Bong Gyun Shin, Jae Yong Lee

Same Brain, Different Prediction: How Preprocessing Choices Undermine EEG Decoding Reliability

EEG decoding reliability is undermined by preprocessing choices, with up to 42% of predictions flipping, necessitating new tools for stability.

2605.07212 · May 8, 2026 · Dengzhe Hou, Zihao Wu, Lingyu Jiang +3
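The 42% figure refers to prediction instability across preprocessing pipelines; a minimal way to measure that kind of flipping (hypothetical variable names, not the paper's tooling) is:

```python
import numpy as np

def flip_rate(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    """Fraction of trials whose predicted label changes between two pipelines."""
    return float(np.mean(preds_a != preds_b))

# Hypothetical example: labels from the same decoder under two pipelines.
print(flip_rate(np.array([0, 1, 1, 0]), np.array([0, 1, 0, 1])))  # -> 0.5
```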

Direct-to-Event Spiking Neural Network Transfer

This paper investigates converting direct-coded Spiking Neural Networks to more energy-efficient event-based representations while preserving performance.

2605.07207 · May 8, 2026 · Nhan Trong Luu, Duong Trung Luu, Pham Ngoc Nam +1

Every Feedforward Neural Network Definable in an o-Minimal Structure Has Finite Sample Complexity

Feedforward neural networks definable in o-minimal structures, including MLPs, CNNs, and transformers, possess finite PAC sample complexity.

2605.07097 · May 8, 2026 · Anastasis Kratsios, Gregory Cousins, Haitz Sáez de Ocáriz Borde +2
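The o-minimal definability argument is the paper's; the payoff plugs into the textbook link between finite VC dimension and PAC learnability. In the standard realizable setting, a hypothesis class with VC dimension d has sample complexity (textbook bound, not the paper's statement):

```latex
m(\varepsilon,\delta)
  \;=\;
O\!\left(\frac{d \log(1/\varepsilon) + \log(1/\delta)}{\varepsilon}\right)
```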

A Unified Measure-Theoretic View of Diffusion, Score-Based, and Flow Matching Generative Models

This paper unifies diffusion, score-based, and flow matching generative models under a measure-theoretic framework, clarifying their shared structure.

2605.06829 · May 7, 2026 · Aditya Ranganath, Mukesh Singhal
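The shared structure referenced in the summary builds on standard results: a forward noising SDE, its reverse-time counterpart driven by the score, and the probability-flow ODE that flow-matching-style models integrate (textbook forms, not the paper's measure-theoretic notation):

```latex
\mathrm{d}X_t = f(X_t,t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t
  \quad\text{(forward SDE)}
\\
\mathrm{d}X_t = \bigl[f(X_t,t) - g(t)^2\,\nabla_x \log p_t(X_t)\bigr]\mathrm{d}t
  + g(t)\,\mathrm{d}\bar{W}_t
  \quad\text{(reverse-time SDE)}
\\
\frac{\mathrm{d}X_t}{\mathrm{d}t}
  = f(X_t,t) - \tfrac{1}{2}\,g(t)^2\,\nabla_x \log p_t(X_t)
  \quad\text{(probability-flow ODE)}
```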

The Causally Emergent Alignment Hypothesis: Causal Emergence Aligns with and Predicts Final Reward in Reinforcement Learning Agents

This paper proposes the Causally Emergent Alignment Hypothesis, showing that causal emergence in RL agents predicts final reward and aligns with learning.

2605.06746 · May 7, 2026 · Federico Pigozzi, Michael Levin

CoupleEvo: Evolving Heuristics for Coupled Optimization Problems Using Large Language Models

CoupleEvo introduces LLM-driven evolutionary strategies to design heuristics for complex, coupled optimization problems, finding that decomposition-based strategies work best.

2605.06341 · May 7, 2026 · Thomas Bömer, Bastian Amberg, Max Disselnmeyer +1

Efficient event-driven retrieval in high-capacity kernel Hopfield networks

This paper shows that asynchronous KLR Hopfield networks achieve high capacity and efficient event-driven retrieval, suitable for neuromorphic hardware.

2605.05978 · May 7, 2026 · Akira Tamamori
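The paper's kernel logistic regression (KLR) weights aren't shown here; the event-driven retrieval idea is visible already in the classical asynchronous Hopfield update, where only neurons whose local field disagrees with their state need to fire (a minimal classical sketch, Hebbian weights rather than KLR):

```python
import numpy as np

def hopfield_retrieve(W, s, max_updates=10_000, seed=0):
    """Asynchronous Hopfield retrieval: update one unstable neuron at a time."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(max_updates):
        target = np.where(W @ s >= 0, 1, -1)    # sign of each local field
        unstable = np.flatnonzero(target != s)  # neurons that would flip: the "events"
        if unstable.size == 0:
            break                               # fixed point: retrieval done
        i = rng.choice(unstable)
        s[i] = target[i]                        # event-driven: touch one neuron only
    return s

# Hebbian storage of one pattern (classical rule, not the paper's KLR).
p = np.array([1, -1, 1, -1, 1])
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0.0)
print(hopfield_retrieve(W, np.array([1, -1, -1, -1, 1])))  # recovers p
```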

MDN: Parallelizing Stepwise Momentum for Delta Linear Attention

MDN introduces a parallel stepwise-momentum algorithm for delta linear attention, improving LLM performance and stability on long sequences.

2605.05838 · May 7, 2026 · Yulong Huang, Xiang Liu, Hongxiang Huang +5
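MDN's parallel scan isn't reproduced here; as background, delta-rule linear attention maintains a matrix state via a Widrow-Hoff-style correction, and "stepwise momentum" suggests smoothing that correction. The sequential sketch below makes the recurrence concrete; both the exact recurrence form and the momentum placement are assumptions, not the paper's algorithm.

```python
import numpy as np

def delta_attention_momentum(K, V, Q, beta=0.5, mu=0.9):
    """Sequential delta-rule linear attention with heavy-ball momentum
    on the state update. K, V, Q: (T, d) arrays. Illustrative only."""
    T, d = K.shape
    S = np.zeros((d, d))   # associative state mapping keys -> values
    M = np.zeros((d, d))   # momentum buffer on state updates
    out = np.empty_like(Q)
    for t in range(T):
        k, v = K[t], V[t]
        delta = beta * np.outer(v - S @ k, k)  # delta rule: correct value stored at k
        M = mu * M + delta                     # momentum accumulates past corrections
        S = S + M
        out[t] = S @ Q[t]                      # read out with the query
    return out

rng = np.random.default_rng(0)
K, V, Q = (rng.normal(size=(16, 8)) for _ in range(3))
print(delta_attention_momentum(K, V, Q).shape)  # (16, 8)
```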

Graph Normalization: Fast Binarizing Dynamics for Differentiable MWIS

Graph Normalization (GN) is a differentiable dynamical system that quickly approximates the NP-hard Maximum Weight Independent Set (MWIS) problem.

2605.05330 · May 6, 2026 · Laurent Guigues
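For reference, the combinatorial problem GN relaxes is the standard MWIS integer program on a graph G = (V, E) with vertex weights w_i (textbook formulation; per the summary, GN replaces the binary constraint with differentiable dynamics that binarize over time):

```latex
\max_{x}\; \sum_{i \in V} w_i x_i
\quad \text{s.t.} \quad
x_i + x_j \le 1 \;\; \forall (i,j) \in E,
\qquad
x_i \in \{0,1\} \;\; \forall i \in V
```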

S-LCG: Structured Linear Congruential Generator-Based Deterministic Algorithm for Search and Optimization

S-LCG is a novel deterministic optimization algorithm using a structured Linear Congruential Generator, outperforming competitors on benchmarks.

2605.05198 · May 6, 2026 · Ahmed Qasim Mohammed, Haider Banka, Anamika Singh
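The "structured" construction is the paper's contribution; the underlying generator is the classic linear congruential recurrence x_{n+1} = (a·x_n + c) mod m, sketched below with well-known Numerical Recipes constants.

```python
def lcg(seed: int, a: int = 1664525, c: int = 1013904223, m: int = 2**32):
    """Classic linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    How S-LCG structures these sequences for search is the paper's contribution."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m          # scale to [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(3)])
```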

Direct From Darwin: Deriving Advanced Optimizers From Evolutionary First Principles

This paper unifies Fisher's and Wright's evolutionary theories to derive advanced gradient optimizers, showing many existing algorithms are evolutionarily compliant.

2605.05284 · May 6, 2026 · Daniel Grimmer

On the Influence of the Feature Computation Budget on Per-Instance Algorithm Selection for Black-Box Optimization

Per-instance algorithm selection (PIAS) for black-box optimization remains viable even when a significant share of the evaluation budget is spent on feature computation, though the optimal budget split varies.

2605.04954 · May 6, 2026 · Koen van der Blom, Diederick Vermetten

DALight-3D: A Lightweight 3D U-Net for Brain Tumor Segmentation from Multi-Modal MRI

DALight-3D is a lightweight 3D U-Net for brain tumor segmentation that achieves a better accuracy-efficiency trade-off than baseline models.

2605.04518 · May 6, 2026 · Nand Kumar Mishra, Dhruv Mishra, Manu Pratap Singh
