Neural & Evolutionary Computing
Research on neural network architectures, evolutionary algorithms, and bio-inspired computing.
cs.NE · 188 papers
Interpreting V1 Population Activity via Image-Neural Latent Representation Alignment
DINA aligns image and V1 neural representations to interpret visual computations, revealing decoding relies on coarse, low-level visual structure.
QUIVER: Cost-Aware Adaptive Preference Querying in Surrogate-Assisted Evolutionary Multi-Objective Optimization
QUIVER is a cost-aware, adaptive multi-objective optimizer that intelligently balances objective evaluations and heterogeneous preference queries to minimize regret.
phys-MCP: A Control Plane for Heterogeneous Physical Neural Networks
phys-MCP enables unified control and orchestration of diverse physical neural networks across edge and cloud environments.
Exact and Evolutionary Algorithms for Sequential Multi-Objective Transmission Topology Planning
This paper introduces exact and evolutionary algorithms for sequential multi-objective transmission topology planning, providing a fast, exact solution and a benchmark.
Unifying Dynamical Systems and Graph Theory to Mechanistically Understand Computation in Neural Networks
Unifying dynamical systems and graph theory, this paper shows neural network computation relies on multi-hop pathways, introducing R-RNNs for improved temporal sparsity.
Symmetry-Protected Lyapunov Neutral Modes in Equivariant Recurrent Networks
This paper proves equivariant recurrent networks have symmetry-protected neutral modes, ensuring stable long-term memory for continuous variables.
Neuromorphic Control for 3D Navigation in Minecraft Using Genetic Algorithms
This paper uses a genetic algorithm to train a neural network for autonomous 3D navigation and parkour in Minecraft.
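As an illustration of the general technique (not this paper's setup — the network shape, fitness function, and hyperparameters below are invented for the sketch), a genetic algorithm can train a small neural network by treating its flattened weights as a genome and evolving a population with truncation selection, elitism, and Gaussian mutation:

```python
import numpy as np

rng = np.random.default_rng(0)
EVAL_OBS = rng.standard_normal((32, 4))   # fixed evaluation batch (stand-in for episodes)
TARGET = EVAL_OBS[:, :2]                  # toy regression target

def policy(weights, obs):
    # Tiny 1-hidden-layer network; `weights` is one flat genome of 18 floats.
    W1 = weights[:12].reshape(4, 3)
    W2 = weights[12:18].reshape(3, 2)
    return np.tanh(obs @ W1) @ W2

def fitness(weights):
    # Deterministic fitness: negative MSE on the fixed batch (higher is better).
    return -np.mean((policy(weights, EVAL_OBS) - TARGET) ** 2)

def evolve(pop_size=40, n_gen=60, elite=8, sigma=0.1, seed=1):
    g = np.random.default_rng(seed)
    pop = 0.5 * g.standard_normal((pop_size, 18))
    for _ in range(n_gen):
        # Truncation selection: keep the top `elite` genomes unchanged (elitism),
        # refill the rest with mutated copies of random parents.
        order = np.argsort([fitness(ind) for ind in pop])[::-1]
        parents = pop[order[:elite]]
        children = parents[g.integers(elite, size=pop_size - elite)]
        pop = np.vstack([parents, children + sigma * g.standard_normal(children.shape)])
    best = max(pop, key=fitness)
    return best, fitness(best)
```

Because the elites survive each generation and the fitness is deterministic, the best score never decreases across generations; a real navigation task would replace `fitness` with episode return from the game environment.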
MPCS: Neuroplastic Continual Learning via Multi-Component Plasticity and Topology-Aware EWC
MPCS is a neuroplastic continual learning system that integrates 11 plasticity mechanisms with topology-aware EWC, achieving high efficiency and revealing which components are critical.
Combining Trained Models in Reinforcement Learning
A systematic review of DRL model reuse reveals patterns in transfer, ensemble, and federated learning, noting limitations in current empirical evidence.
HERCULES: Hardware-Efficient, Robust, Continual Learning Neural Architecture Search
HERCULES introduces a new framework and taxonomy for Neural Architecture Search, integrating hardware efficiency, robustness, and continual learning for deployable AI.
SNNF: An SNN-based Near-Sensor Noise Filter for Dynamic Vision Sensors
SNNF is a near-sensor Spiking Neural Network filter that efficiently removes background noise from Dynamic Vision Sensors, enabling low-power edge AI.
Training Non-Differentiable Networks via Optimal Transport
PolyStep is a gradient-free optimizer using optimal transport to train non-differentiable neural networks, outperforming existing methods significantly.
ShiftLIF: Efficient Multi-Level Spiking Neurons with Power-of-Two Quantization
ShiftLIF is a new multi-level spiking neuron that uses power-of-two quantization for efficient, high-accuracy SNNs in edge sensing.
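The core hardware trick behind power-of-two quantized spiking neurons in general (sketched here generically — the constants and reset scheme below are illustrative assumptions, not ShiftLIF's actual design) is replacing the membrane-leak multiply with an arithmetic right shift, so the update stays integer-only:

```python
import numpy as np

def shift_lif_step(v, in_current, decay_shift=2, threshold=64):
    # Leak via bit shift: v -= v >> k approximates v *= (1 - 2**-k),
    # turning the leak multiply into a shift (integer-only, hardware-friendly).
    v = v - (v >> decay_shift) + in_current
    spikes = v >= threshold
    v = np.where(spikes, 0, v)        # hard reset after a spike
    return v, spikes

# Drive four neurons with different constant integer currents for 20 steps.
v = np.zeros(4, dtype=np.int32)
inputs = np.array([10, 20, 30, 40], dtype=np.int32)
spike_counts = np.zeros(4, dtype=int)
for _ in range(20):
    v, s = shift_lif_step(v, inputs)
    spike_counts += s
```

With `decay_shift=2` the steady-state potential is roughly four times the input current, so the weakest input (10) saturates below threshold and never fires, while stronger inputs fire at increasing rates — the usual rate coding of a leaky integrate-and-fire neuron, obtained without any multiplier.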
Probe-Geometry Alignment: Erasing the Cross-Sequence Memorization Signature Below Chance
This paper introduces Probe-Geometry Alignment (PGA) to surgically erase hidden memorization traces in LLMs, making them unrecoverable without affecting capabilities.
Benchmarking local Hebbian learning rules for memory storage and prototype extraction
This paper benchmarks seven Hebbian learning rules for associative memory, finding Bayesian-Hebbian rules offer the highest capacity.
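For context, the classic baseline in such benchmarks is the outer-product Hebbian rule of Hopfield networks (a minimal sketch under assumed sizes and noise levels, not the paper's benchmark code): store ±1 patterns by summing their outer products, then recall from a corrupted cue by iterating a sign update.

```python
import numpy as np

def hebbian_store(patterns):
    # Outer-product Hebbian rule: W = (1/N) * sum_p x_p x_p^T, zero diagonal.
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=10):
    # Synchronous sign-update dynamics; converges to a nearby stored pattern
    # when the memory load is well below capacity.
    x = cue.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1                 # break exact ties deterministically
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))   # 3 random patterns, 64 neurons
W = hebbian_store(patterns)

# Flip 5 bits of the first pattern and recover it from the noisy cue.
cue = patterns[0].copy()
flip = rng.choice(64, size=5, replace=False)
cue[flip] *= -1
out = recall(W, cue)
```

At this low load (3 patterns in 64 neurons, far below the ~0.14N capacity of this rule), stored patterns are stable fixed points and small corruptions are cleaned up in a step or two; the Bayesian-Hebbian rules the paper favors aim to push that capacity substantially higher.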
Robust volatility updates for Hierarchical Gaussian Filtering
This paper introduces a robust method for updating volatility in Hierarchical Gaussian Filtering, preventing negative posterior precision errors.
Learning to Act and Cooperate for Distributed Black-Box Consensus Optimization
This paper introduces LACMAS, a trajectory-driven framework using LLMs to self-design agent actions and cooperation for distributed black-box consensus optimization.
Spiking Sequence Machines and Transformers
This paper reveals that Spiking Sequence Machines and Transformers independently implement the same five functional operations using cosine similarity.
Affinity Is Not Enough: Recovering the Free Energy Principle in Mixture-of-Experts
Novel gating mechanisms, inspired by the Free Energy Principle, significantly improve Mixture-of-Experts routing at domain transitions.
Scalable Learning in Structured Recurrent Spiking Neural Networks without Backpropagation
This paper introduces a structured recurrent SNN architecture with local plasticity and neuromodulatory learning for scalable, backpropagation-free training.