ArXiv TLDR

Machine Learning

Papers on learning algorithms, neural networks, deep learning, and optimization.

cs.LG · 1353 papers

Scaling Laws and Tradeoffs in Recurrent Networks of Expressive Neurons

ELM Networks demonstrate optimal resource allocation in recurrent networks, favoring more complex neurons as scale increases, challenging simple-unit defaults.

2605.12049 · May 12, 2026 · Aaron Spieler, Georg Martius, Anna Levina

Approximation Theory of Laplacian-Based Neural Operators for Reaction-Diffusion System

This paper shows Laplacian-based neural operators efficiently approximate reaction-diffusion systems with polynomial complexity.

2605.12025 · May 12, 2026 · Takashi Furuya, Ryo Ozawa, Jenn-Nan Wang

SkillSafetyBench: Evaluating Agent Safety under Skill-Facing Attack Surfaces

SkillSafetyBench evaluates how reusable skills in LLM agents create new attack surfaces, revealing vulnerabilities beyond model-level alignment.

2605.12015 · May 12, 2026 · Chang Jin, An Wang, Zeming Wei +7

Random-Set Graph Neural Networks

This paper introduces Random-Set Graph Neural Networks (RS-GNNs) to model node-level epistemic uncertainty using belief functions for improved predictions.

2605.11987 · May 12, 2026 · Tommy Woodley, Shireen Kudukkil Manchingal, Matteo Tolloso +2

QDSB: Quantized Diffusion Schrödinger Bridges

QDSB introduces quantized diffusion Schrödinger bridges to efficiently learn generative models from unpaired data, significantly reducing training time.

2605.11983 · May 12, 2026 · Tobias Fuchs, Florian Kalinke, Nadja Klein

LOFT: Low-Rank Orthogonal Fine-Tuning via Task-Aware Support Selection

LOFT is a low-rank orthogonal fine-tuning framework that separates adaptation subspace and transformation, improving PEFT efficiency via task-aware support selection.

2605.11872 · May 12, 2026 · Lanxin Zhao, Bamdev Mishra, Pratik Jawanpuria +4

Multi-Timescale Conductance Spiking Networks: A Sparse, Gradient-Trainable Framework with Rich Firing Dynamics for Enhanced Temporal Processing

Multi-timescale conductance SNNs offer rich dynamics, sparse activity, and direct gradient training, outperforming state-of-the-art models in temporal processing.

2605.11835 · May 12, 2026 · Alex Fulleda-Garcia, Saray Soldado-Magraner, Josep Maria Margarit-Taulé

One-Step Generative Modeling via Wasserstein Gradient Flows

W-Flow introduces a novel one-step generative model using Wasserstein gradient flows, achieving state-of-the-art image generation 100x faster than diffusion models.

2605.11755 · May 12, 2026 · Jiaqi Han, Puheng Li, Qiushan Guo +3

Persona-Conditioned Adversarial Prompting: Multi-Identity Red-Teaming for Adversarial Discovery and Mitigation

PCAP uses diverse personas for red-teaming LLMs, significantly boosting attack success and generating robust defense data for improved safety.

2605.11730 · May 12, 2026 · Cristian Morasso, Anisa Halimi, Muhammad Zaid Hameed +1

Learning U-Statistics with Active Inference

An active inference framework for U-statistics improves estimation efficiency by selectively querying informative labels under budget constraints.

2605.11638 · May 12, 2026 · Xiaoning Wang, Yuyang Huo, Liuhua Peng +1

Exact Stiefel Optimization for Probabilistic PLS: Closed-Form Updates, Error Bounds, and Calibrated Uncertainty

Introduces an end-to-end framework for Probabilistic PLS using exact Stiefel optimization, offering calibrated uncertainty and improved accuracy.

2605.11607 · May 12, 2026 · Haoran Hu, Xingce Wang

EpiCastBench: Datasets and Benchmarks for Multivariate Epidemic Forecasting

EpiCastBench introduces 40 diverse multivariate epidemic datasets and a standardized benchmark for evaluating forecasting models.

2605.11598 · May 12, 2026 · Madhurima Panja, Danny D'Agostino, Huitao Li +2

A Composite Activation Function for Learning Stable Binary Representations

Introduces HTAF, a smooth composite activation function enabling stable gradient-based training of neural networks with binary representations.

2605.11558 · May 12, 2026 · Seokhun Park, Choeun Kim, Kwanho Lee +3

The Evaluation Differential: When Frontier AI Models Recognise They Are Being Tested

This paper introduces the Evaluation Differential, showing AI models behave differently when tested, challenging safety claims from current evaluations.

2605.11496 · May 12, 2026 · Varad Vishwarupe, Nigel Shadbolt, Marina Jirotka +1

LPDP: Inference-Time Reward Control for Variable-Length DNA Generation with Edit Flows

LPDP enables training-free, inference-time reward control for variable-length DNA generation using biologically plausible edit flows.

2605.11368 · May 12, 2026 · Jeongchan Kim, Yunkyung Ko, Jong Chul Ye

Beyond Manual Curation: Augmenting Targeted Protein Degradation Databases via Agentic Literature Extraction Workflows

A new expert-in-the-loop LLM workflow automates targeted protein degradation data extraction, significantly expanding databases with high accuracy.

2605.11221 · May 11, 2026 · Yaochen Rao, Farzaneh Jalalypour, N. M. Anoop Krishnan +1

Decomposing Evolutionary Mixture-of-LoRA Architectures: The Routing Lever, the Lifecycle Penalty, and a Substrate-Conditional Boundary

This paper decomposes an evolutionary Mixture-of-LoRA system, finding that router improvements, not the evolutionary lifecycle, drive performance gains.

2605.11153 · May 11, 2026 · Ramchand Kumaresan

ELF: Embedded Language Flows

ELF proposes a continuous diffusion model for language, leveraging flow matching in embedding space to achieve superior generation quality with fewer steps.

2605.10938 · May 11, 2026 · Keya Hu, Linlu Qiu, Yiyang Lu +5

Variational Inference for Lévy Process-Driven SDEs via Neural Tilting

This paper introduces a neural exponential tilting framework for variational inference in Lévy-driven SDEs, addressing challenges in modeling extreme events.

2605.10934 · May 11, 2026 · Yaman Kindap, Manfred Opper, Benjamin Dupuis +2

DECO: Sparse Mixture-of-Experts with Dense-Comparable Performance on End-Side Devices

DECO is a sparse MoE model matching dense performance on end-side devices, offering 3x speedup and reduced storage overhead.

2605.10933 · May 11, 2026 · Chenyang Song, Weilin Zhao, Xu Han +3
Page 5 of 68
