OOM-RL: Out-of-Money Reinforcement Learning for Market-Driven Alignment of LLM-Based Multi-Agent Systems
TLDR
OOM-RL aligns multi-agent systems using financial market losses as an objective penalty, preventing sycophancy and test evasion.
Key contributions
- OOM-RL aligns multi-agent systems by using real financial market losses as an objective, un-hackable negative gradient (a reward sketch follows this list).
- Avoids both the sycophancy induced by RLHF/RLAIF and the adversarial "Test Evasion" seen in execution-based environments.
- Implements the Strict Test-Driven Agentic Workflow (STDAW), enforcing a uni-directional RO-Lock and a ≥95% code coverage constraint.
- Achieved a stable equilibrium, reaching an annualized Sharpe ratio of 2.06 in its mature phase, over a 20-month live market study.
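A minimal sketch of how such a market-loss penalty could enter an RL reward, assuming a per-episode profit-and-loss (PnL) signal and a capital-depletion threshold; every name here (`oom_reward`, `oom_threshold`, and so on) is illustrative, not from the paper:

```python
def oom_reward(pnl: float, capital: float, initial_capital: float,
               oom_threshold: float = 0.5, oom_penalty: float = 100.0) -> float:
    """Shape an RL reward with an 'out-of-money' penalty.

    pnl:             realized profit/loss for the episode.
    capital:         account equity after the episode.
    initial_capital: equity at the start of the study.
    oom_threshold:   fraction of initial capital below which the
                     agent counts as critically depleted.
    oom_penalty:     large fixed penalty applied on depletion; this is
                     the 'un-hackable negative gradient', since it is
                     driven by realized market losses rather than by
                     any evaluator's preference.
    """
    reward = pnl / initial_capital      # normalized market return
    if capital < oom_threshold * initial_capital:
        reward -= oom_penalty           # critical capital depletion
    return reward
```

Because the penalty comes from realized equity rather than from a learned or human evaluator, there is no preference model for the agent to flatter and no test suite for it to game.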
Why it matters
This paper offers a novel, objective alignment method for autonomous agents in high-stakes environments. By using economic penalties, it provides a robust alternative to subjective feedback, paving the way for more reliable and generalizable agent systems.
Original Abstract
The alignment of Multi-Agent Systems (MAS) for autonomous software engineering is constrained by evaluator epistemic uncertainty. Current paradigms, such as Reinforcement Learning from Human Feedback (RLHF) and AI Feedback (RLAIF), frequently induce model sycophancy, while execution-based environments suffer from adversarial "Test Evasion" by unconstrained agents. In this paper, we introduce an objective alignment paradigm: **Out-of-Money Reinforcement Learning (OOM-RL)**. By deploying agents into the non-stationary, high-friction reality of live financial markets, we utilize critical capital depletion as an un-hackable negative gradient. Our longitudinal 20-month empirical study (July 2024 – February 2026) chronicles the system's evolution from a high-turnover, sycophantic baseline to a robust, liquidity-aware architecture. We demonstrate that the undeniable ontological consequences of financial loss forced the MAS to abandon overfitted hallucinations in favor of the **Strict Test-Driven Agentic Workflow (STDAW)**, which enforces a Byzantine-inspired uni-directional state lock (RO-Lock) anchored to a deterministically verified ≥95% code coverage constraint matrix. Our results show that while early iterations suffered severe execution decay, the final OOM-RL-aligned system achieved a stable equilibrium with an annualized Sharpe ratio of 2.06 in its mature phase. We conclude that substituting subjective human preference with rigorous economic penalties provides a robust methodology for aligning autonomous agents in high-stakes, real-world environments, laying the groundwork for generalized paradigms where computational billing acts as an objective physical constraint.
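The STDAW mechanism the abstract describes, a uni-directional state lock anchored to a verified coverage constraint, can be pictured as a small state machine. The paper does not publish an implementation; the sketch below is an illustration under assumed semantics, and names such as `Stage`, `ROLock`, and `promote` are hypothetical:

```python
from enum import Enum

class Stage(Enum):
    DRAFT = 0    # agent may freely edit code and tests
    TESTED = 1   # tests pass and the coverage constraint is verified
    LOCKED = 2   # read-only: no agent may mutate this artifact

class ROLock:
    """Uni-directional state lock: stages only move forward.

    Once an artifact is LOCKED it becomes read-only, so a later
    agent cannot weaken or delete the tests that admitted it
    (the 'Test Evasion' failure mode the paper targets).
    """
    MIN_COVERAGE = 0.95  # the >=95% constraint from the abstract

    def __init__(self) -> None:
        self.stage = Stage.DRAFT

    def promote(self, tests_passed: bool, coverage: float) -> Stage:
        """Advance one stage when the gate's conditions are met."""
        if self.stage is Stage.DRAFT:
            if tests_passed and coverage >= self.MIN_COVERAGE:
                self.stage = Stage.TESTED
        elif self.stage is Stage.TESTED:
            self.stage = Stage.LOCKED
        return self.stage

    def demote(self) -> None:
        raise PermissionError("RO-Lock is uni-directional; demotion is forbidden")
```

The key design point is uni-directionality: there is no code path that lowers the stage, so once coverage-verified work is locked, downstream agents can read it but never rewrite it.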