RunAgent: Interpreting Natural-Language Plans with Constraint-Guided Execution
Arunabh Srivastava, Mohammad A. Khojastepour, Srimat Chakradhar, Sennur Ulukus
TLDR
RunAgent is a multi-agent platform that reliably executes natural-language plans using constraint-guided execution and explicit control constructs.
Key contributions
- Interprets natural-language plans via an agentic language with explicit control constructs.
- Autonomously derives and validates execution constraints for each step.
- Dynamically selects among LLM reasoning, tool usage, and code generation/execution.
- Incorporates error correction and filters context history for relevant information.
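The control constructs and per-step constraint checks above can be pictured as a small plan interpreter. The sketch below is purely illustrative: the step schema, the `check` hook standing in for RunAgent's autonomous constraint derivation, and all field names are assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a constraint-guided plan interpreter in the spirit
# of RunAgent's agentic language. All names and the step schema are
# illustrative assumptions.

def index_of(steps, label):
    """Resolve a step label to its position in the plan."""
    return next(i for i, s in enumerate(steps) if s.get("label") == label)

def check(step, result):
    """Stand-in for per-step constraint validation."""
    for constraint in step.get("constraints", []):
        if not constraint(result):
            raise ValueError(f"constraint failed at step {step.get('label', '?')}")

def run_plan(steps, env):
    """Execute steps sequentially, honoring IF/GOTO/FORALL constructs."""
    pc = 0        # program counter over plan steps
    trace = []    # results of executed action steps
    while pc < len(steps):
        step = steps[pc]
        op = step["op"]
        if op == "IF":
            # Jump to the labeled step when the condition on env holds.
            pc = index_of(steps, step["goto"]) if step["cond"](env) else pc + 1
            continue
        if op == "GOTO":
            pc = index_of(steps, step["target"])
            continue
        if op == "FORALL":
            # Apply the body to every item, validating each result.
            for item in env[step["over"]]:
                result = step["body"](item, env)
                check(step, result)
                trace.append(result)
            pc += 1
            continue
        # Plain action step: execute, then validate its constraints.
        result = step["action"](env)
        check(step, result)
        trace.append(result)
        pc += 1
    return trace

# Demo: sum a list with FORALL, then branch on the total.
env = {"xs": [1, 2, 3], "total": 0}

def add(item, env):
    env["total"] += item
    return env["total"]

steps = [
    {"op": "FORALL", "label": "sum", "over": "xs", "body": add,
     "constraints": [lambda r: r >= 0]},
    {"op": "IF", "cond": lambda e: e["total"] > 10, "goto": "big"},
    {"op": "STEP", "label": "small", "action": lambda e: "small total"},
    {"op": "GOTO", "target": "end"},
    {"op": "STEP", "label": "big", "action": lambda e: "big total"},
    {"op": "STEP", "label": "end", "action": lambda e: "done"},
]

trace = run_plan(steps, env)
```

In this toy run the `FORALL` step produces the running totals, the `IF` falls through because the total is not above 10, and the `GOTO` skips the untaken branch. RunAgent's actual system additionally chooses between LLM reasoning, tools, and generated code per step; here every step is a plain Python callable.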
Why it matters
Large language models often struggle to reliably execute structured workflows described in natural language. RunAgent addresses this by bridging the expressiveness of natural language with the determinism of programming, offering a robust path to automating complex tasks while improving LLM reliability.
Original Abstract
Humans solve problems by executing targeted plans, yet large language models (LLMs) remain unreliable for structured workflow execution. We propose RunAgent, a multi-agent plan execution platform that interprets natural-language plans while enforcing stepwise execution through constraints and rubrics. RunAgent bridges the expressiveness of natural language with the determinism of programming via an agentic language with explicit control constructs (e.g., `IF`, `GOTO`, `FORALL`). Beyond syntactic and semantic verification of the step output, which is performed based on the specific instruction of each step, RunAgent autonomously derives and validates constraints based on the description of the task and its instance at each step. RunAgent also dynamically selects among LLM-based reasoning, tool usage, and code generation and execution (e.g., in Python), and incorporates error correction mechanisms to ensure correctness. Finally, RunAgent filters the context history by retaining only relevant information during the execution of each step. Evaluations on the Natural Plan and SciBench datasets demonstrate that RunAgent outperforms baseline LLMs and state-of-the-art PlanGEN methods.