Modeling Co-Pilots for Text-to-Model Translation
Serdar Kadioglu, Karthik Uppuluri, Akash Singirikonda
TLDR
The Text2Model co-pilots and the Text2Zinc dataset enable unified, solver-agnostic text-to-model translation for optimization and satisfaction problems.
Key contributions
- Introduce Text2Model, a suite of LLM-based co-pilots with an online leaderboard for text-to-model translation.
- Present Text2Zinc, a novel cross-domain dataset for natural language optimization and satisfaction problems.
- Develop a unified, solver-agnostic architecture for translating both satisfaction and optimization problems.
- Evaluate various LLM strategies, showing competitive performance and areas for improvement in combinatorial modeling.
Why it matters
This paper addresses the growing interest in using LLMs for text-to-model translation and optimization. It contributes open-source tools (Text2Model, Text2Zinc) and a unified, solver-agnostic architecture that handles both satisfaction and optimization problems. The work highlights the potential of LLMs for combinatorial modeling while acknowledging their current limitations, pointing to directions for future work.
Original Abstract
There is growing interest in leveraging large language models (LLMs) for text-to-model translation and optimization tasks. This paper aims to advance this line of research by introducing Text2Model and Text2Zinc. Text2Model is a suite of co-pilots based on several LLM strategies with varying complexity, along with an online leaderboard. Text2Zinc is a cross-domain dataset for capturing optimization and satisfaction problems specified in natural language, along with an interactive editor with a built-in AI assistant. While there is an emerging literature on using LLMs for translating combinatorial problems into formal models, our work is the first attempt to integrate *both* satisfaction and optimization problems within a *unified architecture* and *dataset*. Moreover, our approach is *solver-agnostic*, unlike existing work that focuses on translation to a solver-specific model. To achieve this, we leverage MiniZinc's solver- and paradigm-agnostic modeling capabilities to formulate combinatorial problems. We conduct comprehensive experiments to compare execution and solution accuracy across several single- and multi-call strategies, including: zero-shot prompting, chain-of-thought reasoning, intermediate representations via knowledge graphs, grammar-based syntax encoding, and agentic approaches that decompose the model into sequential sub-tasks. Our co-pilot strategies are competitive with, and in parts improve on, recent research in this domain. Our findings indicate that while LLMs are promising, they are not yet a push-button technology for combinatorial modeling. We contribute the Text2Model co-pilots and leaderboard, and Text2Zinc and its interactive editor, to open source to support closing this performance gap.
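To make the solver-agnostic pipeline concrete, here is a minimal Python sketch of the zero-shot variant: wrap the natural-language problem in a prompt asking for a MiniZinc model, then execute the generated model through the MiniZinc CLI, which can dispatch to any backend solver. This is an illustrative assumption, not the paper's implementation; `query_llm` is a hypothetical stand-in for whatever LLM client you use, and the paper's other strategies (chain-of-thought, knowledge graphs, agentic decomposition) elaborate on this skeleton.

```python
import subprocess
import tempfile


def build_zero_shot_prompt(description: str) -> str:
    """Wrap a natural-language problem statement in a zero-shot instruction."""
    return (
        "Translate the following problem into a valid MiniZinc model. "
        "Return only the model code.\n\n"
        f"Problem:\n{description}\n"
    )


def solve_with_minizinc(model_code: str, solver: str = "gecode") -> str:
    """Run a generated model with the MiniZinc CLI.

    Because MiniZinc is solver-agnostic, the same model can be handed to
    different backends by changing only the --solver flag.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".mzn", delete=False) as f:
        f.write(model_code)
        path = f.name
    result = subprocess.run(
        ["minizinc", "--solver", solver, path],
        capture_output=True, text=True,
    )
    return result.stdout


# Usage (query_llm is hypothetical; wire it to an LLM of your choice):
# prompt = build_zero_shot_prompt("Pack items of weights 3, 4, 5 into two bins ...")
# model = query_llm(prompt)
# print(solve_with_minizinc(model))
```

Keeping the prompt builder and the solver runner separate mirrors the paper's design: the LLM strategy can be swapped (single- vs. multi-call) without touching the execution and accuracy-checking stage.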