PlayCoder: Making LLM-Generated GUI Code Playable
Zhiyuan Peng, Wei Tao, Xin Yin, Chenhao Ying, Yuan Luo, et al.
TLDR
PlayCoder is a multi-agent framework that substantially improves LLMs' ability to generate and repair logically correct GUI application code, as measured on the new PlayEval benchmark.
Key contributions
- Introduces PlayEval, a repository-aware benchmark of 43 GUI applications in Python, TypeScript, and JavaScript, spanning six categories.
- Proposes the Play@k metric and the PlayTester agent for automated, end-to-end GUI playthrough evaluation (a Play@k sketch follows this list).
- Shows that 10 state-of-the-art code LLMs achieve near-zero Play@3 despite high compilation rates, exposing major weaknesses in GUI logic generation.
- Presents PlayCoder, a multi-agent, repository-aware framework that generates, evaluates, and iteratively repairs GUI code in a closed loop, boosting Play@3 to up to 20.3%.
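The paper does not print the estimator, but Play@k reads as a direct analogue of HumanEval's pass@k, with "plays end-to-end without logic violations" as the success criterion. A minimal sketch under that assumption; the function name and the n-sample setup are illustrative, not taken from the paper:

```python
from math import comb

def play_at_k(n: int, c: int, k: int) -> float:
    """Assumed pass@k-style unbiased estimator for Play@k: probability
    that at least one of k candidates drawn from n generations is fully
    playable, given that c of the n passed the playthrough check."""
    if n - c < k:  # every size-k subset must contain a playable candidate
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 generations, 2 of them playable -> Play@3 ~ 0.53
print(play_at_k(n=10, c=2, k=3))
```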
Why it matters
LLMs struggle to generate logically correct interactive GUI applications, and conventional test-case benchmarks miss this failure mode. PlayEval and Play@k assess whether a generated application can actually be played end-to-end, and PlayCoder's closed generate-evaluate-repair loop (sketched below) substantially improves results, paving the way for more reliable LLM-generated interactive software.
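A minimal sketch of that closed-loop control flow. The three callables are hypothetical stand-ins for PlayCoder's agents, not the framework's real API:

```python
from typing import Callable

def playcoder_loop(
    spec: str,
    generate: Callable[[str], str],
    playtest: Callable[[str, str], dict],
    repair: Callable[[str, list], str],
    max_rounds: int = 3,
) -> str:
    """Closed-loop generate -> evaluate -> repair (shape only;
    all three agents here are hypothetical stand-ins)."""
    repo = generate(spec)                 # initial code generation
    for _ in range(max_rounds):
        report = playtest(repo, spec)     # PlayTester-style playthrough
        if report["playable"]:            # no logic violations detected
            break
        repo = repair(repo, report["violations"])  # targeted edits, not full regeneration
    return repo

# Trivial stubs, just to exercise the loop shape:
app = playcoder_loop(
    "tic-tac-toe",
    generate=lambda s: f"# app skeleton for {s}",
    playtest=lambda r, s: {"playable": True, "violations": []},
    repair=lambda r, v: r,
)
```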
Original Abstract
Large language models (LLMs) have achieved strong results in code generation, but their ability to generate GUI applications, especially games, remains insufficiently studied. Existing benchmarks mainly evaluate correctness through test cases, which are inadequate for GUI applications because these systems are interactive, event-driven, and require correct state transitions across sequences of user actions. Their evaluation therefore should consider interaction flows and UI logic rather than only pass/fail outcomes. To study this problem, we introduce PlayEval, a repository-aware benchmark built from 43 multilingual GUI applications in Python, TypeScript, and JavaScript. Unlike prior GUI benchmarks that are difficult to adapt to desktop environments, PlayEval covers six major GUI application categories and directly supports code-generation evaluation. We further propose Play@k, a metric that measures whether at least one of *k* generated candidates can be played end-to-end without logical errors. To support reliable evaluation, we develop PlayTester, an LLM-based agent that performs task-oriented GUI playthroughs and detects logic violations automatically. Experiments on 10 state-of-the-art code LLMs show that, despite high compilation rates, they achieve near-zero Play@3, revealing major weaknesses in generating logically correct GUI applications. To address this limitation, we present PlayCoder, a multi-agent, repository-aware framework that generates, evaluates, and iteratively repairs GUI application code in a closed loop. PlayCoder substantially improves both functional correctness and semantic alignment for open-source and closed-source models, reaching up to 38.1% Exec@3 and 20.3% Play@3. Case studies further show that it can uncover silent logic bugs missed by traditional metrics and fix them through targeted edits.
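The abstract says PlayTester runs task-oriented playthroughs and flags logic violations automatically. One plausible shape for that loop, with a hypothetical `app.observe()`/`app.act()` interface and an LLM policy that also states the transition it expects; none of these names come from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class PlaythroughResult:
    playable: bool
    violations: list = field(default_factory=list)

def playtest(app, tasks, choose_action, max_steps: int = 50) -> PlaythroughResult:
    """Task-oriented playthrough in the spirit of PlayTester (sketch only).
    choose_action(task, state) asks an LLM for the next UI action plus a
    predicate over the post-action state; returning (None, _) ends the task."""
    violations = []
    for task in tasks:
        state = app.observe()
        for _ in range(max_steps):
            action, expected = choose_action(task, state)
            if action is None:       # the agent judges the task complete
                break
            app.act(action)
            state = app.observe()
            if not expected(state):  # actual transition contradicts the expectation
                violations.append({"task": task, "action": action, "state": state})
                break
    return PlaythroughResult(playable=not violations, violations=violations)
```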