ArXiv TLDR

Nemobot Games: Crafting Strategic AI Gaming Agents for Interactive Learning with Large Language Models

arXiv:2604.21896

Chee Wei Tan, Yuchen Wang, Shangxin Guo

cs.AI

TLDR

Nemobot is an interactive environment that leverages LLMs to create, customize, and deploy strategic AI game agents across four distinct game classes, advancing toward self-programming AI.

Key contributions

  • Nemobot provides an interactive environment for crafting and deploying LLM-powered strategic game agents.
  • Leverages LLMs to operationalize Shannon's game taxonomy across four distinct game classes.
  • Employs LLMs for efficient state compression, optimal strategy computation, and heuristic synthesis.
  • Refines strategies via RLHF and self-critique, and integrates crowd-sourced learning toward self-programming.
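The "optimal strategy computation" bullet refers to rigorously solvable games. A classic textbook instance of this class (our example, not the paper's) is Nim, whose winning move follows directly from the XOR (nim-sum) of the pile sizes — a minimal sketch of the kind of exact, explainable strategy such an agent could compute:

```python
from functools import reduce
from operator import xor

def optimal_nim_move(piles):
    """Return (pile_index, stones_to_remove) for a winning Nim move,
    or None if the position is already lost (nim-sum == 0)."""
    nim_sum = reduce(xor, piles)
    if nim_sum == 0:
        return None  # every move hands the opponent a winning position
    for i, pile in enumerate(piles):
        target = pile ^ nim_sum
        if target < pile:  # reducing this pile to `target` zeroes the nim-sum
            return i, pile - target

# Example: piles (3, 4, 5) have nim-sum 2, so reduce pile 0 from 3 to 1.
```

Because the move derives from a closed-form invariant, an agent can also emit a human-readable justification ("this move makes the nim-sum zero"), matching the paper's emphasis on explainable decisions.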

Why it matters

This paper introduces a novel paradigm for AI game programming using LLMs, extending Shannon's work. Nemobot enables users to interactively develop and refine AI agents, pushing towards self-programming AI by integrating human creativity and learning.

Original Abstract

This paper introduces a new paradigm for AI game programming, leveraging large language models (LLMs) to extend and operationalize Claude Shannon's taxonomy of game-playing machines. Central to this paradigm is Nemobot, an interactive agentic engineering environment that enables users to create, customize, and deploy LLM-powered game agents while actively engaging with AI-driven strategies. The LLM-based chatbot, integrated within Nemobot, demonstrates its capabilities across four distinct classes of games. For dictionary-based games, it compresses state-action mappings into efficient, generalized models for rapid adaptability. In rigorously solvable games, it employs mathematical reasoning to compute optimal strategies and generates human-readable explanations for its decisions. For heuristic-based games, it synthesizes strategies by combining insights from classical minimax algorithms (see, e.g., Shannon, 1950) with crowd-sourced data. Finally, in learning-based games, it utilizes reinforcement learning with human feedback and self-critique to iteratively refine strategies through trial-and-error and imitation learning. Nemobot amplifies this framework by offering a programmable environment where users can experiment with tool-augmented generation and fine-tuning of strategic game agents. From strategic games to role-playing games, Nemobot demonstrates how AI agents can achieve a form of self-programming by integrating crowdsourced learning and human creativity to iteratively refine their own logic. This represents a step toward the long-term goal of self-programming AI.
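The minimax algorithm the abstract cites (Shannon, 1950) is the classical baseline that heuristic-based agents build on. As a rough illustration — the game interface and callback names here are our assumptions, not the paper's API — a depth-limited minimax over an abstract game looks like this:

```python
def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Depth-limited minimax over an abstract game.

    `evaluate`, `moves`, and `apply_move` are caller-supplied callbacks
    (a hypothetical interface for illustration). Returns (score, best_move).
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None  # leaf: score via heuristic evaluation
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, False,
                               evaluate, moves, apply_move)
            if score > best:
                best, best_move = score, m
    else:
        best = float("inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, True,
                               evaluate, moves, apply_move)
            if score < best:
                best, best_move = score, m
    return best, best_move
```

In the paper's framing, an LLM would supply or refine the heuristic `evaluate` function (e.g. from crowd-sourced data) while the search skeleton stays classical.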
