ArXiv TLDR

Autonomous LLM-generated Feedback for Student Exercises in Introductory Software Engineering Courses

arXiv: 2604.20803

Andreas Metzger

cs.SE

TLDR

NAILA uses LLMs to provide autonomous, 24/7 feedback for student exercises in introductory software engineering courses, addressing high enrollment.

Key contributions

  • Introduces NAILA, an LLM-powered tool providing 24/7 autonomous feedback for student SE exercises.
  • Evaluates student code against teacher-defined model solutions using specialized prompt templates.
  • Empirical study with 900+ students assessed motivations, acceptance, engagement, and academic impact.
  • Aims to overcome challenges of high student-to-teacher ratios and diverse student backgrounds.
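The evaluation step described above — comparing a student's submission against a teacher-defined model solution through a specialized prompt template — can be sketched roughly as follows. This is an illustrative sketch only: the template wording and the names `FEEDBACK_TEMPLATE` and `build_feedback_prompt` are assumptions, not taken from the paper or the NAILA tool.

```python
# Hypothetical sketch of prompt-template-based feedback generation,
# assuming a NAILA-style comparison of a student solution against a
# teacher-defined model solution. All names here are illustrative.

FEEDBACK_TEMPLATE = """You are a teaching assistant for an introductory \
software engineering course.
Compare the student's solution to the model solution and give constructive,
personalized feedback. Do not reveal the model solution verbatim.

Exercise:
{exercise}

Model solution (teacher-defined):
{model_solution}

Student solution:
{student_solution}

Feedback:"""


def build_feedback_prompt(exercise: str, model_solution: str,
                          student_solution: str) -> str:
    """Fill the specialized prompt template with the exercise context
    and both solutions; the result would be sent to an LLM."""
    return FEEDBACK_TEMPLATE.format(
        exercise=exercise,
        model_solution=model_solution,
        student_solution=student_solution,
    )


if __name__ == "__main__":
    prompt = build_feedback_prompt(
        exercise="Write a function returning the maximum of two integers.",
        model_solution="def max2(a, b):\n    return a if a >= b else b",
        student_solution="def max2(a, b):\n    if a > b: return a",
    )
    print(prompt)
```

In a real deployment the rendered prompt would be passed to an LLM API, and the returned text shown to the student; the instruction not to reveal the model solution verbatim reflects the tutoring (rather than answer-giving) goal described in the paper.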

Why it matters

This paper addresses the critical challenge of providing timely, personalized feedback in large introductory software engineering courses. By introducing and empirically validating NAILA, an LLM-based autonomous feedback system, it offers a scalable way to improve student learning and support educators, pointing toward broader adoption of AI-assisted feedback in education.

Original Abstract

Introductory Software Engineering (SE) courses face rapidly increasing student enrollment numbers, participants with diverse backgrounds, and the influence of Generative AI (GenAI) solutions. High student-to-teacher ratios make providing timely, high-quality, and personalized feedback a significant challenge for educators. To address these challenges, we introduce NAILA, a tool that provides 24/7 autonomous feedback for student exercises. Utilizing GenAI in the form of modern LLMs, NAILA processes student solutions provided in open document formats, evaluating them against teacher-defined model solutions through specialized prompt templates. We conducted an empirical study involving 900+ active students at the University of Duisburg-Essen to assess four main research questions investigating (1) the underlying motivations that drive students to either adopt or reject NAILA, (2) user acceptance, by measuring perceived usefulness and ease of use alongside subjective learning progress, (3) how often and how consistently students engage with NAILA, and (4) how using NAILA to receive AI feedback impacts academic performance compared to human feedback.
