Code for All: Educational Applications of the "Vibe Coding" Hackathon in Programming Education across All Skill Levels
Ashley J. Chen, Yijia Cao, Minghao Shao, Ramesh Karri, Muhammad Shafique
TLDR
A hackathon explored "vibe coding" (LLM-generated code) across all skill levels, revealing its educational potential and implications for AI-assisted programming.
Key contributions
- Investigated "vibe coding" (LLM-generated code) through a month-long hackathon for all skill levels.
- Participants developed projects using *only* LLM-generated code, without manual edits, across three difficulty tracks.
- Assessed educational effectiveness via project evaluations, surveys, and thematic analysis of participant feedback.
Why it matters
This paper explores how "vibe coding" with LLMs can broaden access to programming while preserving meaningful learning. It offers evidence-based insight into integrating AI-assisted development into educational and competitive settings, informing future pedagogical strategies.
Original Abstract
The emergence of large language models has enabled vibe coding, a natural-language approach to programming in which users describe intent and AI generates or revises code, potentially broadening access to programming while preserving meaningful learning outcomes. We investigate its educational value through a month-long online hackathon that welcomed participants from multiple countries, ranging from complete beginners to experienced developers. The hackathon offered three tracks with increasing technical demands. Spark emphasized basic frontend functionality and dynamic features such as buttons, forms, and API calls. Build required backend or database integration. Launch targeted production-ready web applications, including deployment. Participants were required to develop projects using only LLM-generated code without manual edits, and submitted complete chat histories, source code, demo videos, and functionality reports. We assessed educational effectiveness with a mixed-methods design that combined standardized project evaluations across functionality, user interface and user experience design, impact, prompt quality, and code readability, along with post-hackathon surveys of perceived learning outcomes and thematic analysis of open-ended feedback. Our findings describe how participants with different backgrounds engage with vibe coding as task complexity increases, how the no-manual-editing constraint shapes prompting and debugging practices, and what these patterns imply for integrating AI-assisted development into programming education and competitive learning environments.