ArXiv TLDR

Raven: Rethinking Automated Assessment for Scratch Programs via Video-Grounded Evaluation

arXiv:2604.17820

Donglin Li, Daming Li, Hanyuan Shi, Jialu Zhang

cs.SE

TLDR

Raven is a new automated assessment framework for Scratch programs that uses video analysis and LLMs to evaluate visual and interactive behaviors.

Key contributions

  • Replaces program-specific state assertions with instructor-specified, task-level video generation rules shared across all submissions (a hypothetical example follows this list).
  • Integrates LLMs and video analysis to evaluate visual and interactive program behaviors.
  • Enables consistent evaluation despite diverse student implementation strategies.
  • Significantly outperforms prior automated assessment tools in accuracy and robustness.
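To make the first contribution concrete, here is a minimal, hypothetical sketch of what an instructor-specified, task-level rule set might look like, expressed as Python data. The digest does not show Raven's actual rule syntax; the field names (`id`, `when`, `expect`) and the rule contents are illustrative assumptions, not the paper's format.

```python
# Hypothetical illustration only: the digest does not show Raven's rule
# syntax. The idea is that rules describe expected on-screen behavior for
# the whole task, rather than asserting on any one program's internal state,
# so a single rule set can grade many different implementations.
RULES = [
    {
        "id": "R1",
        "when": "after the green flag is clicked",
        "expect": "the cat sprite moves toward the right edge of the stage",
    },
    {
        "id": "R2",
        "when": "when the cat sprite touches the edge",
        "expect": "the sprite shows a speech bubble saying 'Ouch!' for about 2 seconds",
    },
]
```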

Why it matters

Traditional assertion- and test-based grading of Scratch programs is brittle because student submissions are highly heterogeneous and event-driven, so real classrooms fall back on manual inspection, delayed feedback, and inconsistency across instructors. Raven provides a scalable, consistent alternative by evaluating the visual and interactive behaviors a program actually exhibits, improving assessment in introductory computing education.

Original Abstract

Block-based programming environments such as Scratch are widely used in introductory computing education, yet scalable and reliable automated assessment remains elusive. Scratch programs are highly heterogeneous, event-driven, and visually grounded, which makes traditional assertion-based or test-based grading brittle and difficult to scale. As a result, assessment in real Scratch classrooms still relies heavily on manual inspection and delayed feedback, introducing inconsistency across instructors and limiting scalability. We present Raven, an automated assessment framework for Scratch that replaces program-specific state assertions with instructor-specified, task-level video generation rules shared across all student submissions. Raven integrates large language models with video analysis to evaluate whether a program's observed visual and interactive behaviors satisfy grading criteria, without requiring explicit test cases or predefined outputs. This design enables consistent evaluation despite substantial diversity in implementation strategies and interaction sequences. We evaluate Raven on 13 real Scratch assignments comprising over 140 student submissions with ground-truth labels from human graders. The results show that Raven significantly outperforms prior automated assessment tools in both grading accuracy and robustness across diverse programming styles. A classroom study with 30 students and 10 instructors further demonstrates strong user acceptance and practical applicability. Together, these findings highlight the effectiveness of task-level behavioral abstractions for scalable assessment of open-ended, event-driven programs.
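As a rough illustration of the kind of pipeline the abstract describes, the sketch below samples frames from a recorded Scratch run and asks a vision-capable LLM whether one grading rule is satisfied. Everything here is an assumption layered on the summary: the paper's implementation is not shown in this digest, and the video file name, frame-sampling rate, OpenAI client, and model choice are placeholders, not Raven's actual stack.

```python
"""Minimal sketch of a video-grounded grading loop in the spirit of Raven.

Assumptions (not from the paper): the recorded execution is an .mp4 file,
frames are sampled with OpenCV, and a vision-capable LLM is queried via
the OpenAI Chat Completions API. Prompts, models, and rule wording are
illustrative only.
"""
import base64

import cv2  # pip install opencv-python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def sample_frames(video_path: str, every_n: int = 30) -> list[str]:
    """Return every n-th frame of the recording as a base64-encoded JPEG."""
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            ok, buf = cv2.imencode(".jpg", frame)
            if ok:
                frames.append(base64.b64encode(buf.tobytes()).decode())
        i += 1
    cap.release()
    return frames


def check_rule(frames: list[str], rule: str) -> str:
    """Ask a vision LLM whether the sampled frames satisfy one grading rule."""
    content = [{
        "type": "text",
        "text": (
            f"Grading rule: {rule}\n"
            "Based only on these frames from a Scratch program's recorded "
            "run, answer PASS or FAIL with a one-sentence justification."
        ),
    }]
    content += [
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{f}"}}
        for f in frames[:8]  # cap the number of images per request
    ]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper's model is unspecified
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    frames = sample_frames("submission_042.mp4")  # hypothetical recording
    print(check_rule(frames, "After the green flag is clicked, "
                             "the cat sprite moves to the right edge."))
```

Presumably a full grader would run every rule in the shared set over the same recording and aggregate the PASS/FAIL verdicts into a score, which is what allows one task-level specification to cover many distinct implementation strategies.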
