Human-in-the-Loop Benchmarking of Heterogeneous LLMs for Automated Competency Assessment in Secondary Level Mathematics
Jatin Bhusal, Nancy Mahatha, Aayush Acharya, Raunak Regmi
TLDR
This paper introduces a human-in-the-loop framework for benchmarking LLMs on automated math competency assessment, finding that architectural compliance, not parameter scale, is the decisive factor.
Key contributions
- Developed a "Human-in-the-Loop" framework for benchmarking LLMs in secondary math assessment.
- Created a multi-dimensional rubric for Grade 10 math, covering four topics and four core competencies (a structural sketch follows this list).
- Benchmarked diverse LLMs against human experts, identifying an "Architecture-compatibility gap".
- Showed that architectural compliance with instruction constraints, rather than raw parameter count, drives LLM performance on rubric-constrained tasks.
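The rubric can be pictured as a small topic-by-competency grid. The sketch below is a minimal, hypothetical Python encoding: the competency names come from the abstract, but the topic names, the 0-3 level scale, and the helper `empty_rubric` are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch of the 4-topic x 4-competency rubric grid.
# Competency names follow the abstract; topic names and the 0-3
# level scale are illustrative assumptions.

COMPETENCIES = [
    "Comprehension",
    "Knowledge",
    "Operational Fluency",
    "Behavior and Correlation",
]

# Placeholder names -- the paper covers four Grade 10 Optional
# Mathematics topics that this digest does not name.
TOPICS = ["Topic A", "Topic B", "Topic C", "Topic D"]

def empty_rubric() -> dict:
    """Return an empty topic x competency grid to hold assigned levels."""
    return {topic: {comp: None for comp in COMPETENCIES} for topic in TOPICS}

scores = empty_rubric()
scores["Topic A"]["Operational Fluency"] = 2  # e.g., an expert- or LLM-assigned level (0-3)
```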
Why it matters
Competency-Based Education needs automated assessment tools, and this paper provides a framework for evaluating LLMs in that role while highlighting critical architectural considerations. It suggests LLMs can assist educators by extracting preliminary competency evidence, improving efficiency within a human-supervised workflow.
Original Abstract
As Competency-Based Education (CBE) gains traction around the world, the shift from marks-based assessment to qualitative competency mapping poses a manual challenge for educators. This paper tackles this bottleneck by proposing a "Human-in-the-Loop" benchmarking framework to assess the effectiveness of multiple LLMs in automating secondary-level mathematics assessment. Based on the Grade 10 Optional Mathematics curriculum in Nepal, we created a multi-dimensional rubric spanning four topics and four cross-cutting competencies: Comprehension, Knowledge, Operational Fluency, and Behavior and Correlation. A multi-provider ensemble, consisting of the open-weight models -- Eagle (Llama 3.1-8B) and Orion (Llama 3.3-70B) -- and the proprietary frontier models Nova (Gemini 2.5 Flash) and Lyra (Gemini 3 Pro), was benchmarked against a ground truth defined by two senior mathematics faculty members (kappa_w = 0.8652). The findings reveal a marked "Architecture-compatibility gap": although the Gemini-based Sparse Mixture-of-Experts (MoE) models achieved "Fair Agreement" (kappa_w ~ 0.38), the larger Orion (70B) model exhibited "No Agreement" (kappa_w = -0.0261), suggesting that architectural compliance with instruction constraints outweighs raw parameter scale in rubric-constrained tasks. We conclude that while LLMs are not yet suitable for autonomous certification, they provide high-value assistive support for preliminary evidence extraction within a "Human-in-the-Loop" framework.
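The kappa_w figures quoted above are weighted Cohen's kappa values between a grader (an LLM or the second expert) and the ground-truth labels. As a minimal sketch, assuming ordinal 0-3 rubric levels and quadratic weighting (the paper's exact weighting scheme is not specified in this digest), the comparison can be computed with scikit-learn; the score arrays below are made-up placeholders, not study data.

```python
# Minimal sketch: weighted Cohen's kappa between LLM-assigned rubric
# levels and the expert ground truth. The arrays are illustrative
# placeholders; quadratic weighting is an assumption for ordinal levels.
from sklearn.metrics import cohen_kappa_score

expert_levels = [3, 2, 2, 1, 0, 3, 1, 2]   # consensus levels from the two senior faculty
model_levels  = [3, 2, 1, 1, 0, 2, 1, 3]   # one LLM's levels for the same responses

# Quadratic weights penalize large ordinal disagreements more heavily.
kappa_w = cohen_kappa_score(expert_levels, model_levels, weights="quadratic")
print(f"kappa_w = {kappa_w:.4f}")
```

On this scale, a kappa_w around 0.38 corresponds to the "Fair Agreement" band reported for the Gemini-based models, while values at or below zero, as with Orion (70B), indicate no agreement beyond chance.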