mcdok at SemEval-2026 Task 13: Finetuning LLMs for Detection of Machine-Generated Code
Adam Skurla, Dominik Macko, Jakub Simko
TLDR
This paper describes mcdok's approach to SemEval-2026 Task 13: finetuning LLMs to detect machine-generated code across multiple programming languages and subtasks.
Key contributions
- Addresses SemEval-2026 Task 13: multi-domain detection of machine-generated code.
- Adapts the mdok machine-generated text detection approach to code-specific detection tasks.
- Explores finetuned LLMs to detect generated, hybrid, and adversarially modified code.
- Achieves competitive results in all three subtasks, with room for further improvement.
Why it matters
This paper addresses the challenge of detecting machine-generated code, which matters for code quality, security, and intellectual property. It presents a competitive LLM-based approach for multi-domain detection, including hybrid and adversarially modified code, showing both current capabilities and areas for future improvement.
Original Abstract
Multi-domain detection of machine-generated code snippets in various programming languages is a challenging task. SemEval-2026 Task 13 approaches this challenge from various angles, as a binary detection problem as well as attribution of the source. Specifically, its subtasks also cover detection of the generator LLM family, hybrid code co-generated by humans and machines, and adversarially modified code hiding its origin. Our submitted systems adjusted the existing mdok approach (focused on machine-generated text detection) to these specific kinds of problems by exploring various base models more suitable for code understanding. The results indicate that the submitted systems are competitive in all three subtasks. However, the margins from the top-performing systems are significant, and thus further improvements are possible.
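As a rough illustration of how the three subtasks described in the abstract can be framed as classification problems for a finetuned LLM, here is a minimal sketch. The label names (and the example generator families) are placeholders for illustration, not the official task taxonomy or labels from the paper.

```python
# Sketch of the SemEval-2026 Task 13 subtasks as classification problems,
# based only on the abstract above. All label names here are hypothetical
# placeholders, not the official task labels.

SUBTASKS = {
    # Binary detection: is the snippet human-written or machine-generated?
    "binary_detection": ["human", "machine"],
    # Source attribution: which generator LLM family produced the snippet?
    # (family names below are invented examples)
    "family_attribution": ["family_A", "family_B", "family_C"],
    # Hybrid / adversarial: snippets co-generated by humans and machines,
    # or adversarially modified to hide their origin.
    "hybrid_adversarial": ["human", "machine", "hybrid", "adversarial"],
}


def num_classes(subtask: str) -> int:
    """Size of the classification head a finetuned LLM would need
    for the given subtask."""
    return len(SUBTASKS[subtask])
```

In such a setup, each subtask would reuse the same finetuned code-understanding backbone and differ only in the size of the output head, which is one plausible way to adapt a text-detection pipeline like mdok to these code tasks.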