Can Code Evaluation Metrics Detect Code Plagiarism?
TLDR
Code Evaluation Metrics (CEMs) rank plagiarised code pairs about as well as dedicated detection tools across all modification levels (L1-L6); with preprocessing, the best metric, CrystalBLEU, even surpasses Dolos overall.
Key contributions
- Evaluated five Code Evaluation Metrics (CEMs) against the SOTA plagiarism detection tools JPlag and Dolos on three labelled datasets (ConPlag raw, ConPlag template-free, and IRPlag).
- Found that, without preprocessing, CrystalBLEU, CodeBLEU, and RUBY each outperform JPlag in ranking performance.
- Found that, with preprocessing, CrystalBLEU surpasses Dolos in overall ranking performance.
- Showed CEMs are comparable to dedicated tools across all modification levels (L1-L6); a minimal sketch of the pairwise-scoring setup follows this list.
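To make the setup concrete, here is a minimal sketch of how a BLEU-style CEM can be repurposed as a pairwise plagiarism scorer: every pair of submissions receives a similarity score, and pairs are ranked by that score rather than thresholded. This is not the authors' implementation; the naive tokenizer, the add-one smoothing, and the omission of a brevity penalty are all illustrative assumptions.

```python
from collections import Counter
import math
import re

def tokenize(code: str) -> list[str]:
    # Naive lexer: strip line comments, then pull out identifiers,
    # numbers, and single punctuation characters (an illustrative
    # stand-in for a real language-aware tokenizer).
    code = re.sub(r"//.*|#.*", "", code)
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", code)

def ngrams(tokens: list[str], n: int) -> Counter:
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu_similarity(ref: str, cand: str, max_n: int = 4) -> float:
    # BLEU-style score: geometric mean of clipped n-gram precisions
    # with add-one smoothing; identical submissions score 1.0.
    ref_toks, cand_toks = tokenize(ref), tokenize(cand)
    log_prec = 0.0
    for n in range(1, max_n + 1):
        ref_ng, cand_ng = ngrams(ref_toks, n), ngrams(cand_toks, n)
        overlap = sum((ref_ng & cand_ng).values())  # clipped matches
        total = max(sum(cand_ng.values()), 1)
        log_prec += math.log((overlap + 1) / (total + 1))
    return math.exp(log_prec / max_n)

# Score one candidate pair; in the full pipeline, all submission
# pairs would be ranked by this score (threshold-free).
a = "int add(int x, int y) { return x + y; }"
b = "int add(int p, int q) { return p + q; }  // renamed vars"
print(f"similarity = {bleu_similarity(a, b):.3f}")
```

CodeBLEU and CrystalBLEU refine exactly this idea: CodeBLEU adds syntax- and dataflow-aware components, while CrystalBLEU down-weights trivially shared n-grams (language keywords, punctuation) that inflate naive n-gram overlap.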
Why it matters
This research is crucial for maintaining academic integrity in software engineering education by validating new methods for plagiarism detection. It shows that metrics designed for code generation can be repurposed, potentially offering more flexible and robust tools for educators and developers.
Original Abstract
Source Code Plagiarism Detection (SCPD) plays an important role in maintaining fairness and academic integrity in software engineering education. Code Evaluation Metrics (CEMs) are developed for assessing code generation tasks. However, it remains unclear whether such metrics can reliably detect plagiarism across different levels of modification (L1-L6), increasing in complexity. In this paper, we perform a comparative empirical study using two open-source labelled datasets, ConPlag (raw and template-free versions) and IRPlag. We evaluate five CEMs, namely CodeBLEU, CrystalBLEU, RUBY, Tree Structured Edit Distance (TSED), and CodeBERTScore. The performance is evaluated using threshold-free ranking-based measures to assess overall, per dataset, and per-level plagiarism performance. The results are compared against state-of-the-art (SOTA) Source Code Plagiarism Detection Tools (SCPDTs), JPlag and Dolos. Our findings show that without preprocessing, Dolos achieves the highest overall ranking performance, while among the individual metrics, CrystalBLEU, CodeBLEU, and RUBY outperform JPlag. Performance is strongest at L1 and drops from L4 onward, while CrystalBLEU remains competitive on L6. With preprocessing, CrystalBLEU surpasses Dolos overall. Per dataset, Dolos achieved the best ranking on the ConPlag raw dataset, while CrystalBLEU was the best-performing metric on the remaining datasets. At the plagiarism levels, Dolos remains strongest on L4, while CrystalBLEU leads most of the remaining difficult levels. These results indicate that CEMs are comparable to dedicated tools in terms of ranking metrics.
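The "threshold-free ranking-based measures" mentioned in the abstract are measures such as ROC-AUC, which score how well a metric ranks plagiarised pairs above non-plagiarised ones without committing to a decision threshold. A self-contained sketch, computing AUC via the Mann-Whitney rank statistic on made-up scores and labels:

```python
def roc_auc(scores: list[float], labels: list[int]) -> float:
    # AUC equals the probability that a randomly chosen plagiarised
    # pair outranks a non-plagiarised one (Mann-Whitney statistic),
    # with ties counted as 0.5.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical CEM similarity scores for five submission pairs,
# where 1 = labelled plagiarism and 0 = independent work.
scores = [0.91, 0.84, 0.40, 0.77, 0.33]
labels = [1, 1, 0, 1, 0]
print(f"AUC = {roc_auc(scores, labels):.2f}")  # 1.00: perfect ranking
```

Because every CEM and SCPDT emits scores on its own scale, a scale-invariant ranking measure like this is what makes a fair head-to-head comparison possible.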