From Theory to Practice: Code Generation Using LLMs for CAPEC and CWE Frameworks
Murtuza Shahzad, Joseph Wilson, Ibrahim Al Azher, Hamed Alhoori, Mona Rahimi
TLDR
This paper introduces a dataset of 615 LLM-generated vulnerable code snippets, each linked to a CAPEC/CWE description, intended to support vulnerability research and the training of ML models.
Key contributions
- Created a novel dataset of 615 vulnerable code snippets linked to CAPEC/CWE descriptions.
- Developed a robust methodology using GPT-4o, Llama, and Claude for code generation.
- Dataset covers Java, Python, and JavaScript, offering diverse examples for research.
- Preliminary evaluations show high consistency across the three models, with 0.98 cosine similarity among generated snippets.
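The 0.98 figure is a cosine similarity between code snippets generated by different models. The paper does not specify how snippets are vectorized, so the sketch below assumes a simple bag-of-tokens representation purely to illustrate the metric; any embedding (TF-IDF, neural code embeddings) could be substituted.

```python
import math
from collections import Counter

def cosine_similarity(code_a: str, code_b: str) -> float:
    """Cosine similarity between two code snippets using token-count vectors.

    Illustrative only: the paper reports 0.98 similarity among model outputs
    but does not state its vectorization, so whitespace tokens are counted here.
    """
    vec_a, vec_b = Counter(code_a.split()), Counter(code_b.split())
    dot = sum(vec_a[t] * vec_b[t] for t in set(vec_a) & set(vec_b))
    norm_a = math.sqrt(sum(c * c for c in vec_a.values()))
    norm_b = math.sqrt(sum(c * c for c in vec_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Two near-identical (hypothetical) SQL-injection snippets from different models:
snippet_a = "query = 'SELECT * FROM users WHERE name = ' + user_input"
snippet_b = "query = 'SELECT * FROM users WHERE id = ' + user_input"
print(round(cosine_similarity(snippet_a, snippet_b), 2))
```

Snippets that differ in only a token or two score close to 1.0, which is how near-identical outputs from GPT-4o, Llama, and Claude would register as highly consistent.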
Why it matters
Existing vulnerability datasets rarely pair detailed code snippets with the specific vulnerability descriptions they exhibit. By providing a large, LLM-generated resource that does, this paper both deepens the understanding of security vulnerabilities in code and supplies training data for machine learning models aimed at automated vulnerability detection and remediation.
Original Abstract
The increasing complexity and volume of software systems have heightened the importance of identifying and mitigating security vulnerabilities. The existing software vulnerability datasets frequently fall short in providing comprehensive, detailed code snippets explicitly linked to specific vulnerability descriptions, reducing their utility for advanced research and hindering efforts to develop a deeper understanding of security vulnerabilities. To address this challenge, we present a novel dataset that provides examples of vulnerable code snippets corresponding to Common Attack Pattern Enumerations and Classifications (CAPEC) and Common Weakness Enumeration (CWE) descriptions. By employing the capabilities of Generative Pre-trained Transformer (GPT) models, we have developed a robust methodology for generating these examples. Our approach utilizes GPT-4o, Llama and Claude models to generate code snippets that exhibit specific vulnerabilities as described in CAPEC and CWE documentation. This dataset not only enhances the understanding of security vulnerabilities in code but also serves as a valuable resource for training machine learning models focused on automatic vulnerability detection and remediation. Preliminary evaluations suggest that the dataset generated by Large Language Models demonstrates high accuracy and can serve as a reliable reference for vulnerability identification systems. We found consistent results across the three models, with 0.98 cosine similarity among codes. The final dataset comprises 615 CAPEC code snippets in three programming languages: Java, Python, and JavaScript, making it one of the most extensive and diverse resources in this domain.
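The abstract describes prompting GPT-4o, Llama, and Claude with CAPEC/CWE documentation to elicit code exhibiting a specific weakness. The paper does not publish its prompts, so the helper below is a hypothetical sketch of how a CWE entry might be turned into a model-agnostic generation prompt; the wording and function name are illustrative assumptions, not the authors' method.

```python
def build_vulnerability_prompt(cwe_id: str, description: str, language: str) -> str:
    """Turn a CWE entry into a code-generation prompt.

    Hypothetical helper: the paper does not disclose its exact prompts,
    so this wording is an assumption for illustration only.
    """
    return (
        "You are generating an intentionally vulnerable example "
        "for security research.\n"
        f"Weakness: {cwe_id}\n"
        f"Description: {description}\n"
        f"Write a short {language} snippet that exhibits this weakness, "
        "and mark the vulnerable line with a comment."
    )

# Example: one of the dataset's three target languages (Java, Python, JavaScript).
prompt = build_vulnerability_prompt(
    "CWE-89",
    "Improper Neutralization of Special Elements used in an SQL Command",
    "Python",
)
print(prompt)
```

The same prompt string could then be sent to each of the three models, and the resulting snippets compared (e.g., via cosine similarity) to check cross-model consistency as the paper reports.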