RedShell: A Generative AI-Based Approach to Ethical Hacking
Ricardo Bessa, Rui Claro, João Trindade, João Lourenço
TLDR
RedShell uses generative AI to create malicious PowerShell code for ethical hacking, supported by a new dataset for training and evaluation.
Key contributions
- Proposes RedShell, a generative AI tool for malicious PowerShell code generation.
- Introduces a ground truth dataset for fine-tuning offensive code generators.
- Demonstrates that RedShell generates syntactically valid PowerShell, with fewer than 10% of generated samples producing parse errors.
- Achieves competitive semantic consistency with reference snippets (mean Edit Distance similarity >50%, mean METEOR >40%).
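To make the similarity figures concrete, a normalized edit-distance score can be computed as below. This is a minimal Python sketch of the general metric, not the paper's actual evaluation code; the function names and the max-length normalization are illustrative assumptions.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance
    # (minimum number of insertions, deletions, substitutions).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def similarity(generated: str, reference: str) -> float:
    # Normalize the distance by the longer string's length,
    # yielding a 0-1 score (reported as a percentage in the paper).
    longest = max(len(generated), len(reference), 1)
    return 1.0 - edit_distance(generated, reference) / longest
```

For example, `similarity("Get-Process", "Get-Process -Name pwsh")` returns 0.5, i.e. a 50% edit-distance similarity between a generated snippet and its reference.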
Why it matters
This paper addresses the data-scarcity challenge in training offensive code generators, advancing the application of generative AI to ethical hacking. It provides a practical tool and dataset, and highlights the potential of LLMs for future automation in controlled cybersecurity environments.
Original Abstract
The application of Machine Learning techniques in code generation is now a common practice for most developers. Tools such as ChatGPT from OpenAI leverage the natural language processing capabilities of Large Language Models to generate machine code from natural language descriptions. In the cybersecurity field, red teams can also take advantage of generative models to build malicious code generators, providing more automation to Pentest audits. However, the application of Large Language Models in malicious code generation remains challenging due to the lack of data to train and evaluate offensive code generators. In this work, we propose RedShell, a tool that allows ethical hackers to generate malicious PowerShell code. We also introduce a ground truth dataset, combining publicly available code samples to fine-tune models in malicious PowerShell generation. Our experiments demonstrate the strong capabilities of RedShell in generating syntactically valid PowerShell, with fewer than 10% of the generated samples resulting in parse errors. Furthermore, our specialized model was able to produce samples that were semantically consistent with reference snippets, achieving a competitive performance on standard output similarity metrics such as Edit Distance and METEOR, with their mean similarity scores exceeding 50% and 40%, respectively. This work sheds light on the state-of-the-art research in the field of Generative AI applied to Pentesting, and also serves as a steppingstone for future advancements, highlighting the potential benefits these models hold within such controlled environments.