Threat Modelling using Domain-Adapted Language Models: Empirical Evaluation and Insights
Saba Pourhanifeh, AbdulAziz AbdulGhaffar, Ashraf Matrawy
TLDR
An evaluation of domain-adapted LLMs for STRIDE threat modeling reveals inconsistent performance and fundamental limitations, motivating the need for task-specific reasoning.
Key contributions
- Systematically evaluated domain-adapted LLMs/SLMs against general-purpose models for STRIDE threat modeling in 5G security.
- Found that domain-adapted models do not consistently outperform general ones, and that decoding strategies significantly affect output.
- Observed that larger models show inconsistent performance gains that remain insufficient for reliable threat modeling.
- Highlighted fundamental LLM limitations for structured threat modeling, suggesting a need for task-specific reasoning.
Why it matters
This paper provides a crucial empirical evaluation of LLMs for structured threat modeling, a critical cybersecurity task. It challenges assumptions about domain adaptation and model scaling, revealing current LLMs' fundamental limitations. The findings guide future research towards incorporating task-specific reasoning and stronger security grounding.
Original Abstract
Large Language Models (LLMs) are increasingly explored for cybersecurity applications such as vulnerability detection. In the domain of threat modelling, prior work has primarily evaluated general-purpose Large Language Models under limited prompting settings. In this study, we extend the research area of structured threat modelling by systematically evaluating domain-adapted language models of different sizes against their general counterparts. We use both LLMs and Small Language Models (SLMs) that were domain-adapted to telecommunications and cybersecurity. For structured threat modelling, we selected the widely used STRIDE approach, with 5G security as the application area. We present a comprehensive empirical evaluation using 52 different configurations (on 8 different language models) to analyze the impact of 1) domain adaptation, 2) model scale, 3) decoding strategies (greedy vs. stochastic sampling), and 4) prompting technique on STRIDE threat classification. Our results show that domain-adapted models do not consistently outperform their general-purpose counterparts, and that decoding strategies significantly affect model behavior and output validity. They also show that while larger models generally achieve higher performance, these gains are neither consistent nor sufficient for reliable threat modelling. These findings highlight fundamental limitations of current LLMs for structured threat modelling tasks and suggest that improvements require more than additional training data or model scaling, motivating the need for more task-specific reasoning and stronger grounding in security concepts. We present insights on the invalid outputs encountered and offer prompting suggestions tailored specifically to STRIDE threat modelling.
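To make the evaluation setup concrete, the sketch below illustrates the kind of experimental grid the abstract describes: a STRIDE classification prompt for 5G scenarios, crossed with decoding strategies and prompting techniques, plus a validity check on model replies. All names, the prompt wording, and the grid dimensions are assumptions for illustration; the paper's actual 52 configurations and prompts are not specified in this digest.

```python
from itertools import product

# The six STRIDE threat categories.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service",
          "Elevation of Privilege"]

# Hypothetical grid axes (the paper's real axes and counts differ).
MODELS = [f"model_{i}" for i in range(8)]            # 8 language models, names assumed
DECODING = [
    {"strategy": "greedy", "do_sample": False},
    {"strategy": "sampling", "do_sample": True, "temperature": 0.7},
]
PROMPTING = ["zero-shot", "few-shot"]                # prompting techniques assumed

def build_prompt(scenario: str, technique: str) -> str:
    """Compose a STRIDE classification prompt for a 5G threat scenario."""
    header = ("Classify the following 5G security scenario into exactly one "
              f"STRIDE category ({', '.join(STRIDE)}). "
              "Answer with the category name only.")
    example = ""
    if technique == "few-shot":
        example = ("\nExample:\nScenario: An attacker replays captured "
                   "authentication messages to impersonate a subscriber.\n"
                   "Category: Spoofing\n")
    return f"{header}{example}\nScenario: {scenario}\nCategory:"

def is_valid_output(text: str) -> bool:
    """A reply is valid only if it names exactly one STRIDE category,
    analogous to the invalid-output analysis the abstract mentions."""
    matches = [c for c in STRIDE if c.lower() in text.lower()]
    return len(matches) == 1

# Enumerate the model x decoding x prompting grid.
configs = list(product(MODELS, DECODING, PROMPTING))
```

A hedge-replying harness would then send `build_prompt(...)` to each model under each decoding config and score replies with `is_valid_output`, which captures both classification accuracy and output-validity failures in one loop.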