Assertain: Automated Security Assertion Generation Using Large Language Models
Shams Tarek, Dipayan Saha, Khan Thamid Hasan, Sujan Kumar Saha, Mark Tehranipoor, and 1 more
TLDR
Assertain automates security assertion generation for hardware designs using LLMs and threat intelligence, improving assertion accuracy and vulnerability coverage.
Key contributions
- Combines RTL analysis, CWE mapping, and threat models to auto-generate security assertions.
- Uses large language models with self-reflection to ensure syntactic and semantic correctness.
- Outperforms GPT-5 by 59-68% in correct assertion generation, unique CWE coverage, and architectural flaw detection.
- Reduces manual effort and expands detection of architectural hardware flaws.
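The self-reflection mechanism above can be sketched as a generate-check-refine loop: the model proposes an assertion, a checker (e.g. a SystemVerilog compiler or semantic validator) reports errors, and the feedback is appended to the prompt for the next attempt. This is a minimal illustrative sketch, not Assertain's actual implementation; all function names and the toy model/checker below are hypothetical.

```python
# Minimal sketch of an LLM self-reflection refinement loop (hypothetical,
# not Assertain's real pipeline): generate a candidate assertion, check it,
# and feed the checker's error back until it passes or attempts run out.

def reflect_refine(generate, check, prompt, max_rounds=3):
    """generate(prompt) -> candidate string; check(candidate) -> error or None."""
    candidate = generate(prompt)
    for _ in range(max_rounds):
        error = check(candidate)
        if error is None:
            return candidate  # accepted by the checker
        # Append the feedback so the next generation can self-correct.
        prompt = (f"{prompt}\nPrevious attempt:\n{candidate}"
                  f"\nChecker error: {error}\nPlease fix it.")
        candidate = generate(prompt)
    return candidate  # best effort after max_rounds

# Toy stand-ins: a "model" that forgets the trailing semicolon once,
# and a "checker" that only verifies the assertion ends with ';'.
attempts = iter([
    "assert property (@(posedge clk) lock |-> !debug_en)",
    "assert property (@(posedge clk) lock |-> !debug_en);",
])
fixed = reflect_refine(lambda p: next(attempts),
                       lambda c: None if c.endswith(";") else "missing ';'",
                       "Write an SVA blocking debug access while locked.")
print(fixed)
```

In the real framework the checker would be a SystemVerilog syntax/semantics pass, and the generator an LLM call; only the feedback-loop shape is shown here.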
Why it matters
Manual security property specification is a bottleneck in hardware verification. Assertain automates this, improving accuracy and coverage, thus enhancing hardware security assurance.
Original Abstract
The increasing complexity of modern system-on-chip designs amplifies hardware security risks and makes manual security property specification a major bottleneck in formal property verification. This paper presents Assertain, an automated framework that integrates RTL design analysis, Common Weakness Enumeration (CWE) mapping, and threat model intelligence to automatically generate security properties and executable SystemVerilog Assertions. Assertain leverages large language models with a self-reflection refinement mechanism to ensure both syntactic correctness and semantic consistency. Evaluated on 11 representative hardware designs, Assertain outperforms GPT-5 by 61.22%, 59.49%, and 67.92% in correct assertion generation, unique CWE coverage, and architectural flaw detection, respectively. These results demonstrate that Assertain significantly expands vulnerability coverage, improves assertion quality, and reduces manual effort in hardware security verification.