De Jure: Iterative LLM Self-Refinement for Structured Extraction of Regulatory Rules
Keerat Guliani, Deepkamal Gill, David Landsman, Nima Eshraghi, Krishna Kumar, et al.
TLDR
De Jure is an iterative LLM self-refinement pipeline for automated, structured extraction of regulatory rules from legal documents.
Key contributions
- Presents De Jure, an automated, domain-agnostic pipeline for structured regulatory rule extraction.
- Utilizes a four-stage process: normalization, semantic decomposition, multi-criteria evaluation, and iterative repair.
- Employs an LLM-as-a-judge for 19-dimensional evaluation and self-correction without human annotation.
- Demonstrates monotonic improvement and generalization across finance, healthcare, and AI governance domains.
Why it matters
Converting dense legal text into machine-readable rules is a costly, expert-intensive process. De Jure automates this with LLM self-refinement, removing the need for human annotation and domain-specific prompting. This offers a scalable, auditable path for aligning LLMs with complex regulatory obligations.
Original Abstract
Regulatory documents encode legally binding obligations that LLM-based systems must respect. Yet converting dense, hierarchically structured legal text into machine-readable rules remains a costly, expert-intensive process. We present De Jure, a fully automated, domain-agnostic pipeline for extracting structured regulatory rules from raw documents, requiring no human annotation, domain-specific prompting, or annotated gold data. De Jure operates through four sequential stages: normalization of source documents into structured Markdown; LLM-driven semantic decomposition into structured rule units; multi-criteria LLM-as-a-judge evaluation across 19 dimensions spanning metadata, definitions, and rule semantics; and iterative repair of low-scoring extractions within a bounded regeneration budget, where upstream components are repaired before rule units are evaluated. We evaluate De Jure across four models on three regulatory corpora spanning finance, healthcare, and AI governance. On the finance domain, De Jure yields consistent and monotonic improvement in extraction quality, reaching peak performance within three judge-guided iterations. De Jure generalizes effectively to healthcare and AI governance, maintaining high performance across both open- and closed-source models. In a downstream compliance question-answering evaluation via RAG, responses grounded in De Jure extracted rules are preferred over prior work in 73.8% of cases at single-rule retrieval depth, rising to 84.0% under broader retrieval, confirming that extraction fidelity translates directly into downstream utility. These results demonstrate that explicit, interpretable evaluation criteria can substitute for human annotation in complex regulatory domains, offering a scalable and auditable path toward regulation-grounded LLM alignment.
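The four-stage loop described in the abstract — normalize, decompose, judge across 19 dimensions, then repair low scorers within a bounded budget — can be sketched as below. This is a minimal illustrative sketch only: every name (`extract_rules`, `judge_scores`, `repair`, `PASS_THRESHOLD`) and the 0.8 threshold are assumptions, not the authors' actual API or hyperparameters; the stand-in functions replace what would be LLM calls in the real pipeline.

```python
# Hypothetical sketch of De Jure's judge-guided self-refinement loop.
# Function bodies are placeholders for LLM calls; names and thresholds
# are illustrative assumptions, not the paper's implementation.

N_DIMENSIONS = 19      # judge criteria: metadata, definitions, rule semantics
PASS_THRESHOLD = 0.8   # assumed minimum mean judge score per rule unit
MAX_ITERATIONS = 3     # bounded regeneration budget (paper reports peak within 3)

def extract_rules(markdown_doc):
    """Stand-in for LLM-driven semantic decomposition into rule units."""
    return [{"text": seg} for seg in markdown_doc.split("\n\n") if seg.strip()]

def judge_scores(rule):
    """Stand-in for the 19-dimension LLM-as-a-judge evaluation."""
    return [0.9] * N_DIMENSIONS  # placeholder: real scores come from an LLM

def repair(rule, scores):
    """Stand-in for regenerating a low-scoring extraction."""
    return dict(rule, repaired=True)

def de_jure_pipeline(raw_text):
    markdown_doc = raw_text                      # stage 1: normalization (elided)
    rules = extract_rules(markdown_doc)          # stage 2: semantic decomposition
    for _ in range(MAX_ITERATIONS):              # stage 4: bounded iterative repair
        low_scoring = []
        for i, rule in enumerate(rules):
            scores = judge_scores(rule)          # stage 3: multi-criteria judging
            if sum(scores) / len(scores) < PASS_THRESHOLD:
                low_scoring.append((i, scores))
        if not low_scoring:
            break                                # all rule units pass: stop early
        for i, scores in low_scoring:
            rules[i] = repair(rules[i], scores)
    return rules
```

Note the ordering implied by the abstract: upstream components (normalization, decomposition) are repaired before rule units are evaluated, and the loop terminates either when all units pass or when the regeneration budget is exhausted.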