ArXiv TLDR

Frugal Knowledge Graph Construction with Local LLMs: A Zero-Shot Pipeline, Self-Consistency and Wisdom of Artificial Crowds

arXiv:2604.11104

Pierre Jourlin

cs.AI · cs.IR · cs.LG · cs.NE

TLDR

A frugal, zero-shot pipeline for knowledge graph construction and exploitation uses local LLMs on consumer hardware, achieving strong results with novel diversity mechanisms.

Key contributions

  • Presents a zero-shot knowledge graph construction pipeline using local LLMs on consumer-grade hardware.
  • Achieves 0.70 F1 for relation extraction and 0.46 EM for multi-hop reasoning in a zero-shot setting.
  • Proposes self-consistency sampling and a confidence-routing cascade, boosting multi-hop EM to 0.55.
  • Identifies an "agreement paradox" where high consensus among LLM samples may indicate hallucination.
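The self-consistency and confidence-routing mechanisms listed above can be sketched as a small voting-and-cascade loop. This is a minimal illustration, not the paper's implementation: the `model` callables, the `0.6` routing threshold, and the answer-normalization step are all assumptions; only the k=5, T=0.7 sampling setup and the Phi-4 → GPT-OSS cascade are taken from the paper.

```python
from collections import Counter

def vote(model, question, k=5, temperature=0.7):
    """Self-consistency: sample k answers at nonzero temperature and
    majority-vote. `model` is a hypothetical callable wrapping a local
    LLM; k=5 and T=0.7 mirror the setting reported in the paper."""
    answers = [model(question, temperature=temperature) for _ in range(k)]
    best, n = Counter(a.strip().lower() for a in answers).most_common(1)[0]
    return best, n / k  # (majority answer, vote share as a confidence proxy)

def cascade(primary, fallback, question, k=5, threshold=0.6):
    """Confidence-routing cascade: keep the primary model's answer when
    its vote share clears the threshold; otherwise reroute the question
    to a stronger fallback model (Phi-4 -> GPT-OSS in the paper).
    The 0.6 threshold here is a hypothetical choice."""
    answer, confidence = vote(primary, question, k=k)
    if confidence >= threshold:
        return answer, False   # answered by the primary model
    answer, _ = vote(fallback, question, k=k)
    return answer, True        # rerouted to the fallback model
```

Note that the vote share is only a proxy for confidence: the "agreement paradox" the paper identifies means a unanimous vote can still be a collective hallucination, so the threshold governs routing cost, not correctness.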

Why it matters

This paper is significant for demonstrating that powerful knowledge graph construction and reasoning can be achieved frugally with local LLMs. It helps democratize advanced NLP tasks and offers useful insights for improving LLM reliability and reducing hallucinations in complex reasoning.

Original Abstract

This paper presents an empirical study of a multi-model zero-shot pipeline for knowledge graph construction and exploitation, executed entirely through local inference on consumer-grade hardware. We propose a reproducible evaluation framework integrating two external benchmarks (DocRED, HotpotQA), WebQuestionsSP-style synthetic data, and the RAGAS evaluation framework in an automated pipeline. On 500 document-level relations, our system achieves an F1 of 0.70 ± 0.041 in zero-shot, compared to 0.80 for supervised DREEAM. Text-to-query achieves an accuracy of 0.80 ± 0.06 on 200 samples. Multi-hop reasoning achieves an Exact Match (EM) of 0.46 ± 0.04 on 500 HotpotQA questions, with a RAGAS faithfulness of 0.96 ± 0.04 on 50 samples. Beyond the pipeline, we study diversity mechanisms for difficult multi-hop reasoning. On 181 questions unsolvable at zero temperature, self-consistency (k=5, T=0.7) recovers up to 23% EM with a single Mixture-of-Experts (MoE) model, but the cross-model oracle (3 architectures × 5 samples) reaches 46.4%. We highlight an agreement paradox: strong consensus among samples signals collective hallucination rather than a reliable answer, echoing the work of Moussaïd et al. on the wisdom of crowds. Extending to the full pipeline (500 questions), self-consistency (k=3) raises EM from 0.46 to 0.48 ± 0.04. A confidence-routing cascade mechanism (Phi-4 → GPT-OSS, k=5) achieves an EM of 0.55 ± 0.04, the best result obtained, with 45.4% of questions rerouted. Finally, we show that V3 prompt engineering applied to other models does not reproduce the gains observed with Gemma-4, confirming the specific prompt/model interaction. The entire system runs in ~5 h on a single RTX 3090, without any training, for an estimated carbon footprint of 0.09 kg CO2 eq.
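The abstract reports Exact Match (EM) for multi-hop QA and faithfulness scores from RAGAS. As a reference for how the EM figures (0.46, 0.48, 0.55) are typically computed, here is a minimal sketch of the standard HotpotQA-style EM metric; the normalization steps (lowercasing, punctuation and article stripping) follow the common SQuAD/HotpotQA convention and are an assumption about this paper's exact scorer.

```python
import re
import string

def normalize(text):
    """Standard answer normalization: lowercase, drop punctuation,
    drop English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    # 1.0 if normalized strings are identical, else 0.0.
    return float(normalize(prediction) == normalize(gold))

def em_score(predictions, golds):
    # Corpus-level EM: mean of per-question exact matches,
    # e.g. 0.46 means 46% of the 500 HotpotQA answers matched exactly.
    pairs = list(zip(predictions, golds))
    return sum(exact_match(p, g) for p, g in pairs) / len(pairs)
```

Under this convention, surface variations like "The Eiffel Tower." vs "eiffel tower" still count as a match, which is why EM is paired with a faithfulness metric such as RAGAS to catch answers that match in form but are unsupported by the retrieved context.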
