Beyond the Parameters: A Technical Survey of Contextual Enrichment in Large Language Models: From In-Context Prompting to Causal Retrieval-Augmented Generation
Prakhar Bansal, Shivangi Agarwal
TL;DR
This survey reviews LLM augmentation strategies such as RAG, categorizing them by the degree of structured context supplied at inference time, and provides a deployment-oriented decision framework.
Key contributions
- Provides a unified survey of LLM augmentation strategies, from in-context learning to CausalRAG.
- Categorizes methods by the degree of structured context supplied during inference.
- Introduces a claim-audit framework and structured evidence synthesis for robust findings.
- Offers a deployment-oriented decision framework and research priorities for RAG.
Why it matters
This survey offers a roadmap for enhancing LLMs by systematically reviewing contextual enrichment strategies. It provides practical tools, including a claim-audit framework and a deployment decision guide, that are useful for building trustworthy retrieval-augmented NLP systems.
Original Abstract
Large language models (LLMs) encode vast world knowledge in their parameters, yet they remain fundamentally limited by static knowledge, finite context windows, and weakly structured causal reasoning. This survey provides a unified account of augmentation strategies along a single axis: the degree of structured context supplied at inference time. We cover in-context learning and prompt engineering, Retrieval-Augmented Generation (RAG), GraphRAG, and CausalRAG. Beyond conceptual comparison, we provide a transparent literature-screening protocol, a claim-audit framework, and a structured cross-paper evidence synthesis that distinguishes higher-confidence findings from emerging results. The paper concludes with a deployment-oriented decision framework and concrete research priorities for trustworthy retrieval-augmented NLP.
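The abstract's organizing axis, the degree of structured context supplied at inference time, can be made concrete with a toy sketch: the same question is answered either parametrically (no added context) or RAG-style, with retrieved passages prepended to the prompt. The corpus, bag-of-words scoring, and prompt template below are illustrative assumptions, not methods from the survey.

```python
import math
from collections import Counter

# Toy corpus (hypothetical snippets, for illustration only).
CORPUS = [
    "GraphRAG builds a knowledge graph over the corpus before retrieval.",
    "CausalRAG filters retrieved passages by causal relevance to the query.",
    "In-context learning supplies demonstrations directly in the prompt.",
]

def _tf(text: str) -> Counter:
    # Bag-of-words term frequencies (a stand-in for a real embedder).
    return Counter(text.lower().split())

def cosine(a: str, b: str) -> float:
    ta, tb = _tf(a), _tf(b)
    num = sum(ta[w] * tb[w] for w in ta)
    den = (math.sqrt(sum(v * v for v in ta.values()))
           * math.sqrt(sum(v * v for v in tb.values())))
    return num / den if den else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Top-k passages by similarity to the query.
    return sorted(CORPUS, key=lambda d: cosine(query, d), reverse=True)[:k]

def build_prompt(query: str, structured_context: bool = True) -> str:
    # structured_context=False ~ parametric-only answering;
    # structured_context=True  ~ RAG-style context injection.
    if not structured_context:
        return f"Question: {query}\nAnswer:"
    ctx = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Context:\n{ctx}\n\nQuestion: {query}\nAnswer:"
```

GraphRAG and CausalRAG sit further along the same axis: instead of flat passages, the injected context is a graph neighborhood or a causally filtered subset, but the inference-time mechanism (prepend structured evidence, then generate) is the same.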