ArXiv TLDR

Architecture Matters More Than Scale: A Comparative Study of Retrieval and Memory Augmentation for Financial QA Under SME Compute Constraints

2604.17979

Jianan Liu, Jing Yang, Xianyou Li, Weiran Yan, Yichao Wu + 2 more

cs.IR

TLDR

This paper compares LLM reasoning architectures for financial QA under SME compute constraints, finding that structured memory excels at deterministic tasks while RAG performs better in conversational ones.

Key contributions

  • Evaluates LLM architectures for financial QA under strict SME compute constraints.
  • Compares baseline LLM, RAG, structured memory, and memory-augmented conversational reasoning.
  • Reveals an architectural inversion: structured memory excels in deterministic, operand-explicit tasks, while RAG is better for conversational, reference-implicit QA.
  • Proposes a hybrid framework for dynamic strategy selection in resource-constrained financial AI.

Why it matters

This research is crucial for SMEs adopting AI, demonstrating that architectural choices significantly impact performance under severe resource limitations. It provides practical guidance by proposing a hybrid framework that dynamically optimizes for accuracy and efficiency, making advanced financial AI accessible without large cloud budgets.
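To make the hybrid idea concrete, here is a minimal sketch of dynamic strategy selection: routing operand-explicit queries to a structured-memory pipeline and reference-implicit, conversational queries to RAG. The function names, cue heuristics, and strategy labels are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of hybrid strategy selection (not the paper's code):
# deterministic, operand-explicit queries -> structured memory;
# conversational, reference-implicit queries -> RAG.

def is_reference_implicit(query, history):
    """Crude proxy: prior turns or pronominal references suggest the
    operands are implicit and must be resolved from conversation context."""
    implicit_cues = ("it", "that", "those", "previous", "then", "same")
    q = query.lower()
    return bool(history) or any(f" {cue} " in f" {q} " for cue in implicit_cues)

def select_strategy(query, history=None):
    """Return which reasoning architecture to invoke for this query."""
    history = history or []
    if is_reference_implicit(query, history):
        return "rag"                # conversational / reference-implicit
    return "structured_memory"      # deterministic / operand-explicit

# Example routing decisions:
# select_strategy("What was net revenue in FY2021?")
#     -> "structured_memory"
# select_strategy("And how did that change the next year?",
#                 history=["What was net revenue in FY2021?"])
#     -> "rag"
```

In a real deployment the routing signal would come from the task itself (e.g. FinQA-style single-turn extraction vs. ConvFinQA-style multi-turn dialogue) rather than keyword cues, but the control flow is the same: pick the architecture per query instead of fixing one globally.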

Original Abstract

The rapid adoption of artificial intelligence (AI) and large language models (LLMs) is transforming financial analytics by enabling natural language interfaces for reporting, decision support, and automated reasoning. However, limited empirical understanding exists regarding how different LLM-based reasoning architectures perform across realistic financial workflows, particularly under the cost, accuracy, and compliance constraints faced by small and medium-sized enterprises (SMEs). SMEs typically operate within severe infrastructure constraints, lacking cloud GPU budgets, dedicated AI teams, and API-scale inference capacity, making architectural efficiency a first-class concern. To ensure practical relevance, we introduce an explicit SME-constrained evaluation setting in which all experiments are conducted using a locally hosted 8B-parameter instruction-tuned model without cloud-scale infrastructure. This design isolates the impact of architectural choices within a realistic deployment environment. We systematically compare four reasoning architectures: baseline LLM, retrieval-augmented generation (RAG), structured long-term memory, and memory-augmented conversational reasoning across both FinQA and ConvFinQA benchmarks. Results reveal a consistent architectural inversion: structured memory improves precision in deterministic, operand-explicit tasks, while retrieval-based approaches outperform memory-centric methods in conversational, reference-implicit settings. Based on these findings, we propose a hybrid deployment framework that dynamically selects reasoning strategies to balance numerical accuracy, auditability, and infrastructure efficiency, providing a practical pathway for financial AI adoption in resource-constrained environments.
